Ollama on external drive

Explore the benefits of running Large Language Models (LLMs) locally using Ollama, with a focus on data security, reduced latency, and customization. Learn how to set up Ollama on an external drive for efficient model management and storage, giving users full control over their data and model deployment. This guide covers the advantages of local LLMs, Ollama's features, and step-by-step instructions for installation and configuration on external storage.

Why run LLMs locally?

  1. Data security:
    Local LLMs can process data on-site, reducing the risk of data breaches by eliminating the need to transmit data over the internet. This can also help meet regulatory requirements for data privacy and security.

  2. Reduced latency:
    Running LLMs locally can reduce the response time between a request and the model's response. This can be especially beneficial for applications that require real-time data processing.

  3. Customization:
    Local LLMs can be tailored to specific tasks and data, often performing better than general-purpose models on those narrower workloads.

  4. Control:
    Local deployment gives users complete control over their hardware, data, and the LLMs themselves. This can be useful for optimization and customization according to specific needs and regulations.

  5. Flexibility:
    Local deployment can also provide greater flexibility than working with third-party servers, which may limit businesses to pre-defined models and functionality.

But why Ollama?

Ollama bridges the gap between large language models (LLMs) and local development, allowing you to run powerful LLMs directly on your machine. Here’s how Ollama empowers you:

  1. Simplified LLM Interaction:
    Ollama’s straightforward CLI and API make it easy to create, manage, and interact with LLMs, putting them within reach of a wide range of users (see the example after this list).

  2. Pre-built Model Library:
    Access a curated collection of ready-to-use LLM models, saving you time and effort.

  3. Customization Options:
    Fine-tune models to your specific needs, customize prompts, or import models from various sources for greater control.
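
For instance, once Ollama is installed and a model has been pulled (covered in the next section), you can talk to it over the local HTTP API. This is a minimal sketch assuming the default port 11434 and that the llama3 model is already available:

# send a single prompt to the local Ollama server and get a non-streaming reply
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain what a symlink is in one sentence.",
  "stream": false
}'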

How to set it up?

1. Download Ollama for your OS from the official Ollama website (https://ollama.com).

To explore the code base and community integrations, check out the Ollama GitHub repository.

2. After downloading, run your first model.

This will download the model to ~/.ollama/models/ (the default location) and let you interact with it.

ollama run llama3 # or any other model you want
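
Beyond ollama run, a few other standard subcommands are handy for managing what ends up in the models directory (shown here purely as a quick reference):

ollama pull llama3   # download a model without starting an interactive session
ollama list          # show the models currently stored on disk
ollama rm llama3     # remove a model you no longer need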

3. How to install models on an external drive?

For this, follow these steps after connecting your external drive to your machine.

  1. Execute this command from the root of your external drive to create a models directory:

mkdir -p ai_models/ollama/models

This will generate a directory structure like this in your external drive:

ai_models/
└── ollama/
    └── models/
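
If you prefer not to cd into the drive first, the same structure can be created with an absolute path. The sketch below assumes the drive is mounted at /Volumes/drive, as in the rest of this guide; substitute your own volume name:

# create the models directory directly on the mounted external drive
mkdir -p /Volumes/drive/ai_models/ollama/models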

Ollama looks for its models in the ~/.ollama/models directory by default. If you want to store your Ollama models on an external drive instead (in this case, /Volumes/drive/ai_models/ollama/models), you can create a symlink that redirects Ollama to the external location.
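
As an aside, recent Ollama versions also let you point the model directory elsewhere with the OLLAMA_MODELS environment variable; if your version supports it and you start the server yourself, something like the sketch below achieves the same redirection without a symlink. This guide sticks with the symlink approach described in the steps below.

# optional alternative (not used in this guide): redirect model storage via an environment variable
export OLLAMA_MODELS=/Volumes/drive/ai_models/ollama/models
ollama serve   # a server started from this shell stores and loads models from the external drive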

  2. Remove the default models directory in your ~/.ollama directory.

sudo rm -rf ~/.ollama/models
# enter the password 🔑

  3. Create a symlink pointing ~/.ollama/models at the external drive.

sudo ln -s /Volumes/drive/ai_models/ollama/models ~/.ollama/models

  4. Check ~/.ollama for the symlink. Execute this command to list all items with details:

ls -la ~/.ollama

You will see output similar to this:

total 48
drwxr-xr-x@   8 gopalverma  staff   256 Jun 20 00:38 .
drwxr-x---+ 107 gopalverma  staff  3424 Jun 22 10:57 ..
-rw-r--r--@   1 gopalverma  staff  6148 Jun 19 17:34 .DS_Store
-rw-------    1 gopalverma  staff  4976 Jun 20 00:38 history
-rw-------@   1 gopalverma  staff   387 Jun 19 14:40 id_ed25519
-rw-r--r--@   1 gopalverma  staff    81 Jun 19 14:40 id_ed25519.pub
drwxr-xr-x@   3 gopalverma  staff    96 Jun 19 14:40 logs
lrwxr-xr-x    1 root        staff    41 Jun 19 16:32 models -> /Volumes/GopalSSD/ai_models/ollama/models
# above line means that the models directory is linked to the external drive
  5. Install and run the model.

ollama run llama3

Because ~/.ollama/models now points to the external drive, this downloads the model to /Volumes/drive/ai_models/ollama/models and lets you interact with it as usual. Use it!
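
To confirm that the weights really landed on the external drive, you can list the installed models and check the size of the external models directory (a quick sanity check using the paths from above):

ollama list                                     # models Ollama can see via the symlink
du -sh /Volumes/drive/ai_models/ollama/models   # disk space used on the external drive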

Conclusion

Running LLMs locally with Ollama on an external drive offers numerous benefits, including enhanced data security, reduced latency, and greater control over your AI models. By following the steps outlined in this guide, you can easily set up Ollama to use models stored on an external drive, giving you the flexibility to manage large model files without consuming your main system's storage.

This approach not only allows you to leverage the power of LLMs locally but also provides a scalable solution for storing and accessing multiple models. As AI continues to evolve, having a setup that allows for easy expansion and management of your model library will become increasingly valuable.

Remember, the key advantages of this setup include:

  • Improved data privacy and security
  • Reduced dependency on cloud services
  • Flexibility in model selection and customization
  • Efficient use of storage resources

By mastering the use of Ollama with external storage, you're well-positioned to explore and experiment with various LLMs while maintaining full control over your AI environment.