The simplest and fastest way to install Ollama AI is to use the Portainer template prepared by our team. This method minimizes manual configuration and allows you to deploy a stable, ready-to-use system within just a few minutes.

Requirements:

  • A VPS with Portainer installed using the Unihost script 

  • A deployed NPM (Nginx Proxy Manager) 

After connecting to Portainer, go to the Application section, select Ollama AI, and fill in the required fields.

Installing Ollama AI with Portainer - Image 1

Fill in all the required fields and select an AI model. The template offers a list of predefined models (for example, TinyLlama, Qwen3, Gemma3, Mistral, DeepSeek, Phi-4-mini). These options are convenient for testing, but in practice you will often need a specific model with an explicit tag.

For this, the template provides a Custom mode: enter the desired tag in the Custom model tag field and that model will be downloaded during deployment. This is the main way to obtain exactly the model you need.
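Model tags follow Ollama's `name:tag` registry format. If you later need a different model, you can also pull one manually inside the running container; note that the container name `ollama` below is an assumption and may differ in your stack:

```shell
# Pull a model with an explicit tag (example tag; pick any from the Ollama library).
# "ollama" is the assumed container name; adjust it to match your stack.
docker exec -it ollama ollama pull mistral:7b-instruct

# List the models that are available locally
docker exec -it ollama ollama list
```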

Installing Ollama AI with Portainer - Image 2

After this, a new ollama stack will appear in the Stacks section. Inside this stack, the containers required for its operation will be deployed.
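The exact contents of the stack are defined by the template, but a minimal Ollama service in Compose form looks roughly like the following sketch (the container name, volume, and network are assumptions, not the actual Unihost template):

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama          # assumed name; the template may differ
    volumes:
      - ollama_data:/root/.ollama   # persists downloaded models across restarts
    networks:
      - npm_network                 # must be shared with NPM for proxying to work

volumes:
  ollama_data:

networks:
  npm_network:
    external: true
```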

Installing Ollama AI with Portainer - Image 3

Now go to NPM and add a proxy host for Ollama. You can use the example from the screenshot; in our case, we connected through a local domain added via the hosts file.
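For a local test domain, add an entry to the hosts file on the machine you browse from. The IP below is a placeholder for your VPS address, and `ollama.local` is the example domain used here:

```text
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts (Windows)
203.0.113.10   ollama.local
```

In the NPM proxy host, set Scheme to `http`, Forward Hostname to the Ollama container (assuming NPM and Ollama share a Docker network), and Forward Port to `11434`, Ollama's default port.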

Installing Ollama AI with Portainer - Image 4

Now, when you open the domain, you will see the message “Ollama is running” — this confirms that the service has been successfully launched and proxying through NPM is working.

This means:

  • The Ollama container is accessible via its internal address,

  • NPM correctly forwards the requests,

  • The local domain ollama.local is resolved through the hosts file.

If this screen is displayed, the Ollama + NPM + local domain setup is correct, and you can proceed with integrations (for example, with n8n).
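The same check can be run from the command line; these calls assume the `ollama.local` domain configured through the hosts file:

```shell
# The root endpoint returns the health banner
curl http://ollama.local/
# Expected: Ollama is running

# The tags endpoint lists the models that have been downloaded
curl http://ollama.local/api/tags
```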

Installing Ollama AI with Portainer - Image 5

To connect Ollama to n8n, install n8n (you can follow our guide Installing n8n from the Portainer template) and add an Ollama node to your workflow.
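Under the hood, the n8n Ollama node talks to the same HTTP API, so you can sanity-check the endpoint before wiring up the workflow. The model name must match a tag you pulled earlier (`mistral:7b-instruct` here is only an example):

```shell
curl http://ollama.local/api/generate -d '{
  "model": "mistral:7b-instruct",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'
```

In the n8n credentials, set the Base URL to the same address the node should use (for example, the proxied domain or the container's internal address).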

The screenshots below show an example of using Ollama.

Installing Ollama AI with Portainer - Image 6

Installing Ollama AI with Portainer - Image 7

Conclusion

Using the ready-made Ollama AI template in Portainer allows you to quickly deploy a working environment without complex manual configuration. In combination with NPM, you get convenient domain access and automatic SSL, while integration with n8n makes it possible to use Ollama in your automation workflows. As a result, you get an out-of-the-box system that saves time and simplifies running AI models.