
Ollama

Ollama lets you run large language models locally on your machine.

Page Assist supports Ollama out of the box. If Ollama is running at its default address, http://localhost:11434, you don't need to configure anything; Page Assist will detect it automatically.
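If you want to confirm that Ollama is reachable at that address yourself, a quick way is to list the installed models via Ollama's /api/tags endpoint. This is a minimal sketch, independent of Page Assist:

```python
# Minimal check (not part of Page Assist) that Ollama is reachable
# at its default address, using Ollama's /api/tags endpoint.
import json
from urllib.request import urlopen

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

# GET /api/tags returns the models installed on this Ollama instance.
with urlopen(f"{OLLAMA_URL}/api/tags") as response:
    models = json.load(response)["models"]

for model in models:
    print(model["name"])  # e.g. "llama3.2:latest"
```

If this request fails, Page Assist won't be able to detect Ollama either, so it is a useful first diagnostic.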

If you face any issues with Ollama, please check the Ollama Connection Issues guide.

Multiple Ollama Instances

You can configure multiple Ollama instances by following these steps:

  1. Click on the Page Assist icon on the browser toolbar.

  2. Click on the Settings icon.

  3. Go to the OpenAI Compatible API tab.

  4. Click on the Add Provider button.

  5. Select Ollama from the dropdown.

  6. Enter the Ollama URL.

  7. Click on the Save button.

You don't need to add any models manually; Page Assist automatically fetches them from each Ollama instance you have configured.
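As an illustration of what that automatic fetch involves, the sketch below queries the /api/tags endpoint of each configured instance and prints its model list. The second URL is a hypothetical example of a second machine on your network, not something from the docs:

```python
# Hedged sketch: enumerate models across several Ollama instances
# by querying each one's /api/tags endpoint.
import json
from urllib.request import urlopen

instances = [
    "http://localhost:11434",     # default local instance
    "http://192.168.1.50:11434",  # hypothetical second instance
]

for base_url in instances:
    with urlopen(f"{base_url}/api/tags") as response:
        names = [m["name"] for m in json.load(response)["models"]]
    print(f"{base_url}: {names}")
```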
