Ollama
Ollama lets you run large language models locally on your machine.
Page Assist supports Ollama by default. If you are running Ollama at localhost:11434, you don't need to configure anything; Page Assist will detect it automatically.
If you face any issues with Ollama, please check the Ollama Connection Issues guide.
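Before digging into the connection guide, it can help to confirm that Ollama is actually reachable. This is a minimal sketch, assuming the default localhost:11434 address; the helper name is illustrative, not part of Page Assist or Ollama:

```python
from urllib.request import urlopen
from urllib.error import URLError

def ollama_reachable(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url.

    Ollama's root endpoint replies with "Ollama is running" when the
    server is up, so a plain GET with a 200 status is a good check.
    """
    try:
        with urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_reachable())
```

If this prints `False`, the problem is with the Ollama server itself (not running, or bound to a different port) rather than with Page Assist.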
Multiple Ollama Instances
You can configure multiple Ollama instances by following these steps:
1. Click on the Page Assist icon on the browser toolbar.
2. Click on the Settings icon.
3. Go to the OpenAI Compatible API tab.
4. Click on the Add Provider button.
5. Select Ollama from the dropdown.
6. Enter the Ollama URL.
7. Click on the Save button.
You don't need to add any models since Page Assist will automatically fetch them from the Ollama instance you have configured.
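The automatic fetching works because each Ollama instance lists its installed models over HTTP via the `/api/tags` endpoint. The sketch below shows how such a lookup can be done by hand; the instance URLs are placeholders, and the function names are illustrative rather than anything Page Assist exposes:

```python
import json
from urllib.request import urlopen

def parse_model_names(tags_json: bytes) -> list[str]:
    """Extract model names from an Ollama /api/tags JSON response."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

def list_models(base_url: str) -> list[str]:
    """List the models installed on the Ollama instance at base_url."""
    with urlopen(f"{base_url.rstrip('/')}/api/tags", timeout=5) as resp:
        return parse_model_names(resp.read())

if __name__ == "__main__":
    # Placeholder instance URLs; replace with your configured instances.
    for url in ("http://localhost:11434", "http://192.168.1.50:11434"):
        try:
            print(url, "->", list_models(url))
        except OSError as err:
            print(url, "-> unreachable:", err)
```

Each configured provider is queried independently, which is why models from every instance appear in Page Assist without manual entry.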