Connecting Local Open WebUI to Together.ai Models

I wanted to learn more about running Open WebUI locally through Docker while also testing out the new Llama 3.2 Vision models (11B and 90B). My system, with its 6GB VRAM video card, wasn't going to be able to run those models locally, so I needed a free or inexpensive option; enter together.ai. A free account offers the 11B model at no charge along with easy access to the 90B model. So I downloaded Docker Desktop, as the getting-started docs suggested, and then my inexperience hit a roadblock: how do I run a Docker terminal command, and which command do I run?

I already had my API key from together.ai, but I didn't know how to use it. I knew I would be using together.ai's API endpoints, and that those are modeled on OpenAI's, so what I needed was to change the environment variables passed to Open WebUI in the docker run command. Here is the command I used:

docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=https://api.together.xyz/v1 \
  -e OPENAI_API_KEY=your_secret_key \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

I replaced "your_secret_key" with my actual together.ai API key. And if you have never used Docker Desktop before either, the >_ Terminal button is in the bottom-right of the window.
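If you want to confirm the key works before launching the container, you can hit the same OpenAI-compatible endpoint directly with curl. This is just a quick sketch: the model id meta-llama/Llama-Vision-Free is my assumption for the free 11B vision model, so check Together.ai's model list if it has changed.

curl https://api.together.xyz/v1/chat/completions \
  -H "Authorization: Bearer your_secret_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-Vision-Free",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'

If this returns a JSON response with a message instead of an authentication error, the key and base URL are good to go into the docker run command above.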

That's it! Once the Open WebUI container is up and running, you'll be able to select from all of Together.ai's models. Just make sure you set the default model to Meta Llama Vision Free!
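If the model list comes up empty, a couple of standard Docker commands will tell you whether the container started cleanly (the UI itself lives at http://localhost:3000, per the -p 3000:8080 mapping above):

docker ps --filter name=open-webui    # confirm the container is running
docker logs open-webui                # check startup output for API errors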
