Bring Your Own Model with GitHub Copilot

2025, May 04

I use GitHub Copilot all the time in my day-to-day work. In fact, in a previous post I described spending a day during a hackathon writing a chat extension for Visual Studio Code that let Copilot better understand a SQL database. Now that MCP is a thing, that approach is a bit less relevant.

One thing I get asked a lot is “If GitHub Copilot isn’t limited to OpenAI models anymore, why can’t we provide our own AI models?” It’s an interesting question, and one I’d been wondering about for a while myself. Then, about a month ago, I stumbled upon a LinkedIn post by Rory Preddy that caught my attention. It made two main points: “🎉 Now in preview 👉Bring Your Own Key/Model in VS Code Insiders” and, more importantly, that you can “run Ollama models.” I use Ollama for local LLM experimentation with models like codegemma, codellama, and codestral, and lately phi4. If I could combine these Ollama models with Copilot, how well would they work? Well, I decided to find out.
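For anyone curious what the setup looked like on my machine, here’s a rough sketch. It assumes Ollama is already installed and that the model names below are available in Ollama’s public library; once a model is pulled and the local server is running, VS Code Insiders can discover it through Copilot Chat’s model picker.

```sh
# Pull a couple of local models to experiment with (names assume
# they exist in Ollama's public model library).
ollama pull codegemma
ollama pull phi4

# Start the local server if it isn't already running. By default it
# listens on http://localhost:11434, which is where VS Code looks
# for Ollama models.
ollama serve
```

From there, it was just a matter of opening the model picker in the Copilot Chat panel, choosing to manage models, and selecting Ollama as the provider; the pulled models showed up and switching between them and the hosted Copilot models was a dropdown change.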

All in all, I’d say this feature is great. It was really easy to set up, and I’ve been impressed with how well the other models perform on my local machine. While I’ve got nothing against the models GitHub provides, it was really interesting to see how different models reacted to various prompts and tasks from Copilot. This feature, combined with MCP services, could really boost GitHub Copilot’s usefulness by empowering it with the right services and models for the task at hand.