Recently, I wanted to see whether I could run a local LLM on my MacBook Air M2 and use it as a provider for the Zed editor.
tl;dr: it works, but it is slow. You probably need a more powerful machine.
(I have since moved to OpenRouter as my provider, though that is not a local option.)
Run a Local LLM
I use LM Studio. If you're on a Mac, make sure to install the binary into the standard /Applications folder. That way, the installer will also install the necessary `lms` CLI.
Download a model. You can do that via the GUI; see the LM Studio docs.
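If you prefer the terminal, the `lms` CLI can also download models (`lms get` and `lms ls` are the commands I'd reach for; check `lms --help` for the exact options in your version):

```sh
# Download a model from the command line.
lms get deepseek/deepseek-r1-0528-qwen3-8b

# List the models downloaded so far.
lms ls
```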
I have tried:
- `deepseek/deepseek-r1-0528-qwen3-8b`
- `all-hands_openhands-lm-7b-v0.1`
Other models didn't work at all on my MacBook Air, and even those two are sluggish.
Now make sure the desired model is loaded. The CLI is the easiest way to do that:
```sh
lms load <model of choice>
# e.g., lms load deepseek/deepseek-r1-0528-qwen3-8b
```
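To double-check which models are currently loaded into memory, `lms ps` lists them:

```sh
# Show models that are loaded right now.
lms ps
```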
Don't forget to start the server! I originally missed this step:
```sh
lms server start
```
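The server speaks an OpenAI-compatible API, by default on port 1234. A quick sanity check (assuming you kept the default port):

```sh
# A JSON list of models means the server is up and reachable.
curl http://localhost:1234/v1/models
```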
Zed Editor
This is an example of how you could configure Zed (in `settings.json`):
```json
{
  "agent": {
    "default_profile": "ask",
    "default_model": {
      "provider": "lmstudio",
      "model": "deepseek/deepseek-r1-0528-qwen3-8b"
    },
    "inline_assistant_model": {
      "provider": "lmstudio",
      "model": "all-hands_openhands-lm-7b-v0.1"
    },
    "commit_message_model": {
      "provider": "lmstudio",
      "model": "all-hands_openhands-lm-7b-v0.1"
    },
    "thread_summary_model": {
      "provider": "lmstudio",
      "model": "all-hands_openhands-lm-7b-v0.1"
    }
  }
}
```
The example above assigns a specific model to each of Zed's AI-powered features: the agent panel, the inline assistant, commit message generation, and thread summaries.
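If Zed can't reach the model, you can take the editor out of the equation and query the server directly. Here is a minimal smoke test against the OpenAI-compatible chat endpoint, using the model name and default port from my setup (adjust both for yours):

```sh
# Send a single chat message to the locally loaded model.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-r1-0528-qwen3-8b",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```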