Local LLMs for Developers
By Nobody 2025-11-01
Introduction
The paid LLM developer tools integrated into the IDE from Redmond are pretty neat. But what if we want to run offline,
experiment with different LLMs, and play with the settings? And what if we do not like the telemetry?
Enter the local LLM server.
Using LM Studio to serve LLMs to Xcode
- Install LM Studio.
- Select a model for an initial trial with Xcode: qwen3/qwen3-coder-30B, published by Qwen.
- Try the model in LM Studio.
- Update macOS and Xcode.
- In LM Studio, select the Developer icon.
- Select Server Settings.
- Set the server port; 1234 is the default.
- Disable serve on local network.
- Enable per-request remote MCPs.
- Enable cross origin resource sharing (CORS).
- Disable just-in-time model loading.
- Enable the server (a quick sanity check is sketched after this list).
- Run Xcode and open Xcode Settings (Cmd-,).
- Select Intelligence > Add a Model Provider.
- Select Locally Hosted.
- Set the port.
- Give a description then select Add.
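
Before letting Xcode talk to it, it is worth checking that the LM Studio server actually answers on the configured port. Here is a minimal sanity check in Swift, assuming the default port 1234 and LM Studio's OpenAI-compatible /v1/models endpoint; save it as check_server.swift and run it with swift check_server.swift:

    import Foundation

    // List the models the local LM Studio server can serve.
    // Assumes the server is enabled on this machine on the default port 1234.
    let url = URL(string: "http://127.0.0.1:1234/v1/models")!

    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: url) { data, response, error in
        defer { semaphore.signal() }
        if let error = error {
            print("Server not reachable: \(error.localizedDescription)")
            return
        }
        if let http = response as? HTTPURLResponse {
            print("HTTP status: \(http.statusCode)")
        }
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)   // JSON listing of available models
        }
    }.resume()
    semaphore.wait()

If this prints a JSON list of models, the server side is ready; any failure after that point is on the Xcode side.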
First Try
- In Xcode, select the “Coding Assistant” icon (star-like, over text-like background lines. Weird.).
This worked. Though Qwen as configured is a little too verbose, it is quite fast, at least
as fast as the corporate tools from Redmond.
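
One way to experiment with the verbosity without touching Xcode is to call the same server directly and play with the request parameters. Below is a rough sketch against LM Studio's OpenAI-compatible /v1/chat/completions endpoint; the model identifier, system prompt, temperature, and max_tokens are illustrative assumptions, not settings taken from LM Studio:

    import Foundation

    // Send one chat completion request to the local LM Studio server.
    // Model name, system prompt, and limits are illustrative; adjust to
    // whatever is actually loaded in LM Studio.
    struct ChatRequest: Codable {
        let model: String
        let messages: [[String: String]]
        let temperature: Double
        let max_tokens: Int
    }

    let request = ChatRequest(
        model: "qwen/qwen3-coder-30b",   // hypothetical identifier; check LM Studio's model list
        messages: [
            ["role": "system", "content": "Answer tersely. Code only, no explanations."],
            ["role": "user", "content": "Write a Swift function that reverses a String."]
        ],
        temperature: 0.2,
        max_tokens: 256
    )

    var urlRequest = URLRequest(url: URL(string: "http://127.0.0.1:1234/v1/chat/completions")!)
    urlRequest.httpMethod = "POST"
    urlRequest.setValue("application/json", forHTTPHeaderField: "Content-Type")
    urlRequest.httpBody = try! JSONEncoder().encode(request)

    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: urlRequest) { data, _, error in
        defer { semaphore.signal() }
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)   // raw JSON; the reply text is under choices[0].message.content
        } else if let error = error {
            print("Request failed: \(error.localizedDescription)")
        }
    }.resume()
    semaphore.wait()

Tightening the system prompt and capping max_tokens are the obvious first things to try against the verbosity.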
Next Steps
- Try serving the LLM from macOS to Linux clients (a speculative client sketch follows below).
- Try integrating the LLM with Kate on the Linux client.
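
For the macOS-to-Linux step, the server side should only need the serve-on-local-network setting turned back on; the Linux client then targets the Mac's LAN address instead of localhost. Here is a speculative sketch of a minimal Swift client on Linux, with 192.168.1.10 as a placeholder for the Mac's address:

    import Foundation
    #if canImport(FoundationNetworking)
    import FoundationNetworking   // needed for URLSession on Linux
    #endif

    // Placeholder LAN address for the Mac running LM Studio; replace with the real one.
    // Requires serve-on-local-network to be enabled in LM Studio's server settings.
    let url = URL(string: "http://192.168.1.10:1234/v1/models")!

    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: url) { data, _, error in
        defer { semaphore.signal() }
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)   // models served by the Mac, as seen from the Linux box
        } else if let error = error {
            print("Could not reach the Mac: \(error.localizedDescription)")
        }
    }.resume()
    semaphore.wait()

The Kate integration is a separate problem; the client above only confirms that the model is reachable across the network.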