I’d like to set up a local coding assistant so that I can stop relying on Google searches when I have complex questions.

I really don’t know what I’m doing or if there’s anything that’s available that respects privacy. I don’t necessarily trust search results for this kind of query either.

I want to run it on my desktop: Ryzen 7 5800XT + Radeon RX 6950 XT + 32 GB of RAM. I don’t need or expect data center performance out of this thing.

Something like LM Studio and Qwen sounds like it’s what I’m looking for, but since I’m unfamiliar with what exists I figured I would ask for Lemmy’s opinion.

Is LM Studio + Qwen a good combo for my needs? Are there alternatives?

  • TomAwezome@lemmy.world · 21 hours ago

    I get good mileage out of the Jan client and the Void editor. Various models will work, but Jan-4B tends to do OK, and a Meta Llama model could do all right too. The Jan client has a setting to start a local OpenAI-compatible server, and Void can be configured to point at that localhost URL + port and the specific model. If you want to go the extra mile for privacy and you’re on a Linux distro, install firejail from your package manager and run both Void and Jan inside the same network namespace with outside networking disabled, so they can only talk to each other over localhost. E.g.: `firejail --noprofile --net=none --name=nameGoesHere Jan` and `firejail --noprofile --net=none --join=nameGoesHere void`, where one of them sets up the namespace (`--name=`) and the other one joins it (`--join=`).
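
    To show what “OpenAI-compatible” buys you: any client can talk to the local server with a plain HTTP POST to `/v1/chat/completions`. A minimal sketch in stdlib Python, assuming the port (1337) and model name (`jan-4b`) are placeholders for whatever your Jan server settings actually show:

    ```python
    import json
    import urllib.request

    # Placeholder endpoint: substitute the host/port shown in Jan's local server settings.
    JAN_URL = "http://localhost:1337/v1/chat/completions"

    def build_request(prompt: str, model: str = "jan-4b") -> urllib.request.Request:
        """Build an OpenAI-style chat completion request for the local server."""
        payload = {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
        return urllib.request.Request(
            JAN_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )

    if __name__ == "__main__":
        # Only works while Jan's local server is running.
        with urllib.request.urlopen(build_request("Explain Rust lifetimes briefly.")) as resp:
            reply = json.loads(resp.read())
            print(reply["choices"][0]["message"]["content"])
    ```

    Since everything goes over localhost, this keeps working inside the firejail namespace even with outside networking cut off.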