I could in theory upgrade the power supply to go beyond the 150W target, but then I’d also need a better chassis, since it already runs quite warm with my current 130W card.
Hoping to stick with AMD, but if my wish to play around with local LLMs and image upscaling makes Nvidia a more practical choice, I can live with that compromise.
Working with a budget of 200 USD, I’m fine going with a used GPU.


FWIW I did that for a bit https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence and I stopped doing it. I did it mostly out of FOMO and the thought that maybe, truly, it wasn’t just hype. Well, I stopped. Sure, most of those (then state-of-the-art) models are impressive. Yes, there is radical progress on all fronts, from software to hardware to the mathematics underpinning ALL of this… and yet, what is ACTUALLY useful in there? IMHO not much.
Once you’ve tried the models and confirmed that yes, they do indeed make “something”, the usefulness is so rare it makes the whole endeavor not worth it for me. I would still do it again in retrospect because it helps to learn, but… honestly NOT doing it and leaving the benchmarking, reviewing, etc. to others, or “just” spending 10 bucks on a commercial model, will save you a LOT of time.
So… do what you want, but I’d argue gaming remains by far the best use of a local GPU.