I have a mini PC running an AMD 5700U where I host some services, including Ollama and Open WebUI.

Unfortunately, ROCm support isn’t quite there yet, and support for mobile GPUs even less so.

Surprisingly, prompts do work when Ollama is configured to use the CPU, but the speed is just… well, not good.
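
For reference, this is more or less what forcing CPU inference looks like against Ollama’s HTTP API (a minimal sketch; "llama3" is just a placeholder model tag, and num_gpu 0 asks Ollama to offload zero layers to the GPU):

```python
import requests

# Ask a local Ollama instance (default port 11434) for a completion,
# keeping every layer on the CPU via the num_gpu option.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",          # placeholder model tag
        "prompt": "Why is the sky blue?",
        "stream": False,            # return one JSON object, not a stream
        "options": {"num_gpu": 0},  # 0 layers offloaded -> CPU-only inference
    },
    timeout=300,
)
print(resp.json()["response"])
```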

So, what would be a cheap and energy-efficient setup to run some kind of LLM for personal use while still getting decent speed?

I was thinking about getting an eGPU enclosure, but I’m not sure how solid that would end up being.

  • Lemongrab@lemmy.one · 6 months ago
    Jesus. Kinda overkill depending on how many parameters the model has and what float precision it runs at.
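
    For a rough sense of scale: weight memory is roughly parameter count times bytes per weight. A back-of-the-envelope sketch (the quantization labels are illustrative; real quantized files carry some overhead, and the KV cache needs memory on top of this):

    ```python
    # Rough weight-memory estimate: parameters x bytes per weight.
    # Illustrative only; actual files and runtime use add overhead.
    def weight_gb(params_billion: float, bits_per_weight: int) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for bits, label in [(16, "fp16"), (8, "q8_0"), (4, "q4_0")]:
        print(f"7B @ {label}: ~{weight_gb(7, bits):.1f} GB")
    # -> 7B @ fp16: ~14.0 GB, q8_0: ~7.0 GB, q4_0: ~3.5 GB
    ```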