• CeeBee@lemmy.world
      11 months ago

      Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.

      Ollama with ollama-webui for an LLM. I like the solar:10.7b model. It’s lightweight, fast, and gives really good results.

      I have some beefy hardware that I run it on, but that’s not strictly necessary.
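
      The Ollama side of this setup can be sketched in a couple of commands. This is a minimal illustration assuming Ollama is already installed from ollama.com; the exact model tag (`solar:10.7b`) may differ depending on the Ollama library version.

      ```shell
      # Pull the Solar model weights from the Ollama library (tag is an assumption)
      ollama pull solar:10.7b

      # Start an interactive chat session with the model
      ollama run solar:10.7b
      ```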

    • Ookami38@sh.itjust.works
      11 months ago

      Depends on what AI you’re looking for. I don’t know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven’t really looked. For art generation, though, look up the automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it on a 1060, slowly, until I upgraded), it’s a simple enough process to get started, there’s tons of info online about it, and it all runs on local hardware.
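
      The automatic1111 setup mentioned above boils down to a few steps on Linux. A rough sketch; paths and flags depend on your OS and GPU, so check the project’s own install instructions:

      ```shell
      # Clone the AUTOMATIC1111 web UI and launch it.
      # webui.sh creates a Python venv, installs dependencies, and starts a local web server.
      git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
      cd stable-diffusion-webui
      ./webui.sh

      # On low-VRAM cards (like a 1060), the --medvram flag trades speed for memory:
      # ./webui.sh --medvram
      ```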

      • CeeBee@lemmy.world
        11 months ago

        I don’t know of an LLM that works decently on personal hardware

        Ollama with ollama-webui. Models like solar:10.7b and mistral:7b work nicely on local hardware. Solar 10.7b should run well on a card with 8 GB of VRAM.
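
        For the ollama-webui front end, the project’s README at the time suggested running it as a Docker container alongside a local Ollama install. A sketch only; the image name and flags may have changed since (the project was later renamed Open WebUI):

        ```shell
        # Run the web UI container, pointing it at the Ollama server on the host.
        # Browse to http://localhost:3000 once it starts.
        docker run -d -p 3000:8080 \
          --add-host=host.docker.internal:host-gateway \
          -v ollama-webui:/app/backend/data \
          --name ollama-webui --restart always \
          ghcr.io/ollama-webui/ollama-webui:main
        ```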