I’m currently shopping around for a replacement for Ollama, partly because I want something a bit faster and partly because I could not get it to use a different context and output length, which seems to be a known and long-ignored issue. Somehow, everything I’ve tried so far has been missing one or more critical features, like:

  • “Hot” model replacement, i.e. loading and unloading models on demand
  • Function calling
  • Support for most models
  • OpenAI API compatibility (to work well with Open WebUI)

I’d be happy about any recommendations!

  • theunknownmuncher@lemmy.world · 2 days ago

    Ummm… did you try /set parameter num_ctx # and /set parameter num_predict #? Are you using a model that actually supports the context length that you desire…?

    • RandomlyRight@sh.itjust.works (OP) · 2 days ago

      Yeah, but there are many open issues on GitHub about these settings not working right. I’m using the API and just couldn’t get it to work. I used a request to generate a JSON file, and it never generated one longer than about 500 lines. With the same model on vLLM, it worked instantly and generated about 2000 lines.
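
      For reference, this is roughly how those options get passed per request over Ollama’s HTTP API (just a sketch; model name and values are placeholders, not my actual setup):

      ```python
      import requests

      # Sketch: overriding context window and output length per request
      # via Ollama's HTTP API. Model name and values are placeholders.
      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "llama3.1",
              "prompt": "Generate a JSON file describing ...",
              "stream": False,
              "options": {
                  "num_ctx": 16384,     # context length
                  "num_predict": 4096,  # max tokens to generate
              },
          },
      )
      print(resp.json()["response"])
      ```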

  • Possibly linux@lemmy.zip · 2 days ago

    I don’t think you are going to find anything faster. Ollama is pretty much as fast as it gets.

    • CaptnBook@feddit.org · 2 days ago

      It’s not, by far. But vLLM and SGLang don’t support switching models… such a shame.

  • hendrik@palaver.p3x.de · 2 days ago

    I’m also aware of LocalAI, which has automatic model swapping and an OpenAI-compatible API.

    But unless I’m mistaken, they all use ggml behind the scenes? So you might want to look for something that uses vLLM or ExLlama or the like if you want a completely different backend.
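
    For the OpenAI-compatible part, any of these servers should work with a standard client; roughly like this (just a sketch, the base URL, port and model name depend on your setup):

    ```python
    from openai import OpenAI

    # Sketch: talking to an OpenAI-compatible local server (LocalAI, vLLM, ...).
    # Base URL, port and model name are placeholders for your own setup.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    reply = client.chat.completions.create(
        model="my-local-model",
        messages=[{"role": "user", "content": "Hello, who are you?"}],
        max_tokens=256,
    )
    print(reply.choices[0].message.content)
    ```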

    • Daughter3546@lemmy.world · 1 day ago

      I would not recommend LocalAI. Their documentation is somewhat lacking, and it’s an all-in-one utility with many moving parts. Those parts also tend to break, quite often.

    • hendrik@palaver.p3x.de · 2 days ago

      Btw, Ollama is software for running AI models. Deepseek is just a company. Or a model file, or a service. But that’s not what OP is looking for. They want to run a model. And that needs software like Ollama.

        • hendrik@palaver.p3x.de · 11 hours ago

          Yes, Deepseek V3 is a model. But what I was trying to say is: you download the file, but then what? Just having the file stored on your hard disk doesn’t do much. You need to run it. That’s called “inference” in machine learning/AI terms. The repository you linked contains some example code showing how to do it with Hugging Face’s Transformers library. But there are quite a few frameworks out there for running AI models; Ollama would be another one. And it’s not just some example code to start your own Python program from, but a ready-made project/framework with tools and frontends available and an interface for other software to hook into.
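
          Just to illustrate, “inference” with the Transformers library looks roughly like this (a minimal sketch; the model name is a small placeholder, since Deepseek V3 itself is far too large to run this way on normal hardware):

          ```python
          from transformers import AutoModelForCausalLM, AutoTokenizer

          # Sketch: bare-bones inference with Hugging Face Transformers.
          # Placeholder model -- Deepseek V3 itself is far too big for this.
          name = "Qwen/Qwen2.5-0.5B-Instruct"
          tokenizer = AutoTokenizer.from_pretrained(name)
          model = AutoModelForCausalLM.from_pretrained(name)

          inputs = tokenizer("What does 'inference' mean?", return_tensors="pt")
          outputs = model.generate(**inputs, max_new_tokens=60)
          print(tokenizer.decode(outputs[0], skip_special_tokens=True))
          ```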

          And generally, you need some software to actually do something. How fast it is depends on the software used and the hardware it’s executed on. And in this case, also on the size of the AI model and its architecture. But yeah, Deepseek V3 has some tricks up its sleeve to make it very efficient. Though it is really big for home use; I think we’re looking at a six-figure price for the hardware to run it. Usually, people use the Deepseek R1 models, or other smaller AI models, if they run them themselves.