this post was submitted on 26 Sep 2023

Free Open-Source Artificial Intelligence

[–] micheal65536@lemmy.micheal65536.duckdns.org 3 points 1 year ago* (last edited 1 year ago)

WizardLM 13B (I didn't notice any significant improvement with the 30B version). It tends to be a bit confined to a standard output format at the expense of accuracy (e.g. it will always try to give both sides of an argument, even if there isn't another side or the question isn't an argument at all), but it is good for simple questions.

LLaMa 2 13B (not the chat-tuned version). This one takes some practice with prompting, as it doesn't really understand conversation and won't know what it's supposed to do unless you make it clear from contextual clues (a rough sketch of that style of prompting is below). But it feels refreshing to use, because the model is (as far as is practical) unbiased/uncensored, so you don't get all the annoying lectures and stuff.
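To illustrate, here's a minimal sketch of that completion-style prompting, assuming llama-cpp-python and a local GGUF file (the model path is a placeholder). Instead of asking the base model a question outright, you write text it will naturally continue:

```python
# Completion-style prompting for a base (non-chat) model.
# Assumes llama-cpp-python; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-13b.Q5_K_M.gguf", n_ctx=2048)

# A chat model would accept "What is quicksort?" directly, but a base
# model does better when the prompt already looks like the start of
# the text it should complete.
prompt = (
    "Below is a short encyclopedia entry.\n\n"
    "Quicksort\n"
    "Quicksort is"
)

out = llm(prompt, max_tokens=128, stop=["\n\n"])
print(out["choices"][0]["text"])
```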

[–] DrakeRichards@lemmy.world 3 points 1 year ago (1 children)

I do image generation for RPGs, so AZovya’s RPG v3 model is easily my favorite. It handles a wide range of styles very well and understands a lot of RPG-specific tokens. I’m really hoping they update it for SDXL, because all of the SDXL models I’ve seen so far are disappointing compared to what’s available for SD 1.5.
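For anyone who wants to try it, community SD 1.5 checkpoints like this load easily with diffusers. A minimal sketch, assuming a locally downloaded .safetensors file (the file name and prompt are placeholders):

```python
# Loading a community SD 1.5 checkpoint (e.g. from CivitAI) with
# diffusers. File name and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./aZovyaRPGArtistTools_v3.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "portrait of a dwarven cleric, tavern interior, oil painting",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("cleric.png")
```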

I don’t have an answer for LLMs, but I’m curious what others will reply with. Aren’t there only like… 3 or 4 models in common use for LLMs? I’m used to having hundreds to pick from with Stable Diffusion; I don’t think I understand how LLM models are different.

There are only a few popular LLM models, plus a few more if you count variations such as "uncensored" versions. Most of the others either don't perform well or don't differ much from the more popular ones.

I would think the difference comes down to two things:

  • LLMs require more effort in curating the dataset for training. Whereas a Stable Diffusion model can be trained by grabbing a bunch of pictures of a particular subject or style and throwing them in a directory, an LLM requires careful gathering and reformatting of text. If you want an LLM to write dialog for a particular character, for example, you need to find or write a lot of existing dialog for that character, which is generally harder than just searching for images on the internet. (A sketch of what that reformatting looks like follows this list.)

  • LLMs are already more versatile. Most of the popular LLMs will already write dialog for a particular character (or at least attempt to) just from a description of the character and possibly a short snippet of sample dialog; fine-tuning doesn't give any significant improvement in that regard. If you want the LLM to write in a specific style, such as Old English, it is usually sufficient to instruct it to do so, perhaps priming the conversation with a sentence or two written in that style. (See the second sketch after this list.)
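To make the first point concrete, here is roughly what "gathering and reformatting" means in practice: every piece of dialog has to end up as a structured training example. This is only a sketch; the character and the instruction/input/output JSONL layout are one common convention, not the required schema of any particular trainer.

```python
# Illustrative shape of a fine-tuning dataset: each example has to be
# found, cleaned, and written out in a consistent structure.
# The character ("Captain Vex") and field names are placeholders.
import json

examples = [
    {
        "instruction": "Respond in character as Captain Vex.",
        "input": "Do you trust the new navigator?",
        "output": "Trust is earned at the helm, not handed out at the dock.",
    },
    # ...hundreds more hand-curated pairs like this...
]

with open("vex_dialog.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```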

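And the second point in code: the same character with no fine-tuning at all, just a description and one sample line handed to a chat-tuned model. Again a sketch, assuming llama-cpp-python; the model path and character are placeholders.

```python
# Character roleplay from a prompt alone, no fine-tuning.
# Assumes llama-cpp-python; path and character are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-13b-chat.Q5_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": (
                "You are Captain Vex, a terse, superstitious ship captain. "
                "Sample line: 'Trust is earned at the helm, not handed out "
                "at the dock.' Stay in character."
            ),
        },
        {"role": "user", "content": "Do you trust the new navigator?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```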
[–] Blaed@lemmy.world 2 points 1 year ago

Loved reading everyone's comments on this one. If you're here and reading this post now, check out this related thread - you might be interested!

[–] librecat@lemmy.basedcount.com 2 points 1 year ago

Anything based on Llama 2, tbh. It's fast enough and logical enough to handle the kinds of programming-related tasks I want to use an LLM for (writing boilerplate code, generating placeholder data, simple refactoring). With the release of the Vicuna and CodeLlama models, things are getting even better. (A sketch of the placeholder-data use is below.)
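As a sketch of the placeholder-data task, assuming llama-cpp-python and a local CodeLlama Instruct GGUF (the path is a placeholder; the [INST] wrapper is the Llama-2-style instruct format these models were trained on):

```python
# Generating placeholder/test data with a code-tuned model.
# Assumes llama-cpp-python; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./codellama-13b-instruct.Q5_K_M.gguf", n_ctx=4096)

prompt = (
    "[INST] Write a Python list of 5 dicts of fake users with "
    "id, name, and email fields, suitable as test fixtures. "
    "Return only code. [/INST]"
)

out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```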

[–] j4k3@lemmy.world 2 points 1 year ago
  • Technical/general/code - Llama 2 70B GGUF Q5_K_M instruct

  • Learning - Llama 2 70B GGUF Q5_K_M chat

Chat/Roleplay

  • Pygmalion 2. The trick to using it in Oobabooga (without code mods) is to add the special tokens to the chat character profile sections. I can't type the tokens directly in Lemmy because of the way Lemmy is coded, but the readme for the model has the special token syntax (it's also sketched below). Put the user and character (model) tokens in front of the names in the top boxes, then start the context with the system token. Don't use the user or model tokens inside the context itself; use the names instead. This isn't perfect (you can't also use Silero TTS with this method, for example), but it works.
  • GPT4chan sourcing instructions and links are in the main Oobabooga readme.
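For anyone stuck on the token syntax mentioned above: the Pygmalion 2 readme documents three prompt tokens, and they can be shown in a code block here. A sketch of the described Oobabooga workaround, with the character name and context as placeholders:

```python
# The special tokens from the Pygmalion 2 readme:
SYSTEM = "<|system|>"
USER = "<|user|>"
MODEL = "<|model|>"

# The workaround described above, expressed as the values you would
# type into Oobabooga's character profile boxes (placeholders):
your_name = USER + "You"          # user name box
character_name = MODEL + "Aria"   # character name box
context = SYSTEM + (
    "Enter roleplay mode. Aria is a wandering bard. "
    "Inside the context itself, refer to participants by name "
    "(You, Aria), not by token."
)
```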