The article discusses the mysterious nature of large language models and their remarkable capabilities, focusing on the challenges of understanding why they work. Researchers at OpenAI stumbled upon unexpected behavior while training language models, highlighting phenomena such as “grokking” and “double descent” that defy conventional statistical explanations. Despite rapid advancements, deep learning remains largely trial-and-error, lacking a comprehensive theoretical framework. The article emphasizes the importance of unraveling the mysteries behind these models, not only for improving AI technology but also for managing potential risks associated with their future development. Ultimately, understanding deep learning is portrayed as both a scientific puzzle and a critical endeavor for the advancement and safe implementation of artificial intelligence.
It’s really so much worse than this article even suggests.
For example, one of the things it doesn’t really touch on is the unexpected results from the last year showing that a trillion-parameter network may develop capabilities which can then be passed on to a network less than a hundredth its size, by generating synthetic data from the larger model and feeding it into the smaller. (I doubt even a double-digit percentage of researchers would have expected that result before it showed up.)
Even weirder was a result where chain-of-thought (CoT) prompting was used to improve a model’s answers, and then the questions and final answers (but not the intermediate ‘chain’ from the CoT) were fed into a new model: the second network still picked up the content of the chain.
The degree to which very subtle details in the training data end up modeled seems to go beyond even some of the wilder expectations of researchers right now. Just this past week I saw a subtle psychological phenomenon I used to present about appearing very clearly, and very much by the book, in GPT-4 outputs given the correct social context. I didn’t expect that for at least another generation or two of models, and hadn’t expected the current SotA models to replicate it at all.
For the first time, two weeks ago, I saw an LLM code-switch to a different language because that language had a more fitting translation for the concept being discussed. There’s no way the statistically most likely continuation of a discussion of motivations in English was to drop into a language barely represented in English-speaking countries. This was with the new Gemini, which also seems to have internalized a bias towards symbolic representations in its generation, to the point that they appear to be filtering out emojis (in the past I’ve found examples where switching from nouns to emojis improves the critical reasoning abilities of models, as it breaks token-similarity patterns in favor of more abstracted capabilities).
Adding the transformer’s self-attention to diffusion models has suddenly resulted in correctly simulating things like fluid dynamics and other physics in Sora’s video generation.
We’re only just starting to unravel some of the nuances of self-attention, such as recognizing the attention sinks in the first tokens and the importance of preserving them across larger sliding context windows.
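The sliding-window detail above can be sketched concretely. Roughly following the StreamingLLM-style recipe (a sketch of the idea, not any paper’s actual implementation; the parameter values are arbitrary), the cache keeps the first few “sink” tokens plus the most recent window, and evicts the middle:

```python
# Sketch of attention-sink-aware cache eviction: keep the first few
# "attention sink" positions plus a recent window, drop the middle.
def windowed_cache(tokens, num_sinks=4, window=8):
    """Return the token positions that stay in the cache."""
    tokens = list(tokens)
    if len(tokens) <= num_sinks + window:
        return tokens  # everything still fits, nothing to evict
    return tokens[:num_sinks] + tokens[-window:]

print(windowed_cache(range(20)))
# keeps positions 0-3 (the sinks) and 12-19 (the recent window)
```

Naively sliding the window over all 20 positions would evict those first tokens, which is exactly the failure mode the sink observation explains.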
For the last year at least, especially after GPT-4 leapfrogged expectations, it’s very much been feeling as the article states: this field is eerily like physics in the early 20th century, when experimental results were regularly turning half a century of accepted theory on its head and generally dismissed fringe theories were suddenly being validated by multiple replicated results.
This article, along with others covering the topic, seems to foster an air of mystery about machine learning which I find quite off-putting.
Known as generalization, this is one of the most fundamental ideas in machine learning—and its greatest puzzle. Models learn to do a task—spot faces, translate sentences, avoid pedestrians—by training with a specific set of examples. Yet they can generalize, learning to do that task with examples they have not seen before.
Sounds a lot like Category Theory to me, which is all about abstracting rules as far as possible to form associations between concepts. This would explain other phenomena discussed in the article.
Like, why can they learn language? I think this is very mysterious.
Potentially because language structures can be encoded as categories. Any possible concept including the whole of mathematics can be encoded as relationships between objects in Category Theory. For more info see this excellent video.
He thinks there could be a hidden mathematical pattern in language that large language models somehow come to exploit: “Pure speculation but why not?”
Sound familiar?
models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on.
Maybe there is a threshold probability of a posited association being correct, and after enough iterations the model flipped it to “true”.
I’d prefer articles to discuss the underlying workings, even if speculative like the above, rather than perpetuating the “It’s magic, no one knows.” narrative. Too many people (especially here on Lemmy it has to be said) pick that up and run with it rather than thinking critically about the topic and formulating their own hypotheses.
Yeah pretty much this. My understanding of the way LLMs function is that they operate on statistical associations of words which would amount to categories in Category Theory. Basically the training phase is classifying words into categories based on the examples in the training input. Then when you feed it a prompt it just uses those categories to parse and “solve” your prompt. It’s not “mysterious” it’s just opaque because it’s an incredibly complicated model. Exactly the sort of thing that people are really bad at working with, but which computers are really good with.
“The magic is not that the model can learn math problems in English and then generalize to new math problems in English,” says Barak, “but that the model can learn math problems in English, then see some French literature, and from that generalize to solving math problems in French. That’s something beyond what statistics can tell you about.”
It is not magic and all this “it’s magic” discourse is IMO counter-productive. When a model does something interesting, people need to dig on what it’s doing and why, for better models; and by “interesting” I mean both accurate and inaccurate (enough of this “it’s hallu, move on!” nonsense).
And it’s still maths and statistics. Yes, even if it’s complex enough to make you lose track of it. To give you an example, it’s like trying to determine the exact position of every atom of oxygen and silicon in a quartz crystal, to know how it should behave: it should be doable, if not for the scale.
Now, explaining it: LLMs are actually quite good at translation (or at least better than other machine-based translation methods). Three things might be happening here:
- It converts the prompt into French, then operates on French tokens.
- It operates on English tokens, then converts the output to French tokens.
- It converts the logical problem itself into an abstract layer, then into French.
I find #1 unlikely, #2 the most likely, but the one that would interest me the most is #3. It would be closer to how humans handle language; we don’t really think too much by chaining morphemes (“tokens”), we mostly handle what those morphemes convey.
It would be far, far, far more interesting if this was coded explicitly into the model, but if it appeared as emergent behaviour it would be better than nothing.
Yep my sentiment entirely.
I had actually written a couple more paragraphs using weather models as an analogy akin to your quartz crystal example but deleted them to shorten my wall of text…
We have built up models which can predict what might happen to particular weather patterns over the next few days to a fair degree of accuracy. However, to get a 100% conclusive model we’d have to have information about every molecule in the atmosphere, which is just not practical when we have good enough models to get an idea of what is going on.
The same is true for any system of sufficient complexity.
It converts the prompt into French, then operates on French tokens. It operates on English tokens, then converts the output to French tokens. It converts the logical problem itself into an abstract layer, then into French.
What does any of that actually mean?
You download an LLM. Now what? How do you test this?
What does any of that actually mean?
I was partially rambling, so I expressed the three hypotheses poorly. A better way to convey it: which set of tokens is the LLM using to solve the problem? 1. French tokens, 2. English tokens, or 3. neither?
In #1 and #2 it’s still doing nothing “magic”, it’s just handling tokens as it’s supposed to. In #3 it’s using the tokens for something more interesting - still not “magic”, but cool.
You download an LLM. Now what? How do you test this?
For maths problems, I don’t know a way to test it. However, for general problems:
If the LLM is handling problems through the tokens of a specific language, it should fall into a similar “trap” as plenty of monolinguals do, when two or more concepts are conveyed through the same word and they confuse said concepts.
For example, let’s say that we train an LLM with the following corpora:
- An English corpus talking about software, but omitting any clarification distinguishing between free “unrestricted” (as in Linux) and free “costless” (as in Skype).
- A French corpus that includes the words “libre” (free/unrestricted) and “gratuit” (free/costless), with enough context to associate each with its semantic field, and to associate both with English “free”.
Then we start asking it about free software, in both languages. Will the LLM be able to distinguish between both concepts?
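One way to score that probe could be sketched as below. Everything here is an invented stand-in: the canned answer strings substitute for actual model output, and the keyword check substitutes for proper human evaluation — the point is only what a pass/fail would look like:

```python
def distinguishes(answer):
    """Crude pass/fail: does the answer separate freedom from price?
    (Substring matching is only a stand-in for real evaluation.)"""
    text = answer.lower()
    mentions_freedom = any(w in text for w in ("freedom", "liberté"))
    denies_equivalence = any(w in text for w in ("not ", "non,"))
    return mentions_freedom and denies_equivalence

# Hypothetical model outputs, invented for illustration:
english_answer = "No, free software refers to freedom, not price."
french_answer = "Non, un logiciel libre concerne la liberté, pas le prix."

print(distinguishes(english_answer), distinguishes(french_answer))  # True True
```

If the model passes in French (where the training corpus separates the two senses) but fails in English (where it doesn’t), that would suggest it is leaning on language-specific tokens rather than an abstract layer.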
This makes some very strong assumptions about what’s going on inside the model. We don’t know that we can think of concepts as being internally represented or that these concepts would make sense to humans.
Suppose a model sometimes seems to confuse the concepts. There will be wrong examples in the training data. For all we know, it may have learned that this should be done if there was an odd number of words since the last punctuation mark.
To feed text into an LLM, it has to be encoded, and the usual encoding schemes are meant for other purposes and not suitable. Instead, a text is broken down into tokens. A token can be a single character or an emoji, part of a word, or even more than a word. Each token is represented by a number, and those numbers are what the model takes as input and gives as output. Inside the model, each token ID is then mapped to a vector of numbers called an embedding.
The process of turning tokens into embeddings is itself learned, as its own layer of the neural net, and the resulting numbers should already relate to meaning. Because of the way tokenizers are trained, English words are often a single token, while words from other languages are dissected into smaller parts.
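As a toy sketch of that first step (a real tokenizer such as BPE learns its vocabulary from data; this hard-coded table is invented purely to illustrate the shape of the mapping), note how each English word maps to a single ID while the French words get split into pieces:

```python
# Toy vocabulary, invented for illustration. A real tokenizer's
# vocabulary is learned from a corpus, not hand-written like this.
VOCAB = {"free": 0, "software": 1, "lib": 2, "re": 3, "grat": 4, "uit": 5}

def tokenize(text):
    """Greedily match the longest known piece at each position."""
    ids = []
    for word in text.lower().split():
        i = 0
        while i < len(word):
            for j in range(len(word), i, -1):
                if word[i:j] in VOCAB:
                    ids.append(VOCAB[word[i:j]])
                    i = j
                    break
            else:
                raise ValueError(f"unknown piece in {word!r}")
    return ids

print(tokenize("free software"))  # [0, 1] - one ID per English word
print(tokenize("libre gratuit"))  # [2, 3, 4, 5] - each French word split in two
```

Under this kind of split, whatever the model “knows” about “libre” is spread across sub-word pieces rather than attached to one token, which is part of why cross-lingual behaviour is hard to reason about.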
If an LLM “thinks” in tokens, then that’s something it has learned. If it “knows” that a token has a language, then it has learned that.
This makes some very strong assumptions about what’s going on inside the model.
I explicitly marked the potential explanations as “hypotheses”, acknowledging that this shit that I said might be wrong. So no, I am clearly not assuming (i.e. taking the dubious for certain).
We don’t know that we can think of concepts as being internally represented or that these concepts would make sense to humans. [implied: “you’re assuming that LLMs represent concepts internally.”]
The implication is incorrect.
“Concept” in this case is simply a convenient abstraction, based on how humans would interpret the output. I’m not claiming that the LLM developed them as an emergent behaviour. If the third hypothesis is correct it would be worth investigating that, but as I said, I’m placing my bets on the second one.
The focus of the test is to understand how the LLM behaves based on what we know that it handles (tokens) and something visible for us (the output).
Feel free to suggest other tests that you believe might throw some light on the phenomenon from the article (an LLM trained on English maths problems being able to solve them in French).