- cross-posted to:
- technology@lemmit.online
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology's ability to perform certain tasks.
My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs talking to each other. When those conversations were then fed back into the RL loop, we started seeing degradation similar to what's been in the news recently regarding image generation models. I believe this is the paper that got everybody talking about it: https://arxiv.org/pdf/2307.01850.pdf
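For anyone who hasn't seen the mechanism spelled out: the worry is that a model trained on its own (or another model's) outputs slowly drifts away from the original data distribution. Here's a minimal toy sketch of that idea, not taken from the linked paper: the "model" is just a 1-D Gaussian that gets refit each generation on samples drawn from the previous generation's fit, so you can watch the estimated statistics wander.

```python
# Toy sketch of self-consuming training: each "generation" fits a model
# (here just a 1-D Gaussian) to samples produced by the previous generation,
# instead of to real data. The fitted mean/std drift over generations.
import random
import statistics

random.seed(0)

# Generation 0: "real" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(10):
    # "Train" the model: estimate mean and std from the current dataset.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:+.3f}, std={sigma:.3f}")

    # The next generation trains only on samples drawn from the current model,
    # mimicking model outputs being fed back into the training loop.
    data = [random.gauss(mu, sigma) for _ in range(1000)]
```

Run it and the printed mean/std random-walk away from (0, 1); with fewer samples per generation the degradation gets much worse. It's obviously a cartoon compared to an actual RLHF pipeline, but it's the same feedback-loop shape.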
Is this peer-reviewed? They use a line in the discussion that seems relatively unprofessional, telling people to join a 12-step program if they like using artificial training data.
I think arXiv has no rule requiring a paper to be peer reviewed before uploading.
deleted by creator
Not affiliated with the paper in any way. Have just been following the news around it.
arXiv doesn't peer review papers; it's a preprint server.
Thank you