• snooggums@lemmy.world · 21 points · 7 days ago (edited)

    By design, LLMs can get faster but cannot get more accurate without a massive, intentional effort to verify their training data, and that isn’t feasible: LLMs don’t understand context, so verification would choke on anything that isn’t purely fact-based. In practice, the training approach means they get filled with whatever the builders can get their hands on, and then they fall back to web searches, which return all kinds of unreliable material, because LLMs have no way of judging a source’s reliability.

    Even if they were perfect, when used as general-purpose answer-anything tools they could never keep up with the flood of new information published every minute.

    What AI actually excels at is pattern matching in controlled settings.
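
    To make “controlled setting” concrete, here’s a minimal sketch (my own illustration, using scikit-learn’s bundled digits dataset, not anything from the article): the data is finite, labeled, and curated up front, which is exactly what the open web is not.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # A fixed, labeled, curated dataset: the "controlled setting".
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # Pure pattern matching: classify each digit by its nearest labeled neighbors.
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    print(f"accuracy: {clf.score(X_test, y_test):.3f}")  # high, because the domain is closed
    ```

    In a closed domain like this, accuracy is measurable and the data never changes underneath you; neither holds for a general-purpose answer engine.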

    • slate@sh.itjust.works · 17 points · 7 days ago

      And now, lots of web searches return AI-generated SEO slop chock full of incorrect information, which then fuels subsequent training sets and LLM web searches, creating a self-reinforcing feedback loop that could destroy the internet.
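
      Here’s a toy simulation of that loop (every number is a made-up assumption, purely to show the shape of the dynamic, not a real measurement): model output with some error rate gets published to the web, scraped, and folded back into the next training pool.

      ```python
      # Toy model of the loop above. All parameters are illustrative assumptions.
      human_error = 0.05   # assumed error rate of human-written content
      amplify = 1.2        # assumed: a model ends up a bit less accurate than its data
      ai_share = 0.0       # fraction of the training pool that is AI-generated
      pool_error = human_error

      for gen in range(1, 9):
          model_error = min(1.0, amplify * pool_error)   # model inherits and amplifies errors
          ai_share = min(1.0, ai_share + 0.15)           # assumed growth of slop's share of the web
          pool_error = (1 - ai_share) * human_error + ai_share * model_error
          print(f"gen {gen}: AI share {ai_share:.0%}, training-pool error {pool_error:.1%}")
      ```

      Under these assumptions the pool’s error rate compounds once AI text dominates the corpus, which is the “model collapse” failure mode people worry about.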

      • snooggums@lemmy.world · 11 points · 7 days ago

        The AI SEO slop is already destroying the internet, though that feedback loop is certainly accelerating it.

      • Ramblingman@lemmy.world · 1 point · 5 days ago

        Apparently GPT-5 is much worse, or so the subreddit dedicated to it says. I wonder if that loop has already started?