• admiralteal@kbin.social · 11 months ago

    That’s not really innovative, though. Automoderator bots have been sending out warnings like this based on simple keyword criteria for years.
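
    For reference, a minimal sketch of the kind of keyword rule those bots run; the patterns and canned warnings here are invented for illustration, not taken from any real automod config:

    ```python
    import re

    # Hypothetical rules: each keyword pattern triggers a canned warning.
    RULES = [
        (re.compile(r"\bfree\s+crypto\b", re.I),
         "Your post looks like spam. Please review the community rules."),
        (re.compile(r"\bself[- ]?promo\b", re.I),
         "Self-promotion requires prior mod approval."),
    ]

    def warnings_for(comment: str) -> list[str]:
        """Return every canned warning whose keyword pattern matches."""
        return [msg for pattern, msg in RULES if pattern.search(comment)]

    print(warnings_for("Get your free crypto here!"))
    ```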

    • TimeSquirrel@kbin.social · 11 months ago

      Yes, to suppress swearing or offensive content, not to suppress ideas. You could still talk about a touchy subject under a keyword filter by leaving out the flagged words and using substitutions.

      • admiralteal@kbin.social · 11 months ago · edited

        It could search for all kinds of keywords to enforce rules, though. For example, scanning titles for question phrasing to suggest a user check the FAQ/wiki, or matching keywords to flag probable off-topic posts. That sort of thing.
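
        A quick sketch of those heuristics; the question cues and off-topic terms are invented placeholders:

        ```python
        # Hypothetical heuristics; cue words and off-topic terms are made up.
        QUESTION_CUES = ("how do i", "how to", "what is", "why does", "?")
        OFF_TOPIC_TERMS = {"crypto", "giveaway", "referral"}

        def triage_title(title: str) -> str | None:
            """Suggest a canned mod note for a post title, or None."""
            lowered = title.lower()
            if any(cue in lowered for cue in QUESTION_CUES):
                return "Looks like a question - the FAQ/wiki may already cover it."
            if any(term in lowered for term in OFF_TOPIC_TERMS):
                return "Probable off-topic post - flagged for mod review."
            return None

        print(triage_title("How do I reset my password?"))
        print(triage_title("Crypto giveaway inside!"))
        ```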

        At the end of the day, is what the LLM bot is doing really any different? I’d say it’s more sophisticated, but it’s the same fundamental thing.

    • snooggums@kbin.social · 11 months ago

      Exactly.

      AI moderation is just word and phrase filtering. Phrase filtering wasn’t done before because it’s genuinely hard: the number of possible combinations of words and contexts is vast. And it has the same failure modes as word filtering: either it ends up overly restrictive to the point of hilarity, or it quickly proves that no matter what you filter, someone will find a way around it.
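
      A toy illustration of that evasion problem; the banned phrase and the workarounds are made up for the example:

      ```python
      import re

      # Hypothetical banned phrase for the sake of the example.
      banned = re.compile(r"\bbuy\s+gold\b", re.I)

      comments = [
          "Buy gold now!",        # caught by the filter
          "B-u-y g.o.l.d now!",   # trivial punctuation slips through
          "buy g0ld now!",        # so does a one-character swap
      ]

      for comment in comments:
          status = "FLAGGED" if banned.search(comment) else "missed"
          print(f"{status:8}| {comment}")
      ```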

      • admiralteal@kbin.social · 11 months ago · edited

        I mean, suppose the LLM bot is actually good at avoiding false positives/misunderstandings – doesn’t that simply remove one of the biggest weaknesses of old-fashioned keyword identification? I really just see this as a natural evolution of the technology and not some new, wild thing. It’s just an incremental improvement.

        What it absolutely does NOT do is replace the need for human judgement. You’ll still need an appeals process and a person at the wheel to deal with errors and edge cases. But it’s pretty easy to imagine an LLM bot doing at least as good a job as the average volunteer Reddit/Discord mod.

        Of course, it’s kind of a moot point. Running a full LLM bot as your automoderator, parsing every comment against some custom-designed model, would be expensive. I really cannot see it happening routinely, at least not with current tech costs. Maybe in a few years the prices will have come down enough, but not right now.
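
        A rough back-of-envelope for that cost claim, where every number below is an assumption rather than a real price:

        ```python
        # All figures are illustrative assumptions, not quoted prices.
        comments_per_day = 50_000        # a mid-sized community
        tokens_per_comment = 300         # comment text plus moderation prompt
        usd_per_million_tokens = 1.00    # assumed model pricing

        daily_tokens = comments_per_day * tokens_per_comment
        daily_cost = daily_tokens / 1_000_000 * usd_per_million_tokens
        print(f"~{daily_tokens:,} tokens/day -> "
              f"${daily_cost:.2f}/day, ${daily_cost * 365:,.0f}/year")
        ```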

        • snooggums@kbin.social · 11 months ago

          > suppose the LLM bot is actually good at avoiding false positives/misunderstandings

          No, I don’t think I will.