Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis
Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • kromem@lemmy.world · 9 months ago

    > It’s putting human biases on full display at a grand scale.

    Not human biases. Biases in the labeled data set. Those can sometimes correlate with human biases, but they don’t have to.

    > But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

    Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
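
    A minimal sketch of that split, assuming a generic chat-model-plus-diffusion-model setup rather than Gemini’s actual internals; every class and function name here is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class GeneratedImage:
    prompt_used: str
    pixels: bytes  # placeholder for real image data


class DiffusionModel:
    """Stand-in for a text-to-image diffusion model."""

    def generate(self, prompt: str) -> GeneratedImage:
        # A real model would run an iterative denoising loop here.
        return GeneratedImage(prompt_used=prompt, pixels=b"...")


class ChatModel:
    """Stand-in for the LLM front-end."""

    def __init__(self, image_model: DiffusionModel):
        self.image_model = image_model

    def handle(self, user_prompt: str) -> GeneratedImage:
        # The LLM's role in image generation is limited to crafting the
        # prompt that gets sent to the diffusion model and returning the
        # result; it never produces pixels itself.
        rewritten = f"A photorealistic image of {user_prompt}"
        return self.image_model.generate(rewritten)


result = ChatModel(DiffusionModel()).handle("a 1940s street scene")
print(result.prompt_used)
```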

      • kromem@lemmy.world · 9 months ago

        If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

        Data can be biased in a number of ways that don’t always reflect broader social biases, and even when they appear to, the causation-versus-correlation question behind the parallel isn’t necessarily straightforward.
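
        One toy way to see that kind of source-specific skew, using made-up captions that stand in for real datasets: compare how often a tag like “smiling” appears across sources.

```python
def tag_rate(captions: list[str], tag: str) -> float:
    """Fraction of captions that mention the given tag."""
    hits = sum(1 for caption in captions if tag in caption.lower())
    return hits / len(captions) if captions else 0.0


# Invented captions standing in for two very different image sources.
stock_captions = [
    "smiling woman holding a coffee cup",
    "smiling businessman shaking hands",
    "group of friends smiling at the camera",
    "smiling doctor with a clipboard",
]
candid_captions = [
    "commuters waiting on a rainy platform",
    "man reading a newspaper on a bench",
    "smiling child chasing a pigeon",
    "queue outside a bakery at dawn",
]

print("stock rate:", tag_rate(stock_captions, "smiling"))    # 1.0
print("candid rate:", tag_rate(candid_captions, "smiling"))  # 0.25
```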

        • VoterFrog@lemmy.world · 9 months ago

          I mean, “taking pictures of people who are smiling” is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

          I get what you’re saying in specific circumstances. Sure, a dataset that is built from a single source doesn’t make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we’ve built a culture around.

          • kromem@lemmy.world · 9 months ago (edited)

            Except these kinds of data-driven biases can creep in from all sorts of places.

            Is there a bias in which images have labels and which don’t? Did they focus only on English labeling? Did they use a vision-based model to add synthetic labels to unlabeled images, and if so, did the labeling model introduce biases?

            Just because the sampling is broad doesn’t mean the processes involved don’t introduce procedural bias distinct from social biases.
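
            For the synthetic-labeling case in particular, here is a rough sketch (with a fake captioner standing in for a real vision model) of how the labeling model’s own habits end up baked into the training labels:

```python
def stub_captioner(image_id: str) -> str:
    """Pretend vision model whose output habits stand in for learned priors."""
    # A real captioning model's tendencies come from its own training data;
    # here one tendency is hard-coded so the effect is visible.
    return f"a photo of a smiling person ({image_id})"


unlabeled_images = ["img_001", "img_002", "img_003"]

# Every synthetic label inherits whatever the captioner tends to say,
# regardless of what the images actually contain: a procedural bias,
# not a social one.
synthetic_labels = {img: stub_captioner(img) for img in unlabeled_images}

for img, label in synthetic_labels.items():
    print(img, "->", label)
```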