We can generate AI images, we can generate AI text, but text in an image is a no-go?

  • gerryflap@feddit.nl · 11 months ago

    Generating meaningful text in an image is very complex. Most of these models, like DALL-E and Stable Diffusion, are essentially guided denoising algorithms. They get images of pure noise and are told that it’s actually just a very noisy image of whatever the description is. So all they do is remove some noise for many steps in a row until a clear image emerges. You can kinda imagine it as the “AI” staring into the noise to see the image that you described.
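
    To make that loop concrete, here’s a minimal sketch in Python of the guided-denoising idea. It’s a toy: `predict_noise` is a hypothetical placeholder for the big trained network (a U-Net in models like Stable Diffusion), and the update rule is much simpler than what real samplers do.

    ```python
    # Toy guided-denoising loop: start from pure noise and repeatedly subtract
    # the noise the (stand-in) model claims to see, conditioned on the prompt.
    import numpy as np

    def predict_noise(image: np.ndarray, prompt: str, step: int) -> np.ndarray:
        # Hypothetical placeholder: a real model predicts the noise component
        # of `image`, conditioned on the text prompt and the current step.
        return image * 0.02  # pretend a small fraction of the image is noise

    def generate(prompt: str, steps: int = 50, shape=(64, 64, 3)) -> np.ndarray:
        image = np.random.randn(*shape)        # start from pure noise
        for step in reversed(range(steps)):
            image = image - predict_noise(image, prompt, step)
        return image                           # a (toy) "denoised" image

    img = generate("a red car at sunset")
    ```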

    Most real-world objects are of course quite complex. If the model sees a tree branch in the noise, it also needs to make sure that the rest of the tree fits. And a car headlight only makes sense if the rest of the car is also there. But for text, these kinds of correlations are way, way harder. In order to generate meaningful text, the model not only needs to understand how text is usually spaced and that letters are usually written in a consistent font, it also needs to learn the entire English language. All that just to generate something that probably contributes less to its overall “score” on images from the dataset than learning how to draw a realistic car.

    So in order to generate meaningful text, the model requires a lot of capacity. Otherwise, since it’s not specifically motivated to learn to write meaningful text, it’ll keep doing whatever it’s doing now. Honestly, I’m sometimes quite impressed with how well these models do generate text, given all these considerations.

    EDIT: A few more things came to mind:

    • Relating images and text (and thus guiding the image generator) was historically done using a separate (AI) model; I’m not sure if that’s still the case. So two models need to understand the English language to generate meaningful text: the generator and the image-to-text translation model. There’s a rough sketch of this scoring idea after this list.

    • So why can AI like ChatGPT generate meaningful text? Well, in short, they are fully dedicated to outputting language. They output text as text and can thus be easily scored on it. The neural network architecture is also way more suited to it, and they see way more text.
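
    As promised above, here’s a rough sketch of how a separate text-image model (CLIP-style) can score how well an image matches a prompt; the denoising loop can then be nudged toward images that score higher. Both encoders below are hypothetical stubs standing in for large trained networks, so treat this as an illustration of the idea, not real code from any of these systems.

    ```python
    # Hypothetical CLIP-style scoring: embed the image and the prompt into a
    # shared vector space and measure how well they line up.
    import numpy as np

    def encode_text(prompt: str) -> np.ndarray:
        # Stub: a real text encoder is a large trained network. Here we just
        # derive a deterministic pseudo-embedding from the prompt.
        rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
        return rng.standard_normal(512)

    def encode_image(image: np.ndarray) -> np.ndarray:
        # Stub: a real image encoder is also a large trained network.
        return np.resize(image.ravel(), 512)

    def match_score(image: np.ndarray, prompt: str) -> float:
        a, b = encode_image(image), encode_text(prompt)
        # Cosine similarity: higher means the image fits the prompt better.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    score = match_score(np.random.randn(64, 64, 3), "a cat wearing a hat")
    ```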

    • Iamdanno@lemmynsfw.com · 11 months ago

      > So all they do is remove some noise for many steps in a row until a clear image emerges.

      So it’s like Mark Twain(?) said: writing is easy, all you do is write everything down, then cross out all the wrong words. Or something to that effect.