We Asked A.I. to Create the Joker. It Generated a Copyrighted Image.

Artists and researchers are exposing copyrighted material hidden within A.I. tools, raising fresh legal questions.

  • KinNectar@kbin.run
    +60/-6 · 10 months ago

    Copyright issues aside, can we talk about how this implies accurate recall of an image at a never-before-achievable data compression ratio? If these models can actually recall the images they have been fed, this could be a quantum leap in compression technology.

    • Mirodir@discuss.tchncs.de
      +34 · edited · 10 months ago

      It’s not as accurate as you’d like it to be. Some issues are:

      • It’s quite lossy.
      • It’ll do better on images containing common objects vs rare or even novel objects.
      • You won’t know how much the result deviates from the original if all you’re given is the prompt/conditioning vector and what model to use it on.
      • You cannot easily “compress” new images; instead you would have to either fine-tune the model (at which point you’d also mess with everyone else’s decompression) or run an adversarial attack on the model with another model to find the prompt/conditioning vector most likely to produce something as close as possible to the original image you have.
      • It’s rather slow.

      Also, it’s not all that novel. People have been doing this with (variational) autoencoders, another class of generative model. That approach also avoids the flaw of having no easy way to compress new images, since an autoencoder is a trained encoder/decoder pair. It’s also quite a bit faster than diffusion models when it comes to decoding, though often with a greater loss in quality.

      Most widespread diffusion models even use an autoencoder-adjacent architecture to “compress” the input. The actual diffusion model then works in that compressed data space, called latent space. The generated images are then decompressed before being shown to users. Last time I checked, iirc, that compression rate was at around 1/4 to 1/8, but it’s been a while, so don’t quote me on this number.

      edit: fixed some ambiguous wordings.
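To put rough numbers on the latent-space point above, here is a sketch assuming Stable Diffusion 1.5's commonly cited VAE configuration (8× spatial downsampling into 4 latent channels); it counts tensor elements, not bytes, and the resolution is just an example:

```python
# Latent-space size vs. pixel size for a 512x512 RGB image, assuming
# an SD-1.5-style VAE: 8x spatial downsampling into 4 latent channels.
h, w = 512, 512
pixel_values = h * w * 3                 # 786,432 RGB values
latent_values = (h // 8) * (w // 8) * 4  # 64 * 64 * 4 = 16,384 latent values
ratio = pixel_values / latent_values
print(ratio)  # 48.0 -- far fewer values, though latents are floats, not bytes
```

The byte-level ratio is smaller since latents are stored as floats rather than 8-bit channel values, which may be part of why quoted figures vary.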

    • TORFdot0@lemmy.world
      +18/-2 · 10 months ago

      You can hardly consider it compression when you need a compute-expensive model weighing hundreds of gigabytes (if not more) to accurately rehydrate it.

      • TheRealKuni@lemmy.world
        +7 · edited · 10 months ago

        You can hardly consider it compression when you need a compute-expensive model weighing hundreds of gigabytes (if not more) to accurately rehydrate it.

        You can run Stable Diffusion with custom models, variational autoencoders, LoRAs, etc., on an iPhone from 2018. I don’t know what the NYTimes used, but AI image generation is surprisingly cheap once the hard work of creating the models is done. Most SD 1.5 model checkpoints are around 2 GB in size.

        Edit: But yes, the idea of using this as image compression is absurd.

    • azuth@sh.itjust.works
      +16/-2 · 10 months ago

      If you ignore the fact that the generated images are not accurate, maybe.

      They are similar enough to be infringing, but nobody would use this method for compression over a proper image codec.

    • linearchaos@lemmy.world
      +10/-1 · 10 months ago

      I was thinking about this back when they first started talking about news articles coming back word for word.

      There’s no way for us to tell how much of the original data, even in a lossy fashion, can be directly recovered. If this were as common as these articles would lead you to believe, you’d just be able to pull anything you wanted out on demand.

      But here we have every news agency vying to make headlines about copyright infringement, and we’re seeing an article here and there with a close or relatively close result.

      There are millions and millions of people using this technology and most of us aren’t running across blatant full screen reproductions of stuff.

      You can tell from some of the artifacts that they’ve trained on some watermarked images, because the watermarks kind of show up. But for the most part you wouldn’t know who made the watermark if the watermarking companies didn’t all use rather distinctive patterns.

      The image of the Joker that we’re seeing on this news site is quite exceptional, even by lossy standards, but honestly it’s just feeding the confirmation bias.

      • mindlesscrollyparrot@discuss.tchncs.de
        +2/-1 · 10 months ago

        “How much of the data is the original data?”

        Even if you could reverse the process perfectly, what you would prove is that something fed into the AI was identical to a copyrighted image. But the image’s license isn’t part of that data. The question is: did the license cover use as training data?

        In the case of watermarked images, the answer is clearly no, so then the AI companies have to argue that only tiny parts of any given output come from any given source image, so it still doesn’t violate the license. That’s pretty questionable when watermarks are visible.

        In these examples, it’s clear that all parts of the image come directly or indirectly (perhaps some source images were memes based on the original) from the original, so there goes the second line of defence.

        The fact that the quality is poor is neither here nor there. You can’t run an image through a filter that adds noise and then say it’s no longer copyrighted.

        • wewbull@iusearchlinux.fyi
          +2/-3 · 10 months ago

          The trained model is a work derived from masses of copyrighted material. Distribution of that model is infringement, the same as distributing copies of movies. Public access to that model is infringement, just as a public screening of a movie is.

          People keep thinking it’s “the picture the AI drew” that’s the issue. They’re wrong. It’s the “AI” itself.

    • AFaithfulNihilist@lemmy.world
      +9/-2 · edited · 10 months ago

      ChatGPT is over 500 gigs of training data plus over 300 gigs of RAM, and Sam Altman has been quite adamant that another order of magnitude of storage capacity is needed in order to advance the tech.

      I’m not convinced that these are compressed much at all. I would bet this image in its entirety is actually stored in there someplace albeit in an exploded format.

      • フ卂ㄖ卄乇卂卄@lemy.lol
        +2/-3 · 10 months ago

        I purchased a 128 GB flash drive for around $12–15 (I forget the exact price) last year, and on Amazon there are 10 TB hard drives for $100. So the actual storage doesn’t seem to be an issue.

        RAM is expensive: 128 GB of RAM on Amazon is $500.

        But then again, I am talking about the consumer grade stuff. It might be different for the people who are making AI’s as they might be using the industrial/whatever it’s called grade stuff.

        • AFaithfulNihilist@lemmy.world
          +5 · 10 months ago

          It depends on what kind of RAM you’re getting.

          You could get a Dell R720 with two processors and 128 gigs of RAM for $500 right now on eBay, but it’s going to be several generations old.

          I’m not saying that the model is taking up astronomical amounts of space, but it doesn’t have to store movies or even high resolution images. It is also not being expected to know every reference, just the most popular ones.

          I have a 120 TB storage server in the basement, so the footprint of this learning model is not particularly massive by comparison. But it does contain this specific whole Joker image; it’s not something that could have been generated without the original to draw from.

          In order to build a bigger model they would need not necessarily just more storage but actually a new way of having more and faster RAM connected to lower latency storage. LLMs are the kinds of software that become hard to subdivide to be distributed across purpose-built arrays of hardware.

    • antihumanitarian@lemmy.world
      +7/-1 · 10 months ago

      Compression is actually a mathematical field that’s fairly well explored, and this isn’t compression. There are theoretical limits on how much you can compress data, so the data is always somewhere, either in the dictionary or the input. Trained models like these are gigantic, so even if it was perfect recall the ratio still wouldn’t be good. Lossy “compression” is another issue entirely, more of an engineering problem of determining how much data you can throw out while making acceptable compromises.
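The theoretical limit mentioned above can be made concrete with Shannon entropy, which lower-bounds any lossless code; a toy sketch (illustrative, not production code) using a simple per-byte model:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution: a lower bound, in bits
    per byte, on any lossless code for data drawn from it."""
    n = len(data)
    counts = Counter(data)
    # Sum of p * log2(1/p) over observed byte values.
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy_bits_per_byte(b"aaaaaaaa"))        # 0.0 -- fully redundant
print(entropy_bits_per_byte(bytes(range(256))))  # 8.0 -- incompressible
```

This is why "the data is always somewhere": you cannot losslessly get below the entropy, no matter how clever the dictionary.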

    • LadyAutumn@lemmy.blahaj.zone
      +4 · 10 months ago

      Results vary wildly. Some images are near pixel-perfect. With others, it clearly knows what image it is intended to replicate: it gets all the conceptual pieces in the right places but fails to render an exact copy.

      Not a very good compression ratio if the image you get back isn’t the one you wanted, but merely an image that is conceptually similar.

    • JPAKx4@lemmy.blahaj.zone
      +3 · 10 months ago

      I mean, only if you have the entire model downloaded and your computer does a ton of work to figure it out. And then, if any new images are created, the model will have to be retrained. Maybe if there were a bunch of color presets everyone had already downloaded, and you only sent data describing changes to the image.

    • peopleproblems@lemmy.world
      +0/-2 · 10 months ago

      Holy shit I didn’t even think about that.

      Essentially the model is compressing the image into a prompt.

      Instead of an 8 MB bitmap being condensed down to whatever the JPEG equivalent is, it’s condensed down to nothing more than a text file with the exact prompt that generated it.
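Back-of-envelope numbers for that idea (the prompt string is hypothetical, and the 2 GB figure matches the SD 1.5 checkpoint size mentioned elsewhere in the thread):

```python
# If "prompt -> image" were a codec, the prompt is tiny, but the shared
# "dictionary" (the model itself) dwarfs any single image.
bitmap_bytes = 8 * 1024 * 1024                  # the 8 MB bitmap above
prompt = "movie still of the joker on a stage"  # hypothetical prompt
model_bytes = 2 * 1024**3                       # ~2 GB checkpoint

print(len(prompt.encode("utf-8")))  # 35 bytes of "compressed" data...
print(model_bytes // bitmap_bytes)  # ...plus a dictionary 256x the image
```

So the per-image payload is tiny only because an enormous shared dictionary is amortized across every image, which is the catch the replies above point out.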

    • timetravel@lemmings.world
      +2/-5 · 10 months ago

      I made a novel type of language model, and from my calculations, after about 30 GB it would cross over an event horizon of compression, where it would hold infinitely more pieces of text without getting bigger. With a smaller vocabulary it would do this at a smaller size. For images it’s still pretty lossy, but it’s pretty cool. Honestly, I can’t picture it much better without drawing it out.

      • owen@lemmy.ca
        +1 · 10 months ago

        Hmm this sounds like a similar technology to the time cube

  • gmtom@lemmy.world
    +62/-20 · 10 months ago

    God I fucking hate this braindead AI boogeyman nonsense.

    Yeah, no shit: if you ask the AI to create a picture of a specific actor from a specific movie, it’s going to look like a still from that movie.

    Or if you ask it to create “an animated sponge wearing pants,” it’s going to give you SpongeBob.

    You should think of these AIs as if you were asking an artist friend of yours to draw a picture for you. If you say “draw an Italian video game character,” then obviously they’re going to draw Mario.

    And also I want to point out that they interview some professor of English for some reason, but they never interview, say, a professor of computer science and AI. They don’t want people who actually know what they’re talking about giving logical answers; they want random bloggers making dumb tests and “exposing” AI and how it steals everything, because that’s what gets clicks.

  • antihumanitarian@lemmy.world
    +36/-3 · edited · 10 months ago

    This is a classic problem for machine learning systems, sometimes called overfitting or memorization. By analogy, it’s the difference between knowing how to do multiplication vs. just memorizing the times tables. With enough training data and large enough storage, AI can feign higher “intelligence,” and that is demonstrably what’s going on here. It’s a spectrum as well. In theory, nearly identical recall is undesirable, and there are known ways of shifting away from that end of the spectrum. Literal AI 101 content.

    Edit: I don’t mean to say that machine learning as a technique has problems, I mean that implementations of machine learning can run into these problems. And no, I wouldn’t describe these as being intelligent any more than a chess algorithm is intelligent. They just have a much more broad problem space and the natural language processing leads us to anthropomorphize it.
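The times-tables analogy above can be made literal with a toy illustration (these "models" are hand-written stand-ins, not actual learned systems):

```python
# A "model" that memorized its training set vs. one that learned the rule.
TRAIN = {(a, b): a * b for a in range(1, 10) for b in range(1, 10)}

def memorizer(a: int, b: int):
    # Perfect recall on training pairs, no answer for anything novel.
    return TRAIN.get((a, b))

def generalizer(a: int, b: int) -> int:
    # "Understood" multiplication, so it extrapolates beyond the table.
    return a * b

print(memorizer(3, 4), generalizer(3, 4))    # 12 12 -- both fine in-distribution
print(memorizer(12, 7), generalizer(12, 7))  # None 84 -- memorizer fails off the table
```

Real models sit between these extremes, which is why regurgitated training images are evidence of the memorization end of the spectrum.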

    • KeenFlame@feddit.nu
      +1/-20 · 10 months ago

      No, it is not. Nobody calls what is going on here intelligence. They trained a model to draw this, so that is what it does. Nothing here has anything to do with any problems in machine learning.

          • gamermanh@lemmy.dbzer0.com
            +1/-2 · 10 months ago

            Ellipses are used in quotes to remove irrelevant parts without changing the meaning of the sentence. Makes it take less time to quote someone

            Apparently you’re unfamiliar with basic concepts of the language we’re using here

            • KairuByte@lemmy.dbzer0.com
              +2/-1 · edited · 10 months ago

              You typically wrap those ellipses in square brackets when making such a change. In fact, you do so with any editorial changes to a quote to make things more clear.

              For example, if Mike was quoted about the war in Ukraine as saying “I just think this whole thing is silly, they should stop” you could alter the quote as such: Mike said “I just think […] [Russia] should stop.”

  • silentdon@lemmy.world
    +42/-10 · 10 months ago

    We asked A.I. to create a copyrighted image from the Joker movie. It generated a copyrighted image as expected.

    Ftfy

    • Fisk400@feddit.nu
      +20/-10 · 10 months ago

      What it proves is that they are feeding entire movies into the training data. It is excellent evidence for when WB and Disney decide to sue the shit out of them.

      • DudeDudenson@lemmings.world
        +23/-3 · 10 months ago

        Does it really have to be entire movies when there’s a ton of promotional images and memes with similar images?

        • Jarix@lemmy.world
          +5/-2 · 10 months ago

          Yes. That’s what these things are: extremely large catalogues of data. As much data as possible is their goal.

          • EdibleFriend@lemmy.world
            +10 · 10 months ago

            True, but it didn’t pick some random frame somewhere in the movie; it chose an extremely memorable shot that is posted all over the place. I won’t deny that they are probably feeding it movies, but this is not a sign of that.

            This image is literally the top result on Google images for me.

            • Jarix@lemmy.world
              +2/-2 · edited · 10 months ago

              Why would it pick some random frame from the middle of its data set instead of the frame it has the most references to? It can still use all those other frames and then pick the one it has the most references to.

              But I’m starting to think maybe I misunderstood the comment I replied to.

              Sorry, I’m way out of context with my reply; totally my fault for reflexively replying.

              Uhhh, would you accept that I hadn’t had my coffee or gotten out of bed yet as an explanation?

      • Mirodir@discuss.tchncs.de
        +12/-1 · edited · 10 months ago

        I think it’s much more likely that whatever scraping they used to get the training data snatched a screenshot of the movie some random internet user posted somewhere. (To confirm, I typed “joaquin phoenix joker” into Google, and this very image was very high up in the image results.) And of course not only this one but many, many more too.

        Now I’m not saying scraping copyrighted material is morally right either, but I’d doubt they’d just feed an entire movie frame by frame (or randomly spaced screenshots from throughout a movie), especially because it would make generating good labels for each frame very difficult.

        • otp@sh.itjust.works
          +5 · 10 months ago

          I just googled “what does joker look like” and it was the fourth hit on image search.

          Well, it was actually an article (unrelated to AI) that used the image.

          But then I went simpler – googling “joker” gives you the image (from the IMDb page) as the second hit.

      • Even_Adder@lemmy.dbzer0.com
        +10/-3 · edited · 10 months ago

        The way it was done, if I remember correctly, is that someone found out v6 was trained partially on Stockbase image–caption pairs, so they went to Stockbase, found some images, and used those exact tags in the prompts.

      • orclev@lemmy.world
        +6/-5 · 10 months ago

        WB and Disney would lose, at least without an amendment to copyright law. In fact, that just happened in one court case: it was ruled that using a copyrighted work to train AI does not violate that work’s copyright.

        • asret@lemmy.zip
          +7/-1 · 10 months ago

          Using it to train on is very different from distributing derived works.

            • asret@lemmy.zip
              +4 · 10 months ago

              Something transformative from the original works, and arguably not being distributed. The model producing and distributing derivative works is entirely different, though. No one really gives a shit about data being used to train models; there’s nothing infringing about that, which is exactly why they won their case. The example in the post is an entirely different situation, though.

      • LainTrain@lemmy.dbzer0.com
        +2/-5 · 10 months ago

        I have that exact same .jpeg stored on my computer and I don’t even know where it came from. I don’t even watch superhero films.

        • wildginger@lemmy.myserv.one
          +6/-1 · 10 months ago

          And if you tried to sell that, you would be breaking the law.

          Which is what these AI models are doing

          • LainTrain@lemmy.dbzer0.com
            +3/-4 · 10 months ago

            They’re not selling it though, they’re selling a machine with which you could commit copyright infringement. Like my PC, my HDD, my VCR…

            • wildginger@lemmy.myserv.one
              +5/-1 · 10 months ago

              No, they are selling you time in a digital room with a machine, and all of the things it spits out at you.

              You dont own the program generating these images. You are buying these images and the time to tinker with the AI interface.

              • LainTrain@lemmy.dbzer0.com
                +1/-1 · edited · 9 months ago

                I’m not buying anything, most AI is free as in free beer and open source e.g. Stable Diffusion, Mistral…

                Unlike hardware it’s actually accessible to everyone with sufficient know-how.

                • wildginger@lemmy.myserv.one
                  +1/-1 · 9 months ago

                  You’re pretty young, huh. When something on the internet from a big company is free, you’re the product.

                  You’re bug- and stress-testing their hardware and giving them free advertising, while using the cheapest, lowest-quality version that exists, and only for as long as they need the free QA.

                  The real AI, and the actual quality outputs, cost money. And once they are confident in their server stability, the scraps you’re picking over will get a price tag too.

    • Rentlar@lemmy.ca
      +7 · 10 months ago

      When they asked for an Italian video game character, it returned something with an unmistakable resemblance to Mario, along with other Nintendo property like Luigi, Toad, etc. So you don’t even have to ask for a “screen capture” directly for it to use things that are clearly based on copyrighted characters.

      • sir_reginald@lemmy.world
        +7/-6 · edited · 10 months ago

        You’re still asking for a character from a video game, which implies copyrighted material. Write the same thing into Google and look at the images: you get what you ask for.

        You can’t, obviously, use any image of Mario for anything outside fair use, no matter whether it’s AI-generated or you got it from the internet.

        • doctorcrimson@lemmy.world
          +3/-3 · 10 months ago

          But the AI didn’t credit the clear inspiration. That’s the problem, that is what makes it theft: you need permission to profit off of the works of others.

          • sir_reginald@lemmy.world
            +2/-1 · 10 months ago

            you need permission to profit off of the works of others.

            But that’s exactly what I said. You can’t grab an image of Mario from Google and profit from it, just as you can’t draw fan art of Mario and profit from it, and just as you can’t generate an image of Mario and profit from it.

            It doesn’t matter if you’re generating it with software or painting it on canvas, if it contains intellectual property of others, you can’t (legally) use it for profit.

            however, generating it and posting it as a meme on the internet falls under fair use, just like using original art and making a meme.

            • doctorcrimson@lemmy.world
              +1/-2 · edited · 10 months ago

              The users are allowed to ask for those things

              The AI company should not be allowed to give it in return for monetary gain.

      • Jilanico@lemmy.world
        +3/-4 · edited · 10 months ago

        If you asked me to draw an Italian video game character, I’d draw Mario too. Why can’t an AI make copyrighted character inspired pics as long as they aren’t being sold?

        • doctorcrimson@lemmy.world
          +3 · 10 months ago

          You credited it just now as Mario, a Nintendo property, which the AI failed to do. Plus, if you were paid to draw Mario then you’d have broken laws about IP. Why don’t those same rules apply to AI?

        • cecinestpasunbot@lemmy.ml
          +2/-2 · 10 months ago

          Well that’s exactly the problem. If people use AI generated images for commercial purposes they may accidentally infringe on someone else’s copyright. Since AI models are a black box there isn’t really a good way to avoid this.

          • doctorcrimson@lemmy.world
            +2/-2 · 10 months ago

            Sure there is, force the AI to properly credit artists and if they don’t have permission to use the character then the prompt fails. Or the AI operators have no legal rights to charge for services and should be sued into the ground.

    • esc27@lemmy.world
      +6/-6 · 10 months ago

      Voyager just loaded a copyrighted image on my phone. Guess someone’s gonna have to sue them too.

      • Vincent Adultman@lemmy.world
        +6/-2 · 10 months ago

        Yeah man, Voyager is making millions with the images on the app. It makes me so mad: the Voyager people make you think they are generating content on their own, but in reality they’re just feeding you unlicensed content from others.

        • eric@lemmy.world
          +2/-5 · edited · 10 months ago

          You’re completely missing the point. Making money doesn’t change the legality. YouTube was threatened by the RIAA before they even started showing ads. Displaying an image from a copyrighted work on an AI platform is not much different technologically than Voyager or even Google Images displaying the same image, and both could also be interpreted as “feeding you unlicensed content from others.”

          • MadBigote@lemmy.world
            +6/-1 · 10 months ago

            Making money doesn’t change the legality.

            Except that it actually does? That’s the point of copyright laws. The LLMs/AIs are using copyright-protected material as source without paying for it, and then selling their output as “original.”

            • wewbull@feddit.uk
              +3 · 10 months ago

              Oh! That’s why torrent sites aren’t under constant threat despite hosting tons of copyrighted material for free.

              Hang on… Yes they are!

      • otp@sh.itjust.works
        +0/-2 · 10 months ago

        I just remembered a copyrighted image. Oops.

        Hey, I bet there were complaints about Google showing image results at some point too! Lol

  • trackcharlie@lemmynsfw.com
    +69/-42 · 10 months ago

    “Generate this copyrighted character”

    “Look, it showed us a copyrighted character!”

    Does everyone who writes for the NYTimes have a learning disability?

    • Ross_audio@lemmy.world
      +41/-6 · 10 months ago

      The point is to prove that copyrighted material has been used as training data. As a reference.

      If a human being gets asked to draw the Joker, grabs a still from the film, then copies it to the best of their ability, they can’t sell that image. Technically speaking, they’ve broken the law already by making a copy. Lots of fan art is illegal; it’s just not worth going after (unless you’re Disney or Nintendo).

      As a subscription service, that’s what AI is doing: selling the output.

      Held to the same standards as a human artist, this is illegal.

      If AI is allowed to copy art under copyright, there’s no reason a human shouldn’t be allowed to do the same thing.

      Proving the reference is all important.

      If an AI or human had only ever seen public-domain artwork and was asked to draw the Joker, they might come up with a similar character, but it would be their own creation. There are copyright cases that hinge on proving the reference material (see “Blurred Lines” by Robin Thicke).

      The New York Times is proving that AI is referencing an image under copyright because it comes out precisely the same. There are no significant changes at all.

      In fact even if you come up with a character with no references. If it’s identical to a pre-existing character the first creator gets to hold copyright on it.

      This is indefensible.

      Even if that AI is a black box we can’t see inside. That black box is definitely breaking the law. There’s just a different way of proving it when the black box is a brain and when the black box is an AI.

      • KeenFlame@feddit.nu
        +5/-5 · 10 months ago

        But that’s just a lie? You may draw from copyrighted material. Nobody can stop you from drawing anything. Thankfully.

        • Ross_audio@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          2
          ·
          10 months ago

          Nobody can stop you.

          But because our copyright laws are so overreaching you probably are breaching copyright.

          It’s just not worth a company suing you for the financial “damages” they’ve “suffered” because you drew a character instead of buying a copy from them.

          Certain exceptions exist, not least de minimis and education.

          You can argue that you’re learning to draw. Put that drawing in a drawer and you’re probably fine.

          But it’s pretty clear-cut in law that putting it even on your own wall is a copyright breach if you could have bought it as a poster.

          The world doesn’t work that way, but suddenly AI doing what an individual does, thousands of times over, means thousands of times the potential damage.

          Just as if you loaded up a printing press.

          De minimis no longer applies, and the actual laws will get tested in court.

          Even though this isn’t like a press, in that each image can be different, thousands of different images breaking copyright aren’t much different from printing thousands of copies of the same image.

            • Ross_audio@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              10 months ago

              Unfortunately I have studied this.

              So we’ll just have to agree to disagree and hope neither of us ends up on the wrong side of the law.

              Like I say, copyright is based upon damage to the copyright holder. It’s quite obvious when that happens, and it’s hard to do enough damage as an individual to be worth suing.

              But making a single copy without permission, without being covered by any exemptions, is copyright infringement.

              Copy right. The right to copy.

              You don’t have it unless you pay for it.

              • KeenFlame@feddit.nu
                link
                fedilink
                English
                arrow-up
                1
                ·
                9 months ago

                In my country, we can draw anything and not get sued or break the law. I think that’s pretty good, too. It’s when you sell stuff that you get into those things.

                • Ross_audio@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  9 months ago

                  If your country is a signatory to the international copyright treaties along with most of the Anglosphere (like the EU, US, AUS, NZ), then that is not correct.

                  You cannot draw anything.

                  It’s just never worth suing you over.

                  A crime so small it’s irrelevant is almost a legal act. But it’s not actually a legal act.

            • Flying Squid@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              10 months ago

              Much like @Ross_audio, I have studied this intently for business reasons. They are absolutely right. This is not a transformative work. This is a direct copy of a trademarked and/or copyrighted character for the purpose of generating revenue. That’s simply not legal, for the same reason that you can’t draw and sell your own Spider-Man comics about a teenager who gains the proportional strength and abilities of a spider, but you can sell your own Grasshopper-Man comics about a teenager who gains the proportional strength and abilities of a grasshopper, as long as you use your own designs and artwork. Because then it is transformative, and parody. Both are legal. What Midjourney is doing is neither transformative nor parody.

              • KeenFlame@feddit.nu
                link
                fedilink
                English
                arrow-up
                1
                ·
                9 months ago

                Yeah, it would not be strange to me if that’s how it works in the States, but I think drawing something (not selling it; the example was not monetary) does not have international reach.

      • Random_Character_A@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        8
        ·
        10 months ago

        The tough question is: can a tool infringe anything?

        I’d see a legal case if AI companies were to bill picture by picture, but right now they’re just billing for a tool subscription.

        Still, would Microsoft be liable for my copy-pastes if they charged a penny every time I used it? Or would I be, if I sold an art piece that uses the infringing image?

        AI could be scraping that picture from anywhere.

          • Random_Character_A@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            9
            ·
            edit-2
            10 months ago

            Can a tool create? It generated.

            Anyway, in a case like this, is creation even a factor in liability?

            In my opinion one who gets monetary value first from the piece should be liable.

            NYTimes?

            • wildginger@lemmy.myserv.one
              link
              fedilink
              English
              arrow-up
              10
              arrow-down
              2
              ·
              10 months ago

              “I didn’t kill him, officer, my murder robot did. Oh, sure, I built it and programmed it to stab Jenkins to death for an hour. Oh, yes, I charged it, set it up in his house, and made sure all the programming was set. Ah, but your honor, I didn’t press the on switch! Jenkins did, after I put a note on it that said ‘not an illegal murderbot’ next to the power button. So really, the murderbot killed him, and if you like, maybe even Jenkins did it! But me? No, sir, I’m innocent!”

                • Ross_audio@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  arrow-down
                  1
                  ·
                  10 months ago

                  And someone created the AI programming too.

                  Then someone trained that AI.

                  It didn’t just come out of the aether, there’s a manual on how to do it.

            • Ross_audio@lemmy.world
              link
              fedilink
              English
              arrow-up
              7
              arrow-down
              3
              ·
              10 months ago

              So by that logic, I prompted you with a question. Did I create your comment?

              I used you as a tool to generate language. If it was a Pulitzer winning response could I gain the plaudits and profit, or should you?

              If it then turned out it was plagiarism by yourself, should I get the credit for that?

              Am I liable for what you say when I have had no input into the generation of your personality and thoughts?

              The creation of that image required building a machine learning model.

              It required training a machine learning model.

              It required prompting that machine learning model.

              All 3 are required steps to produce that image and all part of its creation.

              The part copyright holders will focus on is the training.

              Human beings are held liable if they see and then copy an image for monetary gain.

              An AI has done exactly this.

              It could be argued that the most responsible and controlled element of the process, and the most liable, is the input of training data.

              Either the AI model is allowed to absorb the world and create work and be held liable under the same rules as a human artist. The AI is liable.

              Or the AI model is assigned no responsibility itself but should never have been given copyrighted work without a license to reproduce it.

              Either way the owners have a large chunk of liability.

              If I ask a human artist to produce a picture of Donald Duck, they legally can’t, even though they might just break the law Disney could take them to court and win.

              The same would be true of any business.

              The same is true of an AI as either its own entity, or the property of a business.

              • Random_Character_A@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                10 months ago

                I’m not a non-sentient construct that creates stuff.

                …and when the copyright law was written, there were no non-sentient things generating stuff.

                • Ross_audio@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  10 months ago

                  There is literally no way to prove whether you’re sentient.

                  Descartes found that limitation.

                  The only definition in law is whether you have competency to be responsible. The law assumes you do as an adult unless it’s proven you don’t.

                  Given the limits of AI the court is going to assume it to be a machine. And a machine has operators, designers, and owners. Those are humans responsible for that machine.

                  It’s perfectly legitimate to sue a company for using a copyright breaking machine.

        • wewbull@iusearchlinux.fyi
          link
          fedilink
          English
          arrow-up
          7
          ·
          edit-2
          10 months ago

          They are showing that the author of the tool has committed massive copyright infringement in the process of constructing the tool.

          …unless they licensed all the copyrighted works they trained the model on. (Hint: they didn’t, and we know they didn’t, because the copyright holders haven’t licensed their work for that purpose.)

          It doesn’t matter whether a company charges for anything. That’s not a factor in copyright law.

      • LainTrain@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        13
        ·
        10 months ago

        It’s not selling that image (or any image), any more than a VCR is selling you a taped version of Die Hard you got off cable TV.

        It is a tool that can help you infringe copyright, but as it has non-infringing uses, it doesn’t matter.

            • Flying Squid@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              9 months ago

              Because they aren’t doing anything to violate copyright themselves. You might, but that’s different. AI art is created by the software. Supposedly it’s original art. This article shows it is not.

              • LainTrain@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                9 months ago

                It is original art; even the images in question have differences. But it’s ultimately on the user to ensure they do not use copyrighted material commercially, same as with fan art.

                • Flying Squid@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  9 months ago

                  If I draw a very close picture to a screenshot of a Mickey Mouse cartoon and try to pass it off as original art because there are a handful of differences, I don’t think most people would buy it.

      • fine_sandy_bottom@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        9
        ·
        10 months ago

        If a human being gets asked to draw the Joker, gets a still from the film, then copies it to the best of their ability, they can’t sell that image. Technically speaking, they’ve broken the law already by making a copy.

        Is this really true? Breaking the law implies contravening some legislation which in the case of simply drawing a copyrighted character, you wouldn’t be in most jurisdictions. It’s a civil issue in that if some company has the rights to a character and some artist starts selling images of that character then whoever owns the rights might sue that artist for loss of income or unauthorised use of their intellectual property.

        Regardless, all human artists have learned from images of characters which are the intellectual property of some company.

        If I hired a human as an employee, and asked them to draw me a picture of the joker from some movie, there’s no contravention of any law I’m aware of, and the rights holder wouldn’t have much of a claim against me.

        As a layperson, who hasn’t put much thought into this, the outcome of a claim against these image generators is unclear. IMO, it will come down to whether or not a model’s abilities are significantly derived from a specific category of works.

        For example, if a model learned to draw superheroes exclusively from watching Marvel movies, then that’s probably a copyright infringement. OTOH, if it learned to draw superheroes from a wide variety of published works, then IMO it’s much more difficult to make a case that the model is undermining the rights holder’s revenue.

        • Ross_audio@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          1
          ·
          edit-2
          10 months ago

          Copyright law is incredibly far reaching and only enforced up to a point. This is a bad thing overall.

          When you actually learn what companies could do with copyright law, you realise what a mess it is.

          In the UK for example you need permission from a composer to rearrange a piece of music for another ensemble. Without that permission it’s illegal to write the music down. Even just the melody as a single line.

          In the US it’s standard practice to first write the arrangement and then ask the composer to licence it. Then you sell it and both collect and pay royalties.

          If you want to arrange a piece of music in the UK by a composer with an American publisher, you essentially start by breaking the law.

          This all gives massive power to corporations over individual artists. It becomes a legal fight the corporation can always win due to costs.

          Corporations get the power of selective enforcement. Whenever they think they will get a profit.

          AI is creating an image based on someone else’s property. The difference is it’s owned by a corporation.

          It’s not legitimate to claim the creation is solely that of the one giving the instructions. Those instructions are not in themselves creating the work.

          The act of creating this work includes building the model, training the model, maintaining the model, and giving it that instruction.

          So everyone involved in that process is liable for the results to differing amounts.

          Ultimately the most infringing part of the process is the input of the original image in the first place.

          So we now get to see if a massive corporation or two can claim that an AI can be trained on, and output, anything publicly available (not just public domain) without infringing copyright. An individual human can’t.

          I suspect the work of training a model solely on public domain will be complete about the time all these cases get settled in a few years.

          Then controls will be put on training data.

          Then barriers to entry to AI will get higher.

          Then corporations will be able to own intellectual property and AI models.

          The other way this can go is AI being allowed to break copyright, which then leads to a precedent that breaks a lot of copyright and the corporations lose a lot of power and control.

          The only reason we see this as a fight is because corporations are fighting each other.

          If AI needs data and can’t simply take it publicly from published works, the value of licensing that data becomes a value boost for the copyright holder.

          The New York Times has a lot to gain.

          There are explicit, limited exceptions in copyright law. Education is one. Academia and research are another.

          All tip into infringement the moment the use becomes commercial.

          An AI being educated and trained isn’t infringement until someone gains from the published works or prevents the copyright holder from gaining from them.

          This is why writers are at the forefront. Writing is the first area where AI can successfully undermine the need to read the New York Times directly. Reducing the income from the intellectual property it’s been trained on.

          • wewbull@iusearchlinux.fyi
            link
            fedilink
            English
            arrow-up
            4
            ·
            10 months ago

            AI is creating an image based on someone else’s property. The difference is it’s owned by a corporation.

            This isn’t the issue. The copyright infringement is the creation of the model using the copyrighted work as training data.

            All NYT is doing is demonstrating that the model must have been created using copyrighted works, and hence infringement has taken place. They are not stating that the model is committing an infringement itself.

            • Ross_audio@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              1
              ·
              10 months ago

              I agree, but it is useful to ask if a human isn’t allowed to do something, why is a machine?

              By putting them on the same level. A human creating an output vs. an AI creating an output, it shows that an infringement has definitely taken place.

              I find it helpful to explain it to people as the AI breaching copyright simply because from that angle the law can logically be applied in both scenarios.

              Showing a human a piece of copyrighted material available to view in public isn’t infringement.

              Showing a generic AI a piece of copyrighted material available to view in public isn’t infringement.

              The infringing act is the production of the copy.

              By law a human can decide to do that or not, they are liable.

              An AI is a program which, in this case, is designed with a tendency to copy, and the programmer is responsible for that part. That’s not necessarily infringement, because the programmer doesn’t feed in copyrighted material.

              But the trainer showing copyrighted material to an AI known to have a tendency to copy isn’t much different from someone putting that material on a photocopier.

              I get many replies from people who think this isn’t infringement because they believe a human is actually allowed to do it. That’s the misunderstanding some have. The framing of the machine making copies and breaching copyright helps, even if ultimately I’m saying the photocopier is breaching copyright to begin with.

              Ultimately someone is responsible for this machine, and that machine is breaking copyright. The actions used to make, train, and prompt the machine lead to the outcome.

              As the AI is a black box, it becomes a copyright-infringing photocopier the moment it’s fed copyrighted material. It is in itself an infringing work.

              The answer is to train a model solely on public domain work and I’d love to play around with that and see what it produces.

    • skarlow181@lemmy.world
      link
      fedilink
      English
      arrow-up
      23
      arrow-down
      1
      ·
      10 months ago

      The crux is that they went “draw me a cartoon mouse” and Midjourney went “here is Disney’s Mickey Mouse™”. A simple prompt should not be able to generate that specific an image. If you want something specific, you should need to specify it; otherwise the AI failed to generalize or is somehow heavily biased towards existing images.

    • wildginger@lemmy.myserv.one
      link
      fedilink
      English
      arrow-up
      28
      arrow-down
      10
      ·
      10 months ago

      Or you do? The point is that these machines are just regurgitating the copyrighted data they are fed, and not actually doing all that transformative work their creators claim in order to legally defend feeding them work they don’t have the rights to.

      It’s recreating the images it was fed, not completing the prompt in unique and distinct ways. Just taking a thing it ate and plopping it into your hands.

      It doesn’t matter that you asked it to do that, because the whole point was that it “isn’t supposed to” do that in order for them to have the legal protection of feeding it artwork they didn’t pay for the rights to.

    • festus@lemmy.ca
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      2
      ·
      10 months ago

      I’m pretty pro AI but I think their point was that the generated images were near identical to existing images. For example, they generate one from Dune that even has whisps of hair in the same place.

      • KeenFlame@feddit.nu
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        11
        ·
        10 months ago

        They just didn’t use a clean model. This is actually so frustrating, reading this many “experts” talk about Stable Diffusion… It’s really not hard to teach a model to draw a specific image. This is like running people over with a car while shouting “LOOK! It’s a killing machine!”
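
        To illustrate the point in a toy setting (a minimal sketch in plain NumPy, not an actual diffusion model or Stable Diffusion itself): overfit even a tiny linear “decoder” on a single target image and it will reproduce that image from its fixed conditioning vector almost exactly. Memorization is the easy part; generalization is the hard part.

        ```python
        import numpy as np

        # Toy illustration, NOT a real diffusion model: overfitting a tiny
        # linear "decoder" on one image makes it memorize that image.
        rng = np.random.default_rng(0)

        target = rng.random((8, 8))          # stand-in for a single training image
        latent = rng.standard_normal(16)     # fixed "prompt"/conditioning vector

        W = np.zeros((64, 16))               # the decoder's weights

        # Plain gradient descent on the squared reconstruction error.
        for _ in range(500):
            err = W @ latent - target.ravel()
            W -= 0.01 * np.outer(err, latent)

        recon = (W @ latent).reshape(8, 8)
        print(np.abs(recon - target).max())  # tiny: the model "recalls" the image
        ```

        The analogy to the article is loose by design: a model with far more capacity than distinct training data, or heavily duplicated training images, ends up in the same regime as this deliberately overfit toy.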

    • SpaceCowboy@lemmy.ca
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      6
      ·
      10 months ago

      It just proves that there is no actual intelligence going on with this AI. It’s basically just a glorified search engine that claims the work of others as its own. It wouldn’t be as much of a problem if it attributed its sources, but they can’t do that, because that opens them up to copyright infringement lawsuits. It’s still copyright infringement, just combined with plagiarism. But it’s claimed to be a creation of “AI” to muddy the waters enough to delay the inevitable avalanche of copyright lawsuits long enough to siphon as much investment money as possible before the whole thing comes crashing down.

      • trackcharlie@lemmynsfw.com
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        1
        ·
        10 months ago

        Calling anything we have now “AI” is a marketing gimmick.

        There is not one piece of software that exists currently that can truly be labelled AI, it’s just advertising for the general population that doesn’t educate themselves on current computing technology.

        • SpaceCowboy@lemmy.ca
          link
          fedilink
          English
          arrow-up
          4
          ·
          10 months ago

          Yeah I agree with this for the most part. Though I have some suspicions that some of the machine learning algorithms used by social media have been exhibiting some emergent behavior. But given that their directive is to sell as many ads as possible, and the fact that advertising is basically just low level emotional manipulation to convince people to buy shit, any emergent behavior would be surrounding emotionally manipulating people.

          Kinda getting into tin foil hat territory here, but developing AI under the direction of marketing assholes doesn’t seem like it’s going to go anywhere good.

  • taranasus@lemmy.world
    link
    fedilink
    English
    arrow-up
    25
    arrow-down
    2
    ·
    10 months ago

    I took a gun, pointed it at another person, pulled the trigger and it killed that person.

        • wewbull@feddit.uk
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          edit-2
          10 months ago

          Open sourcing something is granting permissive licenses on copyright works. Again, it’s a concept built assuming that copyright exists.

          What you mean is “abolish copyright”, and that means nobody can exclusively benefit from creating something, especially in a digital world. Not you, or I, or your favorite author, or songwriter. Publishers could just sell works without recognizing the author.

          • KairuByte@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            1
            ·
            10 months ago

            The first part of your comment is such an “aktually” moment it hurts. Apply it elsewhere: “Free all the slaves implies slavery is still around, it’s a concept built assuming that slavery still exists. What you mean is “abolish slavery”.”

            Everyone understood what they meant.

  • ombremad@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    4
    ·
    10 months ago

    I don’t know why everybody pretends we need to come up with a bunch of new laws to protect artists and copyright against “AI”. The problem isn’t AI. The problem is data scraping.

    An example: Apple’s iOS allows you to record your own voice in order to make it a full speech synthesizer that you can use within the system. It’s currently touted as an accessibility feature (like, if you have a disability preventing you from speaking out loud all of the time, you can use your phone to speak on your behalf, with your own custom voice). In this case, you provide the data, and the AI processes it on-device overnight. Simple. We could also think of an artist making a database of their own works in order to try and come up with new ideas with quick prompts, in their own style.

    However, right now, a lot of companies are building huge databases by scraping data from everywhere without consent from the artists who, most of the time, don’t even know their work was scraped. And they even dare to advertise that publicly, pretend they have a right to do it, and sell those services. That’s theft of intellectual property; always has been, always will be. You don’t need new laws to get it right. You might need better courts to enforce it, depending on which country you live in.

    There’s legal use of AI, and unlawful use of AI. If you use what belongs to you and use the computer as a generative tool to make more things out of it: AI good. If you take from others what doesn’t belong to you in order to generate stuff based on it: AI bad. Thanks for listening to my TED talk.

  • RememberTheApollo@lemmy.world
    link
    fedilink
    English
    arrow-up
    16
    ·
    10 months ago

    For fun I asked an AI to create a Joker “in the style of Batman movies and comics”.

    The Heath Ledger Joker is so prominent that a variation on that movie’s version is what I got back. It’s so close that without comparing a side-by-side to a real image it’s hard to know what the differences are.

    • BeMoreCareful@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      10 months ago

      I’d be delighted if we go through all the fret and worry about AI deleting humanity only to find out that AI is actually super lazy.

  • BreakDecks@lemmy.ml
    link
    fedilink
    English
    arrow-up
    17
    arrow-down
    3
    ·
    10 months ago

    The fundamental philosophical question we need to answer here is whether Generative Art simply has the ability to infringe intellectual property, or if that ability makes Generative Art an infringement in and of itself.

    I am personally in the former camp. AI models are just tools that have to be used correctly. There’s also no reason that you shouldn’t be allowed to generate existing IP with those models insofar as it isn’t done for commercial purposes, just as anyone with a drawing tablet and Adobe can draw unlicensed fan art of whatever they want.

    I don’t really care if AI can draw a convincing Iron Man. Wake me when someone uses AI in a way that actually threatens Disney. It’s still the responsibility of any publisher or commercial entity not to brazenly use another company’s IP without permission; that the infringement was done with AI feels immaterial.

    Also, the “memorization” issue seems like it would only be an issue for corporate IP that has the highest risk of overrepresentation in an image dataset, not independent artists who would actually see a real threat from an AI lifting their IP.

  • afraid_of_zombies@lemmy.world
    link
    fedilink
    English
    arrow-up
    30
    arrow-down
    16
    ·
    10 months ago

    Get rid of copyright law. It only benefits the biggest content owners and deprives the rest of us of our own culture.

    It says so much that the person who created an image can be barred from making it.

  • Thorny_Insight@lemm.ee
    link
    fedilink
    English
    arrow-up
    37
    arrow-down
    24
    ·
    edit-2
    10 months ago

    Asks AI to generate copyrighted image; AI generates a copyrighted image.

    Pikachu.jpg

    • realharo@lemm.ee
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      4
      ·
      10 months ago

      It is a point against those “it’s just like humans learning” arguments.

      • McArthur@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        3
        ·
        10 months ago

        I mean, if you asked a human to draw a copyrighted image, you would also get the copyrighted image. If the human had seen that copyrighted image enough times, they might even have memorised the smallest details and give you a really good or near-perfect copy.

        I agree with your point but this example does not prove it.

        • wewbull@feddit.uk
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          10 months ago

          …and they are also infringing copyright if they have not been given a right to copy that work.

            • wewbull@feddit.uk
              link
              fedilink
              English
              arrow-up
              1
              ·
              10 months ago

              Maybe you weren’t trying to make the point I thought you were.

              I assumed you were trying to say that a human can draw a picture of (for example) the Joaquin Phoenix Joker and not be committing copyright infringement, therefore an AI can do the same. I was pointing out that the basis of that argument is false. A human drawing that would be infringing the copyright.

        • realharo@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          edit-2
          10 months ago

           Not from memory, without looking at the original while painting - at least not to this level of detail. No human will just incidentally “learn” to draw such a near-perfect copy, not unless they’re doing it on purpose with the explicit goal of “learn to re-create this exact picture”, which is not how humans typically learn.

  • 8000mark@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    3
    ·
    10 months ago

    I think AI in this case is doing exactly what it’s best at: Automating unbelievably boring chores on the basis of past “experiences”. In this case the boring chore was “Draw me [insert character name] just how I know him/her”.

    Too many people mistakenly assume generative AI is originative or imaginative. It’s not. It can certainly seem that way, because it can transform human ideas and words into a picture that has ideally never existed before, and that notion is very powerful. But we have to accept that, for now, creativity remains unique to us humans. As far as I can tell, the authors were not trying to prove generative AI is unimaginative; they were showing just how blatantly copyright infringement is happening in the context of generative AI. No more, no less.

  • Facelesscog@lemmy.world
    link
    fedilink
    English
    arrow-up
    32
    arrow-down
    22
    ·
    10 months ago

    I’m so sick of these examples with zero proof. Just two pictures side by side and your word that one of them was created (easily, it’s implied) by AI. Cool. How? Explain to me how you did it, please.

    • RememberTheApollo_@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      2
      ·
      edit-2
      10 months ago

      Really? I’ll hold your hand and go through it:

      I went to MidJourney on Discord. Typed /imagine joker in the style of Batman movies and comics. Hd 4k realistic —ar 2:3 —chaos 1.5

      And it spat out a spitting image of a Heath Ledger Joker.

      That’s how you do it.

      • Facelesscog@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        10 months ago

        joker in the style of Batman movies and comics. Hd 4k realistic —ar 2:3 —chaos 1.5

        It really seems like you’re trying to be hurtful/angry, but this is genuinely the information I’m looking for from OP. Can you replicate an artist’s image near perfectly, like OP did? That’s the part that has me curious. Is that ok?

        • RememberTheApollo_@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          10 months ago

          My rebuttal was to someone’s unreasonable anger over there being “no proof” when it sounds like they did zero investigating on their own.

          Here is the image I created with the stated prompt. I made no effort to try to specify a look, film, or actor. This is simply what the AI chose.