I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but I think people need to consider that all it will actually do is keep AI out of the hands of regular people and place it squarely in the hands of people and organizations wealthy and powerful enough to train it for their own use.

If this isn’t actually what you want, then what’s your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it’s likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we’re only beginning to explore) in the hands of the sorts of people who are least likely to want to do anything good with it.

I know I’m posting this in a hostile space, and I’m sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that’s fine (the jury is literally still out on that). What I’m interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don’t is the absolute worst possibility.

  • Ragnell@kbin.social · 1 year ago

    Except an AI is not taking inspiration; it’s compiling information to determine mathematical averages.

    A human can be inspired because they are a human being. A Large Language Model cannot. Stable Diffusion is nowhere near the complexity of a human brain. Just because it does it faster doesn’t mean it’s doing it the same way. Human beings have free will and a host of human rights. A human being is paid for the work they do; an AI program’s creator is paid for the work it did. And if that creator used copyrighted work, then he should have to get permission to use it, because he’s profiting off this AI program.

    > I would tend to agree with you on this one, although we don’t need bad copyright legislation to deal with it, since laws can deal with it more directly. I would personally put in place an organization that requires rigorous proof that AI in those roles is significantly safer than a human, like the FDA does for medication.

    I would too, but we need TIME to get that done and right now, lawsuits will buy us time. That was the point of my comment.

    • IncognitoErgoSum@kbin.socialOP · 1 year ago

      > Except an AI is not taking inspiration; it’s compiling information to determine mathematical averages.

      The AIs we’re talking about are neural networks. They don’t do statistics, they don’t have databases, and they don’t take mathematical averages. They simulate neurons, and their ability to learn concepts emerges from that, just as it does in the human brain. Nothing about an artificial neuron ever takes an average of anything, reads any database, or does any statistical calculations. If an artificial neural network can be said to be doing those things, then so can the human brain.
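
      To make that concrete, here’s roughly what a single artificial neuron looks like in code (a toy sketch in PyTorch with made-up sizes, not anything lifted from GPT or Stable Diffusion): a set of learned connection weights and a bias, not a lookup table or an averaging routine.

      ```python
      import torch
      import torch.nn as nn

      # Toy artificial "neuron": learned connection weights plus a nonlinear
      # activation. There's no database lookup and no averaging step anywhere;
      # all it does is weigh its inputs and decide how strongly to "fire".
      class Neuron(nn.Module):
          def __init__(self, n_inputs):
              super().__init__()
              self.weights = nn.Parameter(torch.randn(n_inputs))
              self.bias = nn.Parameter(torch.zeros(1))

          def forward(self, x):
              # Weighted sum of incoming signals, then a nonlinear "firing"
              # response, loosely analogous to a biological neuron.
              return torch.relu(x @ self.weights + self.bias)

      neuron = Neuron(4)
      print(neuron(torch.tensor([1.0, 0.5, -0.3, 2.0])))
      ```

      Networks like GPT stack enormous numbers of these, and learning happens by adjusting those weights.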

      There is nothing magical about how human neurons work. Researchers are already growing small networks out of animal neurons and using them the same way that we use artificial neural networks.

      There are a lot of “how AI works” articles out there that put things in layman’s terms (and use phrases like “statistical analysis” and “mathematical averages”), and unfortunately people (including many very smart people) extrapolate from the incorrect information in those articles and end up making bad assumptions about how AI actually works.

      > A human being is paid for the work they do; an AI program’s creator is paid for the work it did. And if that creator used copyrighted work, then he should have to get permission to use it, because he’s profiting off this AI program.

      If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they’re making a profit from that copyrighted work. Human beings should, as you said, be paid for the work they do. Right? If an artist goes to art school, they should pay all of the artists whose work they learned from, right? If a teacher teaches children in a class, that teacher should be paid a royalty each time those children make use of the knowledge they were taught, right? (I sense a sidetrack – yes, teachers are horribly underpaid and we desperately need to fix that, so please don’t misconstrue that previous sentence.)

      There’s a reason we don’t copyright facts, styles, and concepts.

      Oh, and if you want to talk about something that stores an actual database of scraped data, makes mathematical and statistical inferences, and reproduces things exactly, look no further than Google. It’s already been determined in court that what Google does is fair use.

      • veridicus@kbin.social · 1 year ago

        > The AIs we’re talking about are neural networks. They don’t do statistics, they don’t have databases, and they don’t take mathematical averages. They simulate neurons, and their ability to learn concepts emerges from that, just as it does in the human brain.

        This is not at all accurate. Yes, there are very immature neural simulation systems being prototyped, but that’s not what you’re seeing in the news today. What the public is witnessing is fundamentally based on vector mathematics. It’s pure math, and there is nothing at all emergent about it.

        > If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they’re making a profit from that copyrighted work.

        That’s not how copyright works, nor should it. Anyone who creates a mood board from a blank slate is using their learned experience, most of which they gathered from other works. If you were to write a book analyzing movies, for example, you shouldn’t have to pay the copyright for all those movies. You can make a YouTube video right now with a few short clips from a movie or quotes from a book and you’re not violating copyright. You’re just not allowed to make a largely derivative work.

        • IncognitoErgoSum@kbin.socialOP · 1 year ago

          So to clarify, are you making the claim that nothing that’s simulated with vector mathematics can have emergent properties? And that AIs like GPT and Stable Diffusion don’t contain simulated neurons?

              • veridicus@kbin.social · 1 year ago

                No, I’m not your Google. You can easily read the background of Stable Diffusion and see it’s based on Markov chains.

                • IncognitoErgoSum@kbin.socialOP · 1 year ago

                  LOL, I love kbin’s public downvote records. I quoted a bunch of different sources demonstrating that you’re wrong, and rather than own up to it and apologize for preaching from atop Mt. Dunning-Kruger, you downvoted me and ran off.

                  I advise you to step out of whatever echo chamber you’ve holed yourself up in and learn a bit about AI before opining on it further.

                  • veridicus@kbin.social · 1 year ago (edited)

                    My last response didn’t post for some reason. The mistake you’re making is that a neural network is not a neural simulation. It’s relatively simple math, just on a very large scale. I think you mentioned earlier, for example, that you played with PyTorch. You should then know that the NN stack is based on vector math. You’re making assumptions based on terminology, but when you read deeper you’ll see what I mean.
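
                    To illustrate what I mean (a minimal sketch with toy sizes): a whole “layer” of a network in PyTorch is one matrix multiply plus a nonlinearity. That’s the vector math the entire stack is built from.

                    ```python
                    import torch
                    import torch.nn as nn

                    # One fully connected layer: y = relu(W @ x + b).
                    # "Neural" terminology aside, the computation itself is
                    # plain linear algebra, repeated at enormous scale.
                    layer = nn.Linear(768, 768)  # toy size
                    x = torch.randn(768)
                    y = torch.relu(layer(x))
                    print(y.shape)
                    ```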

                • IncognitoErgoSum@kbin.socialOP · 1 year ago

                  You need to do your own homework. I’m not doing it for you. What I will do is lay this to rest:

                  https://en.wikipedia.org/wiki/Stable_Diffusion

                  > Stable Diffusion is a latent diffusion model, a kind of deep generative artificial neural network. Its code and model weights have been released publicly […]

                  https://jalammar.github.io/illustrated-stable-diffusion/

                  > The image information creator works completely in the image information space (or latent space). We’ll talk more about what that means later in the post. This property makes it faster than previous diffusion models that worked in pixel space. In technical terms, this component is made up of a UNet neural network and a scheduling algorithm.

                  > […]

                  > With this we come to see the three main components (each with its own neural network) that make up Stable Diffusion:

                  > • […]

                  https://stable-diffusion-art.com/how-stable-diffusion-work/

                  > The idea of reverse diffusion is undoubtedly clever and elegant. But the million-dollar question is, “How can it be done?”

                  > To reverse the diffusion, we need to know how much noise is added to an image. The answer is teaching a neural network model to predict the noise added. It is called the noise predictor in Stable Diffusion. It is a U-Net model. The training goes as follows.

                  > […]

                  > It is done using a technique called the variational autoencoder. Yes, that’s precisely what the VAE files are, but I will make it crystal clear later.

                  > The Variational Autoencoder (VAE) neural network has two parts: (1) an encoder and (2) a decoder. The encoder compresses an image to a lower dimensional representation in the latent space. The decoder restores the image from the latent space.

                  https://www.pcguide.com/apps/how-does-stable-diffusion-work/

                  > Stable Diffusion is a generative model that uses deep learning to create images from text. The model is based on a neural network architecture that can learn to map text descriptions to image features. This means it can create an image matching the input text description.

                  https://www.vegaitglobal.com/media-center/knowledge-base/what-is-stable-diffusion-and-how-does-it-work

                  > Forward diffusion process is the process where more and more noise is added to the picture. Therefore, the image is taken and the noise is added in t different temporal steps where in the point T, the whole image is just the noise. Backward diffusion is a reversed process when compared to forward diffusion process where the noise from the temporal step t is iteratively removed in temporal step t-1. This process is repeated until the entire noise has been removed from the image using U-Net convolutional neural network which is, besides all of its applications in machine and deep learning, also trained to estimate the amount of noise on the image.

                  So, I’ll grant that you’re trivially right that Stable Diffusion uses a Markov chain, but as it turns out, I had the same misconception you did: that a Markov chain is some specific mathematical equation. A Markov chain is actually just a process in which each step depends only on the step immediately before it, and it certainly doesn’t mean you’re right about Stable Diffusion not using a neural network. Stable Diffusion works by feeding the prompt and the partly denoised image back into the neural network over some given number of steps (it can do it in a single step, although the results are usually pretty messy). That loop, in and of itself, is a Markov chain. However, the piece that’s actually doing the real work (essentially running a Rorschach test over and over) is a neural network.
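
                  Here’s the shape of that loop (a toy sketch; the convolution below is just a hypothetical stand-in for the real trained U-Net, which is also conditioned on the text prompt and a noise schedule):

                  ```python
                  import torch
                  import torch.nn as nn

                  # Hypothetical stand-in for Stable Diffusion's U-Net noise
                  # predictor (random weights; the real one is a large trained
                  # network that also takes the prompt as input).
                  noise_predictor = nn.Conv2d(4, 4, kernel_size=3, padding=1)

                  x = torch.randn(1, 4, 64, 64)  # start from pure noise in latent space
                  steps = 20

                  with torch.no_grad():
                      for step in range(steps):
                          predicted_noise = noise_predictor(x)
                          # Each new latent is computed from the current latent
                          # alone; that dependency structure is all that
                          # "Markov chain" means here.
                          x = x - predicted_noise / steps
                  ```

                  The Markov chain is the loop; the neural network is what runs inside it.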

      • Ragnell@kbin.social · 1 year ago

        @IncognitoErgoSum Gonna need a source on Large Language Models using neural networks based on the human brain here.

        EDIT: Scratch that. I’m just going to need you to explain how this is based on how the human brain functions.

        • IncognitoErgoSum@kbin.socialOP · 1 year ago

          I’m willing to, but if I take the time to do that, are you going to listen to my answer, or just dismiss everything I say and go back to thinking what you want to think?

          Also, a couple of preliminary questions to help me explain things:

          What’s your level of familiarity with the source material? How much experience do you have writing or modifying code that deals with neural networks? My own familiarity lies mostly with PyTorch. Do you use that or something else? If you don’t have any direct experience programming with neural networks, do you have enough familiarity with them to at least know what some of those boxes mean, or do I need to explain them all?

          Most importantly, when I say that neural networks like GPT-* use artificial neurons, are you objecting to that statement?

          I need to know what it is I’m explaining.

          • Ragnell@kbin.social · 1 year ago

            @IncognitoErgoSum I don’t think you can. Because THIS? Is not a model of how humans learn language. It’s a model of how a computer learns to write sentences.

            If what you’re going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don’t bother because I will dismiss it as a fairy tale.

            But, if you have an answer that actually, genuinely proves that this “neural” network is operating similarly to how the human brain does… then you have invalidated your original post. Because if it really is thinking like a human, NO ONE should own it.

            In either case, it’s probably not worth your time.

            • throwsbooks@lemmy.world · 1 year ago

              > But, if you have an answer that actually, genuinely proves that this “neural” network is operating similarly to how the human brain does… then you have invalidated your original post. Because if it really is thinking like a human, NO ONE should own it.

              I think this is a neat point.

              The human brain is very complex. The neural networks trained on computers right now are more like collections of neurons grown together in a petri dish than a full human brain. Each one serves a single function: say, recognizing or generating an image, calculating some probability, or deciding what the next word in a sequence should be. The brain, by contrast, is a huge interconnected network of these smaller, more specialized networks.

              No, neural networks don’t have a database and they don’t do stats. They’re trained through trial and error, not aggregation. The way they work is explicitly based on a mathematical model of a biological neuron.
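
              As a rough illustration (a toy example, not any production model), training is literally guess, measure the error, nudge the weights. Nothing from the training data is stored; only the adjusted weights remain.

              ```python
              import torch
              import torch.nn as nn

              # Toy "trial and error" training loop: the network guesses, the
              # error is measured, and every weight is nudged to shrink it.
              model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
              optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
              loss_fn = nn.MSELoss()

              inputs = torch.rand(100, 2)
              targets = inputs.sum(dim=1, keepdim=True)  # toy task: learn addition

              for epoch in range(200):
                  guess = model(inputs)           # trial
                  loss = loss_fn(guess, targets)  # error
                  optimizer.zero_grad()
                  loss.backward()                 # work out how to nudge each weight
                  optimizer.step()

              print(model(torch.tensor([[0.2, 0.3]])))  # roughly 0.5 after training
              ```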

              And when an AI is developed that’s advanced enough to rival the actual human brain, then yeah, the AI rights question becomes a real thing. We’re not there yet, though. Still just matter in petri dishes. That’s a whole other controversial argument.

              • IncognitoErgoSum@kbin.socialOP · 1 year ago

                I don’t believe that current AIs should have rights. They aren’t conscious.

                My point was purely that AIs learn concepts and that concepts aren’t copyrightable. Encoding concepts into neurons (that is, learning) doesn’t require consciousness.

                • Ragnell@kbin.social · 1 year ago

                  @IncognitoErgoSum If they don’t have consciousness, then they aren’t comparable to a human being being inspired. It is that simple.

                  The human who created the AI is profiting from the AI’s work, but that human was not inspired by the works he used to train the AI. He fed them into a machine to help make that machine. It doesn’t matter how close the machine is to human thought; it is a machine that is making something for others to profit from.

                  The people who created the AI took work without permission, used it to build and refine a machine, and are now using that machine to profit. They are selling that machine to people who would otherwise hire the people who did the work that was taken without permission and used to build the machine. This is all sorts of fucked up, man.

                  If an AI’s creation is comparable to a direct human’s creation, then it belongs to the AI. Whatever it is, it doesn’t belong to the guys who built the AI OR the guys who BOUGHT the AI. Which is actually one of the demands from the WGA, that AI-generated scripts have NOBODY listed as the writer and NOBODY able to copyright that work.

                  SAG-AFTRA just got a contract offer that says background performers would get their likeness scanned and have it belong to the studio FOREVER so that they can simply generate these performers through AI.

                  This is what is happening RIGHT NOW. And you want to compare the output of an AI to a human’s blood sweat and tears, and argue that copyright protections would HURT people rather than help them avoid exploitation.

                  Because that is what the AI programmers are doing, they are EXPLOITING living authors, living artists, living performers to create a machine that will replace those very people.

                  The copyright system, which yes is exploited and manipulated by these corporations, is still the only method we have to protect small-time creatives FROM those corporations. And right now, those corporations are poised to use AI to attack small-time creatives.

                  So yes, your comparison to human inspiration is a damned fairy tale. Because it whitewashes the exploitation of human workers by equating them to the very machine that’s being used to exploit them.

                  • IncognitoErgoSum@kbin.socialOP · 1 year ago

                    Lots to unpack here.

                    First of all, the physical process of human inspiration is that a human looks at something, their optic nerves fire, those impulses activate other neurons in the brain, and an idea forms. That’s exactly how an AI takes “inspiration” from images. This stuff about free will and consciousness is metaphysics. There’s no meaningful difference in the actual process.

                    Secondly, let’s look at this:

                    > SAG-AFTRA just got a contract offer that says background performers would get their likeness scanned and have it belong to the studio FOREVER so that they can simply generate these performers through AI.

                    > This is what is happening RIGHT NOW. And you want to compare the output of an AI to a human’s blood sweat and tears, and argue that copyright protections would HURT people rather than help them avoid exploitation.

                    I’ll say right off that I don’t appreciate the “you’re a bad person” schtick. Switching to personal attacks stinks of desperation. Plus, your personal attack on me isn’t even correct, because I don’t approve of the situation you described any more than you do. The reason they’re trying to slip that into those people’s contracts is that those people own their likenesses under existing copyright law. That is, you don’t have to come up with a funny interpretation of copyright law where concepts can be copyrighted, but only if a machine learns them. They need a license to use those people’s likenesses regardless of whether they use an AI or Photoshop or just have a painter do it. Using AI doesn’t get them out of that; if it did, they wouldn’t need to try to put it into the contract.

                    In other words, they aren’t using an AI to attack anyone; they’re using a powerful bargaining position to try to get people to sign away an established right they already have according to copyright law. That has absolutely nothing to do with anything I’m talking about here, except that you want to attach it to what I’m talking about so you can have something to rage about.

                    And here’s the thing. None of you people ever gave a shit when anybody else’s job was automated away. Cashiers have had their work automated away recently and all I hear is “ThAt’S oKaY bEcAuSe tHeIr jOb sUcKs!!!111” Artists have been actually violating the real copyright of other artists (NOT JUST LEARNING CONCEPTS) with fanart (which is a DERIVATIVE WORK OF A COPYRIGHTED CHARACTER) for god only knows how long and there’s certainly never been a big outcry about that.

                    It sucks to be the ones looking down the business end of automation. I know that because, as a computer programmer, I am too. On the other hand, I can see past the end of my own nose, and I know how amazing it would be if lots of regular people suddenly had the ability to do the things that I do, so I’m not going to sit there and creatively interpret copyright law in an attempt to prevent that from happening. If you’re worried about the effects of automation, you need to start thinking about things like universal healthcare and universal income, not just ESTABLISHING SPECIAL PROTECTIONS FOR A TINY SUBSET OF PEOPLE WHOM YOU HAPPEN TO LIKE. It just seems a bit convenient, and (dare I say) selfish, that the point in history at which we need to start smashing the machines happens to be right now. Why not the printing press or the cotton gin or machines that build railroads or looms or robots in factories or grocery store kiosks? The transition sucked for all those people as well. It’s going to suck for artists, and it’ll suck for me, but in the end we can pull through and be better off for it, rather than killing the technology in its infancy and calling everyone a monster who doesn’t believe that you and you alone ought to have special privileges.

                    We need to be using the political clout we have to push us toward a workable post-scarcity economy, as opposed to trying to preserve a single, tiny bit of scarcity so a small group of people can continue to do something while everybody else is automated away and we all end up ruled by a bunch of rent-seeking corporations. Your gatekeeping of the ability of people to do art isn’t going to prevent any of that.

                    P.S. We seem to be at the very beginning of a major climate disaster these last couple weeks, so we’re probably all equally fucked anyway.