A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

  • Downcount@lemmy.world · 8 months ago

    If you’ve ever encountered an AI hallucinating stuff that just doesn’t exist at all, you know how bad the idea of AI-enhanced evidence actually is.

    • Bobby Turkalino@lemmy.yachts · 8 months ago

      Everyone uses the word “hallucinate” when describing visual AI because it’s normie-friendly and cool sounding, but the results are a product of math. Very complex math, yes, but computers aren’t taking drugs and randomly pooping out images because computers can’t do anything truly random.

      You know what else uses math? Basically every image modification algorithm, including resizing. I wonder how this judge would feel about viewing a 720p video on a 4k courtroom TV because “hallucination” takes place in that case too.

      • Downcount@lemmy.world · 8 months ago

        There is a huge difference between interpolating pixels and inserting whole objects into pictures.
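
        For what it’s worth, here’s a minimal sketch of what plain interpolation does (illustrative NumPy; the function is my own, not any real library’s API). Every output pixel is a weighted average of existing neighbours, so the result can never contain a value, let alone an object, that wasn’t in the source:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale array by blending the four nearest
    source pixels. No value outside the original data can appear."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    out = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        y0 = int(y); y1 = min(y0 + 1, h - 1); fy = y - y0
        for j, x in enumerate(xs):
            x0 = int(x); x1 = min(x0 + 1, w - 1); fx = x - x0
            top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
            bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
            out[i, j] = top * (1 - fy) + bot * fy
    return out

img = np.array([[0.0, 1.0],
                [1.0, 0.0]])
big = bilinear_upscale(img, 4)
# Interpolation only ever averages what is already there:
assert big.min() >= img.min() - 1e-9
assert big.max() <= img.max() + 1e-9
```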

        • Bobby Turkalino@lemmy.yachts · 8 months ago

          Both insert pixels that didn’t exist before, so where do we draw the line of how much of that is acceptable?

          • Downcount@lemmy.world · 8 months ago

            Look at it this way: if you have an unreadable licence plate because of low resolution, interpolating won’t make it readable (as long as we haven’t switched to a CSI universe). An AI, on the other hand, could just “invent” (I know, I know, normie speak in your eyes) a readable one.

            You will draw that line yourself when you get your first speeding ticket for a car that wasn’t yours.

            • Bobby Turkalino@lemmy.yachts · 8 months ago

              Interesting example, because tickets issued by automated cameras aren’t enforced in most places in the US. You can safely ignore those tickets and the police won’t do anything about it because they know how faulty these systems are and most of the cameras are owned by private companies anyway.

              “Readable” is a subjective matter of interpretation, so again, I’m confused on how exactly you’re distinguishing good & pure fictional pixels from bad & evil fictional pixels

              • Downcount@lemmy.world · 8 months ago

                Whether or not the tickets are enforced neither changes my argument nor invalidates it.

                You are acting stubborn and childish. Everything there was to say has been said. If you still think you are right, go ahead; you are not able or willing to understand. Let me be clear: I think you are trolling, and I’m not in any mood to participate in this anymore.

                • Bobby Turkalino@lemmy.yachts · 8 months ago

                  Sorry, it’s just that I work in a field where making distinctions is based on math and/or logic, while you’re making a distinction between AI- and non-AI-based image interpolation based on opinion and subjective observation

              • abhibeckert@lemmy.world · 8 months ago

                You can safely ignore those tickets and the police won’t do anything

                Wait what? No.

                It’s entirely possible that if you ignore the ticket, a human might review it and find there’s insufficient evidence. But if, for example, you ran a red light and they have a photo that shows your number plate and your face… then you don’t want to ignore that ticket. And they generally take multiple photos, so even if the one on the ticket doesn’t identify you, that doesn’t mean you’re safe.

                When automated infringement systems were brand new the cameras were low quality / poorly installed / didn’t gather evidence necessary to win a court challenge… getting tickets overturned was so easy they didn’t even bother taking it to court. But it’s not that easy now, they have picked up their game and are continuing to improve the technology.

                Also - if you claim someone else was driving your car, and then they prove in court that you were driving… congratulations, your slap on the wrist fine is now a much more serious matter.

          • Catoblepas@lemmy.blahaj.zone · 8 months ago

            What’s your bank account information? I’m either going to add or subtract a lot of money from it. Both alter your account balance so you should be fine with either right?

      • Flying Squid@lemmy.world · 8 months ago

        normie-friendly

        Whenever people say things like this, I wonder why that person thinks they’re so much better than everyone else.

      • Catoblepas@lemmy.blahaj.zone · 8 months ago

        Has this argument ever worked on anyone who has ever touched a digital camera? “Resizing video is just like running it through AI to invent details that didn’t exist in the original image”?

        “It uses math” isn’t the complaint and I’m pretty sure you know that.

      • Kedly@lemm.ee · 8 months ago

        Bud, “hallucinate” is a perfect term for the shit AI creates, because it doesn’t understand reality, regardless of whether math is creating that hallucination or not.

  • emptyother@programming.dev · 8 months ago

    How long until we get upscalers of various sorts built into tech that shouldn’t have them? For bandwidth reduction, storage compression, or cost savings. Can we trust what we capture with a digital camera when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled inside the judges’ screens? Cheap security cams with “enhanced night vision” might get somebody jailed.

    I love the AI tech. But its future worries me.

    • Jimmycakes@lemmy.world · 8 months ago

      It will wild out for the foreseeable future, until the masses stop falling for the gimmicks. Then it will be reserved for the actual use cases where it’s beneficial, once the bullshit AI stops making money.

    • GenderNeutralBro@lemmy.sdf.org · 8 months ago

      AI-based video codecs are on the way. This isn’t necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That’s a good thing for film and TV, but a bad thing for, say, security cameras.

      The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.

    • MudMan@fedia.io · 8 months ago

      Not all of those are the same thing. AI upscaling for compression in online video may not be any worse than “dumb” compression in terms of loss of data or detail, but you don’t want to treat a simple upscale of an image as a photographic image for evidence in a trial. Sports replays and Hawk-Eye technology don’t really rely on upscaling; we have ways to track things in an enclosed volume very accurately now that are demonstrably more precise than a human ref looking at them. Whether that’s better or worse for the game’s pace and excitement is a different question.

      The thing is, ML tech isn’t a single thing, and it can be used very rigorously. Pretty much every scientific study you get these days uses ML to compile or process images or data, and that’s not a problem if done correctly. The issue is that everybody assumes “generative AI” chatbots, upscalers and image processors are all that ML is, and people keep trying to apply those things directly in the dumbest possible way, thinking it’s basically magic.

      I’m not particularly afraid of “AI tech”, but I sure am increasingly annoyed at the stupidity and greed of some of the people peddling it, criticising it and using it.

  • Voyajer@lemmy.world · 8 months ago

    You’d think it would be obvious you can’t submit doctored evidence and expect it to be upheld in court.

  • Neato@ttrpg.network · 8 months ago

    Imagine a prosecution or law enforcement bureau that has trained an AI from scratch on specific stimuli to enhance and clarify grainy images. Even if they all were totally on the up-and-up (they aren’t, ACAB), training a generative AI or similar on pictures of guns, drugs, masks, etc for years will lead to internal bias. And since AI makers pretend you can’t decipher the logic (I’ve literally seen compositional/generative AI that shows its work), they’ll never realize what it’s actually doing.

    So then you get innocent CCTV footage this AI “clarifies” and pattern-matches every dark blurb into a gun. Black iPhone? Maybe a pistol. Black umbrella folded up at a weird angle? Clearly a rifle. And so on. I’m sure everyone else can think of far more frightening ideas like auto-completing a face based on previously searched ones or just plain-old institutional racism bias.

    • UnderpantsWeevil@lemmy.world · 8 months ago

      just plain-old institutional racism bias

      Every crime attributed to this one black guy in our training data.

  • rustyfish@lemmy.world · 8 months ago

    For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022. The theory started because people started running screenshots of the slap through image upscalers, believing they could get a better look at what was happening.

    Sometimes I think, our ancestors shouldn’t have made it out of the ocean.

  • TheBest@midwest.social · 8 months ago

    This actually opens an interesting debate.

    Every photo you take with your phone is post-processed. Saturation can be boosted, light levels adjusted, noise removed, night mode applied, all without you being privy to what’s happening.

    Typically people are okay with it because it makes for a better photo. But is it a true representation of the reality it tried to capture? Where is the line in the definition of an AI-enhanced photo or video?

    We can currently make the judgement call that a phone’s camera is still a fair representation of the truth, but what about when the 4K AI-powered Night Sight camera does the same?

    My post is only tangentially related to the original article, but I’m still curious what the common consensus is.
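
    For what it’s worth, one way to frame that line: classic camera processing is a fixed, local function of the data the sensor captured. A toy sketch of a traditional saturation boost (my own simplified gray-axis model, not any phone’s actual pipeline):

```python
import numpy as np

def boost_saturation(rgb, factor):
    """Classic (non-AI) saturation boost: push each channel away from
    the pixel's own gray level. The output depends only on that pixel,
    so no detail from elsewhere in the scene is ever invented."""
    gray = rgb.mean(axis=-1, keepdims=True)  # crude per-pixel luminance
    return np.clip(gray + (rgb - gray) * factor, 0.0, 1.0)

pixel = np.array([[[0.6, 0.5, 0.4]]])   # one slightly warm pixel
vivid = boost_saturation(pixel, 2.0)    # roughly [[[0.7, 0.5, 0.3]]]
```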

  • Stovetop@lemmy.world · 8 months ago

    “Your honor, the evidence shows quite clearly that the defendant was holding a weapon with his third arm.”

  • AutoTL;DR@lemmings.world · 8 months ago

    This is the best summary I could come up with:


    A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial.

    And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

    Lawyers for Puloka wanted to introduce cellphone video captured by a bystander that’s been AI-enhanced, though it’s not clear what they believe could be gleaned from the altered footage.

    For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022.

    Using the slider below, you can see the pixelated image that went viral before people started feeding it through AI programs and “discovered” things that simply weren’t there in the original broadcast.

    Large language models like ChatGPT have convinced otherwise intelligent people that these chatbots are capable of complex reasoning when that’s simply not what’s happening under the hood.


    The original article contains 730 words, the summary contains 166 words. Saved 77%. I’m a bot and I’m open source!

  • brlemworld@lemmy.world · 8 months ago

    How does this work when you have a shitty Samsung that turns a pic of a pic of the moon into the Moon by adding details that weren’t there?

  • guyrocket@kbin.social · 8 months ago

    I think we need to STOP calling it “Artificial Intelligence”. IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.

      • exocortex@discuss.tchncs.de · 8 months ago

        On the contrary, it’s a very old buzzword!

        AI should be called machine learning instead; much better. If I had my way, it would be called “fancy curve fitting” henceforth.

        • Hackerman_uwu@lemmy.world · 8 months ago

          Technically speaking, AI is any effort to make machines mimic living things; computer vision, for instance. This is distinct from ML and deep learning, which train on historical statistical data and then forecast or simulate.

          • exocortex@discuss.tchncs.de · 8 months ago

            “machines mimicking living things” does not mean exclusively AI. Many scientific fields are trying to mimic living things.

            AI is a very hazy concept imho as it’s difficult to even define when a system is intelligent - or when a human is.

    • Gabu@lemmy.world · 8 months ago

      I do not consider guided pattern recognition to be intelligence.

      That’s a you problem, this debate happened 50 years ago and we decided Intelligence is the right word.

    • rdri@lemmy.world · 8 months ago

      How is guided pattern recognition different from imagination (and therefore intelligence), though?

      • Jesus_666@lemmy.world · 8 months ago

        Your comment is a good reason why these tools have no place in the courtroom: the things you describe as imagination are exactly the problem.

        They’re image generation tools that will generate a new, unrelated image that happens to look similar to the source image. They don’t reconstruct anything and they have no understanding of what the image contains. All they know is which color the pixels in the output might probably have given the pixels in the input.

        It’s no different from giving a description of a scene to an author, asking them to come up with any event that might have happened in such a location and then trying to use the resulting short story to convict someone.

        • rdri@lemmy.world · 8 months ago

          They don’t reconstruct anything and they have no understanding of what the image contains.

          With enough training they will, in fact, have some understanding. But that still leaves us with the “enhance” meme problem, a.k.a. the limited resolution of the original data. There are no means to discover what exactly was hidden between the visible pixels, only to approximate it. So yes, you are correct; I just described it a bit differently.

          • lightstream@lemmy.ml · 8 months ago

            they, in fact, will have some understanding

            These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.

            What they do not have (and IMO won’t ever have) is consciousness. The fact we have created machines that have understanding of the universe without consciousness is very interesting to me. It’s very illuminating on the subject of what consciousness is, by providing a new example of what it is not.

            • rdri@lemmy.world · 8 months ago

              I think AI doesn’t need consciousness to be able to say what is in a picture, or to guess what specific details might contain.

      • Natanael@slrpnk.net · 8 months ago

        There are a lot of layers in brains that are missing in machine learning. These models don’t form world models, some don’t have an understanding of facts, and they have no means of ensuring consistency, to start with.

        • rdri@lemmy.world · 8 months ago

          I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and wouldn’t be improved by much even if we made them closer to real “intelligence”.

          The point is that reconstruction itself can’t reliably produce missing details, not that a “properly intelligent” mind would be any better at it than current AI.

  • dual_sport_dork 🐧🗡️@lemmy.world · 8 months ago

    No computer algorithm can accurately reconstruct data that was never there in the first place.

    Ever.

    This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.

    Ever.

    If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, movement interpolation, and AI upscaling. It preemptively also includes any other future technology that aims to try the same thing, regardless of what it’s called.
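
    The pigeonhole argument behind this fits in a few lines: once two different scenes collapse to the same low-resolution pixels, no upscaler, however clever, can know which one was real; it can only pick (or invent) one of them. A toy NumPy sketch:

```python
import numpy as np

def downsample_2x(img):
    """Average each non-overlapping 2x2 block into a single pixel,
    exactly the kind of detail loss a low-res camera performs."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Two different high-res "scenes"...
a = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# ...that collapse to the identical low-res image [[0.5]].
# Any "enhancer" shown only the low-res version cannot recover
# which scene produced it; at best it guesses.
low_a, low_b = downsample_2x(a), downsample_2x(b)
assert not np.array_equal(a, b)
assert np.array_equal(low_a, low_b)
```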

    • KairuByte@lemmy.dbzer0.com · 8 months ago

      One little correction, digital zoom is not something that belongs on that list. It’s essentially just cropping the image. That said, “enhanced” digital zoom I agree should be on that list.

    • abhibeckert@lemmy.world · 8 months ago

      It preemptively also includes any other future technology that aims to try the same thing

      No it doesn’t. For example, you can, with compute power, correct for distortions introduced by camera lenses/sensors/etc. and drastically increase image quality. This photo of Pluto, for instance, was taken from 7,800 miles away - click the link for a version of the image that hasn’t been resized/compressed by Lemmy:

      The unprocessed image would look nothing at all like that. There’s a lot more data in an image than you can see with the naked eye, and algorithms can extract/highlight the data. That’s obviously not what a generative ai algorithm does, those should never be used, but there are other algorithms which are appropriate.

      The reality is every modern photo is heavily processed - look at this example by a wedding photographer, even with a professional camera and excellent lighting the raw image on the left (where all the camera processing features are disabled) looks like garbage compared to exactly the same photo with software processing:

      • dual_sport_dork 🐧🗡️@lemmy.world · 8 months ago

        None of your examples create new legitimate data out of whole cloth. They’re just making details that were already there visible to the naked eye. We’re not talking about taking a giant image that’s got too many pixels to fit on your display device in one go and just focusing on a specific portion of it. That’s not the same thing as attempting to interpolate missing image data. In that case the data was there to begin with; it just wasn’t visible due to limitations of the display or the viewer’s retinas.

        The original grid of pixels is all of the meaningful data that will ever be extracted from any image (or video, for that matter).

        Your wedding photographer’s picture actually throws away color data in the interest of contrast and to make it more appealing to the viewer. When you fiddle with the color channels like that and see all those troughs in the histogram that make it look like a comb? Yeah, all those gaps and spikes are original color/contrast data that is being lost. There is technically less data in the touched-up image than the original, and if you are perverse and own a high-bit-depth display device (I do! I am typing this on a machine with a true 32-bit-per-pixel professional graphics workstation monitor), you can actually stare at it and see the entirety of the detail captured in the raw image before the touch-ups. A viewer might not think it looks great, but how it looks is irrelevant from the standpoint of data capture.
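
        The histogram-comb effect is easy to verify yourself: push 8-bit tonal data through a contrast curve and count how many distinct levels survive. (A toy sketch; the pivot and stretch factor are illustrative, not the photographer’s actual edit.)

```python
import numpy as np

levels = np.arange(256)  # every possible 8-bit tonal value

# A crude contrast boost: darken below the pivot, stretch the rest.
# Clipping crushes shadows and highlights into single values, and the
# 1.8x stretch leaves gaps (the "comb" troughs) between output levels.
curve = np.clip((levels - 64) * 1.8, 0, 255).astype(np.uint8)

unique_in = len(np.unique(levels))   # 256 distinct input levels
unique_out = len(np.unique(curve))   # fewer distinct output levels
assert unique_out < unique_in        # tonal data has been lost
```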

        • Richard@lemmy.world · 8 months ago

          They talked about algorithms used for correcting lens distortions in their first example. That is absolutely a valid use case, and it extracts new data by making certain assumptions with certain probabilities. Your newly created law of nature is just your own imagination and is not the prevalent understanding in the scientific community. Quite the opposite: scientific practice runs exactly counter to your statements.

    • UnderpantsWeevil@lemmy.world · 8 months ago

      No computer algorithm can accurately reconstruct data that was never there in the first place.

      Okay, but what if we’ve got a computer program that can just kinda insert red eyes, joints, and plumes of chum smoke on all our suspects?

  • linearchaos@lemmy.world · 8 months ago

    Yeah, this is a really good call. I’m a fan of what we can do with AI, but when you start looking at those upscaled videos with a magnifying glass… it’s just making s*** up that looks good.

  • ChaoticNeutralCzech@lemmy.one · 8 months ago

    Sure, no algorithm is able to extract any more information from a single photo. But what about combining detail caught in multiple frames of video? Some phones already do this kind of thing, getting multiple samples for highly zoomed photos thanks to camera shake.

    Still, the problem remains that results from a cherry-picked algorithm, or outright hand-crafted pics, may be presented.
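
    That’s the key difference from single-image “enhancement”: every extra frame is a new physical measurement of the same scene, so stacking recovers real signal instead of inventing it. A toy 1-D sketch (synthetic data, fixed seed):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 16)  # the real (unknown) scene

# Each frame is the same scene plus independent sensor noise;
# unlike AI upscaling, every frame adds genuine information.
frames = [truth + rng.normal(0.0, 0.2, truth.shape) for _ in range(64)]

single_err = np.abs(frames[0] - truth).mean()                 # one noisy frame
stacked_err = np.abs(np.mean(frames, axis=0) - truth).mean()  # 64 stacked
assert stacked_err < single_err  # stacking recovers detail the noise hid
```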