• Admiral Patrick@dubvee.org · 7 months ago

    Well maybe stop shoving the tech that does that down everyone’s throats? Just a thought 🤷‍♂️

    • elshandra@lemmy.world · 7 months ago

      The best solution to any problem is to go back in time to before the problem was created, sure. That cat’s so far out of the bag, and it’s only going to multiply and evolve.

      • Admiral Patrick@dubvee.org · 7 months ago (edited)

        I mean, yeah that’s true, but harm reduction is also a thing that exists. Usually it’s mentioned in the context of drugs, but it could easily apply here.

        • elshandra@lemmy.world · 7 months ago

          Interesting take: addiction to the convenience provided by AI driving the need for more. I suppose at the end of the day it’s probably the same brain chemistry involved. I think that’s what you’re getting at?

          In any case, this tech is only going to get better and more commonplace. Take it, or run for the hills.

            • treefrog@lemm.ee · 7 months ago

            No, harm reduction would be recognizing that an object causes harm, that people will use that object anyway, and doing what we can to minimize the harms caused by that use.

            It’s less about addiction and brain chemistry than simple math. If harm is being caused, and it can be reduced, reduce it.

              • elshandra@lemmy.world · 7 months ago

              Ah, so more like self-harm prevention, gotcha.

              I guess like any tool, whether it is help or harm depends on the user and usage.

            • Admiral Patrick@dubvee.org · 7 months ago

            I’m heading for the hills then. I’m perfectly capable of thinking for myself without delegating that to some chatbot.

              • elshandra@lemmy.world · 7 months ago

              Everyone is. As time and tech progress, you’re going to find that it becomes increasingly difficult to avoid without going off-grid entirely.

              Do you really think corps aren’t going to replace humans with AI the moment they can profit by doing so? That states aren’t eventually going to do the same?

  • penquin@lemm.ee · 7 months ago

    Is that the same Microsoft company that has poured billions of dollars into that same thing they’re warning us about?

    • pop@lemmy.ml · 7 months ago

      Yes, this is the “we’re the good ones” flex. And anytime they do this, there has to be a big bad boogeyman elsewhere to blame without evidence or consequence.

    • AdamEatsAss@lemmy.world · 7 months ago

      Life is an equation. For a deepfake of a celebrity to be made, you need: 1. a celebrity, 2. a person to make the fake, 3. deepfake tech. You have to remove one of those from the equation to make it stop.

      • T156@lemmy.world · 7 months ago

        Except the horse is out of the bag. You cannot uninvent the technology any more than you can negate the other parts of that triangle.

      • photonic_sorcerer@lemmy.dbzer0.com · 7 months ago

        Or we can simply accept and restrict it like all the other dangerous things in our lives.

        We’ve opened Pandora’s box. There’s no going back.

  • blazera@lemmy.world · 7 months ago

    He also revealed that about nine months ago his team conducted a deep dive into how these groups are using AI to influence elections.

    “In just the last few months, the most effective technique that’s been used by Russian actors has been posting a picture and putting a real news organization logo on that picture,” he observed. “That gets millions of shares.”

    Information as we know it is over; people have access to the most devilish of AI technology: copy and paste.

    Wow, this article is bad.

    • 14th_cylon@lemm.ee · 7 months ago

      I think the point was that it’s easier and faster than ever to generate the image you put the logo on, not that it was a comment on “logo inserting technology.”

  • AutoTL;DR@lemmings.world (bot) · 7 months ago

    This is the best summary I could come up with:


    As hundreds of millions of voters around the globe prepare to elect their leaders this year, there’s no question that trolls will try to sway the outcomes using AI, according to Clint Watts, general manager of Microsoft’s Threat Analysis Center.

    Watts said his team spotted the first Russian social media account impersonating an American ten years ago.

    Initially, Redmond’s threat hunters (using Excel, of course) tracked Russian trolls testing their fake videos and images on locals, then moving on to Ukraine, Syria and Libya.

    Watts’ team tracks government-linked threat groups from Russia, Iran, China, plus other nations around the world, he explained.

    He also revealed that about nine months ago his team conducted a deep dive into how these groups are using AI to influence elections.

    Videos set in public with a well-known speaker at a rally or an event attended by a large group of people are harder to fake.


    The original article contains 624 words, the summary contains 151 words. Saved 76%. I’m a bot and I’m open source!