IBM researchers said a ChatGPT-generated phishing email was almost as effective at fooling people as a human-written version.

  • MysticKetchup@lemmy.world

    IBM researchers said a ChatGPT-generated phishing email was almost as effective at fooling people as a human-written version.

    So it’s less effective than a regular phishing email?

    • snooggums@kbin.social

      Yes, but being about the same means ChatGPT could be used to create massive amounts of personalized phishing emails at low cost and in very little time through automation. Basically doing what they do now, but even faster.

        • snooggums@kbin.social

          No, those ‘mistakes’ are part of the phishing tactic. It weeds out those that are paying too much attention to the details.

        • El Barto@lemmy.world

          Better spelling and punctuation is a bug, not a feature.

          Bad spelling = people who miss those may be easy to fool.

      • afraid_of_zombies@lemmy.world

        I wonder how that would work. The last one I did some checking into had a bitcoin address and it (I really don’t understand Bitcoin well) looked like the person moved the fake money from account to account over and over again.

    • FoundTheVegan@kbin.social

      And crafting a carefully targeted phishing email took a human team around 16 hours, they wrote, while ChatGPT took just minutes.

      This is significant because any person with the desire to scam can use ChatGPT from the comfort of their own home over lunch instead of hiring professionals for a few days.

      • dack@lemmy.world

        No, it’s significant because attackers can pump out way more emails while also making them customized to their targets and constantly changing to help avoid detectors.

  • Moobythegoldensock@lemm.ee

    And crafting a carefully targeted phishing email took a human team around 16 hours

    Ummm what? Back in college, I used to budget 30-45 minutes a page for essays. What the hell are they writing that took a team of people 16 fucking hours for a few paragraphs of text?

    • a1studmuffin@aussie.zone

      A targeted phishing email is usually pretty sophisticated and requires days or weeks of research. For example, you might send an email pretending to be from someone’s IT department regarding a hardware audit, and ask a user to report back with the barcode sticker on their laptop, providing them with a photo of an example tag in a similar format. You’ll pretend to be a specific individual at the company, or a contractor the company actually uses, show knowledge of the internal software and hardware, and refer to other real employees by name/email to establish trust. Most of this data will be scraped from publicly available sources like LinkedIn profiles, job listings, and photos shared on social media by employees. This process is called OSINT (Open-Source Intelligence), and it’s a fascinating rabbit hole to read about. Targeted phishing attempts are much, much more sophisticated than the ones you’ll see in spam email.

      • IphtashuFitz@lemmy.world

        This is pretty much what happened at the company I work for. The assistant to the CEO received an email that appeared like it came from the CEO requesting confidential financial information. The email contained mannerisms of the CEO, was sent when the CEO was out of the office, etc. The assistant almost fell for it… She would have if our mail system didn’t clearly flag external emails so that it’s obvious they weren’t sent internally.
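
        The kind of external-sender flagging that saved the assistant here can be sketched as a simple mail-filter step. This is a minimal sketch, assuming a hypothetical internal domain `example.com`; the function name and domain are illustrative, not taken from any particular mail system:

```python
import email
from email import policy

INTERNAL_DOMAIN = "example.com"  # hypothetical; substitute your org's domain

def tag_external(raw_message: str) -> str:
    """Prepend an [EXTERNAL] marker to the Subject of any message
    whose From address is outside the internal domain."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    sender = msg.get("From", "")
    # Crude domain extraction from the From header; production filters
    # also verify SPF/DKIM/DMARC rather than trusting this header alone.
    domain = sender.rsplit("@", 1)[-1].strip(" >").lower()
    if domain != INTERNAL_DOMAIN:
        subject = msg.get("Subject") or ""
        del msg["Subject"]  # no-op if the header is absent
        msg["Subject"] = "[EXTERNAL] " + subject
    return msg.as_string()
```

        A real mail gateway does much more than this (header authentication, lookalike-domain detection), but even a visible banner like this is often enough to break the illusion of an internal email.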

      • afraid_of_zombies@lemmy.world

        My old employer would get a call every few months from someone pretending to be our client, informing us we should change the banking information. No one could figure out how they learned there was a business relationship between the two companies, let alone who the financial person at my job was.

    • Lichtblitz@discuss.tchncs.de

      I guess they mean person-hours, since they’re referring to a team. An initial brainstorming session, a review session or two, and 16 hours are quickly gone.

    • cybersandwich@lemmy.world

      What the hell are they writing that took a team of people 16 fucking hours for a few paragraphs of text?

      An invoice full of billable hours.

  • Bogasse@lemmy.ml

    To be honest, phishing emails are so bad that I don’t see how any generative AI couldn’t be better. Just making fewer than two typos per sentence would be enough.

    Someone explained to me that it may be intentional that phishing emails are so bad: it acts as a pre-filter, so you only spend time and resources dealing with presumably very gullible people.

    • Artyom@lemm.ee

      The typos are intentional. They filter out intelligent recipients who wouldn’t fall for the scam.

      • hedgehog@ttrpg.network

        The typos have been theorized to be intentional (for that reason), but that isn’t the only theory, and afaik those theories aren’t based off conversations with the people crafting those emails.

        It’s also been theorized that phishing emails frequently have typos (intentionally) to lower people’s resistance to well-crafted phishing emails, particularly spear phishing.

        There’s also the fact that many phishing emails are crafted by people for whom English is not their first language, and even given that, phishing emails are still better written than spam emails, so it’s quite likely that in many cases it isn’t intentional at all.

  • ThatHermanoGuy@midwest.social

    Why haven’t people learned yet to simply never click a link in an email? Even if it’s not malicious, it’s still trying to track you.

  • AutoTL;DR@lemmings.world (bot)

    This is the best summary I could come up with:


    (tldr: 2 sentences skipped)

    Case in point, IBM researchers posted an internal study that details how they unleashed a ChatGPT-generated phishing email on a real healthcare company to see if it could fool people as effectively as a human-penned one.

    (tldr: 2 sentences skipped)

    “Humans may have narrowly won this match, but AI is constantly improving,” IBM hacker Stephanie Carruthers wrote of the work.

    “As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day.”

    Given these results, and with AI chatbots rapidly improving, what can individuals do against this inbox onslaught?

    IBM’s suggestions ranged from common sense, like calling the purported sender if something looks suspicious, to anemic, like looking out for “longer emails,” which they said are “often a hallmark of AI-generated text.”

    The bottom line, though, is just to use your common sense — and to prepare yourself for an internet that looks set to be rapidly overrun with AI-generated content, malicious or otherwise.


    The original article contains 250 words, the summary contains 163 words. Saved 35%. I’m a bot and I’m open source!

  • Rhoeri@lemmy.world

    The simple fact that people still fall for phishing scams is a great indicator that we’ve always been going nowhere.

    • RGB3x3@lemmy.world

      Phishing scams are getting really good these days. It’s no longer the Nigerian prince-type obvious scams.

      They make emails nearly identical to real ones, they’re able to spoof sender names, and they actually use proper English.

      If you think you wouldn’t fall for a phishing email, you’re kidding yourself. All it takes is one lapse of judgement while you’re too busy to realize an email is fake.

    • afraid_of_zombies@lemmy.world

      Oh please, you can’t be 100% mistrustful all the time. Eventually you’re going to slip up and assume good faith. This is why it’s important to stop people from doing this instead of blaming victims.

      Also, who knows how many people who do fall for these things are mentally disabled.