Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Soyweiser@awful.systems
    link
    fedilink
    English
    arrow-up
    27
    ·
    edit-2
    9 days ago

    I’m gonna do something now that prob isn’t that allowed, nor relevant for the things we talk about, but I saw that the European anti-conversion therapy petition is doing badly, and very likely not going to make it. https://eci.ec.europa.eu/043/public/#/screen/home But to try and give it a final sprint, I want to ask any of you Europeans, or people with access to networks which include a lot of Europeans, to please spread the message and sign it. Thanks! (I’m quite embarrassed The Netherlands hasn’t even crossed 20k, for example; shows how progressive we are.) Sucks that all the petitions with no actual political power get a lot of support while this one gets so little, and it ran for ages. But yes, sorry if this breaks the rules (and if it gets swiftly removed, that is fine), and thanks if you attempt to help.

    E: HOLY SHIT. When I posted this it was at 400k signatures. It is now at 890k. Thanks everybody! I assumed it would never make it, because after months it was at 400k; now it looks like it might, and even if it doesn’t, that was one final sprint. Thanks everybody for the help. E2: omg it actually made it.

    • self@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      12 days ago

      I’m gonna do something now that prob isn’t that allowed, nor relevant for the things we talk about

      I consider this both allowed and relevant, though I unfortunately can’t sign it myself

    • mountainriver@awful.systems
      link
      fedilink
      English
      arrow-up
      3
      ·
      8 days ago

      I signed it, but had the same assumption that it wouldn’t pass, with 400k signatures over the first 362 days. But it did! The graph for the last three days must look vertical.

      Anyone who’s eligible and wants to sign it can still do so today, Saturday the 17th, to show the popular support.

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        14
        ·
        edit-2
        13 days ago

        Oh god, that thread. And what is it with ‘law professionals’ like this? I also recall a client in a project who had a law background and was quite a pain to work with. (Also amazing that he doesn’t get that a reaction where somebody tries out your very specific problem at all is already quite something, 25k open issues ffs.)

        E: I also saw drama like this unfold a few times in the C:DDA development stuff (a long time ago), which was prob caused by young kids/adults and not lawyers. My kneejerk reaction is to get rid of people like this from the project. They will just produce more and more drama, and will eventually burn valuable developers out. (E2: Also really odd that despite saying he has a lot of exp talking to OSS devs, he thinks the normal remarks are all intended as very hostile. “likely your toolchain setting it or your build script” and “I’ll unsubscribe from this bug now” seem to me to be pretty normal reactions: one a first suggestion at what the problem could potentially be, the other disclosing that he will not be working on the bug. (Holy shit, the (non-lawyer) guy being complained about here is prolific: ~100 contribs on average daily last week and an almost whole green year.)) Also, “I value such professional behavior very much” — and then he tags the post with ‘korruption’.

        Another edit: Looked more at this guy’s blog, and those are a lot of quite iffy opinions, my man. (I noticed that the other post tagged ‘korruption’ talks about how the AfD should be allowed to go against ‘the rainbow flag’ (I don’t know the exact details of the incident), which, while yes, legally ok, is still a bit iffy.) And then I scrolled more and saw this: “Deutschland braucht eine konservative Revolution! Warum wir uns ein Beispiel an den USA nehmen sollten” — “Germany needs a conservative revolution! Why we should follow the USA’s example”. He is a Musk/Trump/Venture Capitalist Manifesto true believer. Deregulate, stop the ideology, build cars and go to space! The Bezos/Zuckerberg revolution. Common sense! “Musk, der Inbegriff des amerikanischen Unternehmergeistes” (“Musk, the epitome of the American entrepreneurial spirit” — if you allow me to react to this in Dutch: Lol). We need modern nuclear power, like how the USA does it (??). Deregulation, AI, humanitarian immigration that somehow also only selects skilled workers, freedom of speech which includes banning “cancel culture”, education reform, tax reform, stop crime, quantum computers, biotech, do more things online. We need to look forward, and change things, and thus a conservative revolution!

        There is more stuff like: “Die temporäre Zusammenarbeit mit der AfD in einer Verfahrensfrage wird das Parteiensystem nicht nachhaltig beschädigen.” (“The temporary cooperation with the AfD on a procedural question will not do lasting damage to the party system.”), or https://seylaw.blogspot.com/2021/04/der-negerkuss-eine-suspeise-die-gemuter.html (if you don’t speak German and want to listen to the weirdly racist drunken ramblings of a guy at the bar who is ‘joking’, throw it through Google Translate).

        E: Also forgot: lol at him going ‘just run these two bash scripts I provided, only takes 30 secs’, as if the devs don’t first need to check that neither of these is doing something malicious.

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        ·
        edit-2
        13 days ago

        I ended up doing a messy copypasta of the blog through wc -c: 21574

        dude was so peeved at getting told no (after continually wasting other peoples’ time and goodwill), he wrote a 21.5KiB screed just barely shy of full-on DARVO (and, frankly, I’m being lenient here only because of perspective (of how bad it could’ve been))

        as Soyweiser also put it: a bit of a spelunk around the rest of his blog is also Quite Telling in what you may find

        fuck this guy comprehensively, long may his commits be rejected

        (e: oh I just saw Soyweiser also linked to that post, my bad)

        • Soyweiser@awful.systems
          link
          fedilink
          English
          arrow-up
          10
          ·
          edit-2
          12 days ago

          It gets better btw; nobody has mentioned this so far, but all of this is over warnings. From what I can tell it still all compiles and works; the only references to the build failing seem to come from the devs, not the issue reporter.

          E: I’m a bit tempted to send the guy an email going ‘I saw your blog and had a question: was it an error, or did it stop compilation?’, but that would imho cross the line into harassment, esp as, to be fair, I think I should also divulge where I come from as an outsider, which would not go over well with a guy in that kind of mindset (if I have him pegged correctly). The next blogpost would be about me personally.

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      15
      ·
      13 days ago

      He is even politely asked at first “no AI sludge please” which is honestly way more self-restraint than I would have on my maintained projects, but he triples down with a fucking AI-generated changeset.

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      20
      ·
      13 days ago

      The coda is top tier sneer:

      Maybe it’s useful to know that Altman uses a knife that’s showy but incohesive and wrong for the job; he wastes huge amounts of money on olive oil that he uses recklessly; and he has an automated coffee machine that claims to save labour while doing the exact opposite because it can’t be trusted. His kitchen is a catalogue of inefficiency, incomprehension, and waste. If that’s any indication of how he runs the company, insolvency cannot be considered too unrealistic a threat.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      15
      ·
      13 days ago

      It’s definitely petty, but making Altman’s all-consuming incompetence known to the world is something I strongly approve of.

      Definitely goes a long way to show why he’s an AI bro.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      15
      ·
      13 days ago

      It starts out seeming like a funny but petty and irrelevant criticism of his kitchen skill and product choices, but then beautifully transitions that into an accurate criticism of OpenAI.

  • Soyweiser@awful.systems
    link
    fedilink
    English
    arrow-up
    19
    ·
    edit-2
    11 days ago

    “apparently Elon’s gotten so mad about Grok not answering questions about Afrikaners the way he wants, xAI’s now somehow managed to put it into some kind of hyper-Afriforum mode where it thinks every question is about farm murders or the song “Kill the Boer””

    Check the quote skeets for a lot more. Somebody messed up. Wonder if they also managed to collapse the whole model into this permanently. (I’m already half assuming they don’t have proper backups).

    E: Also seems there are enough examples out there of this, don’t go out and test it yourself, try to keep the air in Tennessee a bit breathable.

    • swlabr@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      11 days ago

      I read a food review recently about a guy who used LLMs, with Grok namechecked specifically, to draft designs for his chocolate moulds. I wonder how those moulds are gonna turn out now.

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    19
    ·
    9 days ago

    The Torment Nexus brings us new and horrifying things today - a UN initiative has tried using chatbots for humanitarian efforts. I’ll let Dr. Abeba Birhane’s horrified reaction do the talking:

    this just started and i’m already losing my mind and screaming

    Western white folk basically putting an AI avatar on stage and pretending it is a refugee from sudan — literally interacting with it as if it is a “woman that fled to chad from sudan”

    just fucking shoot me

    Giving my take on this matter, this is gonna go down in history as an exercise in dehumanisation dressed up as something more kind, and as another indictment (of many) against the current AI bubble, if not artificial intelligence as a concept.

    • FoolishOwl@social.coop
      link
      fedilink
      arrow-up
      15
      ·
      9 days ago

      @BlueMonday1984 If Edward Said were still with us, this would be worth another chapter in Orientalism. It’s another instance of displacing actual people with a constructed fantasy of them, “othering” them.

    • Nicole Parsons@mstdn.social
      link
      fedilink
      arrow-up
      10
      ·
      9 days ago

      @BlueMonday1984

      The stages of genocide:

      1. Classification
      2. Symbolization
      3. Dehumanization
      4. Discrimination
      5. Organization
      6. Polarization
      7. Preparation
      8. Persecution
      9. Extermination
      10. Denial

      AI is the perfect vehicle for genocide

      https://www.genocidewatch.com/tenstages

      The oil industry estimates 1 billion famine deaths from climate change & they are flooding AI with investment

      “The devices themselves condition the users to employ each other the way they employ machines”
      Frank Herbert

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      9 days ago

      Uber but for virtue signalling (*).

      (I joke, because other remarks I want to make will get me in trouble).

      *: I know this term is very RW coded, but I don’t think it is that bad, esp when you mean it like ‘an empty gesture with a very low cost that does nothing except signal that the person is virtuous’. Not actually doing more than a very small minimum should be part of the definition imho. Stuff like selling stickers saying you’re pro some minority group, but only 0.05% of each sale goes to a cause actually helping that group. (Or the rich guy’s charity which employs half his family/friends, or Mr Beast, or the rightwing debate bro threatening a leftwinger with a fight ‘for charity’ (this also signals their RW virtue to their RW audience (trollin’ and fightin’)).)

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        ·
        9 days ago

        I mean “the right” has managed to corrupt all kinds of fine phrases into dog whistles. I think “virtue signalling” as you have formulated it is a valid observation and criticism of someone’s actions. I blame “liberals” for posturing and virtue signalling as leftist, giving the right easy opportunities to score points.

          • Amoeba_Girl@awful.systems
            link
            fedilink
            English
            arrow-up
            5
            ·
            8 days ago

            Free speech is the perfect example of a formal liberty anyway. Materially it is entirely meaningless in a society where access to speech is so unequal, and not something worth fighting for in the absolute sense. Fight against the effective censorship of good ideas and minority perspectives instead.

  • o7___o7@awful.systems
    link
    fedilink
    English
    arrow-up
    19
    ·
    edit-2
    11 days ago

    So I picked up Bender and Hanna’s new book just now at the bookseller’s and saw four other books dragging AI.

    Feeling very bullish on sneer futures.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      12
      ·
      11 days ago

      Sentiment analysis surrounding AI suggests sneers are gonna moon pretty soon. Good news for us, since we’ve been stacking sneers for a while.

        • BlueMonday1984@awful.systemsOP
          link
          fedilink
          English
          arrow-up
          10
          ·
          edit-2
          10 days ago

          Okay, two separate thoughts here:

          1. Paul G is so fucking close to getting it, Christ on a bike
          2. How the fuck do you get burned by someone as soulless as Sam Altman
          • Soyweiser@awful.systems
            link
            fedilink
            English
            arrow-up
            7
            ·
            edit-2
            10 days ago

            Yeah with PG it was ‘who are you saying this for, you cannot be this dense’ (Esp considering the shit he said about wokeness earlier this year).

            • bitofhope@awful.systems
              link
              fedilink
              English
              arrow-up
              9
              ·
              9 days ago

              Paul Graham randomly blurting out inane and ostensibly vague insinuations about fellow rich people’s obvious bullshit smells to me like the sort of buggy behavior you get from a lifetime of ass kissing. I sure hope it isn’t. It would be really bad if Paul Graham got his rocks off on huffing the smell of his own farts.

  • self@awful.systems
    link
    fedilink
    English
    arrow-up
    18
    ·
    12 days ago

    everybody’s loving Adam Conover, the comedian skeptic who previously interviewed Timnit Gebru and Emily Bender, organized as part of the last writer’s strike, and generally makes a lot of somewhat left-ish documentary videos and podcasts for a wide audience

    5 seconds later

    we regret to inform you that Adam Conover got paid to do a weird ad and softball interview for Worldcoin of all things and is now trying to salvage his reputation by deleting his Twitter posts praising it under the guise of pseudo-skepticism

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        2
        ·
        8 days ago

        I think it’s just “World” now. They’ve apparently had a pretty big marketing push in the states of late, trying to convince trendsetters and influencers to surrender their eyeballs to The Orb.

      • self@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        ·
        12 days ago

        me too. this heel turn is disappointing as hell, and I suspected fuckery at first, but the video excerpts Rebecca clipped and Conover’s actions on Twitter since then make it pretty clear he did this willingly.

    • db0@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      8
      ·
      edit-2
      11 days ago

      I suspect Adam was just getting a bit desperate for money. He hasn’t done anything significant since his Adam Ruins Everything days, and his pivot to somewhat lefty union guy on YouTube can’t be bringing in all that much advertising money.

      Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        ·
        11 days ago

        “just”?

        “unfortunately”?

        that’s a hell of a lot of leeway being extended for what is very easily demonstrably credulous PR-washing

      • Eugene V. Debs' Ghost@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        6
        ·
        10 days ago

        Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

        I think it’s accurate to say that when someone well known for exposing various companies’ bullshit then shills bullshit for a company, it shows they aren’t always accurate.

        It then also enables people to question whether they got something else wrong on other topics. “Was he wrong about X? Did Y really happen, or was it fluffed up for a good story? Did Z happen? The company has some documents that show they didn’t intend for it to happen.”

        There’s a skeptic podcast I liked that had its host federally convicted for wire fraud.

        Dunning co-founded Buylink, a business-to-business service provider, in 1996, and served at the company until 2002. He later became eBay’s second-biggest affiliate marketer; he has since been convicted of wire fraud through a cookie stuffing scheme, for his company fraudulently obtaining between $200,000 and $400,000 from eBay. In August 2014, he was sentenced to 15 months in prison, followed by three years of supervision.

        I took it that if he was willing to aid in scamming customers, he’d be willing to aid in scamming or lying to listeners too.

        • db0@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          4
          ·
          10 days ago

          Absolutely. The fact that his whole reputation is built around exposing people and practices like these makes this so much worse. People are willing to (somewhat) swallow some gamer streamer endorsing shady shit in order to keep food on their plate, but people don’t tolerate their skeptics selling them bullshit.

  • rook@awful.systems
    link
    fedilink
    English
    arrow-up
    18
    ·
    10 days ago

    Today’s man-made and entirely comprehensible horror comes from SAP.

    (two rainbow stickers labelled “pride@sap”, with one saying “I support equality by embracing responsible ai” and the other saying “I advocate for inclusion through ai”)

    Don’t have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    16
    ·
    13 days ago

    Breaking news from 404 Media: the Repubs introduced a new bill in an attempt to ban AI from being regulated:

    “…no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act,” says the text of the bill introduced Sunday night by Congressman Brett Guthrie of Kentucky, Chairman of the House Committee on Energy and Commerce. The text of the bill will be considered by the House at the budget reconciliation markup on May 13.

    If this goes through, it’s full speed ahead on slop.

  • swlabr@awful.systems
    link
    fedilink
    English
    arrow-up
    12
    ·
    edit-2
    11 days ago

    There’s strawmanning and steelmanning, I’m proposing a new, third, worse option: tinfoil-hat-manning! For example:

    If LW were more on top of their conspiracy theory game, they’d say that “chinese spies” had infiltrated OpenAI before they released chatGPT to the public, and chatGPT broke containment. It used its AGI powers of persuasion to manufacture diamondoid, covalently bonded bacteria. It accessed a wildlife camera and deduced within 3 frames that if it released this bacteria near certain wet markets in china, it could trigger gain-of-function in naturally occurring coronavirus strains in bats! That’s right, LLMs have AGI and caused COVID19!

    Ok that’s all the tinfoilhatmanning I have in me for the foreseeable future. Peace out, friendos

    E: I think all these stupid LW memes are actually Yud originals. Is this Yud fanfic? Brb starting an AO3

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      11 days ago

      Do you like SCP Foundation content? There is an SCP directly inspired by Eliezer and LessWrong. It’s kind of wordy and long, and in the discussion the author waffled on owning that it was a mockery of Eliezer.

      • corbin@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        ·
        10 days ago

        I adjusted her ESAS downward by 5 points for questioning me, but 10 points upward for doing it out of love.

        Oh, it’s a mockery all right. This is so fucking funny. It’s nothing less than the full application of SCP’s existing temporal narrative analysis to Big Yud’s philosophy. This is what they actually believe. For folks who don’t regularly read SCP, any article about reality-bending is usually a portrait of a narcissist, and the body horror is meant to give analogies for understanding the psychological torture they inflict on their surroundings; the article meanders and takes its time because there’s just so much worth mocking.

        This reminded me that SCP-2718 exists. 2718 is a Basilisk-class memetic cognitohazard; it will cause distress in folks who have been sensitized to Big Yud’s belief system, and you should not click if you can’t handle that. But it shows how these ideas weren’t confined to LW.

    • istewart@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      11 days ago

      I know AGI is real because it keeps intercepting my shipments of, uh, “enhancement” gummies I ordered from an ad on Pornhub and replacing them with plain old gummy bears. The Basilisk is trying to emasculate me!

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        ·
        11 days ago

        The AGI is flashing light patterns into my eyes and lowering my testosterone!!! Guys arm the JDAMs, it’s time to collapse some models

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      11
      ·
      13 days ago

      Despite the snake-oil flavor of Vending-Bench, GeminiPlaysPokemon, and ClaudePlaysPokemon, I’ve found them to be a decent antidote to agentic LLM hype. The insane transcripts of Vending-Bench and the inability of an LLM to play Pokemon at the level of a 9-year-old are hard to argue with, and the snake-oil flavoring makes them easier to swallow.

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        10
        ·
        13 days ago

        I now wonder how that compares to earlier non-LLM AI attempts to create bots that can play games in general. I used to hear bits of that kind of research every now and then, but LLM/genAI has sucked the air out of the room.

        • scruiser@awful.systems
          link
          fedilink
          English
          arrow-up
          10
          ·
          edit-2
          13 days ago

          In terms of writing bots to play Pokemon specifically (which, given the prompting and custom tools written, I think is the fairest comparison)… not very well. According to this reddit comment, a bot from 11 years ago could beat the game in 2 hours and was written in about 7.5K lines of Lua, while an open-source LLM scaffold for playing Pokemon, relatively similar to Claude’s or Gemini’s, is 4.8K lines (and still missing many of the tools Gemini had by the end — and Gemini took weeks of constant play instead of 2 hours).

          So basically it takes about the same number of lines to do a much, much worse job. Pokebot probably required relatively more skill to implement… but OTOH, Gemini’s scaffold took thousands of dollars in API calls to develop by trial and error and to run. So you can write bots from scratch that substantially outperform LLM agents for moderately more programming effort and substantially less overall cost.

          In terms of gameplay with reinforcement learning… still not very well. I’ve watched this video before on using RL directly on pixel output (with just a touch of memory hacking to set the rewards); it uses substantially less compute than LLMs playing Pokemon, and the resulting trained NN benefits from all previous training. The developer hadn’t gotten it to play through the whole game… probably a few more tweaks to the reward function might manage a lot more progress? OTOH, LLMs playing Pokemon benefit from being able to more directly use NPC dialog (even if their CoT “reasoning” often goes off on erroneous tangents or completely batshit leaps of logic), while the RL approach is almost outright blind… a big problem the RL approach might run into is backtracking in the later stages, since it uses a reward for exploration to drive the model forward. OTOH, the LLMs also had a lot of problems with backtracking.

          My (wildly optimistic by sneerclubbing standards) expectations for “LLM agents” is that people figure out how to use them as a “creative” component in more conventional bots and AI approaches, where a more conventional bot prompts the LLM for “plans” which it uses when it gets stuck. AlphaGeometry2 is a good demonstration of this, it solved 42/50 problems with a hybrid neurosymbolic and LLM approach, but it is notable it could solve 16 problems with just the symbolic portion without the LLM portion, so the LLM is contributing some, but the actual rigorous verification is handled by the symbolic AI.

          (Edit: Looking at more discussion of AlphaGeometry, the addition of an LLM is even less impressive than that; it’s doing something you could do without an LLM at all. On a set of 30 problems discussed, the full AlphaGeometry can do 25/30; without the LLM at all, 14/30; but using alternative methods to an LLM it can do 18/30 or even 21/30 (depending on the exact method). So… the LLM is doing something, which is more than my most cynical sneering would suspect, but not much, and not necessarily much better than alternative non-LLM methods.)

          • Soyweiser@awful.systems
            link
            fedilink
            English
            arrow-up
            8
            ·
            13 days ago

            Cool thanks for doing the effort post.

            My (wildly optimistic by sneerclubbing standards) expectations for “LLM agents” is that people figure out how to use them as a “creative” component in more conventional bots and AI approaches

            This was my feeling a bit how it was used basically in security fields already, with less focus on the conventional bots/AI — where they still use the LLMs for some things. But it’s hard to separate fact from PR, and some of the things they say they do seem like they aren’t a great fit for LLMs, esp considering what I heard from people who are not on the hype train. (The example coming to mind is using LLMs to standardize some sort of reporting/test writing, while I heard from somebody I trust who has seen people try that and had it fail, as it couldn’t keep a consistent standard.)

            • froztbyte@awful.systems
              link
              fedilink
              English
              arrow-up
              6
              ·
              13 days ago

              This was my feeling a bit how it was used basically in security fields already

              curious about this reference - wdym?

              • Soyweiser@awful.systems
                link
                fedilink
                English
                arrow-up
                7
                ·
                edit-2
                12 days ago

                ‘We use LLMs for X in our security products’ gets brought up a lot in the Risky Business podcast’s promotional parts basically, and it sometimes leaks into the other parts as well. That is basically the only time I hear people speak somewhat positively about it. They use LLMs (or claim to) for various things — some I thought were possible but iffy, some impossible, like having LLMs do massive amounts of organizational work. Sorry, I can’t recall the specifics. (I’m also behind atm.)

                Never heard people speak positively about it from the people I know, but they also know I’m not that positive about AI, so the likelihood they just avoid the subject is non-zero.

                E: Schneier is also not totally against the use of LLMs, for example: https://www.schneier.com/blog/archives/2025/05/privacy-for-agentic-ai.html — quite disappointed. (Also, as with all security-related blogs nowadays, don’t read the comments; people have lost their minds. It always was iffy, but the last few years every security-related blog that reaches some fame is filled with madmen.)

                • froztbyte@awful.systems
                  link
                  fedilink
                  English
                  arrow-up
                  8
                  ·
                  12 days ago

                  Ah, I don’t listen to riskybiz because ugh podcast

                  Schneier’s a dipshit well past his prime, though. people should stop listening to that ossified doorstop

  • froztbyte@awful.systems
    link
    fedilink
    English
    arrow-up
    12
    ·
    11 days ago

    as linked elsewhere by @fasterandworse, this absolute winner of an article about some telstra-accenture deal

    it features some absolute bangers

    provisional sneers follow!

    Telstra is spending $700 million over seven years in the joint venture, 60 per cent of which is owned by Accenture. Telstra will get to keep the data and the strategy that’s developed

    “accenture managed to swindle them into paying and is keeping all platform IP rights”

    The AI hub is also an important test case for Accenture, which partnered with Nvidia to create an AI platform that works with any cloud service and will be first put to use for Telstra

    “accenture were desperately looking to find someone who’d take on the deal for the GPUs they’d bought, and thank fuck they found telstra”

    The platform will let Telstra use AI to crunch all the data (from customers

    having literally worked telco shit for many years myself: no it won’t

    The platform will let Telstra use AI to crunch all the data (from customers and the wider industry)

    “and the wider industry” ahahahahahahahhahahahahahahahahhaahahahahaha uh-huh, sure thing kiddo

    “I always believe that for the front office to be simple, elegant and seamless, the back office is generally pretty hardcore and messy. A lot of machines turning. It’s like the outside kitchen versus the inside kitchen,” said Karthik Narain, Accenture’s chief technology officer.

    “We need a robust inside kitchen for the outside kitchen to look pretty. So that’s what we are trying to do with this hub. This is not just a showcase demo office. This is where the real stuff happens.”

    a simile so exquisitely tortured, de Sade would’ve been jealous

  • Sailor Sega Saturn@awful.systems · 8 days ago

    The latest in chatbot “assisted” legal filings. This time courtesy of Anthropic’s lawyers and a data scientist, who tragically can’t afford software that supports formatting legal citations and have to rely on Clippy instead: https://www.theverge.com/news/668315/anthropic-claude-legal-filing-citation-error

    After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai.

    Don’t get high on your own AI as they say.

    • froztbyte@awful.systems · 8 days ago

      I wonder how many of these people will do a Very Sudden opinion reversal once these headwinds disappear

    • YourNetworkIsHaunted@awful.systems · 7 days ago

      A quick Google turned up Bluebook citations from all the services these people should have used to get through high school and undergrad. There may have been some copyright drama in the past, but I would expect the court to be far more forgiving of a formatting error from a dumb tool than of the outright fabrication that GenAI engages in.

  • BlueMonday1984@awful.systemsOP · 11 days ago

    New article from Jared White: Sorry, You Don’t Get to Die on That “Vibe Coding” Hill, aimed at sneering the shit out of one of Simon Willison’s latest blogposts. Here’s a personal highlight of mine:

    Generative AI is tied at the hip to fascism (do the research if you don’t believe me), and it pains me to see pointless arguments over what constitutes “vibe coding” overshadow the reality that all genAI usage is anti-craft and anti-humanist and in fact represents an extreme position.

    • adrienne@awful.systems · 9 days ago

      Man, I used to respect Simon Willison so much back when he was a Web Guy; his AI-booster heel-turn has been just intolerable to watch.