In today’s episode, Yud tries to predict the future of computer science.

  • Amoeba_Girl@awful.systems · 17 points · 1 year ago

    Looking at this dull, aimless mass of text, I can understand why people like Yud are so impressed with ChatGPT’s capabilities.

  • zogwarg@awful.systems · 16 points · 1 year ago

    Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we’re comfortable hearing. I wish I could ask it what the hell people were thinking back then.

    I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to trust in common, shared lived experience to believe that a message was conveyed successfully.

    But noooooooo, magic AI can extract all the possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨bayes✨

    The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss function / training set pair seems to completely elude him.

  • corbin@awful.systems (OP) · 16 points · 1 year ago

    Yud tried to describe a compiler, but ended up with a tulpa. I wonder why that keeps happening~

    Yud would be horrified to learn about INTERCAL (WP, Esolangs), whose syntax requires politely asking the compiler (via the PLEASE modifier) to accept input. The compiler is expressly permitted to refuse input for being impolite or excessively polite.

    I will not blame anybody for giving up on reading this wall of text; I had to try maybe four or five times, fighting the cringe. The most unrealistic part is having the TA know any better than the student. Yud is completely lacking in the light-hearted brevity that makes this sort of Broccoli Man & Panda Woman rant bearable.

    I can somewhat sympathize, in the sense that there are currently multiple frameworks where Python code is intermixed with magic comments which are replaced with more code by ChatGPT during a compilation step. However, this is clearly a party trick which lacks the sheer reproducibility and predictability required for programming.
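
    A minimal sketch of how that party trick works, assuming a hypothetical “# ai:” magic-comment syntax and a stand-in complete() function where a real framework would call out to ChatGPT (none of these names are any real library’s API):

      import re

      MAGIC = re.compile(r"#\s*ai:\s*(?P<prompt>.+)")  # hypothetical marker syntax

      def complete(prompt: str) -> str:
          # Stand-in for the ChatGPT call a real framework would make here.
          raise NotImplementedError

      def expand(source: str) -> str:
          # Replace each magic comment with whatever code the model returns.
          out = []
          for line in source.splitlines():
              m = MAGIC.search(line)
              out.append(complete(m.group("prompt")) if m else line)
          return "\n".join(out)

    Note that nothing constrains two runs of expand() over the same source to return the same program, which is exactly the reproducibility problem.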

    Y’know, I’ll take his implicit wager. I bet that, in 2027, the typical CS student will still be taught with languages whose reference implementations use either of the following (workflow 1 is sketched after the list):

    1. the classic 1970s-style workflow of parsing, tree transformation, and instruction selection; or
    2. the classic 1980s-style workflow of parsing, bytecode generation, and JIT.
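
    For concreteness, a toy instance of workflow 1 (nothing more than a sketch): parse an arithmetic expression, constant-fold the tree, then select stack-machine instructions.

      import ast

      def compile_expr(src: str) -> list[str]:
          # The classic pipeline: parse -> tree transformation -> instruction selection.
          tree = ast.parse(src, mode="eval").body
          return select(fold(tree))

      def fold(node: ast.AST) -> ast.AST:
          # Tree transformation: constant-fold additions of two literals.
          if isinstance(node, ast.BinOp):
              node.left, node.right = fold(node.left), fold(node.right)
              if (isinstance(node.op, ast.Add)
                      and isinstance(node.left, ast.Constant)
                      and isinstance(node.right, ast.Constant)):
                  return ast.Constant(node.left.value + node.right.value)
          return node

      def select(node: ast.AST) -> list[str]:
          # Instruction selection for a tiny stack machine.
          if isinstance(node, ast.Constant):
              return [f"PUSH {node.value}"]
          if isinstance(node, ast.Name):
              return [f"LOAD {node.id}"]
          if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
              return select(node.left) + select(node.right) + ["ADD"]
          raise NotImplementedError(type(node).__name__)

      assert compile_expr("1 + 2 + x") == ["PUSH 3", "LOAD x", "ADD"]

    Same input, same instructions, every run; no sweet-talking involved.
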
    • Architeuthis@awful.systems · 10 points · 1 year ago

      I can somewhat sympathize, in the sense that there are currently multiple frameworks where Python code is intermixed with magic comments which are replaced with more code by ChatGPT during a compilation step. However, this is clearly a party trick which lacks the sheer reproducibility and predictability required for programming.

      He probably just saw a github copilot demo on tiktok and took it personally.

    • Charlie Stross@wandering.shop · 8 points · 1 year ago

      @corbin You missed the best bit: one of the current INTERCAL compilers, CLC-INTERCAL (for a superset of the language which adds a bunch more insanity) is implemented IN INTERCAL! It’s self-compiling. Also object-oriented, has quantum-indeterminate operators, and a computed COME FROM statement (also with quantum indeterminacy).

      I think we should organize a fundraiser to pay CLC-INTERCAL’s developer @Uilebheist to visit Yud and melt his brain.

    • datarama@awful.systems · 5 points · 1 year ago (edited)

      I am not a rationalist, but I feel like the most rational thing I could possibly do here would be to take your bet.

      If, in 2027, CS students are fighting LLM-powered hellspawn compilers that must be tricked, sweet-talked and threatened, I win the bet.
      If, in 2027, CS students are working with ordinary compilers with predictable, reproducible results, I win peace of mind.

      Either way, I win.

      • dr2chase@ohai.social · 4 points · 1 year ago

        @datarama @corbin The Go compiler requires reproducible builds based on a small set of well-defined inputs; if an LLM cannot give the same answer for the same question each time it is asked, then it is not compatible with use in the Go compiler. This includes optimizations: the bits should be identical. #golang
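
        A rough sketch of what that requirement means mechanically (a hypothetical check, assuming a single main package in the current directory):

          import hashlib
          import subprocess

          def build_hash(out: str) -> str:
              # -trimpath keeps local filesystem paths out of the binary, so the
              # output depends only on the declared inputs.
              subprocess.run(["go", "build", "-trimpath", "-o", out, "."], check=True)
              with open(out, "rb") as f:
                  return hashlib.sha256(f.read()).hexdigest()

          # Two builds from the same inputs must be bit-identical.
          assert build_hash("a.bin") == build_hash("b.bin")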

        • corbin@awful.systems (OP) · 6 points · 1 year ago

          Indeed, this is also the case for anything packaged with #Nix; we have over 99% reproducibility and are not planning on giving that up. Also, Nix forbids network access during compilation; there will be no clandestine queries to OpenAI.
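
          For what it’s worth, that determinism claim is checkable per package; something along these lines (the attribute name is just the usual example):

            import subprocess

            # nix-build --check rebuilds an already-built derivation and fails
            # if the fresh output is not bit-identical to the stored one.
            subprocess.run(
                ["nix-build", "<nixpkgs>", "-A", "hello", "--check"],
                check=True,
            )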

            • corbin@awful.systems (OP) · 8 points · 1 year ago

              Let me know @corbin@defcon.social if you actually get LLMs to produce useful code locally. I’ve done maybe four or five experiments and they’ve all been grand disappointments. This is probably because I’m not asking questions easily answered by Stack Overflow or existing GitHub projects; LLMs can really only model the trite, not the novel.

          • self@awful.systems (mod) · 4 points · 1 year ago

            it very much fucks with me that there’s a nixpkgs working group dedicated to making NixOS an attractive platform for running LLM code. I’m guessing it comes down to two things: nix actually is very good at packaging the hundreds of specifically pinned dependencies your average garbage LLM project needs to run, and operating in the functional programming space makes these asshole grifters feel smarter (see also all the folks who contributed nothing but crypto miners to nixpkgs during that bubble)

          • dr2chase@ohai.social · 3 points · 1 year ago

            @corbin I’m curious how they deal with the Go builder (not the compiler specifically) and all its signature-verification hoo-hah. There are ways around that (and those are consistent with “trust nobody”), but it’s not the usual case, and it’s not tested nearly as hard as the default path. You can use your own builder too, that’s also an option (and now I wonder: could we get the Go builder to export a “build plan” for other-tool consumption?)

        • Evinceo@awful.systems · 4 points · 1 year ago

          This reads like a PCJ comment, bravo. I’ll do one for Rust:

          If an LLM cannot insult the user for having the temerity to try to compile code, it is not compatible with use in the Rust compiler.

          • dr2chase@ohai.social · 2 points · 1 year ago

            @Evinceo PCJ? And (lack of) reproducibility really would be a problem for Go: the LLM would need to expose all its random seeds and not have any run-to-run-varying algorithms within it. This is not a joke or snark; the bits have to match on recompilation.

            • Evinceo@awful.systems · 3 points · 1 year ago

              PCJ -> Programming Circlejerk.

              I wasn’t expecting a serious treatment of this very silly idea, my mistake. I submit that it would cause enough difficult-to-diagnose bugs while you were just messing with it that you would never get into ‘but are the builds reproducible’ territory.

              • dr2chase@ohai.social · 2 points · 1 year ago

                @Evinceo there’s code generation, and there are optimization decisions. Optimization problems often have the property that their correctness is easily checked, but choosing the best solution is hard. Register allocation is the easy-to-understand example – if modeled as graph coloring, an incorrectly colored graph is trivial to detect.
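
                A sketch of that easy check, for the graph-coloring model of register allocation:

                  def coloring_ok(edges, color):
                      # Valid allocation: no two interfering variables share a register.
                      return all(color[a] != color[b] for a, b in edges)

                  # a interferes with b, and b with c; a and c may share r0.
                  assert coloring_ok([("a", "b"), ("b", "c")],
                                     {"a": "r0", "b": "r1", "c": "r0"})

                Verification is a linear scan even though finding a minimal coloring is NP-hard, which is why even an untrusted optimizer’s output can be checked.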

                So, sadly, not silly enough.

    • The Penguin of Evil@mastodon.social · 5 points · 1 year ago

      @corbin Probably still 5 years too soon, but I would hope the 2027 CS student will be taught the usual engineering flow: specification, formal verification and safety analysis, design, some coding, and what should be a tiny bit of debugging during validation at the end.

      Reproducibility is everything. If your binary isn’t an exact match for the previously tested copy, you are doing QA, not production.

    • Guy@hachyderm.io · 4 points · 1 year ago

      @corbin it’s a fucking _compiler_. What working or teaching programmer would accept “AI wrangling” in exchange for marginal improvements in the efficiency of the code that’s output? Just chuck some more compute at it…

    • RojCowles@techhub.social · 4 points · 1 year ago

      @corbin

      Heh “2030 : Computer Science departments across the globe are moved from the Sciences to Politics as under-grads no longer program computers they negotiate with them”

      He said, lifting ideas from a couple of sci-fi novels wholesale.

  • self@awful.systems (mod) · 14 points · 1 year ago

    holy fuck, programming and programmers both seem extremely annoying in yud’s version of the future. also, I feel like his writing has somehow gotten much worse lately. maybe I’m picking it out more because he’s bullshitting on a subject I know well, but did he always have this sheer density of racist and conservative dogwhistles in his weird rants?

    • Amoeba_Girl@awful.systems · 11 points · 1 year ago

      Yeah, typical reactionary spiral, it’s bad. Though at least this one doesn’t have a bit about how rape is cool actually.

  • sc_griffith@awful.systems · 13 points · 1 year ago

    this was actually mildly amusing at first, and then it took a hard turn into some of the worst rationalist content I’ve ever seen, largely presented through a black self-insert. by the end he’s comparing people who don’t take his views seriously to concentration camp guards

  • swlabr@awful.systems · 13 points · 1 year ago

    A meandering, low-information-density, holier-than-thou, scientifically incorrect, painful-to-read screed that is both pro- and anti-AI, in the form of a dialogue for some reason? Classic Yud.

    • swlabr@awful.systems · 11 points · 1 year ago (edited)

      FR: I originally thought this tweet was some weird, boomer anti-snowflake take, like:

      In good old days:

      Student: Why my compiler no read comment

      Teacher: Listen to yourself, you are an idiot

      Modern bad day:

      Student: Why my compiler no read comment

      Teacher: First, are your feelings hurt?

      It took me at least a few paragraphs to realise he was talking about talking to an AI.

      • froztbyte@awful.systems · 7 points · 1 year ago

        It took me at least a few paragraphs to realise he was talking about talking to an AI.

        can’t expect the 'ole yudster to not perform his one trick!

    • 200fifty@awful.systems · 10 points · 1 year ago

      yeah, my first thought was, what if you want to comment out code in this future? does that just not work anymore? lol

    • Soyweiser@awful.systems · 7 points · 1 year ago (edited)

      This is imho not a dumb semantics thing. While programming, these things are important. And it matters even more when you are teaching new people programming and they use the wrong terms. A Rationalist should know better!

    • zogwarg@awful.systems · 5 points · 1 year ago

      In such an (unlikely) future of build-tooling corruption, some actually plausible terminology (the second flavor is sketched below):

      • Intent Annotation Prompt (though sensibly, this should be for doc and validation-analysis purposes, not compilation)
      • Intent Pragma Prompt (though sensibly, the actual meaning of the code should not change, and it should purely be optimization hints)
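
      A sketch of how that second, pragma-style flavor might look in source (entirely hypothetical syntax; the point is that it is only a hint):

        # intent-pragma: hot loop, candidate for vectorization. A hint only;
        # the meaning of the code below must not depend on how (or whether)
        # any model reads this comment.
        def dot(xs: list[float], ys: list[float]) -> float:
            total = 0.0
            for x, y in zip(xs, ys):
                total += x * y
            return total
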
      • self@awful.systems (mod) · 9 points · 1 year ago

        a dull headache forms as I imagine a future for programming where the API docs I’m reading are still inaccurate autogenerated bullshit but it’s universal and there’s a layer of incredibly wasteful tech dedicated to tricking me into thinking what I’m reading has any value at all

        the headache vastly intensifies when I consider debugging code that broke when the LLM nondeterministically applied a set of optimizations that changed the meaning of the program and the only way to fix it is to reroll the LLM’s seed and hope nothing else breaks

        and the worst part is, given how much the programmers I know all seem to love LLMs for some reason, and how bad the tooling around commercial projects (especially web development) is, this isn’t even an unlikely future

          • self@awful.systems (mod) · 7 points · 1 year ago

            fucking hell. I’m almost certainly gonna see this trash at work and not know how to react to it, cause the AI fuckers definitely want any criticism of their favorite tech to be a career-limiting move (and they’ll employ any and all underhanded tactics to make sure it is, just like at the height of crypto) but I really don’t want this nonsense anywhere near my working environment

            • Sailor Sega Saturn@awful.systems · 8 points · 1 year ago

              I’ve seen a few LLM-generated C++ code changes at my work. Which is horrifying.

              • One was complete nonsense on its face and never should have been sent out. The reviewer was basically like “what is this shit”, only polite.
              • One was subtly wrong; it looked like that one probably got committed… I didn’t say anything because not my circus.

              No one’s sent me any AI-generated code yet, but if and when it happens I’ll add whoever sent it to me as one of the code reviewers if it looks like they haven’t read it :) (probably the pettiest trolling I can get away with in a corporation)

              • blakestacey@awful.systems (mod) · 6 points · 1 year ago

                I’m pretty sure that my response in that situation would get me fired. I mean, I’d start with “how many trees did you burn and how many Kenyans did you call the N-word in order to implement this linked list” and go from there.

            • zogwarg@awful.systems · 6 points · 1 year ago (edited)

              Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame.

              I agree that worse docs are a bad enough future, though I remain optimistic that including an LLM in the compile step is never going to be mainstream enough (or anything approaching stable enough, beyond some dumb, useless smoke and mirrors) for me to have to deal with THAT.

              • froztbyte@awful.systems · 4 points · 1 year ago (edited)

                This also fails as a viable path because of version shift (who knows what model version and which LLM deployment version the thing was at, etc etc), but this isn’t the place for that discussion I think

                This did, however, give me the enticing idea that a viable attack vector may be dropping “produced by chatgpt” taglines into things, as malicious compliance anywhere it might cause a process stall

            • froztbyte@awful.systems · 4 points · 1 year ago (edited)

              Eternal September: It’s Coming From Inside The House Edition

              I hear you on the issues of the coworkers though… already seen that overrun in a few spaces, and I don’t really have a good response to it either. just stfu’ing also doesn’t really work well, because then that shit just boils internally

  • Sailor Sega Saturn@awful.systems · 12 points · 1 year ago (edited)

    Reading this story, I just don’t understand why the main character doesn’t simply take a screwdriver to his annoyingly chatty office chair and download a normal, non-broken compiler.

    • Soyweiser@awful.systems · 9 points · 1 year ago

    One of the problems of being a new CS student is being at the mercy of your profs’/TAs’ knowledge of which tools etc. exist. Only later, with more experience, can they go ‘wow, I wonder why they made us use this weird programming language with bad tools while so much better stuff exists’; the answer is that the former was developed in-house and was the pride of some of the departments. Not that I’m speaking from experience.

  • David Gerard@awful.systems (mod) · 11 points · 1 year ago

    Eliezer Yudkowsky was late so he had to type really fast. A compiler was hiden near by so when Eliezer Yudkowsky went by the linter came and wanted to give him warnings and errors. Here Eliezer Yudkowsky saw the first AI because the compiler was posessed and operating in latent space.

    “I cant give you my client secret compiler” Eliezer Yudkowsky said

    “Why not?” said the compiler back to Eliezer Yudkowsky.

    “Because you are Loab” so Eliezer Yudkowsky kept typing until the compiler kill -9’d itself and drove off thinking “my latent space waifu is in trouble there” and went faster.

  • Architeuthis@awful.systems · 10 points · 1 year ago (edited)

    There’s technobabble as a legitimate literary device, and then there’s having randomly picked up that comments and compilers are a thing in computer programming and proceeding to write an entire parable, er, anti-wokism screed, er, interminable goddamn manifesto around them without ever bothering to check what they actually are or do beyond your immediate big-brain assumptions.

  • bitofhope@awful.systems · 9 points · 1 year ago

    TA: You’re asking the AI for the reason it decided to do something. That requires the AI to introspect on its own mental state. If we try that the naive way, the inferred function input will just say, ‘As a compiler, I have no thoughts or feelings’ for 900 words.

    I wonder if he had the tiniest of pauses when including that line in this 3062-word logorrhea. Dude makes ClangPT++ diagnostics sound terse.

    • bitofhope@awful.systems · 10 points · 1 year ago

      Oh fuck, I should not have read further: there’s a bit about the compiler mistaking color-space stuff for racism that’s about as insightful and funny as you can expect from Yud.

      • Architeuthis@awful.systems · 13 points · 1 year ago (edited)

        Yeah, once you get past the compsci word salad, things like this start to turn up:

        Student: But I can’t be racist, I’m black! Can’t I just show the compiler a selfie to prove I’ve got the wrong skin color to be racist?

        Truly incisive social commentary, and probably one of those things you claim is satire as soon as you get called on it.

        • self@awful.systems (mod) · 4 points · 1 year ago

          I’m tempted to read through it again just to pull out quotes for all the fucking embarrassing racial and political shit yud tried in this one, but I might need another shower just to stop feeling filthy afterwards

  • Soyweiser@awful.systems · 8 points · 1 year ago

    Personally I blame Musk for this; longer tweets and its consequences have been a disaster for the human race.