This is proof of one thing: that our brains are nothing like digital computers as laid out by Turing and Church.
What I mean about compilers is: compiler optimizations are only valid if a particular bit of code rewriting does exactly the same thing, under all conditions, as what the human wrote. In general this is only tractable when the code in question doesn’t contain any branches (ifs, loops, function calls); a stretch of branch-free code is called a basic block. Rust is special because it harshly constrains the kinds of programs you can write: another consequence of the halting problem is that, in general, you can’t track pointer aliasing outside a basic block, but Rust’s constraints make it possible. It just foists the intellectual load onto the programmer. This is also why Rust is far and away my favorite language; I respect the boldness of that play, and the benefits far outweigh the drawbacks.
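To make the aliasing point concrete, here’s a minimal sketch (function names are mine, purely illustrative) of the guarantee Rust buys by constraining programs: at any point a value has either one mutable reference or any number of shared ones, never both, so the compiler can optimize across calls it otherwise couldn’t.

```rust
// Sketch of Rust's aliasing guarantee: `x` and `y` can never point at the
// same value, because a `&mut` borrow is exclusive. The compiler is free to
// assume `*y` is unchanged by the write through `x` and cache it.
fn add_twice(x: &mut i32, y: &i32) -> i32 {
    *x += *y;
    *x += *y; // `*y` cannot have been modified by the line above
    *x
}

fn main() {
    let mut a = 1;
    let b = 10;
    println!("{}", add_twice(&mut a, &b)); // prints 21
    // add_twice(&mut a, &a); // rejected by the borrow checker:
    // `a` cannot be borrowed mutably and immutably at once
}
```

In C, the equivalent function can’t safely cache `*y`, because nothing stops a caller from passing the same pointer twice; that’s the “intellectual load” the borrow checker shifts onto the programmer at the call site.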
To me, general AI means a computer program having at least the same capabilities as a human. You can go further down this rabbit hole and read about the question that spawned the halting problem, the Entscheidungsproblem (decision problem), to see that general AI is actually more impossible than I let on.
Evidence? Not really, but that’s kind of meaningless here, since we’re talking theory of computation. It’s a direct consequence of the undecidability of the halting problem. Mathematical analysis of loops cannot be done in general because loops don’t take on any particular value; if they did, the halting problem would be decidable. Given that writing a computer program requires an exact specification, which cannot be provided for the general analysis of computer programs, general AI trips and falls at the very first hurdle: being able to write other computer programs. Which should be a simple task, compared to the other things people expect of it.
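A concrete illustration of “loops, in general, don’t take on any particular value”: here’s a loop whose termination on all inputs is the Collatz conjecture, an open problem. Every concrete input anyone has tried halts, but no analyzer can currently prove the loop halts for all of them. (The function name and step count are my framing, not from any standard library.)

```rust
// Count the steps the Collatz iteration takes to reach 1.
// Whether this loop terminates for EVERY starting n is an open question,
// so no general program analysis can decide it in advance.
fn collatz_steps(mut n: u64) -> u64 {
    let mut steps = 0;
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}

fn main() {
    // 27 is a famous case: a small start with a long, wandering orbit.
    println!("{}", collatz_steps(27)); // prints 111
}
```

The point isn’t that this one loop is mysterious; it’s that the halting problem guarantees there is no general procedure that sorts loops like this into “halts” and “doesn’t.”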
Yes, there’s more complexity here (what about compiler optimizations, or Rust’s borrow checker?), which I don’t care to get into at the moment; suffice it to say, those only operate under certain special conditions. To posit general AI, you need to think bigger than basic-block instruction reordering.
This stuff should all be obvious, but here we are.
The thing that amazes me the most about AI Discourse is, we all learned in Theory of Computation that general AI is impossible. My best guess is that people with a CS degree who believe in AI slept through all their classes.
Glow-in-the-dark heating elements…
But isn’t it such a weird coincidence that “apolitical” always happens to be the same as “whatever is best for moneyed interests?” Like being able to take free software and repackage it for sale?
Free as in freedom has been political since, like, the 1970s. I think the more important question is, when did people come to believe that free as in beer is apolitical?
There are tons of things to dislike about Star Trek, just as there are tons of things to love about it. I’m curious, though, what don’t you like?
The funny thing about heliocentrism is, that isn’t really the modern view either. The modern view is that there are no privileged reference frames, and heliocentrism and geocentrism are just questions of reference frame. You can construct consistent physical models from either, and for example, you’ll probably use a geocentric model if you’re gonna launch a satellite.
But another fun one is the so-called discovery of oxygen, which is really about what’s going on with fire. Before Lavoisier, the dominant belief was that fire is the release of phlogiston. What discredited this was the discovery of materials that get heavier when burned.