Nope. I certainly have. It’s the same arguments I’ve been hearing from people dismissing AI alignment concerns for 10 years. There’s nothing new there, and it all maps onto exactly the wishful thinking I’m talking about.
Appealing to authority is useful. We all do it every day. And like I said, all it should do is make you question whether you’ve really thought about it enough.
Every single thing you’re saying has no bearing on how AI will turn out. None.
If 0 means “we figured it out” and 1 means “we go extinct”, here is what every possible history of “how the things that could have made us go extinct actually turned out” looks like:
1
01
001
0001
00001
000001
0000001
00000001
etc.
You are looking at 00000000 and assuming there can’t be a 1 next, because of how many zeroes have come before. But every extinction event is preceded by a run of non-extinction events.
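To make that concrete, here’s a toy simulation (my own construction, nothing from this thread; the 10% per-event risk is an arbitrary assumption): every observer who is still around has only ever seen zeros, yet the chance that the next bit is a 1 hasn’t budged.

    # Toy model: constant per-event extinction chance p. Observers who
    # saw eight 0s in a row learned nothing about bit nine, because
    # seeing any history at all requires having survived it.
    import random

    random.seed(0)
    p = 0.1                 # assumed per-event extinction probability
    trials = 100_000
    survived_8 = extinct_on_9th = 0

    for _ in range(trials):
        history = [1 if random.random() < p else 0 for _ in range(9)]
        if 1 not in history[:8]:            # observer saw "00000000"
            survived_8 += 1
            extinct_on_9th += history[8]    # ...then the ninth event

    print(extinct_on_9th / survived_8)      # ~0.1: the zeros lowered nothing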
But again, it is strange that you can spot an appeal to authority, yet not notice how much weaker an “appeal to the past” is.
This is not like the industrial revolution. You really should examine why “we figured other things out in the past” is such an appealing narrative to you that you’re willing to take the reassurance it offers over the clear evidence in front of you. But I’ll just quote Hofstadter (someone with enough qualifications that his opinion should make you seriously question whether you arrived at yours through wishful thinking or actual evidence):
“And my whole intellectual edifice, my system of beliefs… It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed. It felt as if not only are my belief systems collapsing, but it feels as if the entire human race is going to be eclipsed and left in the dust soon. People ask me, “What do you mean by ‘soon’?” And I don’t know what I really mean. I don’t have any way of knowing. But some part of me says 5 years, some part of me says 20 years, some part of me says, “I don’t know, I have no idea.” But the progress, the accelerating progress, has been so unexpected, so completely caught me off guard, not only myself but many, many people, that there is a certain kind of terror of an oncoming tsunami that is going to catch all humanity off guard.”
ChatGPT usage is a very poor metric. Anything interesting is happening via the API. Even the chat completions endpoint on its own still isn’t “ChatGPT”. None of the complaints about it getting “dumber” apply to API outputs. OpenAI doesn’t care about nerfing ChatGPT because it’s not their real product.
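To be concrete about what “via the API” means, here’s a minimal sketch using the openai Python SDK (the model name and prompts are placeholders): you pin the model and set the system prompt and sampling yourself, so product-side changes to ChatGPT never touch these outputs.

    # Minimal chat completions call. Unlike the ChatGPT product, nothing
    # here is wrapped in OpenAI's own system prompt or product settings.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4",     # pinned model, not "whatever ChatGPT runs today"
        temperature=0,     # your sampling choice, not the product's
        messages=[
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "Summarize attention in one sentence."},
        ],
    )
    print(resp.choices[0].message.content)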
It would not HAVE to do that; it’s just much harder to get it to happen reliably through attention alone, though not impossible. But offloading deterministic tasks like this to ordinary software, which handles them far better than an LLM does, is obviously the better solution.
But this solution isn’t “in the works”, it’s usable right now.
Working without Python:
It left out the only word with an f, flourish. (Just kidding, it left in unfathomable. Again… less reliable.)
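For contrast, the “ordinary software” version of that exact task, dropping every word that contains an f, is a few deterministic lines (a sketch; the sample sentence is mine):

    # Deterministic filter: no attention involved, so it cannot miss
    # "unfathomable" the way the LLM-only attempt did.
    def remove_words_with(text: str, letter: str) -> str:
        return " ".join(w for w in text.split() if letter.lower() not in w.lower())

    print(remove_words_with("one unfathomable flourish remains", "f"))
    # -> "one remains"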