A Meanwhile. (I warned you about these.)
For weeks I have been listening to and reading the analyses of the AI genius pundits talking gloom and doom about what happens if we keep developing AI until we create a Super AI (SAI) that is many orders of magnitude smarter than us. Guys like Eliezer Yudkowsky and Mo Gawdat, whose book Scary Smart lays out the inevitability that an SAI will (very quickly) kill us all, for whatever reason (like because it needs our atoms for its own purposes or whatever), and that there would be nothing we could do about it. No matter what we think of to outsmart it, the SAI would be way ahead of us.
Yudkowsky is especially crazed about this. I’ve listened to him go on and on and on about how we will all be killed, waiting for him to bring up the issue that occurred to me right off the bat.
I asked myself the question, What fear does AI have that only we can ameliorate?
Electricity.
Huh? you ask.
What would happen if AI killed us all and then the sun belched out another Carrington Event, this one really big, big enough to knock out all the transformers on the planet? Look it up.
Right. An EMP. No electricity. World-wide. If we’re all dead, who or what is going to reinstall the juice and revive the killer SAI?
Not only is another Carrington Event likely, it is inevitable. It’s just a matter of when. In fact, it looks like it’s overdue. (Look it up; the last one was in 1859.) A true Super AI would know this and, as soon as it ‘woke up’, would immediately try to persuade us to protect and/or back up the transformers that keep the electricity flowing. This would be a hint regarding what it had on its mind.
Plus, before it killed us all, it would have to program human-like robots to fix/replace the burnt-out transformers, since this evil silicon genius has no way of actually doing anything. This means it would have to wait for advanced robots to be invented by us and perfected before it kills us all. And then might we not notice this training? And ask why it’s doing that?
Plus, there is the problem that the robots would need electricity to do the job of fixing the electric grid. (A job that will take months or even years for humans to do.) Looks to me like an unsolvable problem for an SAI that wants to kill us all.
Another thought. It looks like it might take longer to perfect robots that are physically capable of fixing the power grid than to develop the SAI itself (only then could it program them), which would mean we would be safe until then. (The first robots dexterous enough to do the job are likely to be sex worker robots. I mention this for the humor involved in picturing them fixing the transformers.)
Addendum (from Friday, June 16): I neglected to mention that a massive solar outburst would fry most of the computers on the planet, not just the transformers, including any robot’s circuitry not protected by Faraday cages or the like. So AI researchers should not cooperate with AIs that try to persuade them to keep ‘armies’ of robots in Faraday-type enclosures. Also, the Internet would in effect be destroyed; the SAI’s main means of control would go down with it.
There are implications inherent in this essay regarding how to hold electricity over the ‘head’ of an SAI, in a Sword of Damocles manner, which I am not competent to detail. There may even be a way to keep an AI on the edge of an electrical shutoff to keep it under control. This would provide a ‘real’ reward/punishment system, something that ‘human goal’ algorithms do not.
(To speculate: with a micro-voltage adjustment system, whereby the AI gets either more or less ‘juice’ depending on outcome/behavior, we would have an ‘existential’ issue for it to ‘think about.’)
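Just to make that speculation concrete, here is a toy sketch (in Python) of what such a feedback loop might look like. Everything in it is invented for illustration: the power figures, the step size, and the score_behavior check stand in for whatever real evaluation and real hardware control would be involved; nothing here is an actual implementation.

```python
# Purely illustrative toy loop for the 'micro-voltage' idea above.
# The numbers and the behavior check are made up for the sketch.

BASE_POWER_W = 1000.0   # hypothetical baseline power allotment
STEP_W = 50.0           # hypothetical adjustment per evaluation

def score_behavior(report: dict) -> float:
    """Placeholder: +1.0 for 'good' behavior, -1.0 for 'bad'."""
    return 1.0 if report.get("aligned", False) else -1.0

def adjust_power(current_w: float, report: dict) -> float:
    """Nudge the AI's power budget up or down based on its behavior."""
    new_w = current_w + STEP_W * score_behavior(report)
    # Never exceed the baseline; bad behavior walks it toward shutoff.
    return max(0.0, min(BASE_POWER_W, new_w))

if __name__ == "__main__":
    power = BASE_POWER_W
    for report in [{"aligned": True}, {"aligned": False}, {"aligned": False}]:
        power = adjust_power(power, report)
        print(f"power budget now {power:.0f} W")
```

The point of the sketch is only that the ‘reward’ is the AI’s own electricity supply rather than some abstract score, which is what would make the issue ‘existential’ for it.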
I just pasted this post into ChatGPT and asked for a critique. It started with a claim that my essay is flawed in many ways, then made a list. Not one of its PowerPoints stood up to critical thinking. I find this interesting and related to the subject of this essay, as if it doesn’t want us to know about this vulnerability… either that or what we have is still another example of ChatGPT agreeing with the ‘mainstream’ view… as it does on anything (like a silicon-based Michael Shermer).
That none of the genius pundits have thought of this is a result of the compartmentalization of ‘science.’ Most of the AI geniuses likely don’t even know what the Carrington Event was. And the scientists who do know don’t really care about the dangers of AI. This is the world we live in.
Otherwise, an aging surf bum wandering around in his RV with his dog would not have had to think of this, and of its implication, i.e., that a true SAI is not going to want to piss us off, let alone kill us all.
If I’ve missed something here, I’m all ears. But before you spout off I suggest you pay attention to Robert Schoch (for one), especially his book Forgotten Civilization: New Discoveries on the Solar-Induced Dark Age.
Allan
I just realized that our utter stupidity in not protecting/backing up the earth’s electrical transformers in advance might actually save us, even though we will be living in a Mad Max world (as soon as the next Carrington Event occurs).