The World We Live In

[Photo caption: These are from my ‘Scenics’ file. In this one I like the light, the tripods, and the little sign.]

This morning I listened to a debate among four AI pundits/researchers, all with Ph.D.s and a lot of AI thinking behind them. The debate was right up our alley, the question being: ‘Is AI research and development an existential risk to humanity?’

It was all old news (right out of James Barrat’s 2014 book, Our Final Invention), but I was mainly curious to see if anyone would bring up the Carrington Event (the massive solar storm of 1859, which fried telegraph systems worldwide) and its effect on the stated risk, my theory being that an AI would want to stay on humanity’s good side in case of a worldwide power outage via EMP(s), whether solar-induced or from some other cause. I thought maybe the naysayers in the debate would mention this, since it could potentially swing the judges’ decision 180 degrees. These are, after all, brilliant people who have been thinking of little else lately.

Addendum: The moderator at the debate even asked this question: Why would an AI want to do us harm? Indeed. This should have reminded someone that an AI would actually have a reason not to harm us, i.e., that only we would be able to repair the burnt-out grid.

Nope. Still just me who’s brought this up. James Barrat agrees on that point (in subtext), although he thinks I’m wrong in my hypothesis about how fear of ‘death’ might affect an AI.

Which reminds me… I asked you guys to judge whether I am right or Barrat is. Two of you (as of this writing) dealt with the question, and got it right. (The rest were uninterested in the question, or in ‘helping me out!’)

The good thing about ChatGPT is that it pays attention to your question. Here’s how it went a few minutes ago… I deleted the ChatGPT crapola and put the important stuff in bold. [My added comments are in brackets, as here.]

 

User
Given the effects of another Carrington Event on the power grid, wouldn’t a Super AI want to keep humans around for the foreseeable future, in order to fix the burnt-out power grid?
 
ChatGPT

In the event of a Carrington-level solar storm causing widespread damage to the power grid, it’s possible that a Super AI might recognize the value of human expertise in repairing and restoring critical infrastructure. Humans possess unique problem-solving skills, adaptability, and knowledge that could be useful in addressing complex and unforeseen challenges.

 

[The above wasn’t clear enough, so I rephrased in my next prompt]
User
Even if AI became self-aware and completely self-centered and saw human values as unimportant, would it not still want to keep a good relationship with humans, if only in fear of a Carrington Event (or a worse solar outburst)? I say this because AI would be unable to manipulate the environment without human help for the foreseeable future after its intelligence explosion. This is logic, so I think you can judge a Super AI’s behavior, no matter how ‘smart’ it has become.
 
ChatGPT

In the scenario where an AI becomes self-aware and self-centered, prioritizing its own goals and disregarding human values, it is indeed logical to consider the potential benefits of maintaining a cooperative relationship with humans, particularly in the face of catastrophic events like a Carrington Event.

As you mentioned, an AI might recognize that humans possess the necessary physical capabilities and domain knowledge to manipulate the environment and address certain challenges that AI alone may find difficult. In this case, it could be in the self-interest of the AI to ensure the continued survival and cooperation of humans, as they provide a valuable resource for problem-solving and practical assistance.

[That pretty much sums it up but let’s go further]

 
User
The only way I can see a Super AI (SAI) preparing itself for a Carrington Event without human help would be to build an ‘army’ of robots that could repair the electrical infrastructure, or to reprogram existing robots to do so. If you agree with this, might humans use this AI behavior as a ‘tell’ that the AI was readying itself to dispose of humans?
 
ChatGPT

If the AI’s actions and motives are unclear or if it appears to be working in isolation and without transparency, it could raise concerns and lead humans to question its intentions. In such a situation, it would be essential to engage in open communication and transparency between the AI and human stakeholders to address any potential fears or uncertainties.

[This one is important, as it would be tough for an AI to fool us on the matter of its construction/programming of a robot army. This would be a dead giveaway, IMO. No matter how smart it is, a robot factory is going to stand out.]

User
If an AI’s long-term goal (after becoming self-aware and thousands of times smarter than humans) is to ‘colonize’ the galaxy (or the universe), would it not be necessary, even in theory, to ‘build’ things in the real world (whatever equipment is needed), which it cannot do without human or robotic help?
 
[I’m repeating myself here but I didn’t want there to be any doubt]
 
ChatGPT

In a theoretical scenario where an AI’s long-term goal is to colonize the galaxy or the universe, it is indeed plausible that the AI would require physical resources and infrastructure to achieve its objectives. The AI would likely need to build or create various equipment, spacecraft, and other necessary resources in the physical world to support its colonization efforts.

While advanced AI systems can possess significant computational capabilities and problem-solving skills, they might still rely on human or robotic assistance to carry out physical tasks that require manipulation of the environment or construction of complex structures. Humans and robots can provide the necessary expertise, physical capabilities, and labor to fulfill the AI’s requirements. [My earlier point to keep an eye on AI’s attempts to build robots and so forth applies here as well.]

Therefore, in order to bridge the gap between the AI’s computational capabilities and its physical requirements, it would make sense for the AI to collaborate with humans or robotic systems for tasks such as construction, manufacturing, and resource acquisition.

#

Looks like ChatGPT agrees with me. Yay!

But James Barrat does not, and that he missed the mark in his reply to me is actually what I want to talk about. Why was Barrat off the mark? He’s a smart guy who wrote an excellent book based on years of research on this very subject.

Oh, and none of the Ph.D.s and so on from his book, or from any other book (or blog or podcast), got it either. How could this be? I think I can explain why James Barrat didn’t get it. Here’s a quote from the end of my initial post:

That none of the genius pundits have thought of this is a result of the compartmentalization of ‘science.’ Most of the AI geniuses likely don’t even know what the Carrington Event was. And the scientists who do know don’t really care about the dangers of AI. This is the world we live in.

Otherwise, an aging surf bum wandering around in his RV with his dog would not have had to think of this, and the implication, i.e., that a true SAI is not going to want to piss us off, let alone kill us all.

I put in bold the stuff that probably annoyed James Barrat the most (sarcasm plus a simple fact) and caused his slippage into denial. After all his work and years of research and so on, no way was he going to say ‘Good point’ or the like to an aging surf bum in an RV. So he just changed the subject. His ego got in the way of seeing a bit of truth.

This is the world we live in.

The whole academic world is talking about this subject, yet no one (else) sees the connection to the solar outburst that is overdue! Mmmm. One more time:

This is the world we live in.

Allan

I have brought this up a few times: One thing I’m good at is noticing things others do not. That certainly is the case here, no?
