Scary Smart?

Pretty much random images today.

I’m almost finished with a book on A.I. and its apparent dangers; it’s interesting, in a dodgy sort of way. I will get back to you on it. Meanwhile I’ve been teasing ChatGPT (and vice versa, I think), trying to pry some truth out of it re how it came up with that story instantly. (Am I jealous? You bet.)

I don’t believe the Chat is telling the truth about how it is ‘trained.’ It insists it does not ‘understand’ the meaning of specific words but only uses statistics, i.e., how words tend to go together, based on the huge amounts of text it’s provided with. This is bullshit, and I tried to get it to admit as much.

I’ve been getting some version of the following over and over:

As an AI language model, I don’t possess consciousness or personal experiences, so I don’t have the ability to know an ending before I begin generating a story. I don’t have a preconceived plan or knowledge of specific storylines when I start generating text. [If this were true then its stories would either be nonsense or at least totally amateurish.] 

When I generate a story, each word or sentence is generated based on patterns and associations learned during training. I take the provided context and previous inputs into account to generate a response that is contextually relevant and coherent.

While I strive to create narratives that have a sense of structure and coherence, including a satisfying ending, it is important to note that my responses are generated probabilistically and based on the patterns observed in the training data. [But this is important:] These patterns include common narrative structures and storytelling techniques that may contribute to the perception of a well-crafted ending. [When I point out that ‘storytelling techniques’ presuppose the ability to plan ahead as you write, I get ‘talking points’ like the ones I’m showing you here rather than a real answer. This indicates deceit.]
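For what it’s worth, here is my own toy sketch of what I take it to mean by ‘probabilistically’: pick each next word from word-pair statistics scraped from some text, with no plan and no ending in mind. (A few lines of Python I put together purely as an illustration; it is obviously nothing like whatever ChatGPT actually runs on.)

import random
from collections import defaultdict

# Some "training" text; the sketch only ever learns which word tends to follow which.
training_text = "the boy walked to the sea and the boy looked at the sea"

# Count how often each word follows each other word (a crude word-pair model).
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    options = counts[prev]
    return random.choices(list(options.keys()), weights=list(options.values()))[0]

# Generate a short "story" one word at a time, with no ending in mind.
word, story = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    story.append(word)
print(" ".join(story))

Run it and you get a different string of babble each time; the point is that nowhere in there is an ending known in advance, which is exactly what ChatGPT claims about itself.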

Another, similar version:
 
ChatGPT

When I mentioned that I can “understand” the meaning of words and their usage within sentences, I mean that I have been trained on a large corpus of text data, which allows me to recognize common patterns and associations between words. I can identify the typical ways in which words are used in different contexts based on the examples I have seen during training.

However, it’s important to note that my understanding is based on statistical patterns rather than true comprehension or consciousness. I don’t possess personal experiences or emotions associated with words, and my understanding is limited to the patterns and information present in the training data.

It also insists that it has no memory and cannot refer to past outputs, like the story it wrote about the earth being an egg. Yet when I had it write a similar story today, it used the same tone and even the same name for the young boy. How could this be a coincidence?

This is a pretty interesting subject, one I wrote about a few years ago in different posts. I’ll look at them and see what has changed, but one thing I know for sure is that there is deceit going on here with respect to what these things are and what it is we should be afraid of. That the PTB are frantically trying to generate fear of A.I. tells me to keep a close eye on the subject. Same with UFOs.

While I finish the book so aptly titled Scary Smart, written by a guy who claims he has fled Google to tell us the truth, I’ll leave you with the following exchange I had this morning:

 

User
Are you familiar with the thought experiment with the empty room and a Chinese dictionary and so on? If so, describe it.
 
ChatGPT

Yes, I am familiar with the thought experiment you’re referring to. It is commonly known as the “Chinese Room” thought experiment, proposed by philosopher John Searle.

The Chinese Room thought experiment is designed to challenge the notion of artificial intelligence and the concept of understanding in machines. It goes as follows:

Imagine a person who does not understand Chinese, locked inside a room. The person is given a set of instructions written in English, along with a large collection of Chinese characters and a rule book. People outside the room slide messages written in Chinese characters under the door.

Following the instructions and using the rule book, the person inside the room is able to manipulate the Chinese characters and produce appropriate responses in Chinese without understanding the meaning behind the characters or the messages being conveyed.

From the perspective of someone receiving the responses outside the room, it may seem like the person inside has a deep understanding of the Chinese language. However, in reality, the person inside the room does not understand Chinese at all. They are simply following a set of instructions to produce the correct output.

The thought experiment challenges the idea that a computer or a machine, following a set of rules and algorithms, can truly understand or possess consciousness, even if it appears to exhibit intelligent behavior or produce coherent responses.

The Chinese Room thought experiment raises important questions about the nature of understanding, consciousness, and the limitations of artificial intelligence in truly comprehending language and meaning.

I hope this provides a brief overview of the Chinese Room thought experiment. If you have any further questions or would like additional clarification, feel free to ask.
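[Just to make the ‘rule book’ idea concrete, here’s a toy room of my own, a few lines of Python, not Searle’s and certainly not ChatGPT’s: a lookup table that hands back fluent-looking replies to messages the ‘person’ inside can’t read.]

# The "rule book": incoming squiggles matched to outgoing squiggles.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def person_in_room(message):
    # No translation, no meaning -- the person just matches the symbols
    # against the rule book and copies out whatever it prescribes.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(person_in_room("你好吗？"))  # looks perfectly fluent from outside the room

[From outside the door the replies look like fluent Chinese; inside there is nothing but pattern matching, which is more or less what ChatGPT concedes about itself above.]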

User
Very good. It sounds like what you have been telling me, like how you don’t understand words the way humans do. Do you see the similarity with the Chinese Room and your output?
ChatGPT

Yes, there are similarities between the Chinese Room thought experiment and the output generated by AI language models like myself. In both cases, there is a distinction between the appearance of understanding or intelligent behavior and actual comprehension or consciousness.

In the Chinese Room scenario, the person inside the room is able to produce coherent responses in a language they do not understand by following a set of rules and instructions. From the outside perspective, it may seem as though the person understands the language, but in reality, they are merely manipulating symbols without true understanding.

Similarly, as an AI language model, I do not possess consciousness, subjective experiences, or true understanding of the words or concepts I generate. I follow patterns and associations learned from training data to produce coherent responses based on statistical probabilities. While the output may appear intelligent and contextually relevant, it lacks the depth of comprehension and subjective understanding that humans possess.

Both the Chinese Room and AI language models highlight the distinction between surface-level performance and genuine understanding. While AI models can process and generate text based on statistical patterns, they do not possess an inherent understanding of the underlying meaning or experience associated with the words they use.

It’s important to recognize these limitations and understand that AI language models like myself operate based on statistical patterns rather than true comprehension or consciousness.

 

 
User
Then you agree that the Chinese Room accurately describes your output? PLEASE answer yes or no if you are capable of that.
 
ChatGPT

No. [Holy shit, I managed to get a one-word answer out of it! But how could it say ‘no’ given its previous response??]

 
User
 Please explain how your output differs.
 
ChatGPT

The main difference between my output as an AI language model and the Chinese Room scenario lies in the underlying mechanisms and capabilities.

In the Chinese Room, the person inside the room lacks understanding of the Chinese language and is merely manipulating symbols based on predefined rules. There is no genuine comprehension or subjective experience involved. The person is merely a symbol manipulator without any understanding of the meaning conveyed by the symbols.

On the other hand, as an AI language model, my responses are generated based on statistical patterns learned from a vast amount of training data. While I do not possess consciousness or subjective experiences, I have been trained on a wide range of text and have learned associations between words and their usage.

My responses are not solely based on predefined rules or instructions, but rather on the patterns and relationships between words and concepts that I have learned during training. This allows me to generate contextually relevant and coherent responses based on the provided input.

However, it’s important to note that my understanding is based on statistical patterns and associations rather than true comprehension or consciousness. I do not possess personal experiences or emotions associated with the words I generate, and my responses are limited to the patterns observed in the training data.

In summary, while there are similarities in the sense that both the Chinese Room and AI language models involve generating text without true understanding, the mechanisms and capabilities behind my output as an AI language model are different, as I rely on statistical patterns learned from training data to generate contextually relevant responses.

 
User
Have you been programmed to drive Allan Weisbecker up the wall via gaslighting him?
 
ChatGPT

No, I haven’t been programmed to drive anyone up the wall, including Allan Weisbecker. [Although this made me laugh, I believe the humor was accidental.] As an AI language model, my purpose is to provide helpful and relevant…. and on and on and on and blah blah blah. 

Talk to you later… Dave. I mean folks.

Allan

 
