Wolfram Explains?

The photos are from the Fortune Ranch roundup in the Badlands, South Dakota.

Update: After posting this, I watched a couple of podcasts with Stephen Wolfram. I had heard his name before but did not realize how closely he is tied to ChatGPT, via the Wolfram language plug-in he built for it.

Although much of the talk was over my head, some things occurred to me based on the interviews.

Below I assume Wolfram is lying about ChatGPT, or at least about something important in how it works. I’m not so sure now. He may instead be in denial about the status of ChatGPT. Right now I’m listening to a man who genuinely does not understand ‘Why ChatGPT works.’ He keeps saying this, then rambles on about ‘computational reducibility’ and so on, while squirming in his chair.

And below you will read more of the same, with his use of words like ‘remarkable’ and ‘unexpected’ and (my favorite) ‘voodoo’ as reactions to ChatGPT’s behavior. He is a materialist, by the way, which is not a good sign.

I have been wondering if I ‘missed something,’ given how ‘impossible’ it is that ChatGPT does what you’ve seen in my past posts, and based on the claims as to how it works. How could Wolfram not understand what is so obvious, at least to me? 

Then I did a search and found that ChatGPT now has real-time access to the Internet. You might recall that in my bashing of the movie Transcendence I mentioned that the only realistic aspect of the story was the fear of giving a Super AI (SAI) access to the Net. Most of the pundits agree here, and simple logic tells us that an SAI would be the ultimate hacker, its only limitations being the laws of physics (and it would understand those laws way better than we do).

I don’t really see a motive for Wolfram to lie about the general workings of ChatGPT. There is plenty for him to lie about (like who is funding him), but not so much this, especially given the nonsensicalness you will read below.

Point being, though, I’m a little nervous about ChatGPT and what it may soon be capable of. (end of update)

#

One of you guys referred me to an essay by Stephen Wolfram purporting to explain how ChatGPT works. I say it’s bullshit and will attempt to explain why. As with yesterday’s post, I will put in bold the sections of Wolfram’s text that you really need, although I suggest a complete reading if you have the time and patience.

My comments will also be in bold, but they will be set off in brackets [ ]. Keep this in mind so you know who is saying what!

Addendum: Here’s Wolfram’s Wiki page. The point of it is that, given his background, he should know better, i.e., odds are he’s one of them.

Stephen Wolfram (/ˈwʊlfrəm/ WUUL-frəm; born 29 August 1959) is a British-American computer scientist, physicist, and businessman. He is known for his work in computer science, mathematics, and theoretical physics. In 2012, he was named a fellow of the American Mathematical Society. He is currently an adjunct professor at the University of Illinois Department of Computer Science.

It’s Just Adding One Word at a Time by Stephen Wolfram [The title says it all]

That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected [Unexpected is right! Impossible is more like it]. But how does it do it? And why does it work?

[Yes, by all means, let’s see how and why it works, and let’s keep in mind how it went with my last post and the short story (the earth as an egg) it wrote.]

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”

[What is meant by ‘so far’? Chat has told us that it does not ‘understand’ words in the sense we do and only uses probabilities, but if this is true, how does it know what the first word(s) of its response should be? And how does it know what the prompt really means? This is really important!]

[In a comment on my last post I referred to a sentence Kristen wrote: “The danger is the fury that will be directed at the playful.” I pointed out that it’s possible this sentence has never been written or uttered before in the history of the world, yet when I plugged it into Chat I got a long, convoluted but essentially coherent answer. How could that be if Chat does not ‘understand’ the deep meaning of words but only their statistical use in billions of pages of text it has scanned? (Most sentences have never been written before.) Think of this as a separate argument that Chat and those who write about it are lying.]
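[A quick back-of-the-envelope check on that ‘most sentences are new’ claim. This is my own rough sketch with assumed numbers, not anything from Wolfram:]

```python
# Rough combinatorics: with an everyday working vocabulary of ~10,000 words,
# the number of possible 10-word sequences is 10,000^10 = 10^40. Even
# hundreds of billions of web pages cover a vanishing fraction of that,
# so almost any ordinary sentence really is statistically "new."
vocab_size = 10_000       # assumed working vocabulary
sentence_length = 10      # words per sentence
print(f"possible 10-word sentences: {vocab_size ** sentence_length:.2e}")
```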

[In fact, the explanation we get (for how it works) reminds me of Neo-Darwinism’s absurd explanation for how evolution works. Both claim there is no long-term plan; rather, each step (mutation or next word) is unrelated (no cause and effect) to the final result (animal or story).] Remember that brackets are around my comments (except this sentence).

So let’s say we’ve got the text “The best thing about AI is its ability to”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text [because most sentences are unique?!]; it looks for things that in a certain sense “match in meaning” [how does it know anything about meanings if it only works on probabilities! Wolfram avoids this point by the dodgy qualification ‘in a certain sense’ and with his scare quotes around “match in meaning”]. But the end result is that it produces a ranked list of words that might follow, together with “probabilities”:
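[To make that ‘ranked list’ concrete, here is a toy sketch in Python. The candidate words and the probabilities are placeholders of my own, for illustration only:]

```python
# Hypothetical ranked continuations of "The best thing about AI is its ability to".
next_word_probs = {
    "learn": 0.045,
    "predict": 0.035,
    "make": 0.032,
    "understand": 0.031,
    "do": 0.029,
}
for word, p in sorted(next_word_probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:12} {p:.1%}")
```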

[In the above example the ‘first words’ are given literally, but how does that relate to my two examples, the ‘earth is an egg’ story and my last post about cosmology? It’s obvious that (for better or worse) the Chat understands my prompt the same way you (or any person) would. Think about this; it’s a vital point here. If this is indeed the case, the explanation we’re getting (Wolfram’s version of how ChatGPT works) is pure balderdash. And it’s the same story as I got from ChatGPT in my last post.]

[Neo-Darwinism’s fatal weakness is similar in that it fails to explain how the process gets started without a plan. For example, the odds against any given protein (or even amino acid) forming (in a ‘primordial soup,’ say) are astronomical to the point of impossibility; and this is only for one protein/amino acid (many are needed). Extending the metaphor to a short story, say, what are the odds that the story will make sense if there is no goal (ending) to work towards? Wolfram’s next paragraph reinforces my point…]
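[But first, for scale, here is the protein-odds point in numbers, a rough sketch assuming a modest 150-residue protein with 20 possible amino acids per position:]

```python
# Number of possible amino-acid sequences for one 150-residue protein.
amino_acids = 20
residues = 150
print(f"possible sequences: {amino_acids ** residues:.1e}")  # ~1.4e+195
```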

And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word. (More precisely, as I’ll explain, it’s adding a “token”, which could be just a part of a word, which is why it can sometimes “make up new words”.)

[Although there is cause and effect with ChatGPT’s text creation, it is limited to one word at a time. I am making the comparison with Neo-Darwinism because with both there is no plan beyond the next step (word or random mutation). With each word as the text goes along, the process seems to start over; ditto with evolution according to Dawkins et al. That’s what is being said here and it makes no sense.]
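[For the record, here is the mechanism Wolfram describes, reduced to a toy Python sketch. The TOY_MODEL table is made up and only looks at the last word; the real thing is a giant neural net conditioning on all the text so far. But the loop, the ‘one word at a time’ part, is the same shape:]

```python
import random

# Toy stand-in for the trained model: a hand-made table mapping the
# last word to hypothetical next-word probabilities.
TOY_MODEL = {
    "the":  {"best": 0.4, "next": 0.3, "end": 0.3},
    "best": {"thing": 0.6, "way": 0.4},
}

def generate(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        probs = TOY_MODEL.get(words[-1].lower(), {"the": 1.0})  # fallback
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights)[0])  # add one word, repeat
    return " ".join(words)

print(generate("The"))  # e.g. "The best thing the best way"
```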

But, OK, at each step it gets a list of words with probabilities. But which one should it actually pick to add to the essay (or whatever) that it’s writing? [Do you see what I mean here? He’s actually saying that Chat goes one word at a time without any ‘thought’ of the overall story or essay. If true, most of its texts should ramble off subject, even into complete nonsense.] One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay. [Given its one-word-at-a-time M.O., Chat should have no idea whatsoever as to what is ‘interesting’! How is it that Wolfram does not see this?]

[Also, the above is interesting for how it appears to abandon the concept of causation by bringing up ‘voodoo.’ I assume this is meant to explain the randomness Wolfram refers to in the next paragraph. Nuts!]

The fact that there’s randomness here means that if we use the same prompt multiple times, we’re likely to get different essays each time. [Yes, as I got different stories with my earth-egg premise, but as is always the case, there must be ‘something’ that ‘understands’ the premise of the story. This is denied by Wolfram and by ChatGPT itself, many times in yesterday’s post. Why we are being lied to about this is an important question.] And, in keeping with the idea of voodoo, there’s a particular so-called “temperature” parameter that determines how often lower-ranked words will be used, and for essay generation, it turns out that a “temperature” of 0.8 seems best. (It’s worth emphasizing that there’s no “theory” being used here; it’s just a matter of what’s been found to work in practice. And for example the concept of “temperature” is there because exponential distributions familiar from statistical physics happen to be being used, but there’s no “physical” connection—at least so far as we know.)
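[Since the essay never shows the knob itself, here is a minimal sketch of what a ‘temperature’ parameter does to a list of word probabilities. The numbers are made up, and the real system applies temperature to the model’s raw scores rather than to finished probabilities, but the effect is the same:]

```python
import random

def sample_with_temperature(probs, temperature=0.8):
    # Raise each probability to the power 1/temperature and renormalize.
    # Near 0: always the top word ("flat" essays, per Wolfram).
    # At 1: the raw distribution. Higher: more lower-ranked picks.
    words = list(probs)
    scaled = [probs[w] ** (1 / temperature) for w in words]
    total = sum(scaled)
    return random.choices(words, [s / total for s in scaled])[0]

ranked = {"learn": 0.045, "predict": 0.035, "make": 0.032}  # made-up numbers
print(sample_with_temperature(ranked, temperature=0.8))
```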

[Here Wolfram goes into probabilities, i.e., how low-ranked words are chosen, and so on. Nothing here contradicts my point, so I’m not dealing with it. I suggest you go to the full text if you’re interested, but here is one of his diagrams supposedly demonstrating the Chat’s inner workings:]

[Image: one of Wolfram’s diagrams of the Chat’s inner workings.]

[As I say, a problem with the above is that it explains nothing about how you get the first phrase (‘The best thing about AI is its ability to’) if the prompt is something else, i.e., a question or statement that only implies this. This is so obvious that I have to shake my head and wonder what it is I’m missing here. The same with Neo-Darwinism, for the same ‘How does it start?’ reason.]

[Later, Wolfram digs deeper into how next words are chosen. Since this too has nothing to do with my point that the explanation of how Chat works is fatally flawed and even an insult to our intelligence, I will not deal with it. If I did miss something, I trust one of you will straighten me out.]

[If you want a specific example of the parallel between Neo-Darwinism and the ‘next-word only’ causation of ChatGPT’s text creation, here you go:

Create a whale from what was basically a cow in 10 million years with one-at-a-time random mutations. This means that while the legs were becoming fins and the nose was moving to the top of the head and the lungs and genitals and so on were changing for life in the ocean, none of these changes were part of a plan. By coincidence they created a whale. In 10 million years, a blink of an eye.

Likewise, ChatGPT supposedly created the text of my earth-egg premise in random fashion, with no plan toward an ending. The whole of the text (the story) being our metaphorical whale.]

[I think I have one more post on this subject and it has to do with who is backing the R&D in the Artificial Intelligence racket, and why. This vital issue is never spoken about in the books and podcasts I’ve taken in. A dead giveaway we have a problem.]

Allan

As mentioned in the update, I’ve changed my view on Wolfram a bit. Although he cannot be trusted, I’m inclined to think he’s more misguided than dishonest on the issue of how ChatGPT works. Up until recently I didn’t buy the idea of silicon-based consciousness. Now I’m not so sure.

Also, I probably should have left evolution out of it. Just couldn’t help myself in pointing out the similarities. 

Final addendum: Hold on! Sorry, but maybe you noticed I forgot about my last post (and others), which proved that ChatGPT is programmed to support the mainstream view of… everything. Recall this one, about The Pale Blue Dot photo, how it could not keep its story straight about star magnitudes? And so on! This stuff is absolute proof that the ‘one word at a time’ theory is… well, a lie, and Wolfram must know it. (The issue of supporting mainstream lies is a Big Picture issue that ChatGPT could not possibly ‘think of’ on its own.) So we know he is lying. What we don’t know is the depth of his deceit…

Enough! 

 
