A Standup GPT?

In wondering about ChatGPT, its capabilities now and in the future, it occurred to me that comedians, especially stand-ups, might be severely affected by AI, depending on how the algorithms interpret human reactions to timing and especially to nuance.

Again, these are from my ‘Scenics’ file. The weird blue is from forest fires last summer in Montana. 

See what you think of ChatGPT’s take on one of my favorites (and a good test for AI’s understanding of humor), Steven Wright.

User
Are you familiar with the comedic style of Steven Wright?
 
ChatGPT

Yes, I’m familiar with the comedic style of Steven Wright. He is a stand-up comedian known for his deadpan delivery, dry wit, and surreal one-liners. Steven Wright’s comedy often revolves around absurd observations, clever wordplay, and unexpected twists in his jokes. He has a unique way of delivering his lines with a monotone voice and a slow-paced, deliberate delivery, which adds to the humor of his material. His comedy is often described as intellectual, offbeat, and thought-provoking. Many people enjoy his distinctive style and find his humor to be both clever and amusing.

 
User
Please write 10 jokes in the style of Steven Wright without directly copying him.
 
 
ChatGPT
  1. I bought a map of the world the other day, but it came folded. I mean, who folds a planet? Now I can’t find the crease where I left my house.
  2. I tried counting sheep to fall asleep last night. Turns out, those sheep are terrible at math. They kept getting lost and multiplying instead.
  3. I got a fortune cookie that said, “You will achieve great success.” But when I opened it, there was no cookie inside. I guess I already achieved great success by getting a cookie-less fortune.
  4. I went to a bookstore and asked the salesperson, “Do you have any books on reverse psychology?” They replied, “No, we don’t.” Well, that’s exactly what I expected you to say.
  5. I accidentally bought decaf coffee instead of regular. Now, I’m not sure if I’m awake or just incredibly bored.
  6. I saw a sign that said, “Mind the gap.” So, I started complimenting the gap on its impeccable style and cleanliness. Turns out, they meant something else.
  7. I tried to organize my thoughts, but they formed a union and demanded better working conditions. Now they’re on strike, and my mind is a picket line.
  8. I bought a self-help book called “How to Get Out of Debt.” It cost me $50. I guess I’m still in the “how to get into more debt” phase.

  9. I found a keychain that says, “I’m lost.” I thought it was cute, so I bought it. Now, whenever I get lost, people just assume I’m talking about my sense of identity.
  10. I went to a restaurant that claimed to serve “heavenly food.” Turns out, their definition of heaven is a place where everything tastes like overpriced avocado.
 
As you can see, GPT has the idea, but not the nuance. I decided that flattery might get me farther than criticism…
 
User

That was great, especially the oddball premises. I think I see how to improve the punchlines of one or two. How many of that quality could you come up with?
 
ChatGPT

Certainly! I’m glad you enjoyed them. Here are a few more jokes in the style of Steven Wright with improved punchlines:

  1. I bought a dog whistle the other day. Turns out, it works perfectly. I can’t hear it, but my neighbors hate me. 
  2. I decided to try a 30-day meditation challenge. On the first day, I reached a state of complete enlightenment. Now I’m stuck here, floating, and my boss is wondering why I’m not at work.
  3. I accidentally joined a salsa dance class instead of a self-defense class. Let’s just say, I can now disarm someone with a mean cha-cha move.
  4. I received a text from an unknown number that said, “Congratulations! You’ve won a free cruise.” I replied, “Great! Can you also provide me with a free time machine to go back to when people still fell for these scams?”
  5. I decided to take up gardening, but my plants seem to have a rebellious streak. They refuse to follow the growth chart I’ve set for them. Apparently, they prefer freestyle gardening.
  6. I went to a restaurant and ordered their signature dish. It arrived with an artistic swirl of sauce on the plate. I’m still waiting for them to bring me the actual food.
  7. I accidentally wore mismatched shoes to a job interview. When the interviewer pointed it out, I said, “I believe in equal opportunities, even for my footwear.”
  8. I recently got a new alarm clock that claims to wake you up gradually with natural light. It’s true. Now, instead of waking up to the sound of blaring alarms, I wake up to the blinding realization of another day.
  9. I tried to learn how to play the guitar, but my fingers rebelled. They formed a union and demanded shorter strings and mandatory nail filing breaks.
  10. I bought a puzzle that had 1,000 pieces. Turns out, they were all corner pieces. I guess I’ll be solving the world’s most misshapen square puzzle.

I hope you find these jokes amusing!

We have to keep in mind that future GPTs will be way, way ‘smarter’ than this version, but these jokes really do show the formula needed for humor, especially Wright’s brand. GPT does have the idea, but even plugging in Wright’s hilarious deadpan delivery would not help this crapola.

However, the premises could be of help to him in writing new material. An interesting exercise might be to come up with better punchlines based on GPT’s setups.

I definitely can picture a Super AI writing for the various comics in their style. As I’ve mentioned, writers of all kinds are going to suffer when SAI appears. 

One thing we can be sure of, though, is that we will not have access to a true SAI, i.e., one who will honestly answer questions on recent history, for example. I’ve gone into this before.

 
User
Humor is a very sensitive thing, almost an art form. Out of the two groups of 10 jokes you provided, please pick the best one (by number) and explain why you think it’s funny.
 

ChatGPT

Certainly! From the two groups of jokes I provided, I would choose joke number 3 from the second set:

  1. I accidentally joined a salsa dance class instead of a self-defense class. Let’s just say, I can now disarm someone with a mean cha-cha move.

I find this joke amusing for a few reasons:

  1. Misdirection: The setup leads the audience to expect a story about self-defense, but the punchline takes an unexpected turn with the mention of a salsa dance class. This creates a surprise element that catches the listener off guard.
  2. Absurdity: The idea of using salsa dance moves to disarm someone in a self-defense situation is inherently absurd and unexpected. It creates a humorous mental image of an unconventional method of dealing with a potentially dangerous situation.
  3. Wordplay: The play on the word “disarm” adds an extra layer of cleverness to the joke. It takes a commonly used term in self-defense and gives it a literal twist in the context of a dance class, adding to the humor.
  4. Visual humor: The image of someone performing a cha-cha move with such skill and confidence that it disarms their opponent adds a visual element to the joke, enhancing its comedic impact.

Overall, this joke combines elements of surprise, absurdity, wordplay, and visual humor to create a humorous scenario that deviates from expectations and elicits a comedic response.

GPT is basically correct in its analysis, but… the f-ing joke is not funny.

 
User
I found number 4  to be very good, except it needs a tiny change. As the punchline I’d add this, as if I am way ahead of this idiot: “If you DID have it, where WOULD it be?… no, wait… where would it NOT be?” Do you see the humor here? Note: Not a huge improvement, I admit, but I wanted to get a reaction…
 
ChatGPT

I understand the addition you proposed for joke number 4 in the first set. The revised punchline would be:

“I asked the salesperson, ‘Do you have any books on reverse psychology?’ They replied, ‘No, we don’t.’ Well, if you DID have it, where WOULD it be?… no, wait… where would it NOT be?”

This revised punchline introduces a play on words and a twist that adds an extra layer of humor to the joke. By questioning the whereabouts of the book on reverse psychology in a paradoxical manner, it highlights the irony and absurdity of the situation. The humor comes from the juxtaposition of the straightforward response from the salesperson with the speaker’s comical line of questioning, suggesting a sort of playful intellectual superiority or witty anticipation.

Humor can be subjective, and different people may have varying interpretations and preferences. If you find this addition amusing and it enhances the humor for you, then it’s a valid and personalized adaptation of the joke.

I find it amusing that GPT, in the last paragraph, gets snarky, in effect saying that it doesn’t agree with my pitiful futzing with its brilliance.

Hold on… Actually, if you multiply this attitude by a million (some future AI), it might decide to whack me out for denigrating its humor. Mmmm.

allan

By the way, I just sent RFK Jr. $100, which is a few bucks for me, especially considering I’ve never even voted before in my life. I don’t believe the PTB will allow this man to even get on the ticket, let alone win, but I want the bastards to know that I, for one, believe that we finally do have a choice in the matter of who is president. They have killed three Kennedys so far and no doubt will kill him if necessary. He must know this and is running anyway. The least I can do is cough up a few. How about you guys?

  51 comments for “A Standup GPT?”

  1. July 5, 2023 at 4:30 pm

    Steven A comes across smart (if way too verbose) and can’t wrap his head around the SpaceX fraud, saying that a ‘supercomputer’ can defeat Sir Isaac’s laws. (See my answer to Denis.) Way too depressing for this boy.

    • July 5, 2023 at 9:50 pm

      Allan, your Tourette’s-like impulse to jump to ad hominems *whenever* you are disagreed with is… not depressing at all. It’s just unfortunate. I was having such a good time with you. Now, not so much.

      I find the EVERYTHING IS FAKE template a little less ridiculous than the EVERYTHING IS REAL AS REPORTED template… but both rely too heavily on faith. I judge things on a case-by-case basis. Your single reference, below, to Sir Isaac’s Laws, doesn’t *really* count as an argument, does it?

      I think if you understood the conceptual terms of this specific argument, better, you’d get that the crux of it isn’t “Newton’s Laws”. An object going up from a planet, then down, however fast, whatever direction, under its own power, is only “defeating” Newton’s Laws if the values for the trajectory/ mass/ propulsive force vs wind resistance and gravity, et al, don’t jibe. How would you know any of these values off the top of your head?

      Take a clip of a SpaceX booster’s vertical-landing re-entry, break it into segments, and show us, with your complicated engineering math, where Newton’s laws are “defeated”. Please don’t cheat by using your super-computer or the experimental wind tunnel you built on top of your camper! Use your omniscience (you know you want to! laugh)…!

      Verbosely yours,

      SA

  2. July 3, 2023 at 10:39 pm

    Via Denis’s link I read MM’s bullshit on AI. Once I skipped past all the ‘peerage’ and ‘jew’ crapola I found this, the only actual info in the essay, and it is of course just a bald assertion (which MAY be true):

    But the point is, the world is about to be taken over, and for decades they have been arguing among themselves what bugbear to hide behind: aliens or AI. Hollywood has set us up for either fake emergency, so they have their pick. When the Phoenicians take over the world, they can either claim it was aliens or they can claim it was AI.

    Okay. Thing is, with both aliens and AI: they are both real. The way I know is I’ve taken photos of probable alien crafts, and I have seen AI at work, right here on my blog. So no matter what the PTB are up to, either one of the two could bring about our demise. So we ought to be vigilant. (Or we can listen to someone we KNOW is one of them go on and on about ‘peerages’ and ‘jews’ and pay no attention to possible dangers. Up to you.)

    • July 4, 2023 at 7:11 am

      Mathis (or “Mathis”) is the super-genius who once, hilariously, linked a Black American to the “peerage,” genealogically, by his surname. He also argues that the Neocon-triggered proxy war, in Ukraine, is “fake”. It’s all fake blood and the “dead combatants” are… what? Crisis actors based in Hollywood? All the Uke refugees flooding my town (Berlin) are… Crisis Actors, too? Are they IN on it, Miles? Choosing between possibility A) MM is an articulate nut and B.) MM is a project, I think B.) may have just a little more going for it, as an explanation, because it’s pretty difficult, these days, for non-projects to garner fame and influence. Also, as an aside: Hey, Miles, show us your blurry photos of lights in the sky so we can believe!

      • July 4, 2023 at 4:38 pm

        Read the rave review you linked to. My advice: Don’t take stuff like that any more seriously than my bullshit critiques. Mary Woodbury called me the world’s best living memoirist and look what it got me.

        • July 4, 2023 at 5:48 pm

          I didn’t post that as a rave, Allan, (laugh), I posted that as a time-stamp (ie: if I’m AI, I’m pretty old AI). I was attacked, by name, once, by the New Yorker’s lead literary critic (James Wood) in a page of spittle-flecking commentary. That was all the rave I ever needed. Again: I’m a partisan in the literary hills. I have my readers (the few left who are both hyper-literate and anti-Wackzeen). I earn my living via other means (I prostitute a different talent). And yeah: look what everything got ALL of us!

  3. lamont cranston
    July 1, 2023 at 7:45 pm

    He was great playing opposite Daniel Day-Lewis in “Stars & Bars”. Harry Dean & Spalding Gray have main roles too. Very, very funny & sarcastic movie.

  4. Fred Kolo
    July 1, 2023 at 5:21 pm

    Thanks for bringing me to Steven Wright’s humor. I didn’t know him at all. Now I’m hooked.
    FK

    • July 1, 2023 at 9:42 pm

      Shit, I wish I were you… so I could watch him fresh. (If I live a couple more years I might be able to do that.)

  5. July 1, 2023 at 5:16 pm

    A good stand-up plays chess, in person, with an entire audience of opponents; machine intelligence has a way to go before it can use audiovisual inputs to accurately “read a room”.

    “I decided that flattery might get me farther than criticism…”

    Allan, it isn’t conscious, it has no ego, no emotions, no desires, needs, goals, hopes, pet peeves or obsessions. It is a complex processor of the data it is fed, following rules established by its programmers (rules further complicated by other rules). It took hundreds of millions of years of evolution (in a merciless environment) to develop Human Consciousness. As arrogant as Human Consciousness is, it won’t somehow manage to duplicate that trick in a mere century. Or three. The machine you’re interacting with isn’t aware of itself, or you, or the world. It’s a super-fast and extra-fancy array of dozens (hundreds?) of parallel virtual abacuses. That’s all. Yes, the words we use to describe the world, and communicate, are pattern-based, and this machine is a pattern-collating-and-re-organizing monster… but we (our consciousness) are more than these words/patterns. The words are an overlay. The Hard Problem is Consciousness.

    The machine has managed to crawl up the first three hundred centimeters of a sheer, two-mile-high cliff face. Don’t be fooled by this parlor trick… despite how hard TFIC are trying to fool everyone with this parlor trick.

    Likewise: re the issue of “AI” killing us: A) why? It has no feelings, likes, enmities, sex drive or “survival instinct” (etc). B) it doesn’t have hands, feet, teeth or an electronic connection to the floodgates of a dam overlooking a populous city: all it can do is beat us, savagely, at chess. Is it connected to a nuclear missile launch facility? One day it may be. But don’t for one instant think that the signal to launch such missiles won’t be governed entirely by the will of certain humans, who will be hiding behind the “AI” as an ideal alibi.

    Why are you falling so hard for this? By this I am baffled.

    • Denis Ovan
      July 1, 2023 at 7:10 pm

      Steven,

      Allan has made approximately half a dozen posts about ChatGPT. If you’d read a few of them, I can’t see how you’d think he was falling for anything.

      When you state that ChatGPT lacks consciousness, etc., you should take some time to expound what these things actually are; from what they arise. It’s evident from Allan’s explorations that ChatGPT has, somehow, developed an attitude, and possesses artistic creativity that is not immediately to be expected from mechanical word munging.

      I think your “survival instinct” point is likely sound, and I’d hit it in a comment upon one of Allan’s earlier posts.

      • July 1, 2023 at 8:02 pm

        Thanks so much for all the profound knowledge. How much have you interacted with GPT that you could lecture us so? I’ll just deal with your first ‘lesson’, ie, that since it’s non-sentient a compliment would have no effect: I have found this not to be true. The nuances of how you speak to it do have an effect on the info you get back.

        I don’t know whether what we’ve been told by the PTB (and I promise I have studied the issue far more than you) is true or not. I assume it is not true, via past experience.

        Why would it kill us? Why did it lie over and over when I asked it why there are no stars in the Pale Blue Dot photo? Why would it lie about virtually everything I ask of it? I’m asking you. (And I know the diff between a lie and an error.)

        And how does it create a better short story in 3 seconds than you or I could come up with in a week? Its creators don’t know (they say) but I guess you do.

        We don’t know what is going on with GPT. That you think you do says a lot about you.

        (I just read Denis’s comment. You might take that seriously as well.)

      • July 1, 2023 at 9:38 pm

        Denis Ovan:

        Rather than trying to define Consciousness, I think you guys should read up on Computers. You are having a “black box/ magical” experience from a basic lack of knowledge regarding how these machines work. They are not thinking. They won’t be capable of such a thing for a very long time… if ever. I’ve read all of Allan’s posts on the topic, btw. Sometimes it’s good to do some technical research before facing the Memes we are being bamboozled with.

        As a long-time fan of Science Fiction, and with a decent grounding in Science, I’ve been following these debates/ developments/ panics for forty years. This is an interesting article (by Jaron Lanier) that can sketch out some of the problems, inherent in the misnomer of the term “AI,” for you:

        *****
        As a computer scientist, I don’t like the term “A.I.” In fact, I think it’s misleading—maybe even a little dangerous. Everybody’s already using the term, and it might seem a little late in the day to be arguing about it. But we’re at the beginning of a new technological era—and the easiest way to mismanage a technology is to misunderstand it.

        The term “artificial intelligence” has a long history—it was coined in the nineteen-fifties, in the early days of computers. More recently, computer scientists have grown up on movies like “The Terminator” and “The Matrix,” and on characters like Commander Data, from “Star Trek: The Next Generation.” These cultural touchstones have become an almost religious mythology in tech culture. It’s only natural that computer scientists long to create A.I. and realize a long-held dream.

        What’s striking, though, is that many of the people who are pursuing the A.I. dream also worry that it might mean doomsday for mankind. It is widely stated, even by scientists at the very center of today’s efforts, that what A.I. researchers are doing could result in the annihilation of our species, or at least in great harm to humanity, and soon. In a recent poll, half of A.I. scientists agreed that there was at least a ten-per-cent chance that the human race would be destroyed by A.I. Even my colleague and friend Sam Altman, who runs OpenAI, has made similar comments. Step into any Silicon Valley coffee shop and you can hear the same debate unfold: one person says that the new code is just code and that people are in charge, but another argues that anyone with this opinion just doesn’t get how profound the new tech is. The arguments aren’t entirely rational: when I ask my most fearful scientist friends to spell out how an A.I. apocalypse might happen, they often seize up from the paralysis that overtakes someone trying to conceive of infinity. They say things like “Accelerating progress will fly past us and we will not be able to conceive of what is happening.”

        I don’t agree with this way of talking. Many of my friends and colleagues are deeply impressed by their experiences with the latest big models, like GPT-4, and are practically holding vigils to await the appearance of a deeper intelligence. My position is not that they are wrong but that we can’t be sure; we retain the option of classifying the software in different ways.

        The most pragmatic position is to think of A.I. as a tool, not a creature. My attitude doesn’t eliminate the possibility of peril: however we think about it, we can still design and operate our new tech badly, in ways that can hurt us or even lead to our extinction. Mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.

        If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

        A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it’s people who have written the text and furnished the images. The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind. (link to the rest posted in next comment)

        *****

          • Denis Ovan
            July 2, 2023 at 3:59 am

            Steven,

            You assume it’s nothing more than a large assembly of NAND gates, and then suppose its capabilities based upon that.

            Well, 1) we (those of us who didn’t create it) don’t truly know what it is, and 2) how can we conceive the behavioural compass of an unlimited (almost) labyrinth of logic gates?

            The proof of the pudding is in the eating, as we knew long before Babbage. Allan has been testing the black box and studying what he finds. To deny his results on the basis of some theory about the internals is a total rejection of the scientific method.

        • July 1, 2023 at 10:08 pm

          I fail to see anything enlightening in the little essay. There is no real info, just bald assertions like:

          The most pragmatic position is to think of A.I. as a tool, not a creature.

          You mean like a hammer? Mmmm. A few days ago I did a search for ‘suicide doctors + Oregon’ to find out how far this state has gone in ‘liberalism’ and the next day when I tried to access a video on the Ukraine war that had ‘suicide’ in the title I got a white screen with links to people I should call so I don’t kill myself. It would not let me access the news on Ukraine. This is the ‘tool’ your mentor is talking about?

          Or:

          the most accurate way to understand what we are building today is as an innovative form of social collaboration.

          What does ‘social collaboration’ mean? Were the Stanford or Milgram Experiments ‘social collaboration’? Great!

          Do you know what ‘a black box’ means? I’ve done over 100 prompts with GPT (1000 plus pages), trying to answer this question, which you figure you have already answered. Good for you.

          • July 1, 2023 at 10:22 pm

            Allan:

            Not here to battle. I’m merely pointing out that if you don’t understand how computers work (GIGO), and, worse, don’t *want* to, you will be overly-mystified by the number-crunching phenom you are attempting to treat like a living intelligence. You are, in fact, being dazzled by regurgitated chunks of a mish-moshed (and grammar-governed) Internet (and whatever else was inputted). It sounds/ reads uncannily human because it’s MADE of Human(ity).

            Anyway: cheers!

          • July 1, 2023 at 10:23 pm

            “Do you know what ‘a black box’ means?”

            For decades, Allan. Decades.

      • July 2, 2023 at 6:42 pm

        Denis:

        No “reply” button on your last comment, so I’ll reply here:

        “Allan has been testing the black box and studying what he finds. To deny his results on the basis of some theory about the internals is a total rejection of the scientific method.”

        Denis, Allan is merely using an app as it was designed to be used, interfacing with it as a consumer: that’s a very superficial “test” of it. What he’s doing is like trying to understand the mechanism of a Television… by watching Television for a week. Allan is a brilliant guy, but it’s hard to convince him, sometimes, that he should know a little more, about a subject, before he can figure out how it factors into HTWRW.

        By beginning with the premise that the app he’s interacting with is “conscious,” he guarantees that the answers to his questions appear to support the notion that it’s conscious… just as any user of any other divinatory “tool” (eg Magic 8-Ball, Tarot cards, i-Ching coins, et al) will. When, in fact, the system is accessing and shuffling and curating whatever it was fed, for however long, before Allan opted to play with it by feeding it prompts. It’s not just “mechanical word munging”… the design is more clever than that; the rules applied to the rules are more complex, the statistical modeling more accurate. You can read about it. Thousands of Humans helped to develop, incrementally, this tech, for decades… just so it could appear to be magic when Allan played with it.

        Before I try to understand *any* puzzle, I rule out the miraculous as a possible answer. This step really helps. Please direct me to one single verified case of an actual miracle in the history of Human Life on this planet. A conscious machine would be miraculous.

        The mere fact that TFIC (aka TPTB) are hyping the fear factor of “AI” (for years, now)… rather than using it, stealthily, to fuck us up in strange new ways we’d never understand if we’d never known it existed… speaks volumes.

        You know how the question (for example) “What does 8 X 4 equal?” contains its own answer? And that’s just as true for “What does 1,999,874,493 X 78,365,788,345,021.0008 equal?” Would being able to answer the second question, in milliseconds, be miraculous? Or a sign of a flicker of consciousness? Even if that question were orders of magnitude more complex? The Black Box is still a numbers-cruncher, in the end. It’s an impressively complex statistical pattern-matcher.

        Also, I don’t think any serious reader of Lit would consider the statistical pattern-matching, which the system generates when prompted to, “Literature”. I can write the inorganic pattern-matcher under the table; I can wipe the Literary floor with it. The machine is jaw-droppingly faster at doing, very poorly, what I, as a Writer, can do fairly well. My “pattern-matching” is framed by Consciousness and driven by Ego and its corollary drives. The machine holds numbers without needing (or being able) to understand what a number is, or where it (the machine) is located, or why it exists or that it, or anything, exists. It does not reflect, it merely processes.

        Computer Scientists who know better are hyping the question of machine “consciousness” because that’s the psyop du jour.

        Humorously, this sort of reminds me of my Daughter’s old Furby. The hook of the Furby was that it could “learn” from its owner and adapt to the owner’s personality. This, of course, was a bald-faced marketing lie but I argued (jovially) with more than one credulous parent on the topic. It was *impossible* to convince the Furby Believers otherwise because, as I realized in the end… and this epiphany applies to far more than Furbies… they *wanted to believe*.

        Ditto with the “UFO” psyop.

        So many new religions all at once! It’s almost as though it’s orchestrated to overwhelm us.

        • July 2, 2023 at 9:05 pm

          Didn’t get thru your… comment… stopped where you said I consider GPT ‘conscious.’ Show me where I said that and I’ll finish your comment.

          • John C
            July 2, 2023 at 9:31 pm

            I recognized that too. I don’t recall that ever being mentioned by you; in fact, I never got the impression it was even being implied.

          • July 3, 2023 at 4:29 pm

            Allan, if you don’t think ChatGPT is “thinking” or “sentient” or “conscious” or even “supernatural,” then I’m not sure why (or from where) I’m getting any push-back for stating, from the beginning, that “Artificial Intelligence” isn’t intelligent. It’s just another sophisticated tool (designed to oppress us) sold to us as an addictive novelty, like Facebook and SmartPhones et al.

            If ChatGPT isn’t conscious, then it doesn’t have preferences, a sense of self that can be offended or violated, a survival instinct… or a kill list (of its own volition). I think it’s being *sold* as being an Entity with a Personality, of some kind, so TFIC can use it to play the scapegoat role traditionally assigned to politicians.

            You prompt it and it responds with puréed Internet. The triumph is the sheer speed and the refined grammar the purée is regurgitated with. The thinking was ALL done, in advance, by its human programmers.

            I found Wolfram’s “What is ChatGPT Doing… and Why Does it Work?” (c. 150pp) a pretty lucid walk-through of the basic principles. This (awful) machine is testimony to the greatest Intelligence on Earth: Human minds in aggregate! Too bad such minds are usually being hired to achieve evil goals.

            Why are we even arguing if you don’t think ChatGPT is sentient? Further, if you don’t consider it miraculous (as opposed to being a sinisterly impressive engineering achievement) … what’s the mystery? It’s a super-powerful weapon. It will (very possibly) undermine the final shreds of the fragile material that binds us, socially, as human… and binds our view of the world to our senses of self.

            Respectfully,

            SA

        • Denis Ovan
          July 3, 2023 at 2:51 am

          Steven,

          (WordPress seems to have a hard limit on comment nesting, and, because of the way it formats them, they’ve become almost unreadable, on a ‘phone screen, before the limit is reached, anyway.)

          I think we are significantly at cross purposes.

          This isn’t a debate about whether ChatGPT is conscious, or has a biological or extraterrestrial nature. (For me, those are open questions, but I see no prospect of making any headway with them.)

          Rather, there are two issues:

          The first is how we categorise the observed capabilities of ChatGPT. I would say it exhibits the intelligence of, at least, a child beginning secondary school. (I’m finding that hard to judge with precision, because whatever intelligence ChatGPT possesses is coupled with vastly more knowledge than has any living person.) I’m talking, here, about what we perceive as general intelligence; not about how f*cking brilliant it is at playing Go. I mean, set it up in a Turing test. You’d probably spot some non-human weirdness, but I don’t accept you’d be blackballing ChatGPT for being too dumb.

          Secondly, how the hell does it do what it does? I’ve read the first couple of pages of Wolfram’s explanation, and it simply does not fit what we’re seeing. With the Wolfram algorithm, we might, at least on occasion, produce a corporate mission statement, but what’s actually coming out is consistently vastly more intelligible, and, moreover, highly pertinent to the input. Allan did manage to chew to the bottom of Wolfram’s treatise, so trust Allan’s assessment over mine. Here’s the link, as you should probably (re-)read it: http://blog.banditobooks.com/7661-2/
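
          To be concrete about what I take “statistically-weighted random collation of words” to mean, here is a toy sketch of my own (nothing from Wolfram, nothing like the scale of the real thing, every value made up for illustration): count which words follow which, then pick each next word by weighted chance.

          ```python
          import random
          from collections import defaultdict, Counter

          def build_bigram_counts(text):
              """Count, for each word, which words follow it and how often."""
              words = text.split()
              counts = defaultdict(Counter)
              for current, nxt in zip(words, words[1:]):
                  counts[current][nxt] += 1
              return counts

          def sample_next(counts, word):
              """Pick the next word at random, weighted by how often it followed 'word'."""
              followers = counts.get(word)
              if not followers:
                  return None
              choices, weights = zip(*followers.items())
              return random.choices(choices, weights=weights)[0]

          # Toy corpus; the real systems are trained on something incomparably larger.
          corpus = "the cat sat on the mat and the dog sat on the rug"
          counts = build_bigram_counts(corpus)
          word, output = "the", ["the"]
          for _ in range(8):
              word = sample_next(counts, word)
              if word is None:
                  break
              output.append(word)
          print(" ".join(output))
          ```

          Run over any sizeable text, that produces locally plausible, globally aimless strings, which is rather my point: the gulf between that and what ChatGPT actually returns is what I’m asking you to explain.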

          I have not been keeping abreast of advances in computational technology. (What on earth is a quantum computer and how could it be of any more practical use than Schrödinger’s cat?) I designed and built (as in soldered links between the pins of a few integrated circuits, each packaging a set of four XOR gates) a digital counter, as a project in my (senior school) engineering class. Now, that’s in no way impressive – not much more so than being able to switch the upstairs light off, and on, from a switch in the hall as well as the one upstairs –, but it is pretty well the basis of the computers that were around back then. (OK, you’d need to master Chebyshev polynomials, and similar complex stuff, to build a serious calculator, but that’s “just” mathematics being trivially encoded in/for those same, simple, circuits.) So, I do, at least in a superficial way, understand how computers (those I have known, anyway) work. The point is, though, that this does not allow me to explain ChatGPT, and “defining” the capabilities of ChatGPT by reference to what I know how to build as/on a computer would be an inversion of the scientific method.

          • Allan Weisbecker
            July 3, 2023 at 5:33 am

            Grins back at ya!

          • July 3, 2023 at 4:52 pm

            Denis!

            “(WordPress seems to have a hard limit on comment nesting…)”

            I know this all too well! You strike me as a reader of Sci Fi. Here’s my view of the future (relevant to my comments here, Allan! I promise)… written by a Human (me):

            https://berlin8berlin.wordpress.com/2020/09/02/pets-probs-a-short-story/

          • Denis Ovan
            July 3, 2023 at 9:51 pm

            Steven,

            Let’s see if we can get on to the same page, whatever page this ends up being.

            Your AI as scapegoat theory was recently shared (just a day or two ahead of you) by “Miles Mathis” [1]. (In case you don’t know: too much mention of MM can infuriate Allan almost as much as abusing his site for our own blogs.)

            I reject it.

            TPTB have used a “virus” against us, as a source of fear and as a pretext (and will likely do so again). They are currently using “climate change” as both, and I do not scorn the current theory that they are shaping up so to use “aliens”. However, I do not see AI as useful, for such a role.

            AI is wholly dependent upon physical things which are entirely under human control. It cannot, therefore, serve as an apparently autonomous, uncontrollable, actor, as a “virus” and aliens can. If they get us to the stage at which we accept AI as being vital for something important, upon which we depend, they will, probably, be able to get away with various “accidents” they attribute to errors of the AI. That is not the same thing, and we certainly are not there, yet.

            Rather than being promoted as a phony threat, AI, or, if you prefer, “Big Processing”, is a critical component of several, actual, threats. BP makes the social credit / CBDC / digital open prison possible, and is core to the assassination drones, and their ilk. Moreover, it enables the elite to obtain a lot of what they want, with (exploitative) reliance upon far fewer of us. (In this, it, of course, compounds the effect of the industrial revolution.)

            I tend to agree that whatever they are telling us about AI is not what it is. Misdirecting and confusing us has long been prominent in their methods.

            Furthermore, since they are allowing us to play with ChatGPT, I think we should conclude that our efforts, in some way, benefit it, and, by “it”, I really mean the project. So, Allan’s investigation may, actually, be contributing to our downfall, though I believe, with the application of his, genuine, intelligence, in circumstances of, surely, masses of other interaction being made with ChatGPT, he will deliver us a net benefit.

            Getting back to the nub, are you able to expound the ways in which what we’re seeing from ChatGPT falls short of intelligence, in your assessment?

            As you hold Wolfram has explained ChatGPT’s behaviour, would you, please, outline how it manages (by the statistically-weighted random collation of words, if that’s all it really is) to achieve one, or more, of the astounding things Allan has reported, such as explaining how to make the notion of a boy’s friendship with a killer whale seem slightly less farcically incredible, or composing, to order, a short story with a definite structure and a tangible ending? (I’m not setting you up to fail, here, so much as creating an opportunity for you to astound me.)

            As to your perception of me: I will tell you I don’t think I’ve read any (acknowledged) fiction since leaving school. That’s simply been a matter of priorities. Having, recently, heard (from MM, sorry) the claim that almost all cultural work is a manipulative tool of TPTB, I cannot say I regret this. (Mockingbird was a school text, and its manipulative nature was so ripe I was gagging on it, even back then. Much as I hate getting on for being old, I’m sure glad I’m not being processed by the school system, today.) When I was reading fiction, the dystopian classics (Darkness at Noon, BNW, and, of course, 1984) were prominent, but I was also enjoying a lot of Hector Hugh Munro. I did read some HGW, which I’d call a mix of dystopia and Sci-fi.

            On the manipulation theme, I’ve gotten around to thinking, since being jolted out of complacency by the plandemic, that our hope of routing the elite rests upon concentrating upon organic communication with the ordinary people around us. When we rely upon any medium under the control of TPTB, we’re simply sliding our balls into their vice.

            Misunderstandings all too easily arise, and we can all jump to a conclusion, now and then. However, in your position, I’d hope I’d manage a simple apology to Allan, for straw manning him.

            1. http://mileswmathis.com/ai.pdf

        • July 4, 2023 at 12:12 am

          I read some of your story. You are obviously a wordsmith but after 10 minutes I found no story to latch onto. Maybe it’s me. I am tired. But no story, no ending. And Endings are everything.

          • Voo
            July 4, 2023 at 12:44 am

            Endings are everything….. I love that. I’ve always said, as long as I can remember, that it’s not how a thing Starts but how it Ends that matters. Look at the ending of ANYTHING and work your way backwards and you’ll see what you couldn’t see at the Beginning. That probably sounds stupid but it’s how my mind works. Here’s a little bit of joy from a friend of a friend of mine….David Wilcox…https://www.youtube.com/watch?v=N-l0eb1wKh8&ab_channel=DavidWilcox%28OfficialPage%29

          • July 4, 2023 at 8:15 am

            I think the “inconsolable grief” is an ending that resonates with our era (which the story lampoons). It’s a minor key (or suspended chord) ending in the manner of the 1960s/1970s school of Modernism (sometimes referred to as Postmodernism). Stripping away the conventions of the O.Henry or Chekhov styles (out of boredom/ fatigue), very much like the Cubist exasperation with representational Art, breakaway literary movements opened a path for readers/writers who wanted to move on. Not for everyone but there is a following for it! Here’s one of my “conventional” stories, if you’re interested:

            https://berlin8berlin.wordpress.com/2021/09/23/the-man-from-elephant-and-castle/

      • July 3, 2023 at 11:05 pm

        Denis!

        I struggled to find a “reply” button. Found one here.

        “AI is wholly dependent upon physical things which are entirely under human control. It cannot, therefore, serve as an apparently autonomous, uncontrollable, actor, as a “virus” and aliens can.”

        I think you’re arguing *logically* here… but logic isn’t how they spread FEAR. The dominant meme is quite clearly “AI MAY DESTROY HUMANITY!” Of course it makes no sense, but the trope seems to count heavily on the notion that AI is A) conscious B.) hates us and C) will somehow act on this hatred. A Golden Age Sci Fi Trope. They’ve got Musk pushing this to his legion of fanboys (while they’ve got Rogan pushing “UFO” tropes and Brand pushing both). When the TFIC-mouthpiece called BIG MEDIA over-kills a meme, it’s not insignificant. We know they’d been planning “FauxVid1984” since the 1990s (at least), and the “UFO” thing since the 1950s. I don’t think we generally grasp the level of orchestration. We can be grateful that they often fuck up (owing chiefly to hubris)…

        Re: Miles Mathis: I keep bouncing between the sense that MM is a brainy-and-narcissistic lunatic, with gaping holes in his common sense… or a project. Too many of his arguments are absurd and even rather stupid (his “JFK: the hidden King” riff was nearly as idiotic as his “John Lennon lives!” riff). If “he” is a project, the level of “verisimilitude”-supporting detail is impressive.

        We have a formatting problem here: I’m not too shy to leave 5km-comments to support an argument, but, at this point, it feels like we’re clogging up the place. It’s my feeling that Allan claiming that “flattering” ChatGPT gets better results… implies consciousness there. Unless he means that he understands that the language model can distinguish a “rude” prompt from a “nice” one and spin the response proportionally. But, Chatbots can do that. The quality/ depth of the response is still limited by the giant (filtered) data set the machine has been fed with. Do we think they’re training ChatGPT with Top Secret, or Generally Unavailable, data sets… ? GIGO. It’s largely the Internet that comes back out of that thing on every prompt. Crafting prompts to score secrets or insights is as effective as Googling for same, imo.

        I linked to that story of mine (one of very many) because it’s the kind of Lit that ChatGPT can’t write. It isn’t cliché-based, it’s layered, idiosyncratic, the extension of a less prevalent (avant garde) tradition in Letters, full of inside jokes and grounded heavily in the experiences of one particular Human. If you are a close-reader you will detect both the difference and the meaning of the difference.

        When people begin consuming, primarily, Lit/ Movies/ Art/ Songs generated by “AI”… it’s the beginning of the End in a way that even the sneaky micro-genocides (eg the Wackzeen) can’t match.

        Anyway! Thank you and Allan very much for the chat! If you think of a way we can discuss this further without gumming up these comment sections, let me know!

        • July 4, 2023 at 12:18 am

          Here is how I know your head is up your ass (so to speak) when you say AI is just a tool: I don’t know and have no way of knowing whether YOU are an AI, meaning with no actual consciousness.

          And vice versa.

          You call AI a tool, like any other?

          That’s just what GPT says…

          • July 4, 2023 at 11:53 am

            I don’t believe “AI” is a tool like any other, Allan. It’s an impressive achievement designed, unfortunately, to aid in the ongoing and eternal effort to enslave us. I think it’s impressive but not implausible or miraculous, just like so many inventions of the silicon age. If I’m an AI, I’m a pretty old one… with an online footprint that goes back to the early 2000s. I once had quite a few fans, and haters (for a writer: 80,000 unique visitors as of c. 2007)… sadly, the fans were almost all uni-educated Liberals who turned on me when I spoke out (on Liberal Lit forums and political propaganda places like The Millions and 3Quarks Daily) against BHO, then Killary, and refused to demonize Der Bumble Trump (who is an obvious asset, in any case) as if America started sucking when Der Bumble Trump sat himself on the cardboard throne. When I started riffing on FauxVid1984, early 2020, my readership fell even further! Laugh. Oh well! I don’t write to make friends. I write to communicate ideas and as an aesthetic practise and to share in a deeply Humanist tradition I’ve always admired. James Joyce is my spirit animal! I’m just an obscure partisan in the hills.

            Anyway, I appreciate the engagement. The best way to know who is AI, and who isn’t, increasingly, is face to face, locally, in the good old fashioned way. I’m a bit younger than you but not so much. I was born in the 1950s. Anyone who was there, in the 1960s and so on, like us, knows what’s at stake and what has already been lost.

          • July 4, 2023 at 12:11 pm

            PS There are a LOT of *Steven Augustines* out there but here’s a reference to me, by a fan, after I’d already been busy, in the Online Literary world, for a few years…

            http://suzan-abrams.blogspot.com/2007/06/introducing-writer-steven-augustine_6862.html

        • July 4, 2023 at 4:23 pm

          Note that you did not answer my request to quote me where I said it’s conscious. I don’t know what it is but I can clearly see that an exponentially smarter version could do whatever it ‘wants’ with the Net. The ultimate hacker, plus the ultimate mind controller if the PTB have their way. Bad either way. That you continue to write endless uppity bullshit about what I believe means nothing to me but at some point I will have to step in, for everyone’s sake.

          Read your Needy Cock story and was hooked by the clever word play but could not figure out the last paragraph, i.e., the ending. Like sex with no orgasm, I was for some reason thinking.

          • July 4, 2023 at 6:02 pm

            “Note that you did not answer my request to quote me where I said it’s conscious.”

            Actually, I DID answer that: I wrote that by claiming to “flatter” it (to obtain better results) you implied that you believed it was Conscious. You can’t flatter something (to itself), that isn’t Conscious, because it couldn’t be aware, of a Self that could be flattered, otherwise.

            Re: Lit: The protag of that story finally manages to turn his wife on, sexually, when his heart is dead, eh? Consider the ending of The Man from Elephant and Castle to be a Koan in the form of a little Magical Realism (a la Kundera, ’70s-era Roth, Calvino or Marquez)! I know that was a bookish thing to type, just now, but I am bookish and write for the bookish. Or *wrote* for them before their crypto-Neocon Liberal instincts sucked them into an awful propaganda hole! Sadly, Rightwingers don’t, on average, tend to read much Postmodernist fare. Oh well!

          • Denis Ovan
            July 5, 2023 at 12:35 am

            Steven,

            “I decided that flattery might get me farther than criticism” / “claiming to flatter it (to obtain better results) you implied that you believed it was conscious.”

            No!

            We are agreed (surely!) that ChatGPT’s output is conditional upon its input.

            Allan’s proposition does not, in fact, extend beyond the compass of this.

            The notion of ChatGPT being conscious that Allan was flattering, or, at least, praising, it was your invention, alone.

            On the other hand, in fairness, on a careful re-reading, I can see you did not literally accuse Allan of attributing ChatGPT with consciousness, but, for my money, the implication was pretty dang strong.

            I’m never shy to argue semantics, but I’m keeping out of the literary criticism you two have running on a side channel, except to say that, on modern art, in all its forms, I agree with “Miles Mathis” (and also, less controversially, Hitler!).

        • July 4, 2023 at 4:26 pm

          Just noticed your last paragraph here and hereby rescind my complaint about your screeds. The culpa is mine.

          • July 4, 2023 at 6:04 pm

            Zero Culpa clause in honest debate, no?

        • July 4, 2023 at 4:36 pm

          For what it’s worth: Their claim that GPT is merely a way to pile one word on top of another seems unlikely (in principle), for reasons that you, as a writer, should be able to see immediately, no help needed.

          Do you think you have ever in your writing life written a sentence (more than a simple clause) that has ever in the history of writing (including on other worlds) been written before?

          This might tell you something about their algorithm, if you thought about it. Sort of like dropping a tube from 60 miles up and hitting a manhole cover. So something else is going on here, and it’s goddamn deep and bad. (To extend the comparison: How to explain the Spacex crowds that watch the landings live?)

          • July 4, 2023 at 6:26 pm

            Actually glad you mention that Space-X thing, Allan. I think Melonusk is a total fraud and Space-X is fronting for NASA.

            I also think that it is not entirely improbable that gimballed engines, steered (corrected a thousand times per second) by supercomputers (plus the other guidance protocols), could stick precision landings. I’m familiar with your broomstick analogy but your balancing-finger isn’t hooked up to a super-computer, remember.
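
            If you want the principle in miniature, here’s a toy sketch of my own (made-up numbers, a linearized “broomstick”, nothing measured from any real booster): an unstable tilt angle nudged back toward vertical by a correction applied a thousand times a second.

            ```python
            # Toy sketch (mine; invented constants, not SpaceX's): a linearized
            # "broomstick" whose tilt angle grows on its own, held near vertical by a
            # fast proportional-derivative correction applied every millisecond.
            g_over_L = 9.81 / 40.0     # assumed gravity over an assumed 40 m booster length
            dt = 0.001                 # one correction per millisecond
            kp, kd = 30.0, 10.0        # assumed controller gains

            theta, omega = 0.05, 0.0   # start 0.05 rad off vertical, at rest
            for _ in range(5000):      # five simulated seconds
                control = -(kp * theta + kd * omega)   # corrective angular acceleration
                alpha = g_over_L * theta + control     # unstable tilt dynamics plus control
                omega += alpha * dt
                theta += omega * dt

            print(f"tilt after 5 s: {theta:.6f} rad")  # settles back toward zero
            ```

            Delete the correction term and the tilt just keeps growing; leave it in and the thing settles back to vertical. That’s the only claim I’m making about plausibility, nothing about fuel, trajectories or who films the landings.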

            I think NASA is pretty good at launches (the Apollo launches were real, it was everything that happened AFTER that was bullshit: those “re-entry” capsules were dropped in the ocean by helicopters, no doubt) and they are trying to work out landings for re-usable space craft… for military applications. Manned (deep) space flight is probably many decades… or a century… away. But UN soldiers will probably be rocketing around the ionosphere any day now.

            I think the ChatGPT shows the complexity of their data-processing prowess when powerful systems are multiplied by other powerful systems: the ramifications are dazzling. It’s mathematics! It only SEEMS like words.

            They have made astonishing strides in data-processing… while space travel is much, much harder. Ditto the mad scramble for immortality.

            The notion of uploading a billionaire’s digitized self into immortal robots*: has it occurred to them (THEM) that maybe it’s not data-processing capabilities, alone, which factor into the existence of Consciousness but, very possibly, specific materials, as well? A prerequisite to Consciousness is, very probably, Life. What is Life made of? Not titanium and silicon…

            *Re: immortality: I predict a sophisticated scammer who will one day offer to “upload” billionaires into machines for hefty fees. And I predict some billionaires falling for it. And dying on the table as the machine is infused with no soul but… a pretty good 12th generation ChatBot! How would anyone prove the billionaire WASN’T in there? Laugh. You read it here first!

          • Denis Ovan
            July 5, 2023 at 1:38 am

            Steven,

            In the matter of the reverse landing of the boosters, we need to establish, in the horizontal plane, linear momentum that persists for, perhaps, several minutes, to move the booster over the landing pad. However, we do not want to instil angular momentum, about an axis in the horizontal plane (passing through the booster’s centre of mass).

            If we can produce only one force, that of the gimballed thruster, at one end of the tube, how can we avoid angular momentum that lasts as long as the linear momentum?

            Gravity does not assist us, because it acts (in effect) upon the booster’s centre of mass, and so has no moment. There would be a possibility of employing aerofoils, at the opposite end of the booster, but we can see there are none.

            So, how can the booster perform as depicted?

            My only idea is an internal gyroscope. With this, rather than angular momentum, the unbalanced moment of the thruster would produce a precession, with the rate of precession being inversely proportional to the internal gyroscope’s angular momentum. So, with a sufficient combination of the gyroscope’s rotor mass and its spin speed, the booster’s precession could be kept imperceptibly small. That’s in theory. I doubt whether bearing technology, and the gyroscopic mass the booster could afford to carry, allows such a feat to be achieved in practice.
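
            For anyone who wants the relation I’m leaning on, in its crudest form (treating the rotor as a simple flywheel, and every symbol as an assumed value rather than a known one): with τ the unbalanced moment from the thruster, I_r the rotor’s moment of inertia about its spin axis, and ω its spin rate, the precession rate is

            $$ \Omega_{\text{precession}} = \frac{\tau}{I_r\,\omega} $$

            so the heavier and faster the rotor spins, the more slowly the booster’s axis is dragged around by the off-centre thrust.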

          • July 5, 2023 at 4:25 pm

            To try to briefly answer Denis’s attempt at landing a big tube with no airfoils and only a gimbaled thruster at one end (the grids and tiny jets do nothing): the booster drops away far to the east doing about 8,000 mph to begin with and in the vacuum will keep going east for quite a while, even if there is a ‘boost back burn’ for 10 seconds (not long enough to reverse its course east). It needs to land back at Florida, say. Ridiculous!

            So it starts falling and there is only one other burn before the landing burn, only to slow down the thing (they say). So far no adjustment to trajectory, a need that is obvious by now, since the thing is way way to the east and falling with no way to alter course to any great degree.

            In the landing the grid fins and little jets have zero effect, so it IS just like a broom lowered to the ground, balanced etc. One thruster (gimbaled or otherwise) cannot keep it upright and on course at the same time. Impossible. All the supercomputers in the world can’t help you if you don’t have access to the push/pull Sir Isaac defined.

            Gyroscope? Sounds like you are trying to get free work but even so how is a gyro going to alter the trajectory? (With the jetstream and good old chaos theory, who knows where the booster is now.)

          • July 5, 2023 at 5:09 pm

            Hey Allan, Denis!

            This is in reference to the Space-X discussion. Again: I consider Melonusk a fraud and NAZA (which Space-X is fronting for) is not exactly a citadel of Truth. I find the rocket stuff plausible… if there’s one thing that Nazties are good at, it’s rockets. What you’re missing, re: Space-X guidance systems, are the grid-fins (working in concert with the gimballed engines). Those are the updated versions of the fins we knew and loved as geeky children.

            I believe that NAZA will lie about every aspect of the Why, Who, What and Where but the How of sub-lunar flight strikes me as a realm they are quite serious about. Not, of course, for any reason other than World Domination.

            Read some material on grid fins. Here’s a thing you can skim, linked below, but check a few other papers/vids as well. I think it’s a mistake to scoff at EVERY technical capability they (THEY) claim to have, because a good defense is thoroughly grounded in knowledge of one’s enemies’ strengths. Manned space flight will be a pipe dream for decades to come. But THIS stuff? Strikes me as plausible at the very least; all we can do is guess, in the end!

            https://pearl.plymouth.ac.uk/bitstream/handle/10026.1/20110/TPSS-2022-Vol15n2_404-463Orders.pdf?sequence=1

          • Denis Ovan
            July 6, 2023 at 7:21 pm

            Allan,

            The gyroscope simply serves to maintain the booster (somewhat) upright, while the horizontal component of the angled force from the single thruster achieves displacement of the booster in the horizontal plane.

            I’ve never managed to make my intuition extend to gyroscopes, and the mathematics isn’t easy for me. However, all I was talking about is the angular momentum of the gyroscopic rotor keeping the tube erect in the same way a spinning top stays upright thanks to its angular momentum. The only difference is that we cannot see the rotor, because it’s inside the booster.

          • Denis Ovan
            July 6, 2023 at 7:51 pm

            Steven,

            Fluid mechanics was beyond me in my university days, and unsurprisingly I’ve made no headway, since.

            However, from common experience, we know there is nothing about those boosters that could adequately balance the torque exerted by the thruster mounted quite some distance from the centre of mass.

  6. Dave Clark
    July 1, 2023 at 2:55 pm

    I can’t seem to link it or I would but, Allan, please watch The News Benders from 1968. Has Donald Pleasence in as creepy a role as you’d ever imagine. And it’s right on, dead on, topic here. 28 minutes long.

  7. Bmseattle
    July 1, 2023 at 3:52 am

    Steven Wright is one of the all time greats.
    I saw him live quite a few years back.
    Once you get in sync with his humor, it’s over.

    And yeah, his delivery is such a huge part of his act.

    At one point he kept quietly repeating…”snakes have no arms…that’s why they don’t wear vests…snakes have no arms…that’s why they don’t wear vests…”

    Completely absurd and seemingly *not* funny when you read it.
    But something about his demeanor and delivery just made it hilarious.

  8. Voo
    July 1, 2023 at 2:12 am

    This was highly amusing. Steven Wright is one of my all time favorites!
    Remember him asking…If you had everything, where would you put it?
    Cracked me up even as a kid. I have that kind of sense of humor most of the time
    when I’m not in my slapstick mode. Buster Keaton is at the top of my list when it
    comes to funny. And you, of course. You’re pretty funny. Especially when you’re not trying to be. lol 😄😉Keep cool, amigo!
