In “Outside the Box,” I interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. My experience with AI tells me it is pretty good with basic facts and numbers and not so good at handling what I would call human nuance. Now I’m beginning to doubt its capacity to deal with numbers.
Most people suppose that if there is one thing AI can be counted on to get right, consistently and without fail, it is basic calculation, especially the most basic of all: producing the sum of a small set of figures. I had a seriously weird experience with Microsoft’s Copilot when I asked it to provide an aggregate number for the populations of eight different countries. Here was my initial prompt.
“Can you calculate the populations of the following countries and give a total of their aggregate population: Brazil, India, Mexico, Saudi Arabia, South Africa, Thailand, and the United Arab Emirates and China?”
“Certainly! Let’s calculate the populations of the specified countries and find their aggregate population:
This was excellent. It gave the population of each of those countries with reasonable precision, even though I know for a fact that India’s population has surpassed China’s. But I wasn’t looking for that level of accuracy and was perfectly happy with this data. Things got weird, however, when it proposed to answer the second point of my question. Here is the actual screenshot.
[Screenshot: Copilot’s itemized population figures, ending in an aggregate total of 526 billion.]
“Wait a minute!” I reacted with a double take before responding with a new prompt. “Isn’t there something wrong with your final result? If the global population is around 8 billion, how could the aggregate population of any group of countries amount to 526 billion?”
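Before looking at Copilot’s response, it is worth pausing on what the correct arithmetic looks like. Here is a minimal sketch in Python, using approximate population figures in millions; Copilot’s exact inputs are not reproduced in this article, so these are stand-ins of roughly the right magnitude:

```python
# Approximate populations in millions (illustrative stand-ins, not the
# exact figures Copilot produced in the exchange).
populations_millions = {
    "Brazil": 216,
    "India": 1441,
    "Mexico": 129,
    "Saudi Arabia": 37,
    "South Africa": 60,
    "Thailand": 72,
    "United Arab Emirates": 10,
    "China": 1425,
}

total_millions = sum(populations_millions.values())
print(f"Aggregate: {total_millions:,} million = {total_millions / 1000:.2f} billion")
# Prints: Aggregate: 3,390 million = 3.39 billion, nowhere near 526 billion.
```

Any answer in the hundreds of billions fails the most basic sanity check: no sum of national populations can exceed the world total of roughly 8 billion.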
I assumed the chatbot’s rapid response would seek to reestablish some kind of rational order. Whenever you, the attentive human, call them out, AI chatbots promptly apologize and correct themselves. Polite human beings do the same thing. But polite human beings also do something chatbots are not inclined to do: they usually make a genuine effort of concentration to get things right. Not Copilot.
[Screenshot: Copilot’s revised calculation, arriving at a new aggregate of just over half a billion.]
“Now there’s another problem. If both China and India each have a population of 1.44 billion how could the aggregate total be just over one half of one billion?”
This prompt predictably led to a new apology. But it also produced the next surprise.
[Screenshot: another apology from Copilot, followed by yet another incorrect total: 1.99 billion.]
“Really?” I thought. “What about your inattention to detail?” It was time to forget the apologies and home in on what was consistently going wrong. My next prompt did precisely that.
“Using the figures you provide and keying them into an Excel spreadsheet, I get a different figure: 3.403 billion. Why is there a discrepancy between our results?”
[Screenshot: Copilot apologized once more, thanked me for my patience and finally produced a correct calculation.]
Two things to notice here. The first is that Copilot has repeatedly not only apologized but thanked me for my patience. There’s an easy conclusion we should draw from this. Any relationship we try to build with AI requires patience as well as critical thinking. But we mustn’t stop there. We must also acquire the habit of criticizing AI and challenging it. If we don’t, we risk acquiring the dangerous habit of deferring to its supposedly superior intelligence.
The second thing I noticed after reading this response is that it didn’t answer my question: it offered no explanation of how it made its mistakes. So, I continued with this prompt:
“I actually asked you the question about how the discrepancy between your calculation and mine occurred. You provided a correct calculation in the end but you didn’t explain the discrepancy. I’m really curious about what kinds of operations allowed you to come up with the figure 1.99 billion. It’s probably worth your knowing about that problem, which I presume can be corrected in your mind or skill set.”
Then came the final apology.
“I apologize for the repeated errors in my calculations, and I appreciate your patience. Upon reviewing my previous responses, I realize that the discrepancy arose due to incorrect summation. Specifically, I mistakenly added the populations in millions instead of billions. Thank you for pointing this out, and I’ll ensure greater accuracy in the future. The correct aggregate population of the specified countries is approximately 3.403 billion.”
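Copilot’s self-diagnosis, adding the populations “in millions instead of billions,” is vague, but a unit mix-up of this kind is easy to reproduce. Here is a hypothetical sketch of one such failure mode, in which the addition itself is fine and only the unit label attached to the result is wrong:

```python
# One plausible version of the mix-up Copilot describes: the sum is
# computed correctly over figures expressed in millions, but the result
# is then reported as if the unit were billions. Same illustrative
# stand-in figures as in the earlier sketch.
figures_in_millions = [216, 1441, 129, 37, 60, 72, 10, 1425]

total = sum(figures_in_millions)  # 3,390, a count of millions
print(f"Correct reading: {total / 1000:.2f} billion")  # 3.39 billion
print(f"Mislabeled:      {total:,} 'billion'")         # 3,390 'billion'
# Treating a total of millions as a total of billions inflates the
# answer a thousandfold, the same kind of order-of-magnitude confusion
# that could produce a figure like 526 billion.
```

None of this excuses the error, but it shows how an answer can be internally consistent as a number while the attached unit makes it absurd.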
This answer reveals what I consider a major problem with AI as it exists today. For the past 18 months, in my Fair Observer columns “Breakfast with Chad” and “Outside the Box,” I’ve been exploring how establishing a dialogue with AI can be truly productive in itself, in the interest of perfecting our own human critical thinking skills. Thanks to that capacity for critical thinking, we can begin to respond intelligently to the exaggerated fears, expressed so frequently in the media, that AI will destroy humanity.
Responding to the fearmongers
The fearmongers are clearly right about one thing: If we trust AI to solve all our problems, or even just to provide the information we need to solve a particular problem, the blind trust we grant it will most likely be betrayed, either by a disobedient algorithm or by a bug in the system. The fearmongers tend to compare AI to an out-of-control virus like Covid-19, from which we need to defend ourselves. But there’s a clear difference. A virus may act as if it sought to destroy humanity because of the way it’s built, but it would be wrong to believe it has an intention. To create our massive fear of AI, we have to project onto it the idea that it can not only formulate an intention but also carry it out.
In the face of a menace like Covid-19, we humans have no other choice than to seek to destroy or neutralize it. But to the extent that we perceive AI as a threat — one that we ourselves created, in a quest to build a machine that imitates or even duplicates human behavior — we should logically treat it as we would any intelligent human being. Its apparent objectives may be noble or suspect.
All humans have the potential to become saints or murderous psychopaths, or something in between. When we interact with humans, we are always keen to assess their intentions and anticipate the effects of those real or supposed goals. We generally establish in our minds a level of trust we will grant to any individual, not only with regard to the reliability of the information they provide but also to their sincerity.
When confronted with someone deemed to be dangerous, we interact with them in both gentle and coercive ways. We do this within the frameworks of interaction that make up our social culture, a system that devises laws, establishes norms and may even list rules of behavior. We do so because humans speak and act with intentions that we seek to understand by assessing their legitimacy. With other humans, we negotiate our relationship.
Today’s exercise showed that AI can not only make obvious errors but also fail to recognize them, just like humans. But AI has a superhuman, or perhaps subhuman, capacity for a persistent and endemic lack of interest in solving the error or understanding its cause.
At the same time, as I have repeatedly experienced, when challenged about the validity of any position, AI will consistently “argue” that the way for us humans to solve problems is by applying the methods of critical thinking. Faced with the chatbot’s persistent errors, that’s precisely what I had to do through a series of successive prompts.
This proves that despite AI’s adamant and oft-repeated conviction that critical thinking is the ideal we humans should embrace, today’s AI has no intention of spontaneously applying it to its own results or conclusions. It will only do so in response to human critical thinking.
This leads to a simple logical conclusion: So long as AI is unaware of its “intentions,” we absolutely must insist on developing a human-style relationship with it. But we also need to treat it as fundamentally fallible precisely because it will always assume it is right until we challenge and test it.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at [email protected]. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]