
AI: The Hype and the Challenge of Critical Thinking

Generative AI is here to stay.  In light of this, there are all sorts of voices telling us to use and adapt to this new intellectual terrain.  My goal in this post is not to add to the discussion regarding how to use the various AI tools.  Rather, my modest goal is to express reservations about the alleged unending glories of the seemingly unalterable “singularity” which is the eschatological dream of some.

My thinking was recently stimulated in this direction by reading Robert J. Marks’ book, Non-Computable You: What You Do That Artificial Intelligence Never Will.  Dr. Marks is Distinguished Professor of Engineering in the Department of Engineering and Computer Science at Baylor University.  Furthermore, he was the founding Editor-in-Chief of IEEE Transactions on Neural Networks, one of the most prestigious technical journals for peer-reviewed AI research.  In other words, he is well-qualified to offer an assessment of the current state of AI research.

Marks argues that, though AI is powerful in computing power and does offer some surprises, there is a fundamental gap in terms of true creativity.  In place of the well-known Turing Test, Marks draws attention to the “Lovelace Test” as a more effective test of software creativity.  Named after Ada Lovelace (1815-1852), who is considered by many to be the first computer programmer, the Lovelace Test defines software creativity as the ability of a program to do something “that cannot be explained by the programmer or an expert in computer code.”[1]  Marks claims, along with others, that the Lovelace Test has not been met by current AI systems.

In spite of the failure of AI systems to generate true creativity, there are all sorts of claims regarding the future of an AI-enhanced humanity.  As Marks notes, “Many worship at the feet of the exciting new technology and without foundation predict all sorts of new miraculous applications; others preach unavoidable doom and gloom.”[2]  In light of this, chapters five and six of Non-Computable You (which by themselves are worth the price of the book!) are taken up with mitigating the “hype.”  Chapter five is entitled “The Hype Curve,” and Marks graphs the dynamic in the following manner:

Marks explains the details:

  • The launch phase.  In the beginning of the hype curve, newly introduced technology spurs expectations above and beyond reality.  Poorly thought-out forecasts are made.
  • The peak-of-hype phase.  The sky’s the limit.  Imagination runs amok.  Whether negative or positive, hype is born from unbridled speculation.
  • The overreaction-to-immature-technology phase.  As the new technology is vetted and further explored, the realization sets in that some of its early promises can’t be kept.  Rather than calmly adjusting expectations and realizing that immature technology must be given time to ripen, many people become overly disillusioned.
  • The depth-of-cynicism phase. Once the shine is off the apple, limitations are recognized.  Some initial supporters jump ship.  They sell their stock and go looking for a new hype to criticize, believe in, or profit from.
  • The true-user-benefits phase. The faithful—often those whose initial expectations included the realistic possibility of failed promise—carry on and find ways to turn the new technology to useful practice.
  • The asymptote-of-reality phase. The technology lives on in accordance with its true contributions.

A number of examples of the hype curve are given by Marks, including the Segway, cold fusion, and string theory.  Even in the realm of artificial intelligence, it seems as though the hype curve resurfaces again and again.  What to do?

This is where chapter six, “Twelve Filters for AI Hype Detection,” is so instructive and helpful.  This chapter contains a brief but masterful demonstration of the teaching of critical thinking.  And it is precisely this virtue of critical thinking that ought to be the mainstay of higher education instruction.  This chapter, although devoted to the topic of AI, has a much broader application.  I cannot reproduce Marks’ entire presentation, so I will simply quote his summation provided at the end of the chapter.

The Hype List

In a nutshell, here is the list of twelve things to consider when reading AI news:

  1. Outrageous Claims: If it sounds outrageous, maybe it is.  Recognize that AI is riding high on the hype curve and that exaggerated reporting will be more hyperbolic than for more established technologies.
  2. Hedgings: Look for hedge words like “promising,” “developing,” and “potentially,” which implicitly avoid saying anything definite.
  3. Scrutiny Avoidance: Any claim that such-and-such an AI advancement is a few years away may be made with sincerity but avoids immediate scrutiny.  Short attention spans mean that when the sell date on the promise rolls around, few people are likely to notice.  Remember the old proverb often attributed to quantum physicist Niels Bohr: “Prediction is very difficult, especially about the future.”
  4. Consensus: Beware of claims of consensus.  Remember Michael Crichton’s claim that consensus regarding new technology and science is the “first refuge of scoundrels.”
  5. Entrenched Ideology: Many AI claims conform to the writer’s ideology.  AI claims from those adherents to materialism are constrained to exclude a wide range of rational reasoning that is external to their materialistic silos.
  6. Seductive Semantics: Claiming AI is conscious or self-aware without defining the terms can paint the AI as being more than it is.  Seductive semantics is the stuff of marketing.  In the extreme, it can misrepresent.
  7. Seductive Optics and the Frankenstein Complex: AI can be wrapped in a package that tries to increase the perception of its significance.  Unrecognized, the psychological impact of the Frankenstein Complex and the Uncanny Valley Hypothesis can amplify perception far beyond technical reality.  The human-appearing body in which a chatbot resides is secondary to its driving AI.
  8. True-ish: Beware of those tricky headlines and claims that are almost true but intended to deceive.
  9. Citation Bluffing: Web articles and even scholarly journal papers can exaggerate or blatantly misrepresent the findings of others they cite.  Checking primary sources can ferret out this form of deception.
  10. Small-Silo Ignorance: The source of news and opinion always requires consideration, but those speaking outside of their silo of expertise need to be scrutinized with particular care, especially when the speakers are widely admired for their success in their silo.  Don’t be dazzled by celebrity.  This caution applies to famous actors speaking about politics but also to celebrated physicists speaking about computer science.
  11. Assess the Source: I trust content more from the Wall Street Journal than from politically motivated sites like the Huffington Post or yellow journalism sites like the National Enquirer.  But even if the article appears at a site or periodical that has earned a measure of trust, it’s wise to assess the writer of the article.
  12. Who Benefits?: Remember financial greed, relational desires, and the pursuit of power.  These are the three factors used by police detectives in their investigation of crimes.  They are also good points to remember when considering whether a report on AI is true or hype.  Is there a hidden agenda or emotional blind spot?
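Some of these filters can even be applied mechanically.  As a playful illustration of filter 2, here is a toy Python sketch that flags hedge words in a headline; the word list and the function itself are my own illustration, not anything Marks provides:

```python
import re

# Hedge words drawn from filter 2, plus a few common modals (my addition).
HEDGE_WORDS = {"promising", "developing", "potentially", "could", "may", "might"}

def find_hedges(text: str) -> list[str]:
    """Return each hedge word found in the text, in order of appearance."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w in HEDGE_WORDS]

headline = "A promising new model could potentially match human reasoning."
print(find_hedges(headline))  # ['promising', 'could', 'potentially']
```

A headline that trips this filter is not necessarily hype, of course; the point, as Marks notes, is that hedge words implicitly avoid saying anything definite, and a reader should notice them.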

As mentioned, this hype-detection list is applicable to a wide range of claims, and our students can only be strengthened by the inculcation of these elements of critical thinking.

AI technologies are here to stay, and we must navigate this techno-terrain with wisdom.  Educating students about the hype curve as well as the principles of hype detection will equip them to interact responsibly with new and emerging technologies.

     [1] Robert J. Marks, Non-Computable You: What You Do That Artificial Intelligence Never Will (Seattle: Discovery Institute Press, 2022), 42.  A more rigorous formulation of the Lovelace Test (LT) is found on page 359 in the endnotes: “Artificial agent A, designed by H, passes LT if and only if (1) A outputs o; (2) A’s outputting o is not the result of a fluke hardware error, but rather the result of processes A can repeat; (3) H (or someone who knows what H knows, and has H’s resources) cannot explain how A produced o.”

     [2] Marks, 102.