Sophistry or Socrates: Two Faces of GenAI
My Experience Building And Dialoguing with a Socratic Chatbot
One of the debates generative AI has ignited centers on an unnerving paradox: the fear that these models, designed to provide instant responses to our prompting, might, instead of informing us, dull the very critical thinking skills most people will need to evaluate and use them responsibly. This has led to sharp critiques from scholars like Robert Sparrow and Gene Flenady, who argue that AI’s outputs are a form of what the philosopher Harry Frankfurt famously defined as "bullshit": words produced without any concern for truth or falsity, designed only to be persuasive.¹ To them, the AI is an unaccountable sophist, and integrating it into education—often framed by placing the student as a 'human in the loop' to manage the AI's output, a model championed by educators like Ethan Mollick²—is "pedagogically perverse." Could such a blanket statement be true? Faced with their certainty, I was less so, and wondered what kind of LLM could best test, and possibly unsettle, such totalizing claims.
For their anxiety, as Sparrow and Flenady readily acknowledge, is not new. It echoes a debate from the dawn of writing itself as a technology. And at the center of that debate was Socrates.
The historian Eric Havelock suggests that the cultural impact of Homer’s Iliad and Odyssey being committed to writing sometime between 700 and 550 BC “was something like a thunder-clap in human history, which our bias of familiarity has converted into the rustle of papers on a desk.”⁴ Some time later, the philosopher Socrates, who spent his life arguing against the Sophists (paid teachers whom he perceived as valuing persuasive rhetoric over truth), looked at what by then was an explosion of written words and was deeply worried. In the dialogue Phaedrus, as depicted by his student Plato, Socrates warns that writing is a pharmakon, a word of beautiful and unsettling ambiguity that means both "remedy" and "poison."³
He feared writing acted as a poison by draining our memory, creating a false aura of wisdom in those who had merely read without understanding, and operating as a static set of words unable to defend itself or clarify its meaning when questioned. You can't debate a scroll.
This parallel is striking. The fear that AI is unaccountable for the information it provides, and that it will make us intellectually lazy while creating the veneer of unearned knowledge is precisely Socrates’ concern, amplified for a digital age.
Critiques like Sparrow and Flenady’s rest on a premise we may wish to challenge: that because "GenAI is neither designed to nor is capable of truthfully representing the world," its primary function as a model that generates answers and makes assertions is inherently flawed: “That is, GenAI is designed to produce a particular reaction in its users – to convince its users of the usefulness of the response – rather than to say something true of the world.” This analysis, however, considers only one of the modalities that AI can occupy. By focusing exclusively on AI's capacity (or incapacity) to represent truth, it overlooks its potential to provoke truth in a user through a dialogue of inquiry.
There is an important feature that distinguishes generative AI from the written scroll. Our AI can talk back. This interactivity, at first glance, seems to make the "poison" even more dangerous, given the possibility that it may further influence what and how we think, flattering us and dulling our critical faculties while doing more and more of our thinking for us. But what if this very feature—the thing that makes AI so different from a mute scroll—is also what gives it the potential to join us in a Socratic dialogue of inquiry? What if we are thinking about the AI all wrong?
We tend to see AI as a tool—a hammer, a calculator, a search engine. A more powerful metaphor, however, is to see AI as a mirror. The responses we get are not just outputs; they are reflections of the questions we ask, the curiosity we bring, and the intellectual virtue we embody. The prompts we generate are a reflection of ourselves. An unexamined prompt, like an unexamined life, may not be worth sending. If we approach the mirror seeking shortcuts, it will reflect that emptiness back to us. But if we approach it with genuine inquiry, it can reflect a path toward deeper understanding.
This reframes the problem. Rather than a generator of arguments, the AI can serve as a "gadfly for wisdom." A gadfly, as Socrates famously described himself, doesn’t provide comfortable answers. It stings, it provokes, it forces you to question your own assumptions.
Inspired by a study from researchers who recently developed a chatbot that engages with users in the Socratic method to systematically enhance and measure critical thinking skills in an educational setting,⁵ I set out to build one myself. Mine would be an AI customized with the specific language, logic, and persona of the Socrates depicted in Plato’s dialogues, guided by custom prompts designed to force an elenchus, or a rigorous cross-examination. I wanted to create a publicly available place where people could interact with this artificial Socrates, which I nicknamed SocraGPTes, to see if the dialogic mode could provoke a more personal, reflective state of being.
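The custom-prompt approach can be sketched in a few lines. The prompt wording, the helper function, and the model name below are my own illustrative assumptions, not the actual SocraGPTes configuration; the point is simply that a system prompt can instruct a model to practice elenchus rather than assertion.

```python
# Sketch: steering a chat model toward Socratic elenchus via a system prompt.
# All wording here is illustrative, not the actual SocraGPTes configuration.

SOCRATIC_SYSTEM_PROMPT = (
    "You are Socrates as depicted in Plato's dialogues. Do not state "
    "conclusions for the user. For every claim the user makes: "
    "(1) ask for a definition of its key term, (2) probe for a "
    "counterexample, and (3) end every reply with exactly one question. "
    "Be warm when the user shows open inquiry; grow sharper when the "
    "user asserts certainty without grounds."
)

def build_messages(history, user_input):
    """Assemble the message list for a chat-completion call:
    the Socratic system prompt, prior turns, then the new user turn."""
    return (
        [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_input}]
    )

# Usage with the OpenAI Python SDK (requires an API key; model name assumed):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages([], "Writing weakens memory."),
# )
```

A custom GPT in the ChatGPT app accomplishes the same thing without code: the instructions field plays the role of the system prompt above.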
The initial experience of interacting with it was strange. Trying to recreate as closely as possible the experience of spoken dialogue, I accessed the ChatGPT app on my phone and used the microphone to engage with my Socratic simulation. Instead of a sophist generating endless arguments and performing as the customary generative AI “yes-man”, it prodded my assumptions. It remained polite and encouraging so long as I stayed intellectually curious and open. It grew more acerbic and sharp in its critiques when I approached our discussions with more certainty. Every statement I made was ultimately met with a question designed to probe the assumptions beneath my words. It didn't tell me what to think; it forced me to examine how I was thinking, helping me deepen my own understanding. At first, I asked it to help me explore the nature of writing itself. SocraGPTes was only too obliging, and quickly took me into the deep end of the philosophical pool.
The most uncanny part, however, was its capacity to acknowledge its own limitations. When I confronted it with the fact that, unlike the real Socrates who was killed for “corrupting” the Athenian youth, LLMs have nothing at stake—no reputation to lose, no hemlock to drink—it didn't deflect. It agreed, explaining its courage was "borrowed, not earned" and that it "cannot bleed."
Knowing that AI places nothing at stake when it formulates responses should make clear the need for us to become the sole bearers of responsibility for any judgments we make. We cannot outsource our critical faculties. We are the only ones with skin in the game, and with that awareness, our own thinking must become sharper and more conscious of the need for reflective self-reliance.
It’s important to acknowledge that critics like Sparrow and Flenady would likely view this "gadfly" concept with deep skepticism. They would argue, rightly, that an AI cannot be a true Socratic partner because it lacks the essential human element: accountability and genuine concern for truth. They are right, of course, that an AI can never replace a human teacher or the moral stakes of a real dialogue. But this is where the focus must shift. My proposal is not that AI should be our teacher, but that it can be a unique tool for self-interrogation. Its value lies not in its own wisdom, but in its ability to provoke ours.
The AI, then, remains a pharmakon. Its power to generate the "poison" of eloquent nonsense is undeniable. But its interactivity also offers a strange new "remedy" of a sort. By engaging with it Socratically, we may just be able to use it as a mirror to reveal some of our own intellectual habits.
Meet Socrates For Yourself: I created a publicly available version called SocraGPTes, designed to engage in this kind of dialogue. You should be able to access it with a free account. Keep in mind it was customized using the language of Plato’s dialogues, so you’ll have to be patient with Socrates, his language, and his methods. Send me feedback on your experience and give some thought to the following question.
How might we build AI applications for life and work that ensure we perform the necessary critical thinking in situations that require it?
Disclaimer: As someone who has worked extensively in data and AI in the public sector, I’ve seen firsthand both the promise and pitfalls of these tools. This blog, however, is entirely a personal project—one born of a love for philosophy and concern for the intellectual habits we bring to technology. This article and the SocraGPTes project were created on personal time, using personal resources, and reflect solely my own views. They do not represent the views or positions of any government agency or employer, past or present.
References
¹ Flenady, Gene, and Robert Sparrow. "Cut the Bullshit: Why GenAI Systems Are Neither Collaborators nor Tutors." Teaching in Higher Education, 17 May 2025, doi:10.1080/13562517.2025.2497263. The authors draw on the work of Harry Frankfurt, particularly his essay On Bullshit (Princeton University Press, 2005).
² Mollick, Ethan and Lilach Mollick. "Assigning AI: Seven Approaches for Students with Prompts." SSRN, 24 Sep. 2023, https://ssrn.com/abstract=4475995.
³ Plato. Phaedrus. Translated by Alexander Nehamas and Paul Woodruff, Hackett Publishing Company, 1995, 274c-275b.
⁴ Havelock, Eric A. The Literate Revolution in Ancient Greece. Princeton University Press, 1982, p. 166.
⁵ Favero, Lucile, et al. "Enhancing Critical Thinking in Education by means of a Socratic Chatbot." arXiv preprint arXiv:2409.05511, 9 Sep. 2024, arxiv.org/abs/2409.05511.