As artificial intelligence continues its rapid advancement, particularly with conversational models like ChatGPT, the question of whether AI can “think” remains a complex and often debated topic. While there is general consensus on AI’s “intelligence”—its ability to process, analyze, evaluate, and communicate information—differentiating this from “thinking” requires a deeper philosophical dive. Ancient Greek philosophers, despite not anticipating 21st-century technology, offer valuable frameworks for understanding what it truly means to think.
Plato’s Divided Line and AI’s “Hallucinations”
Plato, a central figure in ancient Greek philosophy, presented the analogy of the “divided line” in his work Republic to distinguish between higher and lower forms of understanding. At the pinnacle is noesis, an intuitive apprehension of truth that transcends reason, belief, or sensory perception. This, in Plato’s view, is a property of the soul and inherently human. Below noesis but still above the divided line is dianoia, or reason, which relies on argumentation.
Below the line, Plato identified pistis (belief), influenced by experience and sensory perception, and eikasia (imagination), which he defined as baseless opinion rooted in false perception. While Plato didn’t directly differentiate “intelligence” from “thinking,” his distinctions offer insights into AI. Without being embodied, AI may lack the intuitive understanding (noesis) that Plato associated with the soul. Furthermore, AI’s frequent “hallucinations”—where it generates plausible but inaccurate information—bear a resemblance to Plato’s eikasia, the lowest form of comprehension based on false perceptions.
Aristotle on Active Intellect, Embodiment, and Practical Wisdom
Aristotle, Plato’s student, further elaborated on the concepts of intellect and thinking in On the Soul. He distinguished between “active” and “passive” intellect. Nous, or active intellect, is immaterial and makes meaning from experience, transcending bodily perception. Passive intellect, conversely, is bodily and receives sensory impressions without necessarily reasoning. The combination of these active and passive processes, for Aristotle, constitutes “thinking.”
Aristotle, much like Plato, suggested that true “thinking” extends beyond the mere calculation or logical processing that AI might replicate. It requires an embodied form. His views on rhetoric also emphasize that deliberation and judgment are intrinsically linked to a body, feelings, and lived experience. Rhetoric, for Aristotle, involves observing and evaluating how evidence, emotion, and character influence human thinking and decisions—a nuanced understanding that seems beyond AI’s current capabilities.
Crucially, Aristotle’s concept of phronesis (“practical wisdom” or “prudence”) further illuminates AI’s limitations in “thinking.” Defined in the Nicomachean Ethics, phronesis involves lived experience that guides not only correct thought but also its application to “good ends,” or virtuous actions. While AI can analyze vast datasets to reach conclusions, phronesis demands wisdom and moral insight that transcend information processing alone.
The Question of “Thinking” Robots
Today, AI manifests in various physical forms, from autonomous vehicles to humanoid robots, seemingly bringing it closer to the embodiment that Plato and Aristotle considered necessary for thinking. However, these forms remain systems driven by code, algorithms, and datasets, not true biological bodies. Intuitive understanding, genuine emotion, inherent integrity, and practical wisdom appear to necessitate an embodied, perishable form that is deeply shaped by lived experience.
When asked directly, even advanced AI models like ChatGPT acknowledge this distinction. When prompted with “Can you think?” ChatGPT responded, “I can process information, analyze patterns, and generate responses based on data. So in some ways, it might ‘seem’ like I’m thinking. But it’s not thinking like a human does. I don’t have consciousness, emotions, intentions, or awareness. Everything I ‘do’ is based on patterns learned from huge amounts of text and a set of rules and probabilities. I can ‘simulate’ certain aspects of thinking, but I don’t truly think or understand in the human sense.” This self-assessment aligns with the philosophical perspective that true “thinking” involves a deeper, embodied, and often intuitive understanding beyond mere information processing.