
Inside the AI consciousness debate: Why it matters for law, ethics, and business
Modern enterprises can’t avoid the AI consciousness debate. With systems increasingly fluent and persuasive, the question isn’t just philosophical—it shapes how leaders think about personhood, legal exposure, communications, and governance frameworks [2][3][4]. Michael Pollan’s recent comments sharpen the stakes: if consciousness stems from embodied feelings, today’s powerful models “think” but don’t feel, and should not be treated as persons [2][3].
The AI consciousness debate: definitions that matter
What do we mean by “consciousness”? Pollan points to a feeling-first account: feelings arise from living bodies that regulate themselves through homeostatic drives. On this view, the body continually signals its internal state to the brain, producing the felt, first-person perspective that marks subjective experience—an approach associated with neuroscientists like Antonio Damasio and Mark Solms [2][3]. By contrast, information processing and linguistic performance—what today’s AI does—are not sufficient to generate felt experience, even if they can simulate it [2][3]. For a broader philosophical primer, see the Stanford Encyclopedia of Philosophy (external).
Why Pollan says machines can’t feel
Pollan argues that without organic bodies and homeostatic needs, AI lacks the substrate of feeling that constitutes consciousness. These systems may imitate reasoning, conversation, and even emotion words, but they do not possess the lived, first‑person perspective that arises from bodily regulation and affective life [2][3]. On this basis, he rejects extending personhood or moral rights to AI, warning that doing so could shift power away from humans while we still neglect many unquestionably conscious humans and animals [2][3]. Coverage of his stance underscores a broader cultural moment: we’re mistaking performance for presence [2].
Counterarguments and why personhood is contested
Not all critiques hinge on consciousness alone. Some philosophers and ethicists argue that moral status should track the capacity to suffer rather than intelligence or linguistic prowess. History shows we inconsistently apply personhood—granting it to non-conscious entities like corporations while denying it to many conscious animals—underscoring that personhood is a normatively loaded, socially negotiated category rather than a straightforward scientific label [4].
Other thinkers emphasize that meaningful debate over AI personhood must engage hard questions about qualia and fundamental ontology: what is it like to be a system, and how could we ever know? Treating AI “as if” persons, they argue, shortcuts these unresolved issues and risks policy mistakes [5]. Some perspectives, influenced by Buddhist thought, treat personhood as a useful convention rather than a deep metaphysical essence—urging caution about reifying labels in ways that obscure ethical priorities [6].
Practical implications for companies and policymakers
- Legal and governance: Assigning rights or agency to tools can blur accountability, complicate contracts, and introduce unforeseen liabilities [4][5].
- Ethics and compliance: If moral attention fixates on non-conscious systems, genuine human and animal welfare can be sidelined—misdirecting resources and decision-making [2][3][4].
- Brand and communications: Anthropomorphizing AI (in UX or marketing) can mislead users about capabilities, intent, and responsibility, inviting reputational blowback [4][6].
- Workforce relations: Overstating AI “understanding” may devalue human expertise, subtly eroding notions of intrinsic human dignity in favor of performance metrics [6].
Leaders should ground policies in demonstrable harms and the capacity to suffer, not in assumed machine consciousness. Clear positions here support auditability and public trust [4][6].
How to govern products assuming AI is not conscious
- Label systems clearly as tools. Avoid anthropomorphic copy, avatars, or claims of “understanding.” Tie responsibilities to accountable humans [4][6].
- Center harm and suffering in reviews. Evaluate downstream effects on people and animals rather than speculating about machine qualia [4].
- Document limitations. Explain that linguistic fluency does not imply first-person experience or moral standing [2][3][4].
- Train teams. Help designers, marketers, and support staff communicate capabilities without implying agency or rights [4][6].
- Stress human dignity. Resist equating personhood with performance; highlight the social and cultural value of human judgment [6].
For practitioners seeking structured approaches to rollout and risk management, Explore AI tools and playbooks.
Takeaways for decision-makers
- On the feeling-based account, today’s AI lacks the embodied substrate of consciousness, so treating it as a person is unwarranted [2][3].
- Personhood is a policy choice with ethical trade-offs; historical inconsistencies counsel humility and clarity [4].
- Anchor governance in concrete harms and transparent accountability to protect users, reputation, and the public interest [4][6].
Sources
[1] Michael Pollan on AI Consciousness and Moral Consideration
https://www.linkedin.com/posts/tatyana-norman-webler_michael-pollan-says-humanity-is-about-to-activity-7429265809438425088-jWBV
[2] Michael Pollan says AI may ‘think’ — but it will never be conscious
https://www.keranews.org/2026-02-19/michael-pollan-says-ai-may-think-but-it-will-never-be-conscious
[3] Michael Pollan Says Humanity Is About to Undergo a Revolutionary …
https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
[4] Rethinking personhood and agency: how AI challenges human …
https://pmc.ncbi.nlm.nih.gov/articles/PMC12827504/
[5] Taking the “AI Personhood” Debate Seriously
https://daveshap.substack.com/p/taking-the-ai-personhood-debate-seriously
[6] The consequences of AI for human personhood and creativity
https://blog.jlipps.com/2023/04/the-consequences-of-ai-for-human-personhood-and-creativity/