I Am Begging AI Companies to Stop Naming Features After Human Processes

By Agustin Giovagnoli / May 6, 2026

Human metaphors are creeping into product copy, docs, and demos. The trend sounds friendly, but anthropomorphic AI feature names distort expectations and miscalibrate trust. For teams building or buying AI, this is not cosmetic: it shapes how users judge capability, reliability, and risk [3][5].

Case study: What Anthropic’s ‘auto dream’ in Claude Code actually does

Anthropic is testing an automated maintenance routine for Claude Code called 'auto dream'. Claude Code keeps persistent project memory in CLAUDE.md files, and the routine runs a background process that scans those notes, rewrites entries, merges overlapping details, and prunes stale or conflicting information to sustain coherence over long coding sessions [1][2]. It does not correspond to biological sleep or dreaming, and it never changes the application's codebase [1][2]. Some interfaces expose it as a /dream command, but that label sits on top of routine indexing and cleanup work [1][2].

If you are asking what Anthropic's auto dream actually does, the practical answer is memory maintenance: it normalizes details, removes drift, and keeps project context usable across iterations. That framing is accurate and leads to better mental models for users operating Claude Code over extended tasks [1][2].
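To make the mechanism concrete, here is a minimal sketch of a merge-and-prune pass over a CLAUDE.md-style notes file. It is an illustration, not Anthropic's implementation: the function name and the [stale] marker are assumptions for this example, and the reported feature also rewrites and consolidates overlapping entries, which requires model judgment rather than exact string matching [1][2].

```python
from pathlib import Path


def maintain_project_memory(notes_path: Path) -> None:
    """Consolidate a project-memory file in place.

    This is purely a data operation over the notes file: it never
    reads or modifies application source code.
    """
    lines = notes_path.read_text(encoding="utf-8").splitlines()

    seen: set[str] = set()
    kept: list[str] = []
    for line in lines:
        normalized = line.strip().lower()
        # Prune: drop entries previously flagged as outdated
        # (the "[stale]" marker is invented for this sketch).
        if normalized.startswith("- [stale]"):
            continue
        # Merge (simplified): collapse exact duplicate bullet entries.
        if normalized.startswith("- ") and normalized in seen:
            continue
        seen.add(normalized)
        kept.append(line)

    notes_path.write_text("\n".join(kept) + "\n", encoding="utf-8")


if __name__ == "__main__":
    notes = Path("CLAUDE.md")
    if notes.exists():
        maintain_project_memory(notes)
```

Nothing in that sketch resembles sleep or dreaming; it is file reading, deduplication, and rewriting, which is exactly why a mechanism-oriented name fits better.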

Why anthropomorphic AI feature names are misleading — evidence from research

A growing body of critiques argues that importing human categories such as persona, intention, emotion, or memory into technical descriptions muddies understanding. These framings can make token‑prediction patterns look like stable characters or agents, turning interface metaphors into claims about mechanism [3][4]. Work on methodological anthropomorphism in AI safety evaluations specifically warns that human‑style labels can bias both the design and interpretation of tests, misleading researchers as much as end users [4].

On the consumer side, anthropomorphic cues increase perceived social presence and trust, even when systems are not more reliable or accountable. This can nudge users toward unearned confidence and harmful reliance [5]. Together, these findings show how human metaphors in AI products raise clarity and safety concerns, not just stylistic ones [3][4][5].

Real risks for businesses and users

The stakes are practical. When features are framed as sleeping, dreaming, or forgetting, users may assume humanlike goals or judgment. That shift can drive over‑trust, poor risk assessment, and acceptance of unvetted advice, especially among vulnerable users [5]. It also burdens support teams with preventable confusion about what the system can and cannot do.

For enterprise buyers and product leaders, the business risks include:

  • Misuse driven by inflated expectations of agency or understanding [3][5].
  • Harder audits and evaluations when personas or human labels obscure mechanisms [4].
  • Brand and legal exposure if anthropomorphic copy encourages reliance without corresponding reliability [5].

These are avoidable with clearer language and documentation that treat systems as optimization procedures over data and text, not feeling entities [3][4].

Practical naming and communication alternatives

Claude Code’s example is instructive. The feature is background memory maintenance that scans, rewrites, merges, and prunes project notes [1][2]. Use terms that match those actions.

  • Prefer mechanism‑oriented names: indexing, memory maintenance, context consolidation, merge and prune, conflict resolution (see the sketch after this list) [1][2].
  • Write precise summaries: what inputs are read, what data are produced, when processes run, and what is never touched, such as the codebase [1][2].
  • Document limits: potential staleness, conflicts, and how the system resolves or discards entries [1][2].
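A hypothetical API surface shows the difference in practice. The names, parameters, and docstring below are invented for illustration; they are not Claude Code's actual interface.

```python
# Anthropomorphic: tool.dream() implies sleep, mood, or intention.
# Mechanism-oriented: name the data operation and state its contract.
def consolidate_memory(
    notes_path: str,
    merge_duplicates: bool = True,
    prune_stale: bool = True,
) -> dict[str, int]:
    """Scan project notes, merge overlapping entries, prune stale ones.

    Reads:    only the memory file at ``notes_path``.
    Produces: a rewritten memory file and a change log of every edit.
    Runs:     in the background between sessions.
    Never:    touches application source code.
    """
    # Placeholder body; a real pass would perform the edits above.
    return {"merged": 0, "pruned": 0}
```

The docstring doubles as the precise summary the second bullet calls for: inputs read, data produced, schedule, and explicit exclusions.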

Clear copy reduces confusion and supports safer adoption.

How to train teams and craft UX copy to reduce confusion

  • In release notes and tooltips, describe operations concretely: scan notes, rewrite entries, merge duplicates, prune conflicts. Avoid human metaphors like sleep or dreaming [1][2][3].
  • Add example prompts that set correct expectations for long‑running projects and clarify how memory files are updated [1][2].
  • Maintain change logs for memory maintenance so users can review what was merged or pruned, reinforcing that this is a data operation, not a mood or intention shift (see the record sketch after this list) [1][2][3].
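For the change-log point, one workable shape is a structured record per edit. The type and field names here are assumptions, sketched to show the kind of audit trail that reinforces "data operation, not mood shift".

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class MaintenanceLogEntry:
    """One reviewable entry in the memory-maintenance change log."""

    timestamp: datetime
    operation: str        # "merge", "prune", or "rewrite"
    before: list[str]     # entries as they appeared pre-maintenance
    after: str | None     # resulting entry, or None if pruned
    reason: str           # plain-language justification for the edit


entry = MaintenanceLogEntry(
    timestamp=datetime.now(timezone.utc),
    operation="merge",
    before=["- tests use pytest", "- run tests with pytest"],
    after="- run tests with pytest",
    reason="two entries recorded the same fact",
)
```

A user scanning such a log sees concrete edits with reasons, which sets far better expectations than being told the agent "dreamed".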

When teams align on language, they avoid the trap of persona‑based narratives that skew evaluations and user trust [3][4][5].

Governance angle: naming affects safety conversations

Methodological critiques show that anthropomorphic framings complicate how we test and reason about systems [4]. That carries into governance and oversight, where clarity about mechanisms and limits is essential [3][5]. For programmatic guidance on risk practices, see the NIST AI Risk Management Framework.

Conclusion: Three steps product teams can take now

  • Audit your product and docs for anthropomorphic AI feature names and replace them with mechanism‑focused terms tied to actual operations (a minimal audit script follows this list) [1][2][3][4][5].
  • Update help centers and in‑product copy to explain inputs, processes, outputs, and exclusions, such as not modifying the codebase [1][2].
  • Align evaluation protocols with mechanism descriptions to avoid persona‑driven misreadings and over‑trust in results [3][4][5].
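A first pass at the audit step can even be automated. This sketch greps docs for human-process vocabulary; the term list is illustrative, will produce false positives, and should be tuned to your own product's copy.

```python
import re
from pathlib import Path

# Illustrative starting list; extend or trim for your product's copy.
TERMS = ["dream", "sleep", "remember", "forget", "feel", "believe", "mood"]
PATTERN = re.compile(r"\b(" + "|".join(TERMS) + r")\w*", re.IGNORECASE)


def audit_docs(root: Path) -> None:
    """Print doc lines that use human-process vocabulary."""
    for path in sorted(root.rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")


if __name__ == "__main__":
    audit_docs(Path("docs"))
```

Run it against your docs directory, review each hit, and decide whether the metaphor earns its place or should become a mechanism-focused term.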

Sources

[1] Anthropic tests ‘auto dream’ to clean up Claude Code’s memory
https://tessl.io/blog/anthropic-tests-auto-dream-to-clean-up-claudes-memory/

[2] Your AI Coding Agent now needs sleep — here’s what /dream actually does
https://levelup.gitconnected.com/your-ai-coding-agent-now-needs-sleep-heres-what-dream-actually-does-81d32977ec25

[3] The Case Against Anthropomorphic AI
https://blog.burkert.me/posts/llm_deanthropomorphization/

[4] Methodological Anthropomorphism in AI Safety Evaluations
https://arxiv.org/pdf/2603.13255

[5] The dark side of AI anthropomorphism: A case of misplaced …
https://scholarspace.manoa.hawaii.edu/bitstreams/b6cedcc3-cd5c-4744-bb99-2d8f90b334ec/download
