From Signal magazine: How Jaron Lanier is reframing what it means to build – and trust – AI with human-centered AI accountability

By Agustin Giovagnoli / January 7, 2026

Hero image: Conceptual illustration of interconnected humans and data shaping an AI system.

Introduction: Reframing AI — From Autonomous Entity to Human-Made System

Jaron Lanier’s profile in Signal magazine underscores a simple shift with big stakes: stop treating AI as an autonomous being and start treating it as a socio-technical tool, built from human contributions, incentives, and oversight. In this view, human-centered AI accountability is not optional—it’s the basis of public trust and organizational reliability [1].

What Lanier Means by a Socio-Technical View of AI

Lanier challenges the reification of AI—the idea that it’s a new independent entity with internal defects to fix. He argues that behavior like hallucinations reflects socio-technical design choices: data pipelines, training practices, and governance—not mysterious flaws in a “being.” The point is to keep AI accountable to identifiable people and institutions rather than outsourcing responsibility to an abstraction [1]. This is a clear, practical lens for leaders exploring socio-technical AI design and AI trust and governance.

Trust Isn’t in Models — It’s in People and Institutions

Trust in AI ultimately tracks the human arrangements behind the model. Governance structures, incentives, data stewardship, and auditability determine whether a system earns credibility—not just better prompts or larger datasets [1]. That means:

  • Identify real owners for risks, from data provenance to model updates [1].
  • Align incentives so reliability, transparency, and remediation are rewarded [1].
  • Treat disclosure and audit trails as core features, not add-ons [1].

This reframing opens pathways to human-centered AI accountability across product lifecycles, translating lofty principles into operational practices.

Evidence: Studies on Data Collection, Community Trust, and Inclusion

Empirical work on data collection reinforces Lanier’s critique. People’s willingness to share data depends on who collects it and how: organizations aligned with affected communities enjoy higher trust than distant start-ups or governments. Intersectional factors, including age and disability status, shape perceived risks and participation decisions [3]. For businesses, data stewardship for AI must center community alignment, consent, and inclusive design from the outset [3].

Business Implications: Risks to Democratic Deliberation and Brand Trust

Scholars warn that generative AI can undermine democratic deliberation by eroding shared reality and amplifying manipulation if systems are built and deployed without dignity-centered safeguards. This underpins a call for a “right to reality,” emphasizing that people should not be subjected to opaque systems that steer perceptions or decisions without recourse [2]. Lanier’s concerns echo these stakes: without accountability, AI can destabilize social and epistemic foundations—creating legal, reputational, and operational risk for companies [1][2]. For leaders weighing Lanier’s critique alongside the case for a “right to reality” in generative AI, the policy and brand implications are inseparable from product choices [1][2].

Human-Centered AI Accountability: A Governance and Data Stewardship Checklist for Companies

Use this checklist as a practical baseline for how to build trustworthy AI systems for business:

  1. Assign accountable owners: Name executives for model behavior, data sourcing, and deployment risk; publish their remit [1].
  2. Document data provenance: Track sources, licenses, community agreements, and exclusions; maintain a change log (see the sketch after this checklist) [1][3].
  3. Align with communities: Partner with organizations representing affected users; co-design consent and feedback mechanisms [3].
  4. Practice transparent consent: Use clear notices, accessible formats, and opt-out pathways; respect withdrawal requests [3].
  5. Monitor and report: Track errors (including hallucinations), incidents, and fixes; publish periodic transparency notes [1].
  6. Build recourse: Offer user-facing remediation, appeal routes, and human review for consequential outputs [2].
  7. Incentivize reliability: Tie performance goals to safety, inclusion, and post-deployment learning—not just speed or scale [1][3].
  8. Audit regularly: Review governance, data stewardship, and outcomes across teams and vendors [1][3].
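
To make item 2 concrete, the sketch below shows one minimal way to record provenance and keep a change log. It is an illustration only: the class and field names are assumptions for discussion, not a schema prescribed by Lanier, Signal, or the cited research, and most teams would adapt it to their existing data-governance tooling.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: record and field names are assumptions,
# not a published schema from the article or its sources.

@dataclass
class DataSourceRecord:
    name: str                    # e.g. a licensed corpus or community partnership
    license: str                 # license or agreement governing use
    community_agreement: str     # reference to consent/partnership terms, if any
    exclusions: list[str] = field(default_factory=list)  # material deliberately left out

@dataclass
class ProvenanceChange:
    changed_on: date
    source_name: str             # which DataSourceRecord was touched
    description: str             # what changed and why
    accountable_owner: str       # the named owner responsible for the change

# The change log is an append-only list reviewed at each audit.
change_log: list[ProvenanceChange] = [
    ProvenanceChange(
        changed_on=date(2026, 1, 7),
        source_name="community-survey-v2",
        description="Removed records lacking documented consent",
        accountable_owner="Data Governance Lead",
    ),
]
```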

For complementary frameworks, see the NIST AI Risk Management Framework.

Human-Centered AI Accountability: Roles Across the Organization

  • Product: Translate requirements into features and metrics; plan for post-deployment monitoring [1].
  • Data governance: Oversee provenance, consent artifacts, and retention policies; ensure inclusive sourcing [3].
  • Legal/compliance: Embed recourse, disclosures, and alignment with rights such as a “right to reality” in user-facing policies [2].
  • Community engagement: Build trusted partnerships for participatory data practices and ongoing feedback loops [3].

These cross-functional processes operationalize AI trust and governance day to day.

Case Examples & Thoughtful Next Steps

Consider two directions a team might take. In one, a model launches on a dataset gathered by a distant vendor with unclear consent. Users report distortions and opt-outs spike—trust erodes. In the other, the team partners with community organizations, publishes provenance documentation, and offers visible recourse. Adoption rises as users see their input reflected in updates. The difference isn’t just technical—it’s governance [1][3].

Pilot actions to get started:

  • Run a provenance and consent gap assessment on one high-impact model [1][3].
  • Stand up a transparency note, updated quarterly, covering data sources, known limitations, and incident learnings (see the sketch after these steps) [1].
  • Establish user recourse and a designated owner for remediation within 30 days [2].
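
As a minimal sketch of the quarterly transparency note, the example below renders data sources, known limitations, and incident learnings as plain text. The incident fields and layout are assumptions for illustration, not a required format; in practice this would pull from an existing incident tracker.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: the incident fields and note layout are assumptions,
# not a format required by any framework cited in this article.

@dataclass
class Incident:
    reported_on: date
    summary: str      # e.g. "hallucinated citation in a customer-facing answer"
    fix: str          # remediation applied
    owner: str        # accountable person or team

def transparency_note(quarter: str, data_sources: list[str],
                      limitations: list[str], incidents: list[Incident]) -> str:
    """Render a plain-text quarterly note covering sources, limitations, and learnings."""
    lines = [f"Transparency note, {quarter}", "", "Data sources:"]
    lines += [f"  - {source}" for source in data_sources]
    lines += ["", "Known limitations:"]
    lines += [f"  - {limitation}" for limitation in limitations]
    lines += ["", "Incidents and fixes:"]
    lines += [f"  - {i.reported_on}: {i.summary} -> {i.fix} (owner: {i.owner})"
              for i in incidents]
    return "\n".join(lines)
```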

For more step-by-step resources, explore AI tools and playbooks.

Conclusion and Further Reading

Lanier’s message is pragmatic: treat AI as a human-made system and build trust through the governance, incentives, and stewardship that shape it. Aligning socio-technical AI design with dignity and a right to reality is not just ethics—it’s risk management and strategy [1][2][3]. For deeper context, read the profile in Signal magazine, the policy arguments on reality and dignity, and research on inclusive data collection [1][2][3].

Sources

[1] Issues Archive | Signal Magazine – Microsoft Source
https://news.microsoft.com/signalmagazine/issue/

[2] A Right to Reality: Human Dignity and Generative AI
https://www.tandfonline.com/doi/full/10.1080/18918131.2025.2582990?src=

[3] Designing an Online Infrastructure for Collecting AI Data From People With Disabilities
https://www.microsoft.com/en-us/research/wp-content/uploads/2021/01/Inclusive_AI_Datasets_FINAL.pdf
