A philosophy of work in the AI era: ethics, design, and business steps



By Agustin Giovagnoli / April 14, 2026

A new effort to define a contemporary philosophy of work is taking shape around philosopher Michal Masny’s research and teaching, which examine the role of paid work in people’s lives and how AI and automation alter access to work’s non-monetary goods. This philosophy of work in the AI era matters for leaders deciding how to deploy intelligent systems without eroding meaning, social connection, autonomy, and skill development [3]. Masny has held appointments at institutions including MIT, UC Berkeley, and, previously, Princeton, with a focus on the applied ethics of technology and AI [1][2][3].

Why the philosophy of work in the AI era matters for businesses

Masny’s program asks a pointed question: if technologies reliably produced material wealth, could a world with far less, or even no, paid work be desirable, and which benefits of work would need to be preserved elsewhere [3]? Grounding this in business practice requires distinguishing which aspects of work are intrinsically valuable and which are instrumental, then deciding how product, policy, and organizational design should safeguard those goods. Masny’s teaching links philosophical theory to concrete issues in computing and AI, emphasizing how technology reshapes labor markets and workplace structures [1][2][3].

What we mean by the non-monetary goods of work

Masny highlights four goods that matter beyond wages: meaning, social ties, autonomy, and skill [3]. In practice, these show up as:

  • Meaning: a sense that one’s tasks contribute to a worthwhile purpose [3].
  • Social connection: relationships and belonging formed through collaboration [3].
  • Autonomy: real discretion in how work is done, not just formal responsibility [3].
  • Skill development: chances to learn, practice, and master capabilities over time [3].

For employers, these goods are central to how roles are designed and how teams adopt AI tools. They also shape retention and the long-run capacity to adapt as tasks evolve [3].

How AI and automation reshape those goods

Emerging technologies do more than replace jobs. Decision systems, intelligent agents, and neurocomputing can reorganize remaining tasks, shift oversight, and set new boundaries on privacy and discretion [4][5]. That creates risks to autonomy and skill if tools deskill judgment or centralize control, but it can also open new spaces for meaning if systems take over rote burdens and enable higher-value coordination [4][5]. Ethical frameworks in technology highlight trade-offs involving justice, accountability, and sustainability as workplaces integrate these systems [4][5].

Which parts of work should remain distinctly human?

Masny’s question about less paid work forces a sorting exercise: which activities deliver meaning, social ties, autonomy, and skill in ways that systems cannot replicate, and which are purely instrumental [3]? Criteria for keeping tasks human can include whether the activity builds judgment through practice, relies on trust-based relationships, or safeguards worker discretion in context-sensitive decisions. This framing applies across frontline, knowledge, and creative roles where AI may transform oversight and collaboration [3][4][5].

Design and institutional tools to protect work’s non-monetary goods

Practical methods from design ethics can embed human values into workplace technologies:

  • Participatory design brings workers and users into problem framing and solution testing so systems reflect lived realities and social dynamics [4][5].
  • Value-sensitive design operationalizes values like privacy, justice, and sustainability within requirements, architectures, and evaluations [4].
  • Structured ethical reflection and checklists, such as formal review prompts, help teams anticipate impacts on autonomy, oversight, and skill before deployment [4][5].

Institutions matter too. MIT’s Ethics of Technology initiatives examine systemic social effects of computing, offering a venue where these principles are translated into practice and policy conversations [3]. For broader context on governance tools, see the NIST AI Risk Management Framework.

Practical checklist for business leaders and product teams

Use this starter checklist to align AI deployments with the non-monetary goods of work:

  • Stakeholders: Map affected roles and include worker representatives in discovery, testing, and rollout using participatory methods [4][5].
  • Autonomy by design: Identify where discretion matters and preserve local decision rights, with clear escalation paths and contestability [4][5].
  • Skill pathways: Redesign tasks to pair automation with practice opportunities, mentorship, or rotation that maintain skill development [4].
  • Transparency and oversight: Document system limits, data use, and error modes; assign accountable human oversight where impacts are significant [4][5].
  • Fair distribution: Evaluate who benefits and who bears risks, and adjust processes or access to ensure just outcomes across groups [4][5].
  • Metrics: Track meaning, social connection, autonomy, and skill using surveys and qualitative feedback alongside efficiency metrics [3][4].

Scenarios and policy choices: less paid work as a plausible future

Masny’s framing treats a reduction in paid work as a live scenario if automation reliably produces material abundance [3]. Planning under this uncertainty means building roles and institutions that can preserve meaning, social ties, autonomy, and skill whether tasks are restructured or hours decline. It also means stress-testing workforce strategies against multiple paths for AI and the future of paid work, with clear criteria for when to keep humans in the loop and when to redeploy talent into new, dignified activities [3][4][5].

Conclusion: strategy takeaways for leaders

  • Treat the non-monetary goods of work as design constraints, not afterthoughts [3].
  • Use participatory and value-sensitive design to align systems with worker values and organizational goals [4][5].
  • Build governance mechanisms that protect autonomy, skill, and justice as tasks shift [4][5].

To engage with Masny’s work and related ethics-of-technology efforts, see coverage and materials from MIT and his academic appointments [1][2][3].

Sources

[1] Michal Masny
http://michalmasny.com/

[2] Michal Masny, Curriculum Vitae (PDF)
https://michalmasny.com/resources/Masny-CV.pdf

[3] A philosophy of work | MIT News
https://news.mit.edu/2026/philosophy-work-michal-masny-0409

[4] Ethical Considerations in Emerging Technologies
https://rijournals.com/wp-content/uploads/2025/03/RIJEP-41-P9-2025.pdf

[5] Ethical, Legal and Social Implications of Emerging Technology (ELSIET) Symposium
https://pmc.ncbi.nlm.nih.gov/articles/PMC9243845/
