Will AI become your assistant or your boss? An ethicist explains

July 11, 2024

An AI ethicist argues that while AI could increase efficiency, it also risks creating a new surveillance-based work environment.

BY Nell Watson

Will AI be your dream assistant or the boss you never asked for?

Artificial intelligence is changing how we work. But it is still unclear what tasks AI will ultimately take over. Will we leverage AI as our own personal executive assistants? Or will AI end up overseeing human workers?

As German social psychologist Erich Fromm once said, “The danger of the past was that men became slaves. The danger of the future is that men may become robots.”

I am an AI expert and ethicist, and here’s what we currently know about whether AI will become the dream assistant workers have always wanted—or the boss they never asked for.

Advancements in AI

Machine decision-making is taking over workplace management, and this algorithmic management is transforming both middle management and the C-suite. AI can substantially augment executive functions by rapidly analyzing complex data on market trends, competitor behavior, and personnel. An agentic AI advisor could provide a CEO with succinct, data-driven recommendations on strategic decisions such as market expansion, product development, acquisitions, and partnerships.

Agentic AI, which refers to AI systems that can adapt and achieve complex goals independently, marks a new phase in AI development. These systems integrate with large language models (LLMs) to provide tools for innovation, logistics, and risk management. However, the independent goal-achieving capabilities of agentic AI pose unique ethical and safety challenges compared to other AI systems, necessitating careful alignment of AI goals with human values to prevent unintended consequences. As these technologies evolve, diligent oversight becomes increasingly critical to harness the benefits while mitigating the risks.

AI currently can’t replace essential human leadership qualities like trustworthiness and inspiration. Still, the rise of AI in management has significant social implications. Automation eroding middle management roles could lead to identity crises as the traditional understanding of “management” transforms. In management consultancy, AI could be disruptive by providing data-backed strategic advice. However, deploying AI in such critical roles demands meticulous oversight to validate recommendations and mitigate risks.

The evolving impact of AI on employment

In the past, AI’s greatest impact on employment was replacing manual labor. But now, AI is also taking over more “intellectual” tasks like data analysis and customer service. I believe that, unlike earlier technological shifts, AI threatens to monopolize intellectual work, potentially leaving humans with only basic manual and emotional tasks. Some experts call this the “enclosure of intellectual activity”: in other words, a de-skilling of the labor force. The ability of agentic AI to adapt and achieve complex goals independently could accelerate this trend, rapidly optimizing and automating intellectual tasks.

Algorithms already guide gig workers toward fragmented, short-term tasks. While this may not cause widespread unemployment, it could make jobs less rewarding and services less effective. I would argue that “gigification” breaks work into disconnected tasks, eroding long-term involvement and satisfaction. Agentic AI might be used to manage and optimize these fragmented tasks, potentially exacerbating the issues of unfulfilling jobs and subpar services.

Algorithm-driven customer service often lacks human nuance and understanding, depersonalizing solutions. For instance, Denmark’s algorithmic unemployment benefits system has been criticized for surveilling recipients and cutting benefits, turning a supposed time-saver into an administrative nightmare. Similar algorithmic issues have arisen in Italy and Spain, affecting teaching assignments and worker oversight. These cautionary tales emphasize the need for careful planning, testing, and ethics when automating complex government tasks. Errors here can significantly impact livelihoods and well-being, so contingency plans and human oversight are paramount.

The risks of algorithmic management

Integrating machine learning and automation into business promises to increase efficiency. However, I would argue that these systems jeopardize worker autonomy and well-being. AI can enhance human potential or supplant it, reducing workers to mere cogs in an automated machine. The adaptability and independent goal-achievement of agentic AI could lead to even more impersonal and draconian management systems.

The impersonal nature of these systems makes them potentially tyrannical, controlling without context, empathy, or understanding. Their complexity can reach a point where even their creators can’t fully explain their behavior. Once established, these systems are hard to roll back, risking the entrenchment of inhumane work environments. Agentic AI’s ability to optimize for efficiency without human context or empathy could perpetuate and deepen these issues.

For instance, AI-driven applicant tracking systems (ATS) use keyword filtering to disqualify (sometimes well-suited) candidates, turning recruitment into a dehumanizing buzzword game. And contesting the decisions of ATS models can be challenging due to their opacity.
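
To make the keyword-filtering concern concrete, here is a minimal sketch in Python. The job keywords, resume text, and function are hypothetical, invented purely for illustration; they do not reflect any real ATS product.

```python
# Toy illustration (not any vendor's actual system): a naive keyword filter
# of the kind described above. A qualified candidate can be screened out
# simply for phrasing a skill differently.

REQUIRED_KEYWORDS = {"python", "kubernetes", "stakeholder management"}  # hypothetical posting terms

def passes_keyword_filter(resume_text: str) -> bool:
    """Return True only if every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

# A strong candidate who writes "managed stakeholders" instead of
# "stakeholder management" never reaches a human reviewer.
resume = "Led Python services on Kubernetes; managed stakeholders across three teams."
print(passes_keyword_filter(resume))  # False
```

Because a filter like this matches exact strings rather than meaning, the rejection is arbitrary, and given the opacity of real systems, hard to contest.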

Pressuring human workers to adapt

In the AI-driven landscape, traditional roles could be minimized to “Machine Wranglers” who oversee automated systems and “Liability Sponges” who address failures. I believe that the most alarming aspect of AI’s transformation of work isn’t just job loss but the conversion of remaining roles into robotic functions. Under algorithmic management, every action becomes scrutinized data, pitting human performance against machine metrics. Agentic AI’s ability to independently optimize workflows and processes could intensify the pressure on human workers to adopt machine-like behaviors to keep pace.

This intense scrutiny compels human workers to adopt machine-like behaviors—constant availability, intense focus, preference for quantitative metrics—to maintain employment. While this might boost productivity, it may burn out workers through an unsustainable grind. Agentic AI’s adaptability could lead to even more rigid and unforgiving performance standards that leave little room for human discretion or creativity.

Decisions once made by humans, like scheduling and conducting performance evaluations, are now increasingly algorithmic. While efficient, these systems lack human empathy and understanding. For instance, machine learning models often assess rule violations more strictly than human evaluators, because the nuances of behavioral norms, unlike hard facts, get swept away without a lifetime of human context.
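
As a toy illustration of that strictness, consider the hypothetical attendance rule sketched below in Python; the policy, threshold, and data structure are invented for illustration and do not describe any real workforce-management product.

```python
# Toy sketch of a context-blind rule check (hypothetical policy, not a real system).

from dataclasses import dataclass

@dataclass
class ClockIn:
    minutes_late: int
    note: str  # free-text context the rule never reads

LATE_THRESHOLD_MINUTES = 5  # hypothetical hard cutoff

def flag_violation(event: ClockIn) -> bool:
    """Flag anything past the threshold; the explanatory note is ignored."""
    return event.minutes_late > LATE_THRESHOLD_MINUTES

event = ClockIn(minutes_late=7, note="Train delayed; told my supervisor in advance.")
print(flag_violation(event))  # True: the mitigating context is swept away
```

A human manager would weigh the note; the rule, applied exactly as written, cannot.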

This management mechanization squeezes out room for spontaneity, creativity, and tolerable mistakes. Individual actions are reduced to data points and the latitude for human discretion shrinks. Ironically, as machines advance to mimic human cognition, humans are pressured to abandon their unique qualities to meet rigid machine standards.

The future of work

The future of work is at an inflection point. As AI advances, the risk that it will amplify ethical and social issues in the workforce grows. A balanced approach is urgently needed to maximize efficiency and innovation while safeguarding human dignity, autonomy, and well-being.

Co-design workshops, where employees collaborate with designers and engineers, can outline desired features and surface pitfalls. Aligning agentic AI’s independent goal-achieving capabilities with human values and needs during the design process is crucial to ensure it enhances, rather than diminishes, human potential. Employees can share their expertise about the nuances of their jobs, flag concerns, and help shape algorithmic rules and criteria.

Steps like these can make the labor market fairer, more transparent, and more worker-aligned, reducing adverse effects like unjust penalties or excessive surveillance. Involving employees in a co-design process fosters agency and ownership, and increases the odds that new technology will be integrated successfully.

The challenge of algorithmic management is harnessing AI to elevate human potential, not diminish it. Trustworthiness and freedom from bias will be particularly crucial for agentic AI, given its ability to independently make decisions and take actions in the workplace. Such technologies must demonstrate that they are designed to serve workers’ needs first.

Leaders should avoid ruthless automation. Life is often at its best when not over-optimized—when we have the freedom to take time to appreciate the small things.

 

ABOUT THE AUTHOR

Nell Watson is a researcher, writer, speaker, and applied tech ethicist. She is President of the European Responsible AI Office, an AI Expert at Singularity University, and pioneers global standards as an AI Ethics Certification Maestro at IEEE.


Fast Company
