Artificial Intelligence at Work: How Employees Can Resist Pressure
Is it possible to resist the integration of artificial intelligence (AI) in the workplace?

Your experience may vary: this advice column offers a framework for thinking through your moral dilemmas. It is based on value pluralism, the idea that each of us holds multiple values that are equally valid but often conflict with one another. To submit a question, please fill out this anonymous form. Here is this week's question from a reader, condensed and edited for clarity.

Introduction

I am an AI engineer at a mid-sized advertising agency, working primarily on non-generative machine learning models, such as models that forecast advertising effectiveness. Lately, people at the company, particularly senior and mid-level managers without engineering backgrounds, have been pushing hard to roll out various AI tools. Honestly, it looks like mindless hype-chasing.

I am not entirely opposed to AI, even generative AI. But I keep asking who truly benefits from its deployment and what financial, human, and environmental costs it carries beyond the obvious ones. As a rank-and-file employee, though, I have no way to raise these concerns with the people who make the decisions. And when I do try to voice them, against the optimism that seems to prevail across most of the marketing industry, I feel like an outcast in my own workplace.

My question is: given how hard it is to find a good job in the AI field, should I keep trying to encourage critical use of this technology at my company, or should I scale back my activism just to pay the bills?

Expert Response

Dear Conscientious Opponent,

You are not alone in your dissatisfaction with the uncritical rollout of generative AI. Many people—from artists to programmers and students—are also unhappy with it. In your company, there are probably colleagues who feel the same way.

But they do not voice their opinions—and there is, of course, a reason for that: fear of losing their jobs.

This is a perfectly valid concern, which is why I advise you not to take on the risk of fighting this battle alone. If you oppose the company's use of AI as a lone individual, you become a "problem" employee, and that can have serious consequences. I don't want you to lose your paycheck.

But I also don't want you to lose your moral integrity. You are absolutely right to question who truly benefits from the indiscriminate application of AI and whether those benefits justify the costs.

So I believe that you should fight for your beliefs—but do so collectively. The real question here is not whether to express your concerns about AI, but how you can join others who also want to stand against it. Joining forces is safer for you as an employee and has a higher chance of making an impact.

“The most important thing an individual can do is be a little less of an individual,” said environmentalist Bill McKibben. “Join with others in movements large enough to have a chance to change the political and economic rules that keep us stuck in this situation.”

So my advice to you is: organize. If your workplace can be unionized, that is a key strategy for pushing back against AI policies you disagree with.

Support from Unions and Organizations

If you need some inspiration, consider what some labor unions have already achieved: the Writers Guild of America won important AI protections for Hollywood writers, and the Service Employees International Union negotiated with the governor of Pennsylvania to create a working group overseeing the rollout of generative AI in public services. Meanwhile, this year thousands of nurses took to the streets as National Nurses United fought for a say in how AI is used in patient care.

“There are many examples where unions have been able to be at the forefront in determining the terms of AI use, and whether it will be used at all,” Sarah Myers West, co-executive director of the AI Now Institute, recently told me.

If organizing a union in your workplace is too difficult, there are many organizations you can join. Consider the Algorithmic Justice League or Fight for the Future, which advocate for ethical and accountable technologies. There are also communities like Stop Gen AI that aim to create a counter-movement and mutual aid program for those who have lost jobs due to AI implementation.

Also consider local initiatives, where communities are organizing proactively. One such avenue is resistance to the mass construction of the energy-hungry data centers needed to power the AI boom.

“This is where we see many people fighting for their communities—and winning,” noted Myers West. “They are advocating for fair conditions, stating that if you [companies] are reaping all the benefits from this technology, you should be accountable to the people it impacts.”

Local activists have already blocked or delayed $64 billion worth of data center projects across the United States, according to a study by Data Center Watch, a project run by the AI research firm 10a Labs.

Opportunities for Change

Yes, some of those data center projects may yet come back. Yes, fighting the mindless rollout of AI can sometimes feel hopeless. But it is worth thinking carefully about what social change actually looks like.

In the new book Somebody Should Do Something, three philosophers, Michael Brownstein, Alex Madva, and Daniel Kelly, show how anyone can contribute to social change. Their key point is that seemingly small actions can have outsized effects:

Small actions can trigger cascades that produce surprisingly large structural outcomes in a short time. This reflects a general feature of complex systems. Causal effects in such systems do not always combine in a smooth or continuous way. Sometimes they accumulate non-linearly, allowing seemingly small events to produce disproportionately large changes.

The authors explain that because society is a complex system, your actions are not meaningless "drops in the bucket". Adding water to a bucket is linear: each drop has exactly the same impact. Complex systems behave more like water being heated: not every degree has the same effect. Going from 99°C to 100°C crosses a critical point and triggers a phase change.
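
To make the authors' contrast concrete, here is a minimal sketch in Python (a toy illustration of the analogy, not anything from the book; the function names are my own):

```python
def bucket_level(drops: int, drop_volume: float = 1.0) -> float:
    """Linear system: every drop contributes exactly the same amount."""
    return drops * drop_volume

def water_state(temperature_c: float) -> str:
    """Non-linear system: nothing visible changes until a critical point."""
    return "steam" if temperature_c >= 100.0 else "liquid"

# In the bucket, the 100th drop matters exactly as much as the 99th...
print(bucket_level(99), bucket_level(100))    # 99.0 100.0

# ...but in the kettle, the 100th degree is qualitatively different.
print(water_state(99.0), water_state(100.0))  # liquid steam
```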

We all know the boiling point of water, but no one knows the critical points for change in the social world. That means it is hard to tell, at any given moment, how close we are to triggering a cascade of change. But it does not mean that change is not happening.
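
If it helps to see why such tipping points are so hard to predict, here is a small sketch of a threshold model of collective action (my own illustration, loosely in the spirit of Granovetter's classic threshold model, not an example from the book):

```python
def cascade_size(thresholds: list[int], seeds: set[int]) -> int:
    """Each person joins once the number of current participants
    reaches their personal threshold; repeat until nothing changes."""
    joined = set(seeds)
    changed = True
    while changed:
        changed = False
        for person, threshold in enumerate(thresholds):
            if person not in joined and len(joined) >= threshold:
                joined.add(person)
                changed = True
    return len(joined)

# Person i is willing to join once i others already have.
thresholds = list(range(100))
print(cascade_size(thresholds, seeds={0}))  # 100: one seed tips everyone

# Make a single person slightly more hesitant...
thresholds[1] = 2
print(cascade_size(thresholds, seeds={0}))  # 1: the cascade never starts
```

The two runs differ only in one person's willingness to act, yet one produces a full cascade and the other fizzles at once; that is exactly why it is so hard to know how close a movement is to its tipping point.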

According to Harvard political scientist Erica Chenoweth, achieving systemic social change requires mobilizing about 3.5% of the population around a cause; for the United States, that is roughly 12 million people. While we have yet to see AI protests on that scale, the data suggest that many people are concerned: 50% of Americans are more worried than excited about the rise of AI in everyday life, according to a recent Pew Research Center survey, and 73% support strong regulation of AI, according to the Future of Life Institute.

So even if you feel alone in your workplace, find people who share your convictions. Build a shared vision of how this technology should develop, and fight for the future you want.

Bonus: What I'm Reading

  • Microsoft's announcement that it plans to build a “humanist superintelligence” caught my attention. You may consider that an oxymoron, but I take it as a sign that at least some powerful players are listening when we say we want AI that solves real problems for real people, not some fantastical AI deity.
  • The Economist's article on how older adults are the real screen addicts is absolutely spot on. When it comes to digital media, everyone worries about young people, but there is far too little research focused on older adults, who are often glued to their devices.
  • AI researchers are finally starting to take a pragmatic approach to the debate over whether AI can be conscious. I have long suspected that 'consciousness' is a label we use to mark "this thing belongs in our moral circle," so whether AI is conscious is not something we will discover; it is something we will decide.
