Julian Lutz
GradFUTURES Fellow, Initiative on the Future of Work
Employers can enact these strategies to ensure that workplace AI is a win-win for business and workers.
This article was produced as part of the Initiative on the Future of Work and the Innovation Economy.
Artificial intelligence (AI) is sweeping the American imagination. But in many workplaces, the thinking goes that only a manager gets to decide how and when to start using AI, while everyone else only gets to decide whether to go along with the company's strategy. One survey found that 78 percent of corporate officers said their companies were already using AI, but 54 percent of employees reported not knowing how their companies were doing so.
In a new report, researchers from Harvard urge decision-makers to consider AI transitions in the workplace from a new perspective. They conclude that "Workers must be included in the regulation of and decisions regarding how AI is deployed within an enterprise."
The report's goal is to advance a vision of "bold, transformative change to give everyone a voice in building a society in which workers, their families, and communities can prosper." Industry conveners such as the World Economic Forum have championed that sentiment as well.
The report defines six common workplace AI uses. The two main uses show that AI is affecting not just workers but also managers.
The first and most common form is algorithmic management, in which managers use AI to manage, monitor, and control various aspects of work, tasks, or processes. For example, during the workday, AI assigns tasks, monitors employees' communications and time management, and looks for employee risks, reporting what it finds to managers. After workers finish a task, some AI systems provide suggestions for improvement, and some even evaluate the workers' performance.
The second is workplace surveillance. In contrast to algorithmic management, which involves using AI to make management decisions outright, workplace surveillance involves managers using AI to monitor employees' movements, typing speed, and other behavior in ways that could infringe on employees' rights to privacy, work-life balance, and personal boundaries. For example, Amazon has made delivery drivers sign biometric consent forms that allow the company to track not only whether they are speeding or staying on their delivery routes but also whether they appear distracted while driving. Meanwhile, some Amazon warehouse workers must wear patented watches that track hand speed as they process packages.
To be clear, not all workplace AI uses are invasive or demanding. Business leaders are developing frameworks for onboarding AI in ways that respect workers, help them become more productive, and even encourage companies to hire more workers. Economists Daron Acemoglu and Simon Johnson call this best-case scenario the "productivity bandwagon" in their bestselling 2023 book, Power and Progress, because when it happens, technology helps workers, on average, get more done. But it is not automatic. Technology, they warn, tends to be controlled by titans whose visions of the future do not always grasp average workers' wants and needs. To achieve technology's potential, policymakers need to let those workers play a role in tech's path.
1. Elect "AI monitors" in every workplace in which AI is deployed
The report's first recommendation is to mandate AI monitors elected by workers in every workplace where AI is used. The authors believe that elected AI monitors can help workers better understand how AI will affect their workplaces. After receiving training in fundamental AI concepts, the monitors would gather accurate information about AI and about workers' legal rights, and help workers report and blow the whistle on violations.
Outside each individual workplace, monitors would meet and collaborate with other monitors from the same industry and area, coordinate with government regulators, and participate in sectoral labor boards along with businesses, regulators, and unions.
2. Require companies to be transparent about their AI uses
Harvard's report calls for two transparency policies: first, companies should have to share information on their AI policies in plain language that an average worker can understand; second, employers should have to report surveillance technology on the legally mandated reports that detail employer efforts to stop employee unionization.
While more work is needed, a number of jurisdictions have taken steps to require or encourage employers to be transparent about when and how they use AI in the workplace. Meanwhile, last year, many state legislatures considered laws around AI transparency.
3. Expand workplace penalties and protections
Finally, the report calls for changes to both labor law, which governs union activities, and employment law, which governs workplace safety and wages.
Federally, the authors call for changes under the National Labor Relations Act. First, the report argues that anti-union monitoring technology is nothing more than a high-tech "captive audience meeting," in which employers require employees to listen to anti-union propaganda. The authors also call for a change to current labor-board doctrine, which they believe stops unions from bargaining with employers before employers implement AI in the workplace.
Harvard also calls for changes to employment law. First, the report calls for amending the Occupational Safety and Health Act so that "the right to a 'safe and healthful workplace' . . . includes the right to be free from harms caused by AI in the workplace." Meanwhile, states and the federal government should continue to update the definition of "employee" so that companies cannot dodge these obligations by misclassifying workers as "independent contractors."
Harvard's report reminds us how little AI workplace law currently exists. America's workplace law once fit the economy it governed, with specific wage and workplace safety laws tailored to bakeries, factories, and farms, and a labor law that allowed unions to bargain with bosses at the level of the individual workplace.
Then the economy changed, and those laws either fell away or at least stopped being enforced effectively. At the same time, the computing power of microchips grew exponentially, changing the lives of white-collar and blue-collar workers alike. Fifty years in, workplace law no longer protects the way millions of Americans work.
America鈥檚 labor policies create burdens, frictions, and uncertainties for businesses too. Old laws create rigid rules that limit companies鈥 abilities to innovate. With no voice for employees in the AI transition, the result is what we see now: workers fearing for their jobs and bosses trying to guess the right AI moves without much feedback from employees.
The future is here, promising a whole world of creativity and productivity, and yet workers and bosses are both worried. The Harvard report offers one solution: a guaranteed place for workers in AI strategy.