Here Is What Employers Should Do Next.
Artificial intelligence is no longer a theoretical conversation in the workplace. Scheduling tools, productivity dashboards, routing software, generative AI, and automated decision-making systems are emerging in almost every industry. As these tools expand, unions are rapidly shaping their own public positions on AI, algorithmic management, and worker data rights. Employers should expect this to influence organizing campaigns, bargaining priorities, and regulatory strategies over the next several years.
Recent research from the UC Berkeley Labor Center analyzed a wide range of union and worker-organization statements and found consistent themes: transparency, limits on surveillance, guarantees of human judgment, and protection from bias. These themes are becoming the foundation of a new wave of workplace concerns. Whether or not a workplace is unionized, employees are likely to hear these messages online, in the media, or from colleagues in other industries.
For private-sector employers, this presents both a challenge and an opportunity. The challenge is that AI can increase fear and uncertainty when it is misunderstood or implemented poorly. The opportunity lies in clear communication and careful planning, which can prevent these concerns from becoming union organizing issues.
Below are the key trends employers need to understand.
Unions Are Framing AI Around Job Security, Fairness, and Control
Across sectors, unions are emphasizing three core ideas:
- Workers deserve transparency. Employees want to know what a system does, what data it uses, and how it affects scheduling or performance evaluations. Lack of clarity fuels rumors and mistrust.
- Algorithms should not replace human judgment. Most union statements reject automated discipline or discharge outright and demand a guarantee that humans remain accountable for employment decisions.
- Monitoring and productivity tools must be fair and proportional. Customer ratings, GPS, wearables, biometric systems, and camera analytics are all drawing scrutiny. Unions argue these tools can be inaccurate or discriminatory if used without safeguards.
These are the ideas workers will hear in organizing campaigns and on social media. It is also how unions will frame AI-related concerns in bargaining.
Why This Matters for Employers
AI issues tend to escalate when employees feel uninformed or excluded. In many workplaces, the problem is not technology itself. The problem is how employees perceive it. When workers don’t understand how systems operate, they may believe:
- “The computer is doing all the discipline.”
- “The company is watching everything we do.”
- “They are replacing us with automation.”
- “No one asked us before they rolled this out.”
These concerns can quickly evolve into grievances, organizing activity, or public criticism. Employers who overlook communication and training when adopting new technology are more likely to face resistance.
Practical Steps Employers Can Take Right Now
The goal is not to avoid technology. The goal is to deploy it responsibly and predictably. The following steps can reduce confusion, increase trust, and lower the risk of AI becoming a workplace flashpoint.
- Publish a clear and concise AI Transparency Statement
Employees should understand:
- what data the company collects
- what the technology does
- what decisions AI will not make
- how the company protects privacy and fairness
A short document that answers these questions can significantly reduce anxiety.
- Train supervisors to communicate about technology
Supervisors are the first point of contact when employees become confused or frustrated. They need to be prepared to explain why a system is used, how it works, and what safeguards are in place to keep it fair and accurate. They should avoid dismissive responses such as "don't worry about it" or "that's just how the system works." These statements heighten suspicion and prompt employees to seek answers from third parties.
- Review high-risk monitoring and algorithmic tools
Employers should evaluate:
- whether monitoring practices are narrowly tailored
- whether algorithms reinforce bias or create unrealistic expectations
- whether customer ratings influence pay or discipline
- whether devices collect unnecessary data
If a tool cannot be defended publicly or legally, it is worth revisiting.
- Establish human review for all high-impact decisions
Even if AI tools help analyze data, a human should make final decisions in areas such as:
- attendance
- discipline
- performance evaluations
- routing and scheduling
- job assignments
This approach protects employees from system errors and protects employers from liability by keeping a person accountable for each outcome.
- Use a structured rollout process for new technology
Before launching a new system, employers should:
- announce the change early
- pilot the tool with a small employee group
- test for safety, fairness, and accuracy
- clarify what will and will not change
- provide a method for employees to ask questions
Careful rollout reduces operational problems and builds credibility.
Looking Ahead
AI adoption will continue to grow, and unions will continue to sharpen their messaging around transparency, fairness, and worker control. Employers that treat AI as a purely technical issue rather than a workforce-trust issue are more likely to encounter resistance. Those that communicate early, involve employees appropriately, and maintain human oversight will reduce conflict and build confidence in new tools.