The Department of Labor recently issued guidelines on using AI in the workplace, clarifying that workers’ rights must be protected. The CDC also published a blog post arguing that AI should “result in benefit, not harm, to worker safety, health, and well‐being.” Will these directives help ease workplace friction over AI? We will soon see how the International Longshoremen’s Association negotiations handle the punted automation issue, but it will take remarkable finesse to ease AI jitters at large, even though initial predictions of the technology’s potential impact on healthcare have proven untrue.
Then there are the unique AI fears of healthcare workers. Sure, nurses, techs, and even doctors worry about being replaced by robots. Still, these workers also fear that this emerging tech, which includes chatbots performing patient intakes and AI algorithms being tested to monitor critical care patients at Mount Sinai, could put patients at risk of grave misdiagnosis.
These worries are not unfounded. As Grainger Industrial Supply’s Matt Law recently stressed to the American Society of Safety Professionals, AI is “not actually intelligent” but is merely “demonstrating intelligence” as it was trained to do. The technology is evolving and imperfect, and it should mainly be used “to enhance the human’s ability to perform.” As we have discussed, limited use of AI has been increasing operational efficiency in healthcare settings.
That last point doesn’t exactly sound terrible, does it?
Nobody can dispute that nurses are overworked. If AI can help reduce healthcare worker burnout, that should be a net positive, provided workers warm up to the idea of AI as a helper, not a career-ender.
Encouraging AI-fearing healthcare workers to embrace new tech: This nuanced issue requires transparency and open communication from employers. Still, workers’ outlook on AI would certainly improve if the technology eased other fears, including the growing workplace hazard of violence against healthcare workers. In one survey, 81% of nurses reported feeling physically threatened at work in 2023, and AI could present a solution.
Such a solution moves beyond wearable silent panic alarms, which are relatively new and can provide peace of mind for workers. However, relying solely on these devices has a drawback: by the time one is triggered, a threat is already present.
AI “weapons detection systems” have been tested by some healthcare systems, which are now reporting first-year data on potential threats. At one Canadian hospital, the system flagged 3,100+ perceived threats, including “1,834 knives” confirmed in secondary inspections by guards. At a Virginia hospital, AI reportedly detected 1,000+ knives along with box cutters, tasers, guns, and even machetes. And in Nebraska, an AI system caught 1,000+ weapons being brought into an ER waiting room. Sure, some false positives have been reported, as noted after a Bronx hospital’s pilot program, but that inconvenience is undoubtedly outweighed by the potential lives saved.
No magic bullet for AI worker unease exists: It’s virtually an American tradition to fret over being replaced by tech, and it’s vital that companies don’t dismiss workers’ fears.
If employees feel that their employers don’t take their concerns seriously, the workplace becomes more vulnerable to union infiltration. Even though unions have no clue how to handle AI, a lack of expertise has never stopped them from making false promises while recruiting.