(The Center Square) – The health care field agrees: artificial intelligence is already an integral tool being used every day in the industry.
When and how to regulate it, however, poses challenges that draw a wide variety of perspectives and opinions from experts and legislators.
That’s where Rep. Arvind Venkat, D-Pittsburgh, has joined the conversation. An emergency room physician, Venkat sits at the intersection of clinical practice and legislation. His bill, House Bill 1925, was the subject of a more than three-hour hearing of the House Communications and Technology Committee Monday.
“This is a very different technology and a very different situation than other developments in health care, and the reason is threefold,” said Venkat. “AI is autonomous. It purports to approach human intelligence and is black box related to its reasoning.”
Venkat emphasized that because AI is increasingly being entrusted to perform the kind of diagnostic tasks previously exclusive to physicians, it too should be regulated and held accountable. He said it wasn’t about being “pro- or anti-AI.”
Across several panels, including representatives from public interest groups, the insurance industry, information technologists, hospital administrators, and clinicians themselves, there was a consensus that some form of legislation would be a step forward in an environment where neither the federal nor state governments have made much meaningful progress.
Several, however, had notes on this particular piece of legislation, demonstrating that even if Pennsylvania is ahead of the game, the bill will likely need significant reworking and negotiation before it becomes acceptable to all parties.
Central to the conversation is the necessity of “keeping a human in the loop.” The medical field employs AI for a broad spectrum of uses, almost all of which require some human oversight. From applications that listen to consultations and record notes to those that assist in diagnostics by reading radiology scans, a professional is required to sign off on final decisions.
It’s here where the insurance industry has run into concerns from its customers.
Representatives from the attorney general’s office said they receive complaints from patients concerned that AI has denied their claims without human consideration. Such claims are difficult to investigate, and many remain skeptical of the industry’s appetite to comply with existing regulations.
Insurance Commissioner Michael Humphreys said that insurance providers currently operate under the expectation that any claim recommended for denial is reviewed by a professional, while AI is used to fast-track approvals.
It isn’t the only process that is speeding up in health care. Dr. David Vega, senior vice president and chief medical officer at Wellspan Health, said that the use of AI frees up thousands of hours for staff to attend to their patients. He said that AI’s analysis of more than 200,000 scans saved 900 hours of delays and accelerated life-saving care for over 10,000 patients with invisible critical conditions, such as pulmonary embolisms or brain bleeds.
“The marriage of human expertise with technology like AI is improving patient care and outcomes in ways that were unimaginable just a few years ago,” said Vega.
Vega also pointed out that the use of AI is among the creative solutions needed to address the workforce shortage. He said the technology helped reclaim 4,000 hours for human workers, emphasizing that it’s “not about replacement.”
Physicians and nurses agree that there’s hope for a return to patient-focused care when AI assumes the kind of administrative tasks that turn them toward computer screens. Some worry, however, that administrators will use the tools to prioritize profit and cut costs.
Maureen May, president of the Pennsylvania Association of Staff Nurses and Allied Professionals, said that her organization conducted a survey of its members on the topic.
“Health care workers are deeply concerned, profoundly distrustful, and almost entirely shut out of the decisions made about AI at the bedside,” said May.
She said 89% of respondents didn’t trust their employers to implement AI responsibly. That’s why the group voiced support for government regulation.
For legislators, there is a fine line between regulation and overregulation. Some fear that making rules about AI now will prevent future innovation, while others fear that a failure to regulate AI will lead to the kinds of problems wrought by social media’s power going largely unchecked.
J.B. Branch, artificial intelligence policy expert at Public Citizen, a progressive non-profit advocacy group, called it “buyer’s remorse” and said the concern about AI crosses demographic groups and political parties.
President Donald Trump issued an executive order last week that attempted to prohibit states from enacting regulations related to AI. A similar provision was dropped from July’s federal budget bill after pushback from members of Congress.
“And I would hope the federal government would move, but sometimes the states need to move first and maybe the federal government would wake up and do something because right now, there is a lot of willy-nilly that exists because the federal government is lollygagging,” said committee chair Rep. Joe Ciresi, D-Royersford. “There’s a lot of confusion in AI and a lot of non-trust.”