@Unfiltered/AI: Bridging Bytes and Ballots | Edition 10
By Lacy Crawford and Alan Rosenblatt, PhD, from Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. This week we welcome back Alan, a partner at Unfiltered.Media and a digital and social media strategist, organizer, professor, and thought leader with over 30 years’ experience in the field. In this issue, we take a deeper look at AI in healthcare.
Let’s get to it.
Artificial Intelligence Knows Best?!?
Artificial Intelligence May Influence Whether You Can Get Pain Medication
KFF Health News
Lacy: Our opening article did a wonderful job describing how artificial intelligence lurks behind many aspects of our lives—from our credit scores to our housing loans and, evidently, to whether and how frequently we get pain medication.
Honestly, I can’t say that I’ve heard of a Narx Score—although I understand its use in light of the opioid epidemic. Even setting aside concerns about artificial intelligence, these kinds of hidden scores often do more harm than good.
Alan, how does the nature and evolution of AI in healthcare, as discussed in this article, impact patient care? In your opinion, how should doctors be using this technology in a fair and ethical way?
Alan: To a degree, AI is an algorithm that determines how data is processed. While definitely more advanced in its modeling and throughput, predictive AI in healthcare is about processing a lot of health data to identify patterns that imply possible causes. Good doctors do this instinctively as they listen to patients describe their condition, run tests, and review history. But we now have more health data to process than ever, including genetic data and data from other, similar cases (for comparison). That creates the need for faster processing of ever-larger databases, which carry a fair amount of uncertainty about both the measurements themselves and how new data really fits into our understanding of our health.
On the other hand, as with all AI now, it is really not “intelligent” in the human sense. Aside from the typical argument that what we call AI is really machine learning (rule-governed machine behavior), machines cannot be responsible, culpable agents, especially when it comes to health care.
I don’t think anyone is comfortable with a machine making their life-or-death decisions, not just occasionally, but as a job description. So the idea that we hand our diagnosis AND prescription over to an AI is untenable. As the opening story in this article highlights, the AI can be very good at flagging potential concerns, but it takes a human doctor to make the final diagnosis and prescribe the treatment.
As we often mention here, classification AI is decades-old technology that has lurked behind the scenes in every data-intensive industry for a long time. Ironically, the health care sector was very slow to adopt data collection and processing technology. Part of the reason was that doctors did not want to make it easier for patients to contact them at home (they resisted email and social media for a long time), but the ethos of protecting patient privacy and the guild-like protectiveness of the medical profession also created fertile ground for a slow approach to AI. Still, using classification AI to model a patient’s health data in the context of many patients’ data, and to bring together data from all of a patient’s doctors, is a vast improvement over the days of analog medical files scattered across various doctors’ offices.
On the other hand, much of the medical profession has been taken over by private equity firms and corporations, putting a great deal of distance between medical professionals and their patients when it comes to protecting them from the downsides of AI-driven health care.
Healthcare Follow-Up on Biden’s Executive Order
Biden’s AI Plan Spurs “Cautious Optimism” in Healthcare
HealthLeaders
Lacy: Please don’t label me a cynic, but should we be cautiously optimistic here? When I think of the U.S. government and healthcare, technological advancements are not the first thing that comes to mind. (Universal healthcare and racism in health care are some of the things that do.) So given my concerns, what can Uncle Sam do to ensure that AI policies are effective, fair, and protective of patient privacy and the doctor-patient relationship?
Alan: Historically, the U.S. government has always been slow to develop effective policies for information technology. Decision makers are either too steeped in the frame of a physical-property economy with a human service-delivery model, or simply not up to speed on how the digital information economy works, to develop, agree upon, and implement effective laws and regulations.
Centralizing the decision-making authority for health care into a single executive department could be good for getting past those institutional challenges, as long as it is properly staffed so that it does not become a new bottleneck. That said, with health care, slow-walking systems that could threaten patient privacy and the effectiveness of the doctor-patient relationship can be a good thing.
The Business of AI in Healthcare
Hospital bosses love AI. Doctors and nurses are worried.
The Washington Post
Lacy: I think I’m siding with the doctors and nurses on this one. Capitalism in healthcare could spawn another newsletter, and I’m sure there are ones out there that cover it, but should the general public be worried about AI in healthcare? Can it enhance the caregiver-patient relationship without hurting the human and personal aspects of health care?
Alan: When I was a full-time professor back in the ’90s, I was among the first cohort of faculty to integrate the internet into our courses. The university convened us into an ad hoc committee to explore ways the internet could improve education at the university. While the professors talked about how to use the internet pedagogically, the administration’s representatives kept asking how we could teach more people online with less work. The professors saw that as unlikely; it still takes a lot of work to make the learning experience work. Twenty-five years later, we learned this reality as COVID forced students and faculty into Zoom rooms across the metaverse.
When it comes to services as personal as health care, the relationship you have with your caregiver—your ability to ask questions in real time and be comfortable with the answers—is a powerful part of the healing process. I can see the value in AI-assisted health care, but I do not feel comfortable turning the full agency of diagnosis and prescription over to a machine.