As we find ourselves amidst a rapidly evolving technological landscape, the integration of generative AI (artificial intelligence) in healthcare has become a hot-button topic. Recently, Brent Lamm, a trailblazer in this sphere, sat down with Bill Russell for a deep dive into the profound implications and potential challenges of AI implementation in healthcare.
Lamm kicked off the discussion by drawing a parallel with the transformative advent of spreadsheet technology. "Could you imagine what it would've been like to be a CFO, finance leader, or analyst before the advent of spreadsheets? I couldn't fathom how that would've been to manually do so much of the calculation with calculators or other technology," he mused.
Reflecting on his early professional journey, he reminisced about the manual labor of analyzing Nielsen Reports, a process he carried out without computers.
In the modern day, the spreadsheet has become ubiquitous, finding utility in roles across the organization, from physicians to administrative leaders. Lamm envisions a similar trajectory for AI: the technology is projected to permeate a wide range of job functions, enhancing productivity and efficiency.
Lamm addressed the inevitable disparities in the adoption rate of AI in healthcare, with some embracing the technology early and others lagging. "How do you make sure that people are aware of the potential and bring them along?" he asked, suggesting that a critical challenge lies in avoiding the digital divide.
One approach to this issue lies in transparency, which Lamm strongly advocates. He proposed that organizations be open about AI's role in workflows, suggesting that patients and third parties should be made aware whenever AI is used.
Lamm explained, "We need to be more transparent than less transparent. 'This draft was generated by AI; humans are still in control,' or 'an AI component did this.' It's important."
However, he acknowledged that how to achieve such transparency remains unclear. He believed that awareness could lead to greater education on the topic, helping to ensure that no one is left behind as AI adoption progresses.
The conversation took a turn towards trust and ethical considerations with AI. Trust plays a crucial role in AI acceptance, especially in sensitive sectors like healthcare. As Russell pointed out, patients trust healthcare providers and expect them to uphold high-quality standards across the board.
However, Russell voiced a concern: if a notification declared, "This was generated via an AI model," it might introduce bias due to preconceived notions about AI. People's perception of AI is often shaped by fictional portrayals in popular culture, such as the notion that AI might turn against us, popularized by movies like 'Terminator.' The challenge is to correct these misconceptions and ensure that AI is viewed as a beneficial tool, not a potential threat.
The conversation closed with an interesting comparison with self-driving cars. Lamm remarked, "We have how many hundreds or thousands of wrecks that are every day on our highways that were a human's fault. And, if a self-driving car makes one mistake, it will be a dramatically different story."
This, Lamm suggested, would also be true for AI in healthcare. The tolerance for AI errors is far lower than for human errors, an issue that must be weighed as the field moves forward. Despite these challenges, both Lamm and Russell agreed that the road to integrating generative AI in healthcare is promising and filled with potential for improving efficiency and care.
Brent Lamm's insights in this conversation shed light on the complex journey of AI integration in healthcare. As we navigate these uncharted waters, discussions like these serve as a compass, highlighting both the immense potential and the thoughtful consideration required for a successful voyage into the era of AI in healthcare.