This Week Health

This Week Health is a series of IT podcasts dedicated to healthcare transformation powered by the community

In the News

Google expands generative AI model Med-PaLM to more health customers | Healthcare Dive

September 24, 2023


Google is expanding access to its large language model that’s specifically trained on medical information through a preview with Google Cloud customers in the healthcare and life sciences industry next month.

A limited group of customers has been testing the artificial intelligence, called Med-PaLM 2, since April, including for-profit hospital giant HCA Healthcare, academic medical system Mayo Clinic and electronic health records vendor Meditech.

Google declined to share how many additional healthcare companies will be using Med-PaLM 2 following the expansion in September, but a spokesperson said, “there are customers across healthcare sectors that have expressed interest and will be getting access.”

“We’re thrilled to be working with Cloud customers to test Med-PaLM and work to bring it to a place where it exceeds expectations,” Google health AI lead Greg Corrado told reporters during a press briefing on the preview.

Med-PaLM was the first AI system to pass U.S. medical licensing exam-style questions. Its second iteration, which Google introduced in March this year, improved on its predecessor’s score by roughly 19 percentage points, passing with 86.5% accuracy.

The LLM is not a replacement for doctors, nurses and other medical caregivers, but is instead meant to augment existing workflows and work as an extension of the care team, Corrado said.

However, Med-PaLM faces big questions that have plagued other generative AI in healthcare, including the potential for errors, the complexity of queries it can perform, meeting product excellence standards and a lack of regulation — despite already being piloted in real-world settings.

HCA has been testing Med-PaLM to help doctors and nurses with documentation, as part of the health system’s strategic collaboration with Google Cloud launched in 2021.

The system has been working with health tech company Augmedix and using Google’s LLM to create an ambient listening system that automatically transcribes doctor-patient conversations in the emergency room, according to Michael Schlosser, HCA’s senior vice president of care transformation and innovation.
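Schlosser’s description amounts to a transcribe-then-summarize pipeline. The sketch below is purely illustrative: the `Utterance` type, the `draft_note` function and its placeholder “summary” are hypothetical stand-ins for the Augmedix and Med-PaLM components, which are not public.

```python
from dataclasses import dataclass


@dataclass
class Utterance:
    speaker: str  # e.g. "doctor" or "patient"
    text: str


def draft_note(utterances: list[Utterance]) -> str:
    """Turn a transcribed conversation into a draft clinical note.

    In a real ambient system the transcript would come from a
    speech-to-text service and the summary from an LLM call; here the
    "summary" is a trivial concatenation so the sketch stays runnable.
    """
    transcript = "\n".join(f"{u.speaker}: {u.text}" for u in utterances)
    # Placeholder for an LLM summarization call (e.g. Med-PaLM via a cloud API).
    return "DRAFT NOTE (clinician must review)\n" + transcript


note = draft_note([
    Utterance("doctor", "What brings you in today?"),
    Utterance("patient", "Chest pain since this morning."),
])
```

The “clinician must review” banner reflects the framing in the article: the output augments documentation but stays subordinate to the care team.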

HCA is currently testing the system in a cohort of 75 doctors in four hospitals, and plans to expand to more hospitals later this year as the automation improves, Schlosser said during the press briefing.

HCA is also piloting using Med-PaLM to generate a transfer summary to help nurses with patient handoffs at UCF Lake Nona Hospital in Orlando.

Meanwhile, Meditech — a major player in the hospital software space — is embedding Google’s natural language processing and LLMs into its EHR’s search and summarization capabilities.

Documentation is an appealing potential use case for generative AI that could cut down on onerous notetaking processes. Along with Google, other tech giants like Amazon and Microsoft have announced or expanded recent AI-enabled clinical documentation plays.

Privacy watchdogs, physician groups and patient advocates have raised concerns around the ethical use of AI and sensitive medical data, including worries about quality, patient consent and privacy, and confidentiality.

In 2019, Google sparked a firestorm of controversy over its use of patient data provided by health system Ascension to develop new product lines without patient knowledge or consent.

Google says that Med-PaLM 2 is not being trained on patient data, and Google Cloud customers retain control over their data as part of the preview. In the case of the HCA pilot, patients are notified of the ambient listening system when they enter the ER, HCA’s Schlosser said.

Doctors are also leery about ceding control of information and patient-care decisions to what is in many cases a black-box algorithm.

Schlosser said that the for-profit operator is first building AI into easy-to-accept use cases, like automating handoffs or scheduling, to make doctors and nurses more comfortable with the technology, before eventually implementing AI into additional parts of clinical practice.

“You get into nudging in the workflow around documentation, and then you could slowly step your way up to higher and higher levels of decision support,” Schlosser said. “But I want clinicians to fully embrace AI as a partner that’s making their life easier before we start getting into some of those more controversial areas.”


Digital divide affecting low-income patients, Reid Health CEO says

September 24, 2023

Craig Kinyon, CEO of Richmond, Ind.-based Reid Health, said the digital divide is disproportionately affecting low-income households in both urban and rural areas. 

In an Aug. 30 LinkedIn post, Mr. Kinyon said that in today's digital world, it is vital to address the issue of digital inequities. 

"Did you know that approximately 19 percent of Americans do not own a smartphone?" he wrote. "Shockingly, 50 percent of households earning less than $30,000 per year have limited access to computers, while around 18 million households in the U.S. lack internet access."

He proposed that healthcare should begin recognizing the impact of social determinants of health and health disparities, and assessing how they hinder patients’ access to care.

"This is why as leaders in the healthcare field, it is imperative for us to collaborate with community organizations, and policymakers in order to bridge this digital divide," he wrote. "By working together harmoniously, innovative solutions can be created that effectively address these challenges."


IBM trains its LLM to read, rewrite COBOL apps | CIO Dive

September 24, 2023

  • IBM trained its watsonx.ai large language model to ingest COBOL code and rescript business applications in Java, the company announced Tuesday.
  • The generative AI solution is designed to ease mainframe modernization, assisting developers in the arduous process of analyzing, refactoring and transforming legacy code and validating the results, Skyla Loomis, VP of IBM Z Software, said during a demonstration.
  • IBM intends to deploy the AI-enabled coding assistant’s new capabilities by the end of the year, the company said.

The specter of technical debt haunts organizations, often leaving critical business functions perched perilously atop layers of arcane code. Despite modernization efforts, businesses still run on-prem applications architected with COBOL, a programming language created in the 1950s.

IBM estimates that its average enterprise client may have tens of millions of lines of COBOL running in production. Globally, enterprise production systems run more than 800 billion lines of COBOL daily, according to a Vanson Bourne study commissioned last year by software company Micro Focus.

Several generative AI companies, including Anthropic and OpenAI, recently introduced coding assistants. In February, Microsoft released GitHub Copilot for Business, an AI-enabled developer tool for the enterprise, and saw user headcount double in the first half of the year.

While human language contains nuances and tonal variations that can outwit the best commercially available models, computer code consists of straightforward machine instructions with clearly articulated semantics.

Errors and hallucinations can occur in coding translations, but they are relatively easy to identify and resolve, said Kyle Charlet, IBM fellow and CTO of IBM Z Software.

“Code doesn't lie, so we can immediately highlight any hallucinations that have worked their way into the code and correct them,” said Charlet.
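Charlet’s point can be made concrete with a differential-testing sketch. Everything below is hypothetical (neither function is IBM code); it only shows why a semantic slip in generated code is mechanically detectable in a way prose hallucinations are not.

```python
def legacy_routine(balance: int, fee: int) -> int:
    """Stands in for the observed behavior of the original COBOL routine."""
    return max(balance - fee, 0)  # balance never goes negative


def generated_routine(balance: int, fee: int) -> int:
    """Stands in for an LLM-generated translation, with a subtle bug."""
    return balance - fee  # "hallucinated" away the floor at zero


def find_divergences(cases):
    """Return every input where the translation disagrees with the legacy code."""
    return [c for c in cases if legacy_routine(*c) != generated_routine(*c)]


divergences = find_divergences([(100, 30), (10, 30), (0, 5)])
```

Running the two versions on shared inputs immediately flags the cases where the fee exceeds the balance, which is the sense in which “code doesn’t lie.”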

The company trained the LLM on its COBOL data and tested it on IBM's CodeNet, a database of 14 million code samples in more than 55 common programming languages, including C++, Java, Python, FORTRAN and COBOL. IBM used CodeNet to test for accuracy in COBOL-to-Java translation.

The model was then tuned for two specific use cases: a coding assistant for Red Hat’s Ansible automation toolkit and the new COBOL solution. It has now been trained on more than 80 languages and 1.5 trillion tokens of data, according to the company.

To mitigate risk, all of the model’s training data originated from licensed open source software, Charlet said.

The solution has four functional phases, detailed by Loomis.

  • Auto-discovery, when the tool analyzes the original script, identifies its data dependencies and provides a metadata overview of the application.
  • Refactor, which identifies the application’s business function and suggests modernization updates.
  • Transform, when the user triggers the generative AI’s COBOL-to-Java translation capabilities.
  • Validate, which tests the results to ensure that the new service is semantically and logically equivalent to the original script.
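As a rough illustration, the four phases can be sketched as a pipeline of stub functions. Every name below is a hypothetical placeholder, since IBM’s watsonx tooling is not a public API; only the shape of the workflow is taken from Loomis’s description.

```python
def auto_discover(cobol_source: str) -> dict:
    """Analyze the script and return a metadata overview (here: trivial stats)."""
    return {"lines": len(cobol_source.splitlines())}


def refactor(metadata: dict) -> list[str]:
    """Identify business functions and suggest modernization updates."""
    return [f"application has {metadata['lines']} lines; consider splitting services"]


def transform(cobol_source: str) -> str:
    """Placeholder for the LLM's COBOL-to-Java translation step."""
    return "// generated Java would appear here"


def validate(original: str, translated: str) -> bool:
    """Placeholder equivalence check; real tooling tests semantic equivalence."""
    return bool(original) and bool(translated)


def modernize(cobol_source: str) -> str:
    """Run the four phases in order and return the translated source."""
    metadata = auto_discover(cobol_source)
    _suggestions = refactor(metadata)  # surfaced to the developer in real tooling
    java_source = transform(cobol_source)
    assert validate(cobol_source, java_source)
    return java_source


result = modernize("IDENTIFICATION DIVISION.\nPROGRAM-ID. HELLO.")
```

The point of the skeleton is the ordering: discovery and refactoring precede translation, and validation gates the output, matching the workflow described above.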

“What we are not doing is a line-by-line COBOL syntax translation to Java,” Loomis said. “When that happens, what you end up with is COBOL syntax expressed in largely unreadable, largely unmaintainable Java.”

Expanding the watsonx toolkit is part of a broader business integration strategy, built around hybrid cloud, mainframe modernization, emerging AI capabilities and IT consulting services.

IBM previously partnered with Microsoft to ease mainframe modernization and deploy enterprise-grade generative AI solutions. The two companies introduced the IBM Z and Cloud Modernization Stack on the Microsoft Azure Marketplace in June and launched a generative AI managed service last week.

Correction: This article has been updated to reflect IBM trained the LLM using its COBOL data. The company used CodeNet to test it for accuracy.


Why Apple and Amazon Are Using AI Models, Not Selling Them

September 24, 2023

Who is winning Big Tech’s generative artificial intelligence (AI) war?

While some of the world’s largest companies, including Alphabet’s Google, Microsoft, Oracle, Meta and many more, are drawing up their battle lines against not just each other but also innovative and agile startups like Anthropic and OpenAI, the question of primacy will likely be answered by whichever firm cracks the enterprise solution first — and most effectively.

That’s because generating sales from cash-rich enterprise customers offers the best and most scalable bulwark against the heady costs of AI’s computing power and data center needs.

Microsoft, which has been at the forefront of the generative AI race ever since the firm struck its landmark partnership with OpenAI to leverage the latter’s foundational large language models (LLMs), reportedly needs $4 billion of infrastructure for its suite of generative AI tools to do their job.

But while the who’s who of the tech sector continues to duke it out over AI, with upstart OpenAI reportedly on track to rake in more than $1 billion in revenue this year from its generative AI products, some observers are wondering: just where are the other tech sector giants, like Apple and Amazon?

Read also: It’s a ‘90s Browser War Redux as Musk and Meta Enter AI Race

A Better Use for AI

When it comes to winning this generation’s tech arms race, first mover advantage will be based on the way in which foundational AI models are applied to drive business value.

For its part, Google announced on Tuesday (Aug. 29) at its annual cloud conference in San Francisco that it will release to the general public a suite of generative AI-powered tools for its corporate Gmail and workplace software customers.

The tools will run enterprise customers an extra $30 per month, the same price as Microsoft’s equivalent enterprise AI co-pilot offerings, which are not yet available to the general business public beyond select testing partners.

Alphabet also used the event to unveil the latest iteration of its custom-built AI chips, along with a new tool that can watermark and help identify images generated by AI.

An early developer of generative AI’s core technical architecture (the T in GPT, which stands for Transformer) and a one-time home for many of AI’s most impactful researchers, Google has since suffered a substantial brain drain as its top AI researchers continue to leave and start their own ventures or join the top ranks of other firms.

“Writing a game-changing paper [in the AI space] is like a blank check,” Paul Lintilhac, a PhD researcher in computer science at Dartmouth College, told PYMNTS. 

Google’s announcement comes on the heels of OpenAI’s own launch of a generative AI service for enterprise customers. 

See more: 10 Insiders on Generative AI’s Impact Across the Enterprise

Different AI Strategies

But while the world’s biggest tech companies have been increasingly rolling out AI applications, there have been some notably absent names in the race.

For Amazon, that is by design.

The cloud and eCommerce giant is hoping to find an advantage in making LLM development as easy as possible for its clients, offering no- and low-code AI services not too dissimilar from a Build-A-Bear for AI models.

Amazon is also treating AI as a support technology for its own business priorities, rather than packaging its proprietary models into a new product for outside customers. 

In just one example, the tech giant is reportedly using AI to optimize the viewing experience for its Thursday Night Football streaming product on Prime Video.

Meanwhile, Apple has yet to bring to market its own “GPT” moment.

“We view AI and machine learning as core fundamental technologies that are integral to virtually every product that we build,” Apple CEO Tim Cook told investors on the company’s most recent third-quarter earnings call.

There were just six references to AI during Apple’s call, and they all took place during the same exchange referenced above. For their part, Apple’s tech sector peers Microsoft and Alphabet mentioned AI 73 and 90 times, respectively, during their latest 2023 earnings calls.


© Copyright 2024 Health Lyrics All rights reserved