This Week Health

This Week Health is a series of IT podcasts dedicated to healthcare transformation powered by the community

In the News

Google Cloud Next ‘23: New Generative AI-Powered Services

September 24, 2023

Image: Google Cloud headquarters. (Sundry Photography/Adobe Stock)

Google unveiled a wide array of new generative AI-powered services at its Google Cloud Next 2023 conference in San Francisco on August 29. At the pre-briefing, we got an early look at Google’s new Cloud TPU, A3 virtual machines powered by NVIDIA H100 GPUs and more.

Vertex AI increases capacity, adds other improvements

June Yang, vice president of cloud AI and industry solutions at Google Cloud, announced improvements to Vertex AI, the company’s generative AI platform that helps enterprises train their own AI and machine learning models.

Customers have asked for the ability to input larger amounts of content into PaLM, a foundation model under the Vertex AI platform, Yang said, which led Google to increase its capacity from 4,000 tokens to 32,000 tokens.
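
For readers unfamiliar with how that capacity is consumed, the sketch below shows roughly how a Vertex AI customer might send a long document to the PaLM text model. It assumes the Vertex AI Python SDK (google-cloud-aiplatform) and the publicly documented "text-bison" model handle; the project ID, region and file name are placeholders, not values from the announcement.

```python
# Minimal sketch, assuming the Vertex AI Python SDK and the "text-bison" PaLM handle:
# send a long document to the model and ask for a summary. The project ID, region
# and file name are placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

model = TextGenerationModel.from_pretrained("text-bison")  # PaLM text model on Vertex AI

with open("long_report.txt") as f:
    document = f.read()  # the larger context window allows much longer inputs per call

response = model.predict(
    f"Summarize the key findings of the following report:\n\n{document}",
    max_output_tokens=1024,
    temperature=0.2,
)
print(response.text)
```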

Customers have also asked for more languages to be supported in Vertex AI. At the Next ’23 conference, Yang announced PaLM, which resides within the Vertex AI platform, is now available in Arabic, Chinese, Japanese, German, Spanish and more. That’s a total of 38 languages for public use; 100 additional languages are now options in private preview.

SEE: Google opened up its PaLM large language model with an API in March. (TechRepublic)

Vertex AI Search, which lets users create a search engine inside their AI-powered apps, is available today. “Think about this like Google Search for your business data,” Yang said.

Also available today is Vertex AI Conversation, which is a tool for building chatbots. Search and Conversation were previously available under different product names in Google’s Generative AI App Builder.

Improvements to the Codey foundation model

Codey, the text-to-code model inside Vertex AI, is getting an upgrade. Although details on this upgrade are sparse, Yang said developers should be able to work more efficiently on code generation and code chat.

“Leveraging our Codey foundation model, partners like GitLab are helping developers to stay in the flow by predicting and completing lines of code, generating test cases, explaining code and many more use cases,” Yang noted.

Match your business’ art style with text-to-image AI

Vertex’s text-to-image model will now be able to perform style tuning, or matching a company’s brand and creative guidelines. Organizations need to provide just 10 reference images for Vertex to begin to work within their house style.

New additions to Model Garden, Vertex AI’s model library

Google Cloud has added Meta’s Llama 2 and Anthropic’s Claude 2 to Vertex AI’s model library. The decision to add Llama 2 and Claude 2 to the Google Cloud AI Model Garden is “in line with our commitment to foster an open ecosystem,” Yang said.

“With these additions compared with other hyperscalers, Google Cloud now provides the widest variety of models to choose from, with our first-party Google models, third-party models from partners, as well as open source models on a single platform,” Yang said. “With access to over 100 curated models on Vertex AI, customers can now choose models based on modality, size, performance latency and cost considerations.”

BigQuery and AlloyDB upgrades are ready for preview

Google’s BigQuery Studio — which is a workbench platform for users who work with data and AI — and AlloyDB both have upgrades now available in preview.

BigQuery Studio added to cloud data warehouse preview

BigQuery Studio will be rolled out to Google’s BigQuery cloud data warehouse in preview this week. BigQuery Studio assists with analyzing and exploring data and integrates with Vertex AI. BigQuery Studio is designed to bring data engineering, analytics and predictive analysis together, reducing the time data analytics professionals need to spend switching between tools.

Users of BigQuery can also add Duet AI, Google’s AI assistant, starting now.

AlloyDB enhanced with generative AI

Andy Goodman, vice president and general manager for databases at Google, announced the addition of generative AI capabilities to AlloyDB — Google’s PostgreSQL-compatible database for high-end enterprise workloads — at the pre-brief. AlloyDB includes capabilities for organizations building enterprise AI applications, such as vector search capabilities up to 10 times faster than standard PostgreSQL, Goodman said. Developers can generate vector embeddings within the database to streamline their work. AlloyDB AI integrates with Vertex AI and open source tool ecosystems such as LangChain.
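
As a rough illustration of the vector workflow Goodman describes, the sketch below runs a pgvector-style similarity query from Python. The embedding() SQL function is based loosely on the AlloyDB AI announcement and is an assumption, not confirmed syntax; the connection string, table schema and model ID are likewise placeholders, so check the product documentation before relying on any of it.

```python
# Illustrative only: a pgvector-style similarity query against a PostgreSQL-compatible
# database such as AlloyDB, issued from Python with psycopg2. The embedding() SQL
# function follows the AlloyDB AI announcement loosely; its exact name, signature and
# return type may differ. The DSN, table and model ID are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=support user=app host=10.0.0.5")  # placeholder DSN
cur = conn.cursor()

# Find the five stored documents closest to a natural-language query, assuming a
# table docs(id, body, body_embedding vector) populated via pgvector.
cur.execute(
    """
    SELECT id, body
    FROM docs
    ORDER BY body_embedding <-> embedding('textembedding-gecko@001', %s)::vector
    LIMIT 5;
    """,
    ("customer cannot reset their password",),
)
for doc_id, body in cur.fetchall():
    print(doc_id, body[:80])

cur.close()
conn.close()
```

Ordering by the pgvector distance operator returns the nearest neighbors first, which is the basic building block for the retrieval-style enterprise applications the announcement targets.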

“Databases are at the heart of gen AI innovation, as they help bridge the gap between LLMs and enterprise gen AI apps to deliver accurate, up to date and contextual experiences,” Goodman said.

AlloyDB AI is now available in preview through AlloyDB Omni.

A3 virtual machine supercomputing with NVIDIA for AI training revealed

General availability of the A3 virtual machines, which run on NVIDIA H100 GPUs as a GPU supercomputer, will open next month, announced Mark Lohmeyer, vice president and general manager for compute and machine learning infrastructure at Google Cloud, during the pre-brief.

The A3 supercomputer’s custom 200 Gbps networking enables direct GPU-to-GPU data transfers that bypass the CPU host. Those transfers power AI training, tuning and scaling with up to 10 times more bandwidth than the previous-generation A2, and training should be three times faster, Lohmeyer said.

NVIDIA “enables us to offer the most comprehensive AI infrastructure portfolio of any cloud,” said Lohmeyer.

Cloud TPU v5e is optimized for generative AI inferencing

Google introduced Cloud TPU v5e, the fifth generation of cloud TPUs, optimized for generative AI inferencing. A TPU, or Tensor Processing Unit, is a machine learning accelerator hosted on Google Cloud. The TPU handles the massive amounts of data needed for inferencing, the process by which a trained model generates predictions from new inputs.

Cloud TPU v5e boasts two times faster performance per dollar for training and 2.5 times better performance per dollar for inferencing compared to the previous-generation TPU, Lohmeyer said.

“(With) the magic of that software and hardware working together with new software technologies like multi-slice, we’re enabling our customers to easily scale their [generative] AI models beyond the physical boundaries of a single TPU pod or a single TPU cluster,” said Lohmeyer. “In other words, a single large AI workload can now span multiple physical TPU clusters, scaling to literally tens of thousands of chips and doing so very cost effectively.”

The new TPU is available in preview starting this week.

Introducing Google Kubernetes Engine Enterprise edition

Google Kubernetes Engine, which many customers use for AI workloads, is getting a boost. The GKE Enterprise edition will include multi-cluster horizontal scaling and run GKE’s existing services across both cloud GPUs and cloud TPUs. Early reports from customers have shown productivity gains of up to 45% and software deployment times reduced by more than 70%, Google said.

GKE Enterprise Edition will be available in September.

How Will Generative AI Change the Role of Clinicians In the Next 10 Years? - MedCity News

September 24, 2023

AI is a bit of a buzzword in the healthcare world, so it’s sometimes difficult to tell how much of an impact this technology is going to end up having on the sector. This month, Citi released a report that sought to cut through the noise.

The report focused on how AI will affect the role of clinicians. It predicted that generative AI tools will increasingly streamline many aspects of a clinician’s day in the next five to 10 years — and that this is particularly true for tools that can automate diagnoses and respond to patients’ questions.

The healthcare industry could see an emergence of increasingly effective tools for diagnosis in the coming years, according to the report. As these come onto the scene, clinicians will use them to aid their decision making process, not replace it. 

“For example, a family doctor, listening to a patient, may think it’s worth investigating A, B and C; however the AI may also remind the doctor that syndromes D and E are also possible and therefore need consideration,” the report read.

These tools will likely be equipped with generative AI capabilities, such as automatic speech recognition, which can transcribe patient-clinician interactions. The report predicted that this AI will have good accuracy: large language models are less likely to produce wrong information when asked to summarize a text, such as a transcript of a medical conversation, than when they generate something completely new.
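
A minimal sketch of that "summarize, don't invent" pattern is shown below. It reuses the Vertex AI text model from the earlier example purely for illustration; the prompt wording, file name and model handle are assumptions, and nothing here is a validated clinical tool.

```python
# Sketch of "grounded" summarization: the model is instructed to restate only what is
# in a supplied transcript rather than generate freely, the pattern the report says is
# less error-prone. SDK, model handle, prompt wording and file name are assumptions.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project
model = TextGenerationModel.from_pretrained("text-bison")

with open("visit_transcript.txt") as f:
    transcript = f.read()  # e.g. automatic speech recognition output of a visit

prompt = (
    "You are drafting a visit summary for a clinician to review.\n"
    "Use ONLY the transcript below. If something is not stated in the transcript, "
    "write 'not discussed' rather than guessing.\n\n"
    f"Transcript:\n{transcript}\n\n"
    "Summary (symptoms, medications mentioned, follow-up items):"
)

draft = model.predict(prompt, temperature=0.0, max_output_tokens=512)
print(draft.text)  # a draft for clinician review, not an autonomous output
```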

To date, no diagnostic generative AI tools have been launched on the market. However, several companies are developing and testing healthcare-focused large language models. For instance, Google unveiled Med-PaLM 2 in April, and the tool is currently being used at Mayo Clinic and other health systems. To begin, they are testing its ability to answer medical questions, summarize unstructured texts and organize health data.

Diagnostic tools that listen to patient interactions to suggest treatment advice will be used mainly by physicians, but other generative AI tools will hit the market to assist other healthcare professionals, including nurses, dieticians and pharmacists, the report predicted.

For example, generative AI can be used to call and check in on patients, which could potentially prevent avoidable hospital admissions and emergency department visits. These tools can gauge a patient’s progress after surgery, call a patient to hear how they are reacting to a new prescription, and conduct welfare checks on older patients.

But clinicians and other healthcare professionals aren’t the only ones who will use new health-focused generative AI tools in the next five to 10 years — consumers will too, according to the report. As the use of large language models becomes more widespread, consumers will likely gain access to chatbot-style tools that answer their medical questions the way a doctor would, it predicted.

While these new advancements may seem exciting, the report noted that it will take years for technology developers to produce tools that are both accurate and easy to use — and that timeline will likely be longer than what AI enthusiasts want.

Photo: Natali_Mis, Getty Images

Meet One Medical's new CEO

September 24, 2023

The incoming CEO of One Medical will bring plenty of hospital experience to his new role at the helm of the healthcare disruptor.

Trent Green will take over as chief executive of the Amazon-owned primary care chain after current CEO Amir Dan Rubin departs later this year, the company said Aug. 31.

Mr. Green spent nearly 14 years with Legacy Health, a six-hospital nonprofit system based in Portland, Ore., with locations in Oregon and Washington. He joined the $2.5 billion organization in 2008 as senior vice president and chief strategy officer, and was its COO when he left for the same position at One Medical in July 2022.

In an Aug. 31 email to staff, Mr. Green said that when he was departing Legacy Health, he wanted to work for an organization that provided primary care — from pediatrics to geriatrics — on a national scale, and "One Medical was the one that was doing it right."

One Medical provides subscription-based concierge primary care, with in-person visits and 24/7 telehealth for an annual fee. The company, which Amazon bought in February for $3.9 billion, has more than 200 clinics in 29 markets. It partners with 17 health systems on specialty care referrals.

"I'm excited for us to build upon our strong culture and model, to expand our impact, and to take advantage of all we can do as part of Amazon to further delight our members," Mr. Green wrote.

At Legacy Health, Mr. Green also served as president of Emanuel Medical Center and Unity Center for Behavioral Health, both in Portland, and Legacy's medical group. Mr. Green started his career as an administrative fellow for Rochester, Minn.-based Mayo Clinic before working in healthcare management consulting.

"As COO, Trent quickly gained the trust and respect of the leadership team," Neil Lindsay, senior vice president of health services at Amazon, said in a Aug. 31 email to employees. "He brings a deep understanding of One Medical's clinical operations and patient care delivery experience and has over 25 years of healthcare operations experience. We are confident Trent's leadership will help more people get high-quality care."

In an internal email, Mr. Rubin, himself a former hospital CEO and COO, called Mr. Green a "highly effective, experienced, and values-driven leader."

Besides Mr. Green, One Medical's C-suite features several other hospital veterans. They include Chief Quality Officer Raj Behal, MD, formerly of Palo Alto, Calif.-based Stanford Health Care and Chicago-based Rush University Medical Center; Chief Strategy Officer Jenni Vargas, formerly of Stanford Health Care; and Chief Network Officer John Singerling, the former president of Columbia, S.C.-based Palmetto Health, which merged with Greenville (S.C.) Health in 2017 to become Greenville-based Prisma Health.

Inside Google's Plans To Fix Healthcare With Generative AI

September 24, 2023

Google is expanding access to its large language models for its healthcare customers.

At the end of each hospital shift, the outgoing nurse has to quickly bring the incoming one up to speed about all of the patients under their care. This “handoff” can take multiple forms, including conversations, handwritten notes and electronic medical records. “[It’s] a risky part of the healthcare journey, because we're transferring information from one healthcare provider to another,” says Michael Schlosser, senior vice president of care transformation and innovation for HCA Healthcare. “We have to make sure that it's done in an accurate way and that nothing falls through the cracks.”

Schlosser and his team at Nashville-based HCA – one of the largest healthcare systems in the country with 180 hospitals and around 37 million patients a year – thought this transfer of information could be a good opportunity to apply generative artificial intelligence. Large language models are good at summarizing and organizing data. But when HCA scoured the market for potential vendors, Schlosser says they couldn’t find any companies building solutions for this handoff issue.

HCA had an existing partnership with Google Cloud, so they turned to Google’s software suite called Vertex AI, which helps customers build and deploy machine learning models. Google offers its own foundation model, known as PaLM, but the platform is model agnostic, meaning customers can also swap in and build on OpenAI’s GPT-4, Meta’s Llama, Amazon’s Titan or any other model of their choosing.

In a bid to woo more healthcare customers, Google has also been developing a healthcare-specific large language model. The company announced Tuesday it will release the latest version – called Med-PaLM 2 – to more customers in September. HCA is one of several healthcare customers that has had early access, along with the pharmaceutical giant Bayer, electronic health record company Meditech and digital health startups Infinitus Systems and Huma. This renewed push into healthcare comes as Microsoft and Amazon are making their own AI-powered inroads into the sector, and it’s far from clear which will come out on top when the dust clears.

“We’re still five minutes into the marathon,” Gartner analyst Chirag Dekate says of the healthcare AI landscape.

In 2021, Google disbanded its standalone Google Health division but said health-related efforts would continue across the company. Its recent AI solutions in the industry are geared towards solving piecemeal problems. For example, Google released AI tools last year to help healthcare organizations read, store and label X-rays, MRIs and other medical imaging. Earlier this year, the company unveiled AI tools to help health insurers speed up prior authorization.

The use case focus is necessary because of AI technology itself, says Greg Corrado, head of health AI at Google. Despite the hype over large language models, he says it’s “naive” to expect them to be “able to do anything expertly off the shelf,” adding that “In practice, these systems always require identification of specific use cases.”

When it comes to large language models, Google has been playing catchup to OpenAI, the startup behind the viral chatbot ChatGPT, which has received a $10 billion investment from Microsoft. In 2022, Microsoft acquired Nuance Communications for $18.8 billion, giving it a major foothold to sell new AI products to hospital clients, since Nuance’s medical dictation software is already used by 550,000 doctors. “Nuance has an enormous footprint in healthcare,” says Alex Lennox-Miller, an analyst for CB Insights, which makes Microsoft “well-positioned” for the use of its generative AI software for administrative tasks in the sector.

Before the generative AI boom, Amazon, Microsoft and Google were all competing for cloud customers. With $48.1 billion in cloud revenue in 2022, Amazon holds around 40% of the market share, according to technology research firm Gartner. Microsoft follows with 21.5%, while Google places fourth behind Alibaba Group with more than $9 billion in cloud revenue and 7.5% of the market.

It’s also no surprise that they are all now trying to specifically target healthcare customers, a complex and heavily regulated industry, says Dekate. He says that’s because if you’re able to prove use cases in a more complex environment, like healthcare or financial services, then it signals to other customers that generative AI is ready for broader adoption.

But no one is there yet. What all the cloud companies have presented to customers are building blocks, says Dekate. That is, plenty of ways to utilize their AI platforms in bespoke applications their customers have to build. But what those customers want are fully-built solutions.

“Amazon, Google and Microsoft are fighting it out to dominate the commanding heights of the generative AI economy,” says Dekate. “But none of them have articulated a good enough vertical story.”

Because healthcare is so highly regulated and the consequences of mistakes are high, generative AI use cases need to start out very small. For HCA that means one hospital – UCF Lake Nona – is currently piloting the handoff tool as a proof of concept. The AI ingests patient data from the past 12 hours, including lab results, medications and important events, and produces a transfer summary that also includes suggestions for what the oncoming nurse should be thinking about in the next 12 hours, says Schlosser.
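
To make the shape of such a pipeline concrete, here is a hypothetical sketch: collect the last 12 hours of structured events for a patient, then build a prompt asking a model for a handoff draft plus look-ahead items. The data structures, sample events and helper functions are invented for illustration and are not HCA's or Google's actual implementation.

```python
# Hypothetical sketch of a handoff-summary pipeline of the kind described above.
# All data structures, sample events and helpers are invented for illustration;
# the sample events are fictional.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class ClinicalEvent:
    timestamp: datetime
    category: str      # e.g. "lab", "medication", "event"
    description: str


def fetch_events(patient_id: str, since: datetime) -> List[ClinicalEvent]:
    """Stand-in for an EHR query; returns fictional sample events."""
    now = datetime.utcnow()
    return [
        ClinicalEvent(now - timedelta(hours=6), "lab", "Potassium 3.1 mmol/L (low)"),
        ClinicalEvent(now - timedelta(hours=4), "medication", "Potassium chloride 20 mEq given"),
        ClinicalEvent(now - timedelta(hours=1), "event", "Patient reports 4/10 incisional pain"),
    ]


def build_handoff_prompt(patient_id: str) -> str:
    """Format the 12-hour event window into a prompt for a summarization model."""
    window_start = datetime.utcnow() - timedelta(hours=12)
    events = fetch_events(patient_id, since=window_start)
    lines = [f"{e.timestamp:%H:%M} [{e.category}] {e.description}" for e in events]
    return (
        "Draft a nurse-to-nurse handoff summary from the events below, then list "
        "items the oncoming nurse should watch over the next 12 hours.\n\n"
        + "\n".join(lines)
    )


if __name__ == "__main__":
    # The resulting prompt would be sent to a model such as PaLM or Med-PaLM via
    # Vertex AI; the draft it returns is reviewed by the nurses, not auto-filed.
    print(build_handoff_prompt("example-patient-id"))
```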

While it’s built using Google’s Vertex AI software, HCA has been experimenting with different foundation models, including PaLM and Med-PaLM. “We're actually doing a lot of head-to-head testing right now to see where does the generic model work better, and where does a medically-trained model provide more accuracy and better outcomes,” says Schlosser. “I imagine both will actually have important roles in the future we're trying to create.”

The idea of using multiple models to solve a complex problem, known as “composite” artificial intelligence, presents an interesting challenge for the cloud providers, says Dekate. They are simultaneously offering their own in-house models and partnering with other companies in order to offer “the promise of choice,” he says. Dekate expects Google, Microsoft and Amazon to increasingly offer services that help customers evaluate different models. Schlosser says HCA has so far taken a manual approach to evaluation, having doctors and nurses compare the model’s outputs side by side with what the human team would do.
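
The side-by-side evaluation Schlosser describes can be approximated with a simple harness like the sketch below, which sends the same prompts to a generic model and a domain-tuned model and records both outputs for clinician review. The model identifiers (especially the Med-PaLM 2 handle) and the prompt list are placeholders, not real endpoints.

```python
# Sketch of the side-by-side evaluation pattern described above: the same prompt goes
# to a generic model and a domain-tuned model, and both drafts are written to a CSV
# for clinician reviewers to score. "text-bison" is the public PaLM handle; the
# Med-PaLM 2 identifier below is a made-up placeholder, since access is arranged
# through Google Cloud and the real handle depends on your agreement.
import csv

import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

models = {
    "generic": TextGenerationModel.from_pretrained("text-bison"),
    "medical": TextGenerationModel.from_pretrained("medpalm2-preview"),  # placeholder handle
}

# Placeholder evaluation prompts; a real harness would draw from a curated test set.
prompts = [
    "Summarize the key points a nurse should hand off for a post-operative patient.",
    "Explain what a potassium level of 3.1 mmol/L means for monitoring over the next shift.",
]

with open("side_by_side.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt", "model", "output"])
    for prompt in prompts:
        for name, model in models.items():
            result = model.predict(prompt, temperature=0.0, max_output_tokens=512)
            writer.writerow([prompt, name, result.text])

# Clinician reviewers then score each row for accuracy and usefulness relative to
# what the human team would have written.
```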

Corrado says that at the state of the art right now, generative AI models can be likened to “an eager, studious assistant that's trying very hard to do a good job. And you should view the output critically, as a draft and say, Okay, well, what did you miss? What did you get wrong?”

OpenAI has taken the view that bigger is better when it comes to the amount of data that the model is trained on. Its GPT-3 model, which was trained on the open internet, had around 175 billion parameters and the latest version, GPT-4, is thought to have more than 1 trillion parameters (though the company has not publicly confirmed the total amount). Google says the largest PaLM and Med-PaLM models have 540 billion parameters. The company declined to comment on the size of PaLM 2.

But as models are trained on more and more data, there can be issues with performance. In July, a group of researchers from Stanford and UC Berkeley said their tests suggested that GPT-4’s performance had suffered some degradation over time, echoing anecdotal reports that can be seen on developer fora. Although this was a preliminary finding and researchers are still learning how generative AI models work, this does spark some concern, especially as it’s not entirely clear how such AI systems arrive at their answers. “One of the biggest problems in healthcare for these algorithms is going to be the difficulty they have with transparency,” says Lennox-Miller.

Corrado says these concerns are precisely why Google is experimenting with niche LLM models that are trained on narrower sets of data. Without tailoring models towards specific use cases, such as healthcare, he says, “you run the risk of just having a Swiss army knife, which is not the best knife, and it's also not the best screwdriver. And it's also not the best toothpick. And we think that it's better, particularly in these high value settings, to do domain adaptation, understand what the use case is, and have the same kind of rigorous quality evaluation and version control that you would expect from a real product.”

Another challenge for most large language models is that they’re not constantly learning. They typically have a cutoff date for their training data. For example, the free version of ChatGPT was trained on data until September 2021. But knowledge in healthcare is always advancing, so doctors who use these tools need to have a good sense of how recent the data they’re working with is. Corrado says Google is still deciding what the cutoff will be, but that it will be communicated to customers. “We don't rely on these systems to know everything about the practice of medicine,” says Corrado.

In the hospitals of the future, Schlosser envisions an “AI assistant to the care team” that he believes will have “amazing power in reducing administrative burden.” HCA has also been working with Google and the publicly traded ambient AI company Augmedix to automate medical note-taking in the emergency room. Schlosser says around 75 doctors at four HCA hospitals are using the technology. The “holy grail” for doctors, he says, is that they could focus on providing care to patients and “the documentation would take care of itself.” They are starting in the emergency room because it is one of the most complicated venues in which to prove the technology actually works.

When it comes to using Augmedix’s tool, the doctor directly asks the patient for their consent to record the examination and use an AI tool for note-taking, says Schlosser. For the nurse handoff tool, which is not patient-facing, it falls under HCA’s broader privacy consent around using patient data for research and process improvement, he says. HCA is also working on using generative AI for ER discharge summaries, as well as handoffs from the ER to inpatient. Schlosser says as HCA thinks about scaling the use of AI for administrative purposes, the company will have to consider “the right way to let all patients know that an AI is part of a care delivery process.”

Consent and privacy are major concerns around the use of AI in healthcare, and Google generated significant controversy with an earlier partnership with the hospital system Ascension that used AI to analyze millions of medical records. In 2019, reports of the company’s “Project Nightingale” raised concerns about data privacy and security. Both Google and Ascension said the work was compliant with federal patient privacy laws.

In the case of PaLM and Med-PaLM, Google says that none of the models are being trained on patient data at HCA or any other customer. “HCA's data is HCA's data and nobody else's,” Google Cloud CEO Thomas Kurian tells Forbes. “Think of it as a vault in our cloud that only is used to train the version of the model that they're using. It's not shared with anybody else. None of that data is used to improve our base model.”

Despite the challenges to generative AI, from technical capabilities to privacy and data concerns, Schlosser is optimistic that tools built on the technology will become part of the standard toolkit for doctors. HCA is taking a slow approach focused on alleviating some of the burdens of clinicians’ day-to-day jobs, he says, because he thinks that once doctors start embracing AI, they’ll be positioned to guide the best way to use it for more complicated applications.

“I want clinicians to fully embrace AI as a partner that's making their life easier, before we start getting into some of those more controversial areas,” he says.
