This Week Health

In the News

Artificial Intelligence May Influence Whether You Can Get Pain Medication - KFF Health News

September 24, 2023

Elizabeth Amirault had never heard of a Narx Score. But she said she learned last year the tool had been used to track her medication use.

During an August 2022 visit to a hospital in Fort Wayne, Indiana, Amirault told a nurse practitioner she was in severe pain, she said. She received a puzzling response.

“Your Narx Score is so high, I can’t give you any narcotics,” she recalled the man saying, as she waited for an MRI before a hip replacement.

Tools like Narx Scores are used to help medical providers review controlled substance prescriptions. They influence, and can limit, the prescribing of painkillers, similar to a credit score influencing the terms of a loan. Narx Scores and an algorithm-generated overdose risk rating are produced by health care technology company Bamboo Health (formerly Appriss Health) in its NarxCare platform.

Such systems are designed to fight the nation’s opioid epidemic, which has led to an alarming number of overdose deaths. The platforms draw on data about prescriptions for controlled substances that states collect to identify patterns of potential problems involving patients and physicians. State and federal health agencies, law enforcement officials, and health care providers have enlisted these tools, but the mechanics behind the formulas used are generally not shared with the public.

Artificial intelligence is working its way into more parts of American life. As AI spreads within the health care landscape, it brings familiar concerns of bias and accuracy and whether government regulation can keep up with rapidly advancing technology.

The use of systems to analyze opioid-prescribing data has sparked questions over whether they have undergone enough independent testing outside of the companies that developed them, making it hard to know how they work.

Lacking the ability to see inside these systems leaves only clues to their potential impact. Some patients say they have been cut off from needed care. Some doctors say their ability to practice medicine has been unfairly threatened. Researchers warn that such technology — despite its benefits — can have unforeseen consequences if it improperly flags patients or doctors.

“We need to see what’s going on to make sure we’re not doing more harm than good,” said Jason Gibbons, a health economist at the Colorado School of Public Health at the University of Colorado’s Anschutz Medical Campus. “We’re concerned that it’s not working as intended, and it’s harming patients.”

Amirault, 34, said she has dealt for years with chronic pain from health conditions such as sciatica, degenerative disc disease, and avascular necrosis, which results from restricted blood supply to the bones.

The opioid Percocet offers her some relief. She’d been denied the medication before, but never had been told anything about a Narx Score, she said.

In a chronic pain support group on Facebook, she found others posting about NarxCare, which scores patients based on their supposed risk of prescription drug misuse. She’s convinced her ratings negatively influenced her care.

“Apparently being sick and having a bunch of surgeries and different doctors, all of that goes against me,” Amirault said.

Database-driven tracking has been linked to a decline in opioid prescriptions, but evidence is mixed on its impact on curbing the epidemic. Overdose deaths continue to plague the country, and patients like Amirault have said the monitoring systems leave them feeling stigmatized as well as cut off from pain relief.

The Centers for Disease Control and Prevention estimated that in 2021 about 52 million American adults suffered from chronic pain, and about 17 million people lived with pain so severe it limited their daily activities. To manage the pain, many use prescription opioids, which are tracked in nearly every state through electronic databases known as prescription drug monitoring programs (PDMPs).

The last state to adopt a program, Missouri, is still getting it up and running.

More than 40 states and territories use the technology from Bamboo Health to run PDMPs. That data can be fed into NarxCare, a separate suite of tools to help medical professionals make decisions. Hundreds of health care facilities and five of the top six major pharmacy retailers also use NarxCare, the company said.

The platform generates three Narx Scores based on a patient’s prescription activity involving narcotics, sedatives, and stimulants. A peer-reviewed study showed the “Narx Score metric could serve as a useful initial universal prescription opioid-risk screener.”

NarxCare’s algorithm-generated “Overdose Risk Score” draws on a patient’s medication information from PDMPs — such as the number of doctors writing prescriptions, the number of pharmacies used, and drug dosage — to help medical providers assess a patient’s risk of opioid overdose.
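Bamboo Health does not disclose how these inputs are combined, so any concrete formula is speculation. Purely as a hypothetical sketch, a score of this kind might weight PDMP features like so (the weights, the cap, and the function itself are invented for illustration and have no clinical basis):

```python
# Hypothetical illustration only: Bamboo Health does not publish the
# NarxCare formula. This toy score merely weights the PDMP features the
# article describes (prescriber count, pharmacy count, total dosage).

def toy_risk_score(num_prescribers: int, num_pharmacies: int, daily_mme: float) -> int:
    """Collapse PDMP-style features into a single capped toy score.

    Weights are invented for illustration. daily_mme is the daily dose
    in morphine milligram equivalents.
    """
    raw = 40 * num_prescribers + 60 * num_pharmacies + 2 * daily_mme
    return min(999, int(raw))

print(toy_risk_score(2, 1, 50))   # 240: a modest prescription profile
print(toy_risk_score(6, 5, 120))  # 780: many prescribers and pharmacies score higher
```

A real system would have to be validated against outcomes data; the point here is only that a handful of PDMP features can be collapsed into a single number that then gates care.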

Bamboo Health did not share the specific formula behind the algorithm or address questions about the accuracy of its Overdose Risk Score but said it continues to review and validate the algorithm behind it, based on current overdose trends.

Guidance from the CDC advised clinicians to consult PDMP data before prescribing pain medications. But the agency warned that “special attention should be paid to ensure that PDMP information is not used in a way that is harmful to patients.”

This prescription-drug data has led patients to be dismissed from clinician practices, the CDC said, which could leave patients at risk of being untreated or undertreated for pain. The agency further warned that risk scores may be generated by “proprietary algorithms that are not publicly available” and could lead to biased results.

Bamboo Health said that NarxCare can show providers all of a patient’s scores on one screen, but that these tools should never replace decisions made by physicians.

Some patients say the tools have had an outsize impact on their treatment.

Bev Schechtman, 47, who lives in North Carolina, said she has occasionally used opioids to manage pain flare-ups from Crohn’s disease. As vice president of the Doctor Patient Forum, a chronic pain patient advocacy group, she said she has heard from others reporting medication access problems, many of which she worries are caused by red flags from databases.

“There’s a lot of patients cut off without medication,” according to Schechtman, who said some have turned to illicit sources when they can’t get their prescriptions. “Some patients say to us, ‘It’s either suicide or the streets.’”

Elizabeth Amirault of Indiana has dealt with chronic pain for years. She believes a tool that tracks her prescription drug use negatively influenced her ability to get the medication she needs. (Nicholas Amirault)

The stakes are high for pain patients. Research shows rapid dose changes can increase the risk of withdrawal, depression, anxiety, and even suicide.

Some doctors who treat chronic pain patients say they, too, have been flagged by data systems, lost their licenses to practice, and faced prosecution.

Lesly Pompy, a pain medicine and addiction specialist in Monroe, Michigan, believes such systems were involved in a legal case against him.

His medical office was raided by a mix of local and federal law enforcement agencies in 2016 because of his patterns in prescribing pain medicine. A year after the raid, Pompy’s medical license was suspended. In 2018, he was indicted on charges of illegally distributing opioid pain medication and health care fraud.

“I knew I was taking care of patients in good faith,” he said. A federal jury in January acquitted him of all charges. He said he’s working to have his license restored.

Qlarant, a Maryland-based technology company, said it has developed algorithms “to identify questionable behavior patterns and interactions for controlled substances, and for opioids in particular,” involving medical providers.

The company, in an online brochure, said its “extensive government work” includes partnerships with state and federal enforcement entities such as the Department of Health and Human Services’ Office of Inspector General, the FBI, and the Drug Enforcement Administration.

In a promotional video, the company said its algorithms can “analyze a wide variety of data sources,” including court records, insurance claims, drug monitoring data, property records, and incarceration data to flag providers.

William Mapp, the company’s chief technology officer, stressed the final decision about what to do with that information is left up to people — not the algorithms.

Mapp said that “Qlarant’s algorithms are considered proprietary and our intellectual property” and that they have not been independently peer-reviewed.

“We do know that there’s going to be some percentage of error, and we try to let our customers know,” Mapp said. “It sucks when we get it wrong. But we’re constantly trying to get to that point where there are fewer things that are wrong.”

Prosecutions against doctors through the use of prescribing data have attracted the attention of the American Medical Association.

“These unknown and unreviewed algorithms have resulted in physicians having their prescribing privileges immediately suspended without due process or review by a state licensing board — often harming patients in pain because of delays and denials of care,” said Bobby Mukkamala, chair of the AMA’s Substance Use and Pain Care Task Force.

Even critics of drug-tracking systems and algorithms say there is a place for data and artificial intelligence systems in reducing the harms of the opioid crisis.

“It’s just a matter of making sure that the technology is working as intended,” said health economist Gibbons.


Google Cloud Next ‘23: New Generative AI-Powered Services

September 24, 2023


Google unveiled a wide array of new generative AI-powered services at its Google Cloud Next 2023 conference in San Francisco on August 29. At the pre-briefing, we got an early look at Google’s new Cloud TPU, A3 virtual machines powered by NVIDIA H100 GPUs and more.

Vertex AI increases capacity, adds other improvements

June Yang, vice president of cloud AI and industry solutions at Google Cloud, announced improvements to Vertex AI, the company’s generative AI platform that helps enterprises train their own AI and machine learning models.

Customers have asked for the ability to input larger amounts of content into PaLM, a foundation model under the Vertex AI platform, Yang said, which led Google to increase its capacity from 4,000 tokens to 32,000 tokens.
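To get a feel for what an eightfold larger context window means in practice, here is a rough back-of-envelope check. It assumes the common rule of thumb of roughly four characters per token; real counts depend on the model's tokenizer:

```python
# Rough illustration of the larger context window. The ~4-characters-per-token
# ratio is a rule of thumb, not an exact tokenizer, so treat these numbers
# as estimates only.

def approx_tokens(text: str) -> int:
    """Estimate token count using the ~4 chars/token heuristic."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, limit_tokens: int) -> bool:
    return approx_tokens(text) <= limit_tokens

doc = "x" * 60_000  # stand-in for a ~60,000-character document
print(fits_in_context(doc, 4_000))   # False: too big for the old 4,000-token limit
print(fits_in_context(doc, 32_000))  # True: fits within the new 32,000-token window
```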

Customers have also asked for more languages to be supported in Vertex AI. At the Next ’23 conference, Yang announced that PaLM, which resides within the Vertex AI platform, is now available in Arabic, Chinese, Japanese, German, Spanish and more. That’s a total of 38 languages for public use; 100 additional languages are now options in private preview.

SEE: Google opened up its PaLM large language model with an API in March (TechRepublic).

Vertex AI Search, which lets users create a search engine inside their AI-powered apps, is available today. “Think about this like Google Search for your business data,” Yang said.

Also available today is Vertex AI Conversation, a tool for building chatbots. Search and Conversation were previously available under different product names in Google’s Generative AI App Builder.

Improvements to the Codey foundation model

Codey, the text-to-code model inside Vertex AI, is getting an upgrade. Although details on this upgrade are sparse, Yang said developers should be able to work more efficiently on code generation and code chat.

“Leveraging our Codey foundation model, partners like GitLab are helping developers to stay in the flow by predicting and completing lines of code, generating test cases, explaining code and many more use cases,” Yang noted.

Match your business’ art style with text-to-image AI

Vertex’s text-to-image model will now be able to perform style tuning, or matching a company’s brand and creative guidelines. Organizations need to provide just 10 reference images for Vertex to begin to work within their house style.

New additions to Model Garden, Vertex AI’s model library

Google Cloud has added Meta’s Llama 2 and Anthropic’s Claude 2 to Vertex AI’s model library. The decision to add Llama 2 and Claude 2 to the Google Cloud AI Model Garden is “in line with our commitment to foster an open ecosystem,” Yang said.

“With these additions compared with other hyperscalers, Google Cloud now provides the widest variety of models to choose from, with our first-party Google models, third-party models from partners, as well as open source models on a single platform,” Yang said. “With access to over 100 curated models on Vertex AI, customers can now choose models based on modality, size, performance latency and cost considerations.”

BigQuery and AlloyDB upgrades are ready for preview

Google’s BigQuery Studio — which is a workbench platform for users who work with data and AI — and AlloyDB both have upgrades now available in preview.

BigQuery Studio added to cloud data warehouse preview

BigQuery Studio will be rolled out to Google’s BigQuery cloud data warehouse in preview this week. It assists with analyzing and exploring data and integrates with Vertex AI, bringing data engineering, analytics and predictive analysis together to reduce the time data analytics professionals spend switching between tools.

Users of BigQuery can also add Duet AI, Google’s AI assistant, starting now.

AlloyDB enhanced with generative AI

Andy Goodman, vice president and general manager for databases at Google, announced the addition of generative AI capabilities to AlloyDB — Google’s PostgreSQL-compatible database for high-end enterprise workloads — at the pre-brief. AlloyDB includes capabilities for organizations building enterprise AI applications, such as vector search capabilities up to 10 times faster than standard PostgreSQL, Goodman said. Developers can generate vector embeddings within the database to streamline their work. AlloyDB AI integrates with Vertex AI and open source tool ecosystems such as LangChain.
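The vector search AlloyDB AI performs inside the database boils down to nearest-neighbor ranking over embeddings. A minimal, self-contained sketch of that idea, with invented three-dimensional "embeddings" standing in for real model output (AlloyDB's actual API is not shown here):

```python
# Minimal sketch of vector search: documents and queries are embedded as
# vectors, and the closest documents by cosine similarity are returned.
# The vectors below are invented for illustration; real embeddings come
# from a model and have hundreds or thousands of dimensions.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "return window":  [0.7, 0.3, 0.1],
}
query = [0.85, 0.15, 0.05]  # embedding of e.g. "how do I get my money back?"

# Rank documents by similarity to the query embedding.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # refund policy
```

A database-side implementation avoids shipping every embedding to the application and can use indexes instead of this brute-force scan, which is where the speedup over standard PostgreSQL comes from.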

“Databases are at the heart of gen AI innovation, as they help bridge the gap between LLMs and enterprise gen AI apps to deliver accurate, up to date and contextual experiences,” Goodman said.

AlloyDB AI is now available in preview through AlloyDB Omni.

A3 virtual machine supercomputing with NVIDIA for AI training revealed

General availability of the A3 virtual machines running on NVIDIA H100 GPU as a GPU supercomputer will open next month, announced Mark Lohmeyer, vice president and general manager for compute and machine learning infrastructure at Google Cloud, during the pre-brief.

The A3 supercomputers’ custom-made 200 Gbps virtual machine infrastructure enables GPU-to-GPU data transfers that bypass the CPU host. These transfers power AI training, tuning and scaling with up to 10 times more bandwidth than the previous generation, A2, and training will be three times faster, Lohmeyer said.

NVIDIA “enables us to offer the most comprehensive AI infrastructure portfolio of any cloud,” said Lohmeyer.

Cloud TPU v5e is optimized for generative AI inferencing

Google introduced Cloud TPU v5e, the fifth generation of cloud TPUs optimized for generative AI inferencing. A TPU, or Tensor Processing Unit, is a machine learning accelerator hosted on Google Cloud. The TPU handles the massive amounts of data needed for inferencing, the process by which a trained AI model makes predictions on new input.

Cloud TPU v5e boasts two times faster performance per dollar for training and 2.5 times better performance per dollar for inferencing compared to the previous-generation TPU, Lohmeyer said.
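"Performance per dollar" is simply throughput divided by cost, so a 2.5x gain can come from more throughput, a lower price, or a mix of both. A trivial worked example with hypothetical numbers (the article gives only the relative gains, not absolute figures):

```python
# All numbers here are invented; only the 2.5x ratio comes from the article.
prev_throughput = 100.0  # inferences/sec on the prior-generation TPU (hypothetical)
prev_price = 10.0        # dollars per hour (hypothetical)

prev_perf_per_dollar = prev_throughput / prev_price  # 10.0 inferences/sec per $/hr

# A 2.5x perf-per-dollar improvement for inferencing:
new_perf_per_dollar = 2.5 * prev_perf_per_dollar     # 25.0

# One way to realize it: same hourly price, 2.5x the throughput.
implied_throughput = new_perf_per_dollar * prev_price
print(implied_throughput)  # 250.0
```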

“(With) the magic of that software and hardware working together with new software technologies like multi-slice, we’re enabling our customers to easily scale their [generative] AI models beyond the physical boundaries of a single TPU pod or a single TPU cluster,” said Lohmeyer. “In other words, a single large AI workload can now span multiple physical TPU clusters, scaling to literally tens of thousands of chips and doing so very cost effectively.”

The new TPU is available in preview starting this week.

Introducing Google Kubernetes Engine Enterprise edition

Google Kubernetes Engine, which many customers use for AI workloads, is getting a boost. The GKE Enterprise edition will include multi-cluster horizontal scaling, with GKE’s existing services running across both cloud GPUs and cloud TPUs. Early reports from customers have shown productivity gains of up to 45%, Google said, and software deployment times reduced by more than 70%.

GKE Enterprise Edition will be available in September.


How Will Generative AI Change the Role of Clinicians In the Next 10 Years? - MedCity News

September 24, 2023

AI is a bit of a buzzword in the healthcare world, so it’s sometimes difficult to tell how much of an impact this technology is going to end up having on the sector. This month, Citi released a report that sought to cut through the noise.

The report focused on how AI will affect the role of clinicians. It predicted that generative AI tools will increasingly streamline many aspects of a clinician’s day in the next five to 10 years — and that this is particularly true for tools that can automate diagnoses and respond to patients’ questions.

The healthcare industry could see an emergence of increasingly effective tools for diagnosis in the coming years, according to the report. As these come onto the scene, clinicians will use them to aid their decision making process, not replace it. 

“For example, a family doctor, listening to a patient, may think it’s worth investigating A, B and C; however the AI may also remind the doctor that syndromes D and E are also possible and therefore need consideration,” the report read.

These tools will likely be equipped with generative AI capabilities, such as automatic speech recognition, which can transcribe patient-clinician interactions. The report predicted that this AI will have good accuracy — large language models are less likely to produce wrong information when they are asked to summarize a text, like a transcript of medical conversation, than when they generate something completely new.

To date, no diagnostic generative AI tools have been launched on the market. However, several companies are developing and testing healthcare-focused large language models. For instance, Google unveiled Med-PaLM 2 in April, and the tool is currently being used at Mayo Clinic and other health systems. To begin, they are testing its ability to answer medical questions, summarize unstructured texts and organize health data.

Diagnostic tools that listen to patient interactions to suggest treatment advice will be used mainly by physicians, but other generative AI tools will hit the market to assist other healthcare professionals, including nurses, dieticians and pharmacists, the report predicted.

For example, generative AI can be used to call and check in on patients, which could potentially prevent avoidable hospital admissions and emergency department visits. These tools can gauge a patient’s progress after surgery, call a patient to hear how they are reacting to a new prescription, and conduct welfare checks on older patients.

But clinicians and other healthcare professionals aren’t the only ones who will use new health-focused generative AI tools in the next five to 10 years — consumers will too, according to the report. As the use of large language models becomes more widespread, consumers will likely gain access to chatbot-style tools that answer their medical questions the way a doctor would, it predicted.

While these new advancements may seem exciting, the report noted that it will take years for technology developers to produce tools that are both accurate and easy to use — and that timeline will likely be longer than what AI enthusiasts want.



Meet One Medical's new CEO

September 24, 2023

The incoming CEO of One Medical will bring plenty of hospital experience to his new role at the helm of the healthcare disruptor.

Trent Green will take over as chief executive of the Amazon-owned primary care chain after current CEO Amir Dan Rubin departs later this year, the company said Aug. 31.

Mr. Green spent nearly 14 years with Legacy Health, a six-hospital nonprofit system based in Portland, Ore., with locations in Oregon and Washington. He joined the $2.5 billion organization in 2008 as senior vice president and chief strategy officer, and was its COO when he left for the same position at One Medical in July 2022.

In an Aug. 31 email to staff, Mr. Green said that when he was departing Legacy Health he wanted to work for an organization that provided primary care — from pediatrics to geriatrics — on a national scale — and "One Medical was the one that was doing it right."

One Medical provides subscription-based concierge primary care, with in-person visits and 24/7 telehealth for an annual fee. The company, which Amazon bought in February for $3.9 billion, has more than 200 clinics in 29 markets. It partners with 17 health systems on specialty care referrals.

"I'm excited for us to build upon our strong culture and model, to expand our impact, and to take advantage of all we can do as part of Amazon to further delight our members," Mr. Green wrote.

At Legacy Health, Mr. Green also served as president of Emanuel Medical Center and Unity Center for Behavioral Health, both in Portland, and Legacy's medical group. Mr. Green started his career as an administrative fellow for Rochester, Minn.-based Mayo Clinic before working in healthcare management consulting.

"As COO, Trent quickly gained the trust and respect of the leadership team," Neil Lindsay, senior vice president of health services at Amazon, said in an Aug. 31 email to employees. "He brings a deep understanding of One Medical's clinical operations and patient care delivery experience and has over 25 years of healthcare operations experience. We are confident Trent's leadership will help more people get high-quality care."

In an internal email, Mr. Rubin, himself a former hospital CEO and COO, called Mr. Green a "highly effective, experienced, and values-driven leader."

Besides Mr. Green, One Medical's C-suite features several other hospital veterans. They include Chief Quality Officer Raj Behal, MD, formerly of Palo Alto, Calif.-based Stanford Health Care and Chicago-based Rush University Medical Center; Chief Strategy Officer Jenni Vargas, formerly of Stanford Health Care; and Chief Network Officer John Singerling, the former president of Columbia, S.C.-based Palmetto Health, which merged with Greenville (S.C.) Health in 2017 to become Greenville-based Prisma Health.
