This Week Health

This Week Health is a series of IT podcasts dedicated to healthcare transformation powered by the community

In the News

Artisight Powers Care Transformation at WellSpan Health with AI-Driven Smart Hospital Platform

October 6, 2023

The virtual nursing and remote patient monitoring pilot program will allow the Pennsylvania health system to improve outcomes and reduce clinician burden

CHICAGO, Oct. 4, 2023 /PRNewswire/ -- Artisight, Inc., a smart hospital platform powered by industry-defining artificial intelligence to provide virtual care, quality improvement, and care coordination solutions, today announced its collaboration with WellSpan Health, a health system focused on value-based care, on a pilot patient monitoring and virtual nursing program. The platform is currently being utilized at WellSpan Surgery and Rehabilitation Hospital in York, Pa.

WellSpan sought a solution to improve patient safety and reimagine ways to address nurse burnout by utilizing artificial intelligence to monitor patients at high risk for falls in its rehabilitation inpatient hospital. The pilot also includes a virtual nursing model, utilizing in-room audio and video connections. Virtual staff located within a control room at the facility can interact with patients and request assistance from on-site clinicians when needed. The platform ensures the patient is always being monitored while allowing clinicians to focus on direct patient care, alleviating the staffing challenges that WellSpan, like health systems around the country, continues to experience.

Artisight's Smart Hospital Platform encompasses AI-driven sensors, computer vision, voice recognition, vital sign monitoring, indoor positioning capabilities, and actionable analytics reports. The platform's deep learning and open integration standards streamline safe patient care and reduce clinician burden. The electronic health record and hardware-agnostic platform seamlessly integrates into existing technology, ensuring cohesion for hospitals and health systems. The ability to scale the solution across WellSpan's eight hospitals in South Central Pennsylvania was also an important factor when it came to partnering with the system.

"Artisight is driving transformation by harnessing artificial intelligence that drives efficiency across the full spectrum of hospital operations," said Stephanie Lahr, MD, CHCIO, President of Artisight, Inc. "Our proprietary algorithms are constantly learning and adapting with a 99% accuracy rate. The Smart Hospital Platform delivers what hospitals and health systems need – reduced provider burden, increased patient and nurse satisfaction, and improved financial results."

"At WellSpan, the safety and well-being of our patients is top priority, and we are committed to finding a better way to serve them, our team members and our communities," said Kasey Paulus, Senior Vice President and Chief Nursing Executive at WellSpan Health. "The Artisight platform allows us to utilize innovative technology to support nurses and improve patient safety as part of our workforce transformation strategy." 

"While many AI solutions solve a single problem well, we are discovering that the Artisight platform may be able to solve many problems for us. We're exploring those possibilities with Artisight as we imagine what's next with this platform," added Dr. R. Hal Baker, Senior Vice President and Chief Digital Information Officer at WellSpan Health.

Upon successful completion of the pilot, WellSpan has plans to expand the program to other hospitals throughout its system.

Artisight will be attending the HLTH conference in Las Vegas Oct. 8-11. Please visit booth 2646 to learn about how the AI-driven Smart Hospital Platform is transforming healthcare for hospitals and health systems. For additional information, visit artisight.com.

About Artisight

Artisight redefines the possibilities of healthcare through its Smart Hospital Platform and solutions for virtual care, quality improvement, and care coordination. Anchored in deep clinical knowledge and industry-defining artificial intelligence, Artisight's state-of-the-art computer vision and robust multi-sensor network adapts in real-time to specific environments and workflows, unlocking previously inaccessible data and ensuring seamless integration into your healthcare ecosystem.

About WellSpan Health

WellSpan Health's vision is to reimagine healthcare through the delivery of comprehensive, equitable health and wellness solutions throughout our continuum of care. As an integrated delivery system focused on leading in value-based care, we encompass more than 2,000 employed providers, 220 locations, eight award-winning hospitals, home care and a behavioral health organization serving South Central Pennsylvania and northern Maryland. With a team 20,000 strong, WellSpan experts provide a range of services, from wellness and employer services solutions to advanced care for complex medical and behavioral conditions. Our clinically integrated network of 2,600 aligned physicians and advanced practice providers is dedicated to providing the highest quality and safety, inspiring our patients and communities to be their healthiest.

Media Contact:
Kim Mohr, Amendola Communications for Artisight

[email protected]

SOURCE Artisight

Read More

How Will Generative AI Change the Role of Clinicians In the Next 10 Years? - MedCity News

October 6, 2023

AI is a bit of a buzzword in the healthcare world, so it’s sometimes difficult to tell how much of an impact this technology is going to end up having on the sector. This month, Citi released a report that sought to cut through the noise.

The report focused on how AI will affect the role of clinicians. It predicted that generative AI tools will increasingly streamline many aspects of a clinician’s day in the next five to 10 years — and that this is particularly true for tools that can automate diagnoses and respond to patients’ questions.

The healthcare industry could see an emergence of increasingly effective tools for diagnosis in the coming years, according to the report. As these come onto the scene, clinicians will use them to aid their decision making process, not replace it. 

“For example, a family doctor, listening to a patient, may think it’s worth investigating A, B and C; however the AI may also remind the doctor that syndromes D and E are also possible and therefore need consideration,” the report read.

These tools will likely be equipped with generative AI capabilities, such as automatic speech recognition, which can transcribe patient-clinician interactions. The report predicted that this AI will have good accuracy — large language models are less likely to produce wrong information when they are asked to summarize a text, like a transcript of a medical conversation, than when they generate something completely new.

To date, no diagnostic generative AI tools have been launched on the market. However, several companies are developing and testing healthcare-focused large language models. For instance, Google unveiled Med-PaLM 2 in April, and the tool is currently being used at Mayo Clinic and other health systems. To begin, they are testing its ability to answer medical questions, summarize unstructured texts and organize health data.

Diagnostic tools that listen to patient interactions to suggest treatment advice will be used mainly by physicians, but other generative AI tools will hit the market to assist other healthcare professionals, including nurses, dieticians and pharmacists, the report predicted.

For example, generative AI can be used to call and check in on patients, which could potentially prevent avoidable hospital admissions and emergency department visits. These tools can gauge a patient’s progress after surgery, call a patient to hear how they are reacting to a new prescription, and conduct welfare checks on older patients.

But clinicians and other healthcare professionals aren’t the only ones who will use new health-focused generative AI tools in the next five to 10 years — consumers will too, according to the report. As the use of large language models becomes more widespread, consumers will likely gain access to chatbot-style tools that answer their medical questions the way a doctor would, it predicted.

While these new advancements may seem exciting, the report noted that it will take years for technology developers to produce tools that are both accurate and easy to use — and that timeline will likely be longer than what AI enthusiasts want.

Photo: Natali_Mis, Getty Images

Read More

Prompt Engineering as an Important Emerging Skill for Medical Professionals: Tutorial

October 6, 2023

With the emergence of large language models (LLMs), the most popular being ChatGPT, which attracted over 100 million users in only 2 months, artificial intelligence (AI), especially generative AI, has become accessible to the masses []. This is an unprecedented paradigm shift, not only because the use of AI is becoming more widespread but also because of the possible implications of LLMs in health care [].

Numerous studies have shown what medical tasks and health care processes LLMs can contribute to in order to ease the burden on medical professionals, increase efficiency, and decrease costs [].

Health care institutions have started investing in generative AI, medical companies have started integrating LLMs into their businesses, medical associations have released guidelines about the use of these models, and medical curricula have also started covering this novel technology [-]. Thus, a new, essential skill has emerged: prompt engineering.

Prompt engineering is a relatively new field of research that refers to the practice of designing, refining, and implementing prompts or instructions that guide the output of LLMs to help in various tasks. It is essentially the practice of effectively interacting with AI systems to optimize their benefits.
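
For readers who want to see what this looks like in practice, the sketch below shows one way a single prompt can be submitted to an LLM programmatically. It is a minimal sketch only, assuming the OpenAI Python client and an API key set in the environment; the model name and prompt text are illustrative, and most LLM APIs follow a similar pattern.

```python
# Minimal sketch: submitting one prompt to a chat-style LLM.
# Assumes the OpenAI Python package (pip install openai) and an API key
# available as the OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = "What are the most common risk factors for coronary artery disease?"

response = client.chat.completions.create(
    model="gpt-4",  # any available chat model would work here
    messages=[{"role": "user", "content": prompt}],
)

# The model's reply is the content of the first returned message.
print(response.choices[0].message.content)
```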

In the context of medical professionals and health care in general, this could encompass the following:

  • Decision support: medical professionals can use prompt engineering to optimize AI systems to aid in decision-making processes, such as diagnosis, treatment selection, or risk assessment.
  • Administrative assistance: prompts can be engineered to facilitate administrative tasks, such as patient scheduling, record keeping, or billing, thereby increasing efficiency.
  • Patient engagement: prompt engineering can be used to improve communication between health care providers and patients. For example, AI systems can be designed to send prompts for medication reminders, appointment scheduling, or lifestyle advice.
  • Research and development: in research scenarios, prompts can be crafted to assist in tasks such as literature reviews, data analysis, and generating hypotheses.
  • Training and education: prompts can be engineered to facilitate the education of medical professionals, including ongoing training in the latest treatments and procedures.
  • Public health: on a larger scale, prompt engineering can assist in public health initiatives by helping analyze population health data, predict disease trends, or educate the public.

Prompt engineering, therefore, has the potential to improve the efficiency, accuracy, and effectiveness of health care delivery, making it an increasingly important skill for medical professionals.

This paper summarizes the current state of research on prompt engineering and, at the same time, aims to provide practical recommendations for a wide range of health care professionals to improve their interactions with LLMs.

The use of LLMs, especially ChatGPT, comes with major limitations and risks. First, since ChatGPT is not updated in real time and its training data only include information up to November 2021, it may lack crucial, up-to-date medical research or changes in clinical guidelines, potentially impacting the quality and relevance of its responses. Furthermore, ChatGPT cannot access or process individual user data or context, which limits its ability to provide personalized medical advice and increases the risk of data misinterpretation.

There is also a crucial need for users to verify every single response from ChatGPT with a qualified health care professional, as the model's answers are generated on the basis of patterns in the data it was trained on and may not be accurate or safe.

The model's inability to empathize or deliver sensitive information may also result in a subpar patient experience. Importantly, potential breaches of patient confidentiality could violate privacy laws such as the Health Insurance Portability and Accountability Act of 1996 in the United States. Despite its potential as an assistive tool, these limitations necessitate careful consideration of its application in health care [].

While these risks are significant, the potential benefits can outweigh them; therefore, the need to become better at designing prompts has grown considerably since the launch of ChatGPT.

There have been attempts at addressing this issue. One study aimed at designing a catalogue of prompt engineering techniques, presented in pattern form, which have been applied to solve common problems when conversing with LLMs []. Another study provided a summary of the latest advances in prompt engineering for a very specific audience, researchers working in natural language processing for the medical domain, or academic writers [,]. One study introduced the potential of an AI system to generate health awareness messages through prompt engineering [].

While there is research in the field, there has been no comprehensive yet practical guide for medical professionals. This is the gap that this paper aims to fill.

As in the case of any essential skill, becoming better at prompt engineering would involve an improved understanding of the fundamental principles of the technology, gaining practical exposure to systems using the technology, and continually refining and iterating the skill based on feedback.

The following are some concrete steps that a health care professional can take to improve their skills in prompt engineering:

  • Understanding the underlying principles of how AI and machine learning models work can provide a foundation on which to build prompt engineering skills. As shown, it is possible to gain that understanding without any prior technical or coding knowledge [].
  • Familiarizing themselves with the specific LLMs they are working with, as each system has its own set of capabilities and limitations; understanding both can help craft more effective prompts.
  • Practice makes perfect; therefore, interacting with LLMs regularly and making a note of the prompts that yield the most helpful and accurate results can have benefits.

It is also important to constantly test prompts in real-world scenarios as their effectiveness is best evaluated in practical application.

Besides these general approaches, the following is a summary of specific recommendations, with practical examples, that a health care professional might want to consider to improve their skills in prompt engineering. Figure 1 summarizes these recommendations and their examples, along with ChatGPT's key terms, limitations, and the most popular plugins.

Figure 1. A cheat sheet of prompt engineering recommendations for health care professionals with examples for each: ChatGPT’s key terms and their explanations, its limitations, and its most popular plugins. A high resolution version is attached as .

Be as Specific as Possible

The more specific the prompt, the more accurate and focused the response is likely to be. The following is an example prompt:

  • Less specific: “Tell me about heart disease.”
  • More specific: “What are the most common risk factors for coronary artery disease?”

Describe the Setting and Provide the Context Around the Question

One should treat a discussion with ChatGPT as a discussion with a person one has just met: such a person may still be able to answer one's questions and address one's challenges, but only if given sufficient background.

The following is an example prompt: “I'm writing an article about tips and tricks for ChatGPT prompt engineering for people working in healthcare. Can you please list a few of those tips and tricks with some specific prompt examples?”

Experiment With Different Prompt Styles

The style of one's prompt can significantly impact the answer. One can try different formats, such as asking a direct question or requesting a list, a summary, or a step-by-step process. The following are examples:

  • Direct question: “What are the symptoms of COVID-19?”
  • Request for a list: “List all the potential symptoms of COVID-19.”
  • Request for a summary: “Summarize the key symptoms and progression of COVID-19.”
  • Process: “Provide a step-by-step process of diagnosing COVID-19.”
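
As a rough sketch of how these variations might be compared side by side, the loop below sends each prompt style from the list above and prints a preview of each answer. It assumes the OpenAI Python client; the model name is illustrative.

```python
# Sketch: compare how different prompt styles shape the answer to the same topic.
# Assumes the OpenAI Python client; the styles mirror the examples listed above.
from openai import OpenAI

client = OpenAI()

styles = {
    "Direct question": "What are the symptoms of COVID-19?",
    "List": "List all the potential symptoms of COVID-19.",
    "Summary": "Summarize the key symptoms and progression of COVID-19.",
    "Process": "Provide a step-by-step process of diagnosing COVID-19.",
}

for label, prompt in styles.items():
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content[:300])  # preview of each response
```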

Identify the Overall Goal of the Prompt First

Describe exactly what kind of output is being sought. Whether the goal is getting creative ideas for an article, a precise description of an advanced scientific topic, or a list of examples around a question, defining it up front helps ChatGPT come up with more relevant answers. The following is an example: “I'd like to get a list of 5 ideas for a presentation at a scientific event to make my research findings more easily understandable.”

Ask it to Play Roles

This can help streamline the process of obtaining the information or input one is looking for in a specific setting. For a new topic with no prior knowledge, it is prudent to first ask for only a basic description; one can also ask ChatGPT to act as a tutor and help dive into a detailed topic step-by-step. The following are a couple of examples:

  • “Act as a Data Scientist and explain Prompt Engineering to a physician.”
  • “Act as my nutritionist and give me tips about a balanced Mediterranean diet.”
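
When working through an API rather than the chat interface, a role like the ones above is typically assigned in a system message that precedes the user's question. A minimal sketch, assuming the OpenAI Python client, with an illustrative persona and question:

```python
# Sketch: role prompting via a system message that assigns a persona.
# Assumes the OpenAI Python client; the persona and question are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Act as a data scientist explaining technical concepts "
                       "to a physician with no programming background.",
        },
        {"role": "user", "content": "Explain prompt engineering and why it matters."},
    ],
)

print(response.choices[0].message.content)
```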

Iterate and Refine

Even if one's skills in prompt engineering are advanced, LLMs change so dynamically that one rarely gets the best possible response on the first prompt attempt. Constantly iterating on prompts is something users should become accustomed to. Users of LLMs are also encouraged to ask the LLM to modify its output based on feedback on its previous response.

Use the Threads

One can navigate back to a specific discussion by clicking on the specific thread in the left column on ChatGPT’s dashboard. This way, one can build upon the details and responses one has already received in a previous thread. This can save a lot of time as there is no need to describe the same situation and all the feedback ChatGPT has received on its responses.
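
Programmatically, a thread corresponds to resending the accumulated message history with each new request, which is also how iterative refinement is usually implemented: the follow-up prompt carries feedback, and the model sees its own earlier answer. A hedged sketch, assuming the OpenAI Python client and illustrative prompts:

```python
# Sketch: maintain a running conversation (a "thread") and refine the output
# by sending feedback as a follow-up turn.
# Assumes the OpenAI Python client; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()
history = []  # the full message history acts as the thread


def ask(user_text: str) -> str:
    """Append the user turn, call the model with the whole history, keep the reply."""
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


# First attempt, then iterate with feedback instead of starting from scratch.
print(ask("Draft five discharge instructions for a patient after knee replacement surgery."))
print(ask("Rewrite those instructions at a 6th-grade reading level, in under 100 words."))
```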

Ask Open-Ended Questions

Open-ended questions can provide a broader, more comprehensive understanding of the user's situation. For instance, asking “How do you feel?” rather than “Do you feel pain?” allows for a wider array of responses that can potentially provide more insight into the patient's mental, emotional, or physical state. Open-ended questions can also help to generate a larger data set for training AI models, making them more effective. Lastly, asking open-ended questions allows ChatGPT to display its potential better by leveraging its training on a diverse range of topics. This can lead to more unexpected and creative solutions or ideas that a health care professional might not have thought of. The following is an example:

  • Closed question: “Is exercise important for patients with osteoporosis?”
  • Open question: “How does regular physical activity benefit patients with osteoporosis?”

Request Examples

Asking for specific examples can help to clarify the meaning of a concept or idea, making it easier to understand. Especially with complex medical terminology or procedures, examples can provide a practical context that aids comprehension. Also, examples often help in visualizing abstract or complicated ideas. When ChatGPT provides examples, it can showcase how a certain concept or rule is applied in different scenarios. This can be beneficial in health care, where theoretical knowledge needs to be connected to real-world applications.

Temporal Awareness

This refers to the model's understanding of time-related concepts and its ability to generate contextually relevant responses based on time. Describing the time frame to which the prompt and the desired output relate therefore helps LLMs provide a more useful answer. The following is an example:

  • Without a time reference: “Describe the healing process after knee surgery.”
  • With a time reference: “What can a patient typically expect during the first six weeks of healing after knee surgery?”

Set Realistic Expectations

Knowing the limitations of AI tools such as ChatGPT is crucial, as it helps set realistic expectations about the output. For instance, ChatGPT cannot access any data or information after November 2021; it cannot provide personalized medical advice or replace a professional's judgement. The following is an example:

  • Unrealistic prompt: “What's the latest research published this month about Alzheimer's?”
  • Realistic prompt: “What were some of the major research breakthroughs in Alzheimer's treatment up until 2021?”

Use the One-Shot/Few-Shot Prompting Method

The one-shot prompting method is one in which ChatGPT can generate an answer based on a single example or piece of context provided by the user. The following is an example:

  • Generate 10 possible names for a new digital stethoscope device.
  • A name that I like is DigSteth.

With the few-shot strategy, ChatGPT can generate an answer based on a few examples or pieces of context provided by the user. The following is an example:

  • Generate 10 possible names for a new digital stethoscope device.
  • Names that I like include:
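
In code, one-shot or few-shot prompting usually means embedding the example or examples directly in the prompt text (or as earlier messages) before the actual request. A sketch, assuming the OpenAI Python client; the example names beyond DigSteth are purely illustrative additions:

```python
# Sketch: few-shot prompting by embedding examples in the prompt itself.
# Assumes the OpenAI Python client; names other than DigSteth are illustrative.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = (
    "Generate 10 possible names for a new digital stethoscope device.\n"
    "Names that I like include:\n"
    "- DigSteth\n"
    "- CardioEcho\n"   # illustrative example added to show the few-shot pattern
    "- PulseLens\n"    # illustrative example added to show the few-shot pattern
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": few_shot_prompt}],
)

print(response.choices[0].message.content)
```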

Prompting for Prompts

One of the easiest ways of improving at prompt engineering is asking ChatGPT to get involved in the process and design prompts for the user. The following is an example: “What prompt could I use right now to get a better output from you in this thread/task?”
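
This can also be chained in two calls: first ask the model to draft a better prompt for a given task, then submit that drafted prompt. A sketch, assuming the OpenAI Python client and an illustrative task description:

```python
# Sketch: ask the model to design a better prompt, then use that prompt.
# Assumes the OpenAI Python client; the task description is illustrative.
from openai import OpenAI

client = OpenAI()


def complete(text: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": text}],
    )
    return reply.choices[0].message.content


task = "a patient-friendly explanation of common statin side effects"
better_prompt = complete(
    f"What prompt could I use to get the best possible output from you for {task}? "
    "Reply with the prompt text only."
)

print(complete(better_prompt))  # run the model-designed prompt
```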

As the skill of prompt engineering has gained significant interest worldwide, especially in health care settings, it would be important to teach the practical methods described in this paper in medical curricula and postgraduate education. While the technical details and background of generative AI will probably be included in future curricula, it would be useful for medical students to learn the most practical tips for using LLMs even before that happens.

The general message for every LLM user should be that such AI tools can expand their knowledge, capabilities, and ideas rather than solve problems on their behalf. Ideally, this approach and mindset would stem from trained medical professionals, who could share it with their patients.

In summary, as more patients and medical professionals use AI-based tools, with LLMs being the most popular representatives of that group, it seems inevitable that the challenge of improving at this skill must be addressed. Furthermore, as doing so does not require any technical knowledge or prior programming expertise, prompt engineering can be considered an essential emerging skill that helps leverage the full potential of AI in medicine and health care.

I used the generative AI tool GPT-4 (OpenAI) [] during the ideation process to make sure the paper covers every possible prompt engineering suggestion of value. During that process, I tested the prompt engineering recommendations I made in the paper through imaginary scenarios.

Edited by A Mavragani; submitted 07.07.23; peer-reviewed by O Tamburis, A Zavar; comments to author 06.09.23; revised version received 14.09.23; accepted 19.09.23; published 04.10.23

©Bertalan Meskó. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 04.10.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

Read More

Beyond Hype: Getting the Most Out of Generative AI in Healthcare Today

October 6, 2023

While Covid-19 may no longer be dominating the global news cycle, healthcare providers and payers are still feeling its reverberations. More than half of US hospitals ended 2022 with a negative margin, marking the most difficult financial year since the start of the pandemic.

CEOs and CFOs remember the challenges all too well: The Omicron surge halted nonurgent procedures in the first half of the year, government support tapered off, and labor expenses ballooned amid staffing shortages. There was also the record-high inflation that continues to intensify margin pressures today. According to a recent Bain survey of health system executives, 60% cite rising costs as their greatest concern.

Payers and providers are now on the hunt for margin improvements. In our experience, the most successful companies won’t merely reduce costs, but also ramp up productivity. When done right, modest technology investments can accomplish both.

Artificial intelligence (AI) may hold part of the answer. With the costs to train a system down 1,000-fold since 2017, AI provides an arsenal of new productivity-enhancing tools at a low investment.

Many executives recognize the growing opportunity, especially with the recent rise of generative AI, which uses sophisticated large language models (LLMs) to create original text, images, and other content. It’s inspiring an explosion of ideas around use cases, from reviewing medical records for accuracy to making diagnoses and treatment recommendations.

Our survey reveals that 75% of health system executives believe generative AI has reached a turning point in its ability to reshape the industry. However, only 6% have an established generative AI strategy.

It’s time to play offense—or be forced to play defense later. But choosing from the laundry list of generative AI applications is daunting. Companies are at high risk of overinvesting in the wrong opportunities and underinvesting in the right ones, undermining future profitability, growth, and value creation. A wait-and-see approach is a tempting prospect.

However, we believe the next generation of leading healthcare companies will start today, with highly focused, low-risk use cases that boost productivity and cost efficiency. Over the next three to nine months, these companies will improve margins and learn how to implement a generative AI strategy, building up the funds and experience needed to invest in a more transformative vision.

Endless potential—and high hurdles 

The excitement around generative AI may feel akin to the hype around other recent digital and technology developments that never quite rose to their promised potential. Well-intentioned, well-informed individuals are debating how much change will truly materialize in the next few years. While developments over the past six months have been a testament to the breakneck speed of change, nobody can accurately predict what the next six months, year, or decade will look like. Will new players emerge? Will we rely on different LLMs for different use cases, or will one dominate the landscape?

Despite the uncertainty, generative AI already has the power to alleviate some of providers’ biggest woes, which include rising costs and high inflation, clinician shortages, and physician burnout. Quick relief is critical, considering that the heightened risk of a recession will only compound margin pressures, and the US could be short 40,800 to 104,900 physicians by 2030, according to the Association of American Medical Colleges.

Many health systems are eyeing imminent opportunities to reduce administrative burdens and enhance operational efficiency. They rank improving clinical documentation, structuring and analyzing patient data, and optimizing workflows as their top three priorities (see Figure 1).

Some generative AI applications are already streamlining administrative tasks and allowing thinly stretched physicians to spend more time with patients. For instance, Doximity is rolling out a ChatGPT tool that can draft preauthorization and appeal letters. HCA Healthcare partnered with Parlance, a conversational AI-based switchboard, to improve its call center experience while reducing operators’ workload. And there are new announcements seemingly every week: Consider how healthcare software company Epic Systems is incorporating ChatGPT with electronic health records (EHRs) to draft response messages to patients, or how Google Cloud is launching an AI-enabled Claims Acceleration Suite for prior authorization processing. 

These applications only scratch the surface of potential. In the future, generative AI could profoundly transform care delivery and patient outcomes. Looking ahead two to five years, executives are most interested in predictive analytics, clinical decision support, and treatment recommendations (see Figure 2).

It’s hard not to catch AI “fever.” But there are real challenges ahead. Some are already tackling the biggest questions: Organizations such as Duke Health, Stanford Medicine, Google, and Microsoft have formed the Coalition for Health AI to create guidelines for responsible AI systems. Even so, solutions to the greatest hurdles aren’t yet keeping up with the rapid technology development. Resource and cost constraints, a lack of expertise, and regulatory and legal considerations are the largest barriers to implementing generative AI, according to executives (see Figure 3).

Even when organizations can overcome these hurdles, one major challenge remains: focus and prioritization. In many boardrooms, executives are debating overwhelming lists of potential generative AI investments, only to deem them incomplete or outdated given the dizzying pace of innovation. These protracted debates are a waste of precious organizational energy—and time. 

Starting small to win big 

Setting the bar too high is setting up for failure. It’s easy to get caught up, betting big on what seems like the greatest opportunity in the moment. But 12 months later, leaders often find themselves frustrated that they haven’t seen results or feeling as if they’ve made a misplaced bet. Momentum and investments slow, further hindering progress. 

Leading companies are forming a more pragmatic strategy that considers current capabilities, regulations, and barriers to adoption. Their CEOs and CFOs work together to enforce four guiding principles: 

  • Pilot low-risk applications with a narrow focus first. Tomorrow’s leaders are making no-regret moves to deliver savings and productivity enhancements in short order—at a time when they need it most. Gaining experience with currently available technology, they are testing and learning their way to minimum viable products in low-risk, repeatable use cases. These quick wins are typically in areas where they already have the right data, can create tight guardrails, and see a strong potential return on investment. Some, like call center and chatbot support, can improve the patient experience. However, given the current challenges around regulation and compliance, the most successful early initiatives are likely to be internally focused, such as billing or scheduling. Most importantly, executives prioritize initiatives by potential savings, value, and cost.
  • Decide to buy, partner, or build. CEOs will need to think about how to invest in different use cases based on availability of third-party technology and importance of the initiative.
  • Funnel cost savings and experience into bigger bets. As the technology matures and the value becomes clear, companies that generate savings, accumulate experience, and build organizational buy-in today will be best positioned for the next wave of more sophisticated, transformative use cases. These include higher-risk clinical activities with a greater need for accuracy due to ethical and regulatory considerations, such as clinical decision support, as well as administrative activities that require third-party integration, such as prior authorization.
  • Remember generative AI isn’t a strategy unto itself. To build a true competitive advantage, top CEOs and CFOs are selective and discerning, ensuring that every generative AI initiative reinforces and enables their overarching goals.

Some health systems are already seeing powerful results from relatively small, more practical investments. For instance, recognizing that clinicians were spending an extra 130 minutes per day outside of working hours on administrative tasks, the University of Kansas Health System partnered with Abridge, a generative AI platform, to reduce documentation burden. By summarizing the most important points from provider-patient conversations, Abridge is improving the quality and consistency of documentation, getting more patients in the door, and cutting down on pervasive physician burnout.

Although it will require some upfront investment, in the long run it will be more costly to underestimate the level and speed at which generative AI will transform healthcare. The next generation of leaders will start testing, learning, and saving today, putting them on a path to eventually revolutionize their businesses.

Read More

Artisight Powers Care Transformation at WellSpan Health with AI-Driven Smart Hospital Platform

October 6, 2023

The virtual nursing and remote patient monitoring pilot program will allow the Pennsylvania health system to improve outcomes and reduce clinician burden

CHICAGO, Oct. 4, 2023 /PRNewswire/ -- Artisight, Inc., a smart hospital platform powered by industry-defining artificial intelligence to provide virtual care, quality improvement, and care coordination solutions, today announced its collaboration with WellSpan Health, a health system focused on value-based care, on a pilot patient monitoring and virtual nursing program. The platform is currently being utilized at WellSpan Surgery and Rehabilitation Hospital in York, Pa.

WellSpan sought a solution to improve patient safety and reimagine ways to address nurse burnout by utilizing artificial intelligence to monitor patients at high risk for falls in its rehabilitation inpatient hospital. The pilot also includes a virtual nursing model, utilizing in-room audio and video connections. Virtual staff located within a control room at the facility can interact with patients and request assistance from on-site clinicians when needed. The platform ensures the patient is always being monitored while allowing clinicians to focus on direct patient care, alleviating staffing challenges WellSpan, as all health systems around the country, continues to experience.

Artisight's Smart Hospital Platform encompasses AI-driven sensors, computer vision, voice recognition, vital sign monitoring, indoor positioning capabilities, and actionable analytics reports. The platform's deep learning and open integration standards streamline safe patient care and reduce clinician burden. The electronic health record and hardware-agnostic platform seamlessly integrates into existing technology, ensuring cohesion for hospitals and health systems. The ability to scale the solution across WellSpan's eight hospitals in South Central Pennsylvania was also an important factor when it came to partnering with the system.

"Artisight is driving transformation by harnessing artificial intelligence that drives efficiency across the full spectrum of hospital operations," said Stephanie Lahr, MD, CHCIO, President of Artisight, Inc. "Our proprietary algorithms are constantly learning and adapting with a 99% accuracy rate. The Smart Hospital Platform delivers what hospitals and health systems need – reduced provider burden, increased patient and nurse satisfaction, and improved financial results."

"At WellSpan, the safety and well-being of our patients is top priority, and we are committed to finding a better way to serve them, our team members and our communities," said Kasey Paulus, Senior Vice President and Chief Nursing Executive at WellSpan Health. "The Artisight platform allows us to utilize innovative technology to support nurses and improve patient safety as part of our workforce transformation strategy." 

"While many AI solutions solve a single problem well, we are discovering that the Artisight platform may be able to solve many problems for us. We're exploring those possibilities with Artisight as we imagine what's next with this platform," added Dr. R. Hal Baker, Senior Vice President and Chief Digital Information Officer at WellSpan Health.

Upon successful completion of the pilot, WellSpan has plans to expand the program to other hospitals throughout its system.

Artisight will be attending the HLTH conference in Las Vegas Oct. 8-11. Please visit booth 2646 to learn about how the AI-driven Smart Hospital Platform is transforming healthcare for hospitals and health systems. For additional information, visit artisight.com.

About Artisight

Artisight redefines the possibilities of healthcare through its Smart Hospital Platform and solutions for virtual care, quality improvement, and care coordination. Anchored in deep clinical knowledge and industry-defining artificial intelligence, Artisight's state-of-the-art computer vision and robust multi-sensor network adapts in real-time to specific environments and workflows, unlocking previously inaccessible data and ensuring seamless integration into your healthcare ecosystem.

About WellSpan Health

WellSpan Health's vision is to reimagine healthcare through the delivery of comprehensive, equitable health and wellness solutions throughout our continuum of care. As an integrated delivery system focused on leading in value-based care, we encompass more than 2,000 employed providers, 220 locations, eight award-winning hospitals, home care and a behavioral health organization serving South Central Pennsylvania and northern Maryland. With a team 20,000 strong, WellSpan experts provide a range of services, from wellness and employer services solutions to advanced care for complex medical and behavioral conditions. Our clinically integrated network of 2,600 aligned physicians and advanced practice providers is dedicated to providing the highest quality and safety, inspiring our patients and communities to be their healthiest.

Media Contact:
Kim MohrAmendolaCommunications for Artisight

[email protected]

SOURCE Artisight

Read More

How Will Generative AI Change the Role of Clinicians In the Next 10 Years? - MedCity News

October 6, 2023

AI is a bit of a buzzword in the healthcare world, so it’s sometimes difficult to tell how much of an impact this technology is going to end up having on the sector. This month, Citi released a report that sought to cut through the noise.

The report focused on how AI will affect the role of clinicians. It predicted that generative AI tools will increasingly streamline many aspects of a clinician’s day in the next five to 10 years — and that this is particularly true for tools that can automate diagnoses and respond to patients’ questions.

The healthcare industry could see an emergence of increasingly effective tools for diagnosis in the coming years, according to the report. As these come onto the scene, clinicians will use them to aid their decision making process, not replace it. 

“For example, a family doctor, listening to a patient, may think it’s worth investigating A, B and C; however the AI may also remind the doctor that syndromes D and E are also possible and therefore need consideration,” the report read.

These tools will likely be equipped with generative AI capabilities, such as automatic speech recognition, which can transcribe patient-clinician interactions. The report predicted that this AI will have good accuracy — large language models are less likely to produce wrong information when they are asked to summarize a text, like a transcript of medical conversation, than when they generate something completely new.

To date, no diagnostic generative AI tools have been launched on the market. However, several companies are developing and testing healthcare-focused large language models. For instance, Google unveiled Med-PaLM 2 in April, and the tool is currently being used at Mayo Clinic and other health systems. To begin, they are testing its ability to answer medical questions, summarize unstructured texts and organize health data.

Diagnostic tools that listen to patient interactions to suggest treatment advice will be used mainly by physicians, but other generative AI tools will hit the market to assist other healthcare professionals, including nurses, dieticians and pharmacists, the report predicted.

For example, generative AI can be used to call and check in on patients, which could potentially prevent avoidable hospital admissions and emergency department visits. These tools can gauge a patient’s progress after surgery, call a patient to hear how they are reacting to a new prescription, and conduct welfare checks on older patients.

But clinicians and other healthcare professionals aren’t the only ones who will use new health-focused generative AI tools in the next five to 10 years — consumers will too, according to the report. As the use of large language models becomes more widespread, consumers will likely gain access to chatbot-style tools that answer their medical questions the way a doctor would, it predicted.

While these new advancements may seem exciting, the report noted that it will take years for technology developers to produce tools that are both accurate and easy to use — and that timeline will likely be longer than what AI enthusiasts want.

Photo: Natali_Mis, Getty Images

Read More

Prompt Engineering as an Important Emerging Skill for Medical Professionals: Tutorial

October 6, 2023

With the emergence of large language models (LLMs), with the most popular one being ChatGPT that has attracted the attention of over a 100 million users in only 2 months, artificial intelligence (AI), especially generative AI has become accessible for the masses []. This is an unprecedented paradigm shift not only because of the use of AI becoming more widespread but also due to the possible implications of LLMs in health care [].

Numerous studies have shown what medical tasks and health care processes LLMs can contribute to in order to ease the burden on medical professionals, increase efficiency, and decrease costs [].

Health care institutions have started investing in generative AI, medical companies have started integrating LLMs into their businesses, medical associations have released guidelines about the use of these models, and medical curricula have also started covering this novel technology [-]. Thus, a new, essential skill has emerged: prompt engineering.

Prompt engineering is a relatively new field of research that refers to the practice of designing, refining, and implementing prompts or instructions that guide the output of LLMs to help in various tasks. It is essentially the practice of effectively interacting with AI systems to optimize their benefits.

In the context of medical professionals and health care in general, this could encompass the following:

  • Decision support: medical professionals can use prompt engineering to optimize AI systems to aid in decision-making processes, such as diagnosis, treatment selection, or risk assessment.
  • Administrative assistance: prompts can be engineered to facilitate administrative tasks, such as patient scheduling, record keeping, or billing, thereby increasing efficiency.
  • Patient engagement: prompt engineering can be used to improve communication between health care providers and patients. For example, AI systems can be designed to send prompts for medication reminders, appointment scheduling, or lifestyle advice.
  • Research and development: in research scenarios, prompts can be crafted to assist in tasks such as literature reviews, data analysis, and generating hypotheses.
  • Training and education: prompts can be engineered to facilitate the education of medical professionals, including ongoing training in the latest treatments and procedures.
  • Public health: on a larger scale, prompt engineering can assist in public health initiatives by helping analyze population health data, predict disease trends, or educate the public.

Prompt engineering, therefore, has the potential to improve the efficiency, accuracy, and effectiveness of health care delivery, making it an increasingly important skill for medical professionals.

This paper summarizes the current state of research on prompt engineering and, at the same time, aims at providing practical recommendations for the wide range of health care professionals to improve their interactions with LLMs.

The use of LLMs, especially ChatGPT, comes with major limitations and risks. First, since ChatGPT is not updated in real time and its training data only include information up to November 2021, it may lack crucial, up-to-date medical research or changes in clinical guidelines, potentially impacting the quality and relevance of its responses. Furthermore, ChatGPT cannot access or process individual user data or context, which limits its ability to provide personalized medical advice and increases the risk of data misinterpretation.

There is also a crucial need for users to verify every single response from ChatGPT with a qualified health care professional, as the model's answers are generated on the basis of patterns in the data it was trained on and may not be accurate or safe.

The model's inability to empathize or deliver sensitive information may also result in a subpar patient experience. Importantly, potential breaches of patient confidentiality could violate privacy laws such as the Health Insurance Portability and Accountability Act of 1996 in the United States. Despite its potential as an assistive tool, these limitations necessitate careful consideration of its application in health care [].

While these risks are significant, the potential outcomes can outweigh them; therefore, the need for improving at designing better prompts has grown extensively since the launch of ChatGPT.

There have been attempts at addressing this issue. One study aimed at designing a catalogue of prompt engineering techniques, presented in pattern form, which have been applied to solve common problems when conversing with LLMs []. Another study provided a summary of the latest advances in prompt engineering for a very specific audience, researchers working in natural language processing for the medical domain, or academic writers [,]. One study introduced the potential of an AI system to generate health awareness messages through prompt engineering [].

While there is research in the field, it is clear that there have been no comprehensive, yet practical guides for medical professionals. This is the gap that this paper aims to fill.

As in the case of any essential skill, becoming better at prompt engineering would involve an improved understanding of the fundamental principles of the technology, gaining practical exposure to systems using the technology, and continually refining and iterating the skill based on feedback.

The following are some concrete steps that a health care professional can take to improve their skills in prompt engineering:

  • Understanding the underlying principles of how AI and machine learning models work can provide a foundation on which to build prompt engineering skills. As shown, it is possible to gain that understanding without any prior technical or coding knowledge [].
  • Familiarizing themselves with the LLMs they are working with as each system has its own set of capabilities and limitations. Understanding both can help craft more effective prompts.
  • Practice makes perfect; therefore, attempting to interact with LLMs regularly and make a note of the prompts that yield the most helpful and accurate results can have benefits.

It is also important to constantly test prompts in real-world scenarios as their effectiveness is best evaluated in practical application.

Besides these general approaches, here is a summary of specific recommendations with practical examples that a health care professional might want to consider to improve their skills in prompt engineering. summarizes these recommendations, their examples with ChatGPT’s key terms, limitations, and the most popular plugins.

Figure 1. A cheat sheet of prompt engineering recommendations for health care professionals with examples for each: ChatGPT’s key terms and their explanations, its limitations, and its most popular plugins. A high resolution version is attached as .

Be as Specific as Possible

The more specific the prompt, the more accurate and focused the response is likely to be. The following is an example prompt:

  • Less specific: “Tell me about heart disease.”
  • More specific: “What are the most common risk factors for coronary artery disease?”

Describe the Setting and Provide the Context Around the Question

One must consider the discussion one is having with ChatGPT as a discussion one would have with a person they just met, who might still be able to answer their questions and address one’s challenges.

The following is an example prompt: “I'm writing an article about tips and tricks for ChatGPT prompt engineering for people working in healthcare. Can you please list a few of those tips and tricks with some specific prompt examples?”

Experiment With Different Prompt Styles

The style of one’s prompt can significantly impact the answer. One can try different formats such as asking ChatGPT to generate a list about their brief or to provide a summary of the topic. The following is an example:

  • Direct question: “What are the symptoms of COVID-19?”
  • Request for a list: “List all the potential symptoms of COVID-19.”
  • Request for a summary: “Summarize the key symptoms and progression of COVID-19.”
  • Process: “Provide a step-by-step process of diagnosing COVID-19.”

Identify the Overall Goal of the Prompt First

Describe exactly what kind of output is being sought. Whether it would be getting creative ideas for an article, asking for a specific description of an advanced scientific topic, or providing a list of examples around questions, defining it helps ChatGPT come up with more relevant answers. The following is an example: “I'd like to get a list of 5 ideas for a presentation at a scientific event to make my research findings more easily understandable.”

Ask it to Play Roles

This can help streamline the desired process of obtaining the information or input one was looking for in a specific setting. With new topics without prior knowledge, it is prudent to obtain only a basic description; in addition, one can also ask ChatGPT to act as a tutor and help dive into a detailed topic step-by-step. The following are a couple of examples:

  • “Act as a Data Scientist and explain Prompt Engineering to a physician.”
  • “Act as my nutritionist and give me tips about a balanced Mediterranean diet.”

Iterate and Refine

Even if one’s skills in prompt engineering are advanced, LLMs change so dynamically that one rarely get the best response on was looking for after the first prompt attempt. Constantly iterating prompts is something with which we should get accustomed. Users of LLMs are also encouraged to ask the LLM to modify the output based on feedback on its previous response.

Use the Threads

One can navigate back to a specific discussion by clicking on the specific thread in the left column on ChatGPT’s dashboard. This way, one can build upon the details and responses one has already received in a previous thread. This can save a lot of time as there is no need to describe the same situation and all the feedback ChatGPT has received on its responses.

Ask Open-Ended Questions

Open-ended questions can provide a broader, more comprehensive understanding of the user's situation. For instance, asking “How do you feel?” rather than “Do you feel pain?” allows for a wider array of responses that can potentially provide more insight into the patient's mental, emotional, or physical state. Open-ended questions can also help to generate a larger data set for training AI models, making them more effective. Lastly, asking open-ended questions allows ChatGPT to display its potential better by leveraging its training on a diverse range of topics. This can lead to more unexpected and creative solutions or ideas that a health care professional might not have thought of. The following is an example:

  • Closed question: “Is exercise important for patients with osteoporosis?”
  • Open question: “How does regular physical activity benefit patients with osteoporosis?”

Request Examples

Asking for specific examples can help to clarify the meaning of a concept or idea, making it easier to understand. Especially with complex medical terminology or procedures, examples can provide a practical context that aids comprehension. Also, examples often help in visualizing abstract or complicated ideas. When ChatGPT provides examples, it can showcase how a certain concept or rule is applied in different scenarios. This can be beneficial in health care, where theoretical knowledge needs to be connected to real-world applications.

Temporal Awareness

This refers to the model's understanding of time-related concepts and its ability to generate contextually relevant responses based on time. Therefore, describing what time line the prompt and the desired output would be related to helps LLMs provide a more useful answer. The following is an example:

  • Without a time reference: “Describe the healing process after knee surgery.”
  • With a time reference: “What can a patient typically expect during the first six weeks of healing after knee surgery?”

Set Realistic Expectations

Knowing the limitations of AI tools such as ChatGPT is crucial, as it helps set realistic expectations about the output. For instance, ChatGPT cannot access any data or information after November, 2021; it cannot provide personalized medical advice or replace a professional's judgement. The following is an example:

  • Unrealistic prompt: “What's the latest research published this month about Alzheimer's?”
  • Realistic prompt: “What were some of the major research breakthroughs in Alzheimer's treatment up until 2021?”

Use the One-Shot/Few-Shot Prompting Method

The one-shot prompting method is one in which ChatGPT can generate an answer based on a single example or piece of context provided by the user. The following is an example:

  • Generate 10 possible names for a new digital stethoscope device.
  • A name that I like is DigSteth.

With the few-shot strategy, ChatGPT can generate an answer based on a few examples or pieces of context provided by the user. The following is an example:

  • Generate 10 possible names for a new digital stethoscope device.
  • Names that I like include:

Prompting for Prompts

One of the easiest ways of improving at prompt engineering is asking ChatGPT to get involved in the process and design prompts for the user. The following is an example: “What prompt could I use right now to get a better output from you in this thread/task?”

As the skill of prompt engineering has gained significant interest worldwide, especially in the health care setting, it would be important to include teaching the practical methods this paper described in the medical curriculum and postgraduate education. While the technical details and background of generative AI will probably be included in future curricula, it would be useful for medical students to learn the most practical tips of using LLMs even before that happens.

The general message for every LLM user should be that they could use such AI tools to expand their knowledge, capabilities, and ideas instead of solving things on their behalf. Ideally, this approach and mindset would stem from trained medical professionals who could share it with their patients.

In summary, as more patients and medical professionals use AI-based tools—LLMs being the most popular representatives of that group—addressing the challenge of improving at this skill seems inevitable. Furthermore, as doing so does not require any technical knowledge or prior programming expertise, prompt engineering alone can be considered an essential emerging skill that helps leverage the full potential of AI in medicine and health care.

I used the generative AI tool GPT-4 (OpenAI) during the ideation process to make sure the paper covers every prompt engineering suggestion of value. During that process, I tested the prompt engineering recommendations made in the paper through imaginary scenarios.

Edited by A Mavragani; submitted 07.07.23; peer-reviewed by O Tamburis, A Zavar; comments to author 06.09.23; revised version received 14.09.23; accepted 19.09.23; published 04.10.23

©Bertalan Meskó. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 04.10.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


Beyond Hype: Getting the Most Out of Generative AI in Healthcare Today

October 6, 2023

While Covid-19 may no longer be dominating the global news cycle, healthcare providers and payers are still feeling its reverberations. More than half of US hospitals ended 2022 with a negative margin, marking the most difficult financial year since the start of the pandemic.

CEOs and CFOs remember the challenges all too well: The Omicron surge halted nonurgent procedures in the first half of the year, government support tapered off, and labor expenses ballooned amid staffing shortages. Record-high inflation compounded these pressures and continues to squeeze margins today. According to a recent Bain survey of health system executives, 60% cite rising costs as their greatest concern.

Payers and providers are now on the hunt for margin improvements. In our experience, the most successful companies won’t merely reduce costs, but also ramp up productivity. When done right, modest technology investments can accomplish both.

Artificial intelligence (AI) may hold part of the answer. With the cost to train a system down 1,000-fold since 2017, AI provides an arsenal of new productivity-enhancing tools for a relatively low investment.

Many executives recognize the growing opportunity, especially with the recent rise of generative AI, which uses sophisticated large language models (LLMs) to create original text, images, and other content. It’s inspiring an explosion of ideas around use cases, from reviewing medical records for accuracy to making diagnoses and treatment recommendations.

Our survey reveals that 75% of health system executives believe generative AI has reached a turning point in its ability to reshape the industry. However, only 6% have an established generative AI strategy.

It’s time to play offense—or be forced to play defense later. But choosing from the laundry list of generative AI applications is daunting. Companies are at high risk of overinvesting in the wrong opportunities and underinvesting in the right ones, undermining future profitability, growth, and value creation. A wait-and-see approach is a tempting prospect.

However, we believe the next generation of leading healthcare companies will start today, with highly focused, low-risk use cases that boost productivity and cost efficiency. Over the next three to nine months, these companies will improve margins and learn how to implement a generative AI strategy, building up the funds and experience needed to invest in a more transformative vision.

Endless potential—and high hurdles 

The excitement around generative AI may feel akin to the hype around other recent digital and technology developments that never quite rose to their promised potential. Well-intentioned, well-informed individuals are debating how much change will truly materialize in the next few years. While developments over the past six months have been a testament to the breakneck speed of change, nobody can accurately predict what the next six months, year, or decade will look like. Will new players emerge? Will we rely on different LLMs for different use cases, or will one dominate the landscape?

Despite the uncertainty, generative AI already has the power to alleviate some of providers’ biggest woes, which include rising costs and high inflation, clinician shortages, and physician burnout. Quick relief is critical, considering that the heightened risk of a recession will only compound margin pressures, and the US could be short 40,800 to 104,900 physicians by 2030, according to the Association of American Medical Colleges.

Many health systems are eyeing imminent opportunities to reduce administrative burdens and enhance operational efficiency. They rank improving clinical documentation, structuring and analyzing patient data, and optimizing workflows as their top three priorities (see Figure 1).

Some generative AI applications are already streamlining administrative tasks and allowing thinly stretched physicians to spend more time with patients. For instance, Doximity is rolling out a ChatGPT tool that can draft preauthorization and appeal letters. HCA Healthcare partnered with Parlance, a conversational AI-based switchboard, to improve its call center experience while reducing operators’ workload. And there are new announcements seemingly every week: Consider how healthcare software company Epic Systems is incorporating ChatGPT with electronic health records (EHRs) to draft response messages to patients, or how Google Cloud is launching an AI-enabled Claims Acceleration Suite for prior authorization processing. 

These applications only scratch the surface of potential. In the future, generative AI could profoundly transform care delivery and patient outcomes. Looking ahead two to five years, executives are most interested in predictive analytics, clinical decision support, and treatment recommendations (see Figure 2).

It’s hard not to catch AI “fever.” But there are real challenges ahead. Some are already tackling the biggest questions: Organizations such as Duke Health, Stanford Medicine, Google, and Microsoft have formed the Coalition for Health AI to create guidelines for responsible AI systems. Even so, solutions to the greatest hurdles aren’t yet keeping up with the rapid technology development. Resource and cost constraints, a lack of expertise, and regulatory and legal considerations are the largest barriers to implementing generative AI, according to executives (see Figure 3).

Even when organizations can overcome these hurdles, one major challenge remains: focus and prioritization. In many boardrooms, executives are debating overwhelming lists of potential generative AI investments, only to deem them incomplete or outdated given the dizzying pace of innovation. These protracted debates are a waste of precious organizational energy—and time. 

Starting small to win big 

Setting the bar too high is setting up for failure. It’s easy to get caught up, betting big on what seems like the greatest opportunity in the moment. But 12 months later, leaders often find themselves frustrated that they haven’t seen results or feeling as if they’ve made a misplaced bet. Momentum and investments slow, further hindering progress. 

Leading companies are forming a more pragmatic strategy that considers current capabilities, regulations, and barriers to adoption. Their CEOs and CFOs work together to enforce four guiding principles: 

  • Pilot low-risk applications with a narrow focus first. Tomorrow’s leaders are making no-regret moves to deliver savings and productivity enhancements in short order—at a time when they need it most. Gaining experience with currently available technology, they are testing and learning their way to minimum viable products in low-risk, repeatable use cases. These quick wins are typically in areas where they already have the right data, can create tight guardrails, and see a strong potential return on investment. Some, like call center and chatbot support, can improve the patient experience. However, given the current challenges around regulation and compliance, the most successful early initiatives are likely to be internally focused, such as billing or scheduling. Most importantly, executives prioritize initiatives by potential savings, value, and cost.
  • Decide to buy, partner, or build. CEOs will need to think about how to invest in different use cases based on the availability of third-party technology and the importance of each initiative.
  • Funnel cost savings and experience into bigger bets. As the technology matures and the value becomes clear, companies that generate savings, accumulate experience, and build organizational buy-in today will be best positioned for the next wave of more sophisticated, transformative use cases. These include higher-risk clinical activities with a greater need for accuracy due to ethical and regulatory considerations, such as clinical decision support, as well as administrative activities that require third-party integration, such as prior authorization.
  • Remember generative AI isn’t a strategy unto itself. To build a true competitive advantage, top CEOs and CFOs are selective and discerning, ensuring that every generative AI initiative reinforces and enables their overarching goals.

Some health systems are already seeing powerful results from relatively small, more practical investments. For instance, recognizing that clinicians were spending an extra 130 minutes per day outside of working hours on administrative tasks, the University of Kansas Health System partnered with Abridge, a generative AI platform, to reduce documentation burden. By summarizing the most important points from provider-patient conversations, Abridge is improving the quality and consistency of documentation, getting more patients in the door, and cutting down on pervasive physician burnout.

Although it will require some upfront investment, in the long run it will be more costly to underestimate the level and speed at which generative AI will transform healthcare. The next generation of leaders will start testing, learning, and saving today, putting them on a path to eventually revolutionize their businesses.
