This Week Health

This Week Health is a series of IT podcasts dedicated to healthcare transformation powered by the community


Latest Episodes
  • All
  • Leadership (668)
  • Emerging Technology (494)
  • Security (307)
  • Interoperability (296)
  • Patient Experience (295)
  • Financial (286)
  • Analytics (182)
  • Telehealth (174)
  • Digital (164)
  • Clinician Burnout (158)
  • Legal & Regulatory (140)
  • AI (103)
  • Cloud (92)
In the News

Intel pitches the 'AI PC' at software developer event

September 24, 2023

SAN JOSE, California, Sept 19 (Reuters) - A new Intel (INTC.O) chip due in December will be able to run a generative artificial intelligence chatbot on a laptop rather than having to tap into cloud data centers for computing power, the company said on Tuesday.

The capability, which Intel showed off during a software developer conference held in Silicon Valley, could let businesses and consumers test ChatGPT-style technologies without sending sensitive data off their own computers. It is made possible by new AI data-crunching features built into Intel's forthcoming "Meteor Lake" laptop chip and by new software tools the company is releasing.

At the conference, Intel demonstrated laptops that could generate a song in the style of Taylor Swift and answer questions in a conversational style, all while disconnected from the Internet. Chief Executive Officer Pat Gelsinger said Microsoft's (MSFT.O) "Copilot" AI assistant will be able to run on Intel-based PCs.

"We see the AI PC as a sea change moment in tech innovation," Gelsinger said.

Intel shares were down 1.5% after the company's presentation.

Intel executives also said the company is on track to deliver a successor chip called "Arrow Lake" next year, and that Intel's manufacturing technology will rival the best from Taiwan Semiconductor Manufacturing Co (2330.TW), as it has promised. Intel was once the best chip manufacturer, lost the lead, and now says it is on track to return to the front.

Intel has struggled to gain ground against Nvidia (NVDA.O) in the market for the powerful chips used in data centers to "train" AI systems such as ChatGPT. Intel said on Tuesday that it was building a new supercomputer that would be used by Stability AI, a startup that makes image-generating software. China's Alibaba Group Holdings (9988.HK) is using its newest central processors to serve up chatbot technology, Intel said.

But the market for chips that will handle AI work outside data centers is far less settled, and it is there that Intel aimed to gain ground on Tuesday.

Through a new version of software called OpenVINO, Intel said that developers will be able to run a version of a large language model, the class of technology behind products like ChatGPT, made by Meta Platforms (META.O) on laptops. That will enable faster responses from chatbots and will mean that data does not leave the device.

"You can get a better performance, a lower cost and more private AI," Sachin Katti, senior vice president and general manager of Intel's network and edge group, told Reuters in an interview.

Dan Hutcheson, an analyst with TechInsights, told Reuters that business users who are wary of handing sensitive corporate data over to third-party AI firms might be interested in Intel's approach.

If Intel Chief Gelsinger can make AI "so that anyone can use it, that creates a much bigger market for chips – the chips that he makes," Hutcheson said.

Reporting by Stephen Nellis in San Francisco and Max Cherney in San Jose, California; Editing by Peter Henderson, Lincoln Feast and Josie Kao

Our Standards: The Thomson Reuters Trust Principles.


Bill Gurley discusses 'Regulatory Capture' and Epic's dominance in healthcare

September 24, 2023

If you've ever wondered why Epic is so dominant in the healthcare industry, Bill Gurley does an AMAZING job of documenting it in this 36 min video on "Regulatory Capture"... The topic sounds dense, but trust me, he gives you everyday examples that affect your life... starting with why we don't have free telecom in cities, to the price and availability of COVID tests, and ending with a sharp warning about upcoming attempts to regulate BigTech and AI. [NOTE: this is not me endorsing zero regulation, but an undeniable warning about letting incumbents like Epic in the room when you draft that regulation. We should not take BigTech inviting DC to regulate them as a genuine motivation.] TL;DR: (Epic case study starts at min 11) - https://lnkd.in/g8Wcq_Gd
"[Epic CEO Judy Faulkner] was the ONLY corporate representative on Obama's Health IT council in 2009"
"They came up with a brilliant idea, and I have to assume she helped. Doctors would receive $44K each if they bought software. $38B, you can look it up."
"The second phase, they got paid $17K more to prove they were using it. It was called 'Meaningful Use'."
"The ONC decided the threshold that software would need to comply with this mandate, and I'm assuming they took Epic's feature set and plowed it into this spreadsheet."
"They had 3 record fines -- $155M, $57M, $145M -- against the lesser competitors of Epic."
"If you've studied the innovators' dilemma, the way startups disrupt is they come out with lower feature products, but one that's really meaningful to the user and then they grow... they put a brick wall there so they couldn't come up."
"You may ask, am I unhappy with Judith? I'm disgusted with it. BUT, if I were a judge in the Olympic Regulator Competition, I'd give her a 10!" Thanks to Kelly Gill for the link!



The ChatGPT (Generative Artificial Intelligence) Revolution Has Made Artificial Intelligence...

September 24, 2023

In November 2022, OpenAI publicly launched its large language model (LLM), ChatGPT []. It reached the milestone of having over 100 million users in only 2 months. In comparison, reaching the same milestone took TikTok and Instagram 9 months and more than 2 years, respectively. In March 2023, OpenAI had already released a new iteration called GPT-4 that was claimed to be 100 times better than the previous version.

LLMs have seen rapid advancements and practical applications in various industries from marketing and information technology to publishing and even health care [].

These models can generate human-like text, assist in diagnosing conditions based on medical records, and even suggest treatment options or plans. Given the potential implications for patient outcomes and public health, as well as the impact on the jobs of medical professionals, LLMs have attracted much attention.

LLMs have been put to use in a myriad of health care–related tasks and processes. Examples include education and research [], analyzing electronic health records [], oncology [], cardiology [], writing discharge summaries [], and medical journalism []. The number of use cases has skyrocketed, as have the challenges and issues around the use of this particular technology. The ethics of its use have been analyzed in depth []: LLMs have been hallucinating references; their role as a potential coauthor of research papers has been debated; it has been shown that such models can pose a cybersecurity threat []; and they can be biased [].

Nevertheless, in this paper, I argue that the attention to, public access to, and debate about LLMs have initiated a wave of products and services using generative artificial intelligence (AI), a technology that had previously found it hard to attract physicians.

There are events of a certain magnitude that can lead to significant changes in a field, especially regarding the attitude of people experiencing them. The COVID-19 pandemic has led to an unprecedented level of patients and physicians adopting technologies such as remote care and at-home laboratory testing []. Advances in cloud computing led to new discoveries and developments in AI in the 2010s. The release of LLMs can mark a similarly impactful event that leads to physicians gaining first-hand experience with AI-based tools and technologies.

No matter how many studies, clinical trials, regulatory approvals, and product announcements become available about AI’s role in medicine and health care, adoption rates have lagged behind [,]. One of the reasons for this has been limited access to AI technologies. There have been hundreds of AI applications with regulatory approval; however, they have been available only in a limited number of health care institutions, and only to a select group of physicians [].

This paper describes what AI tools have become available since the beginning of the ChatGPT revolution and contemplates how they might change physicians’ perceptions of this breakthrough technology.

In principle, tasks in a physician’s job that are repetitive and data-based lend themselves to automation. Most of the AI-based medical technologies that have received regulatory approval are built on this idea and address one specific task, mostly in radiology, cardiology, and oncology.

Generative AI has brought this to a new level. Generative AI refers to a category of AI models that are capable of generating new data samples by learning the underlying patterns and structures of a given data set. These models, including generative adversarial networks (GANs) and generative pretrained transformer (GPT)-like architectures, have demonstrated prowess in a wide range of applications, including image synthesis, natural language processing, and drug discovery. As generative AI continues to advance, it holds the potential to revolutionize industries by enabling the creation of novel and customizable content, accelerating research, and expanding the scope of human creativity.

Since the end of 2022, basic AI tools and services using generative AI have become widely accessible. This might mark the beginning of AI becoming used in practice for the average medical professional.

Even health care companies have started to integrate LLMs into their services. Examples include Microsoft’s Nuance bringing ChatGPT into its medical note-taking tool; Nabla (Nabla SRL) using ChatGPT to have conversations with patients; and Epic introducing LLMs to their electronic health record software [-].

Building a Website With AI

There are AI-based website builders that can create a brand-new website in a few minutes. Users can choose between design themes and a number of functionalities. Some of the companies even offer more advanced frameworks, such as web shops and responsive sites that automatically adjust to the device—mobile or computer—the visitor uses [].

The output currently is not as versatile or customized as a site built by a team of professionals, and these services have limitations with search engine optimization, but a physician building a website for their practice in minutes is still an impressive feat.

Creating Videos With AI

LLMs are capable of coming up with ideas for video content and even writing the scripts (eg, with prompts such as “write a 5-minute script about the importance of the flu shot for the elderly”). Revoicer (Revoicer Ltd), a text-to-speech algorithm, can transform the script into audio content, and tools like Yepic (Yepic AI Ltd) and Synthesia (Synthesia Ltd) can generate a video featuring a synthetic human. While synthetic voices and presenters are not yet indistinguishable from real people, these tools still allow users to create informative video content without a content production team at hand, for a fraction of the cost of hiring all these professionals [].

Designing Presentations With AI

Creating presentations for meetings and conferences can be cumbersome even if clinicians are experienced with presentation tools like PowerPoint (Microsoft Corp) or Prezi (Prezi Inc). There are AI presentation tools that design stylish presentations from simple inputs. Physicians can choose a template and input basic data, information, and facts, and AI will do the rest by formatting the slides and offering visuals, animations, or voice-overs. The results can be somewhat generic—but in most cases are still more attractive than what most physicians could make by themselves [].

AI as a Medical Scribe

A growing number of algorithms claim to record and transcribe meetings or conversations automatically, analyze the content, and provide an easy-to-understand, searchable report. Some of these solutions are specifically designed for medical use, while others target a more general audience. Companies targeting health care users offer complex services, such as AI transcribing a consultation that is then reviewed by a human professional for accuracy. Clear limitations are that general transcription tools might pose a risk when used in medical settings, while those specifically designed for medical uses tend to be on the expensive side. However, using such tools might relieve burdens on medical professionals and reduce excessive administrative tasks, which are a major cause of physician burnout [].

Creating Social Media Posts, Frequently Asked Questions, and Other Informational Content for Patients With AI

Generative AI, especially LLMs and AI image generators such as DALL-E (OpenAI) and MidJourney (MidJourney Inc), can be used to provide written and image-based content for patients. ChatGPT can assist physicians in crafting social media posts, general informational materials, and brochures aimed at educating patients. While physicians need to verify every detail these generated posts contain, using AI can still save considerable time in the process.

Designing Logos and Images

AI image generators like DALL-E and MidJourney can generate images based on text prompts. Such images can be used as logos for medical practices, visualizations for social media posts, and to help deliver messages on websites.

Medical Research and Literature Analysis

Tools like Semantic Scholar (Allen Institute for AI) use AI to help physicians stay up-to-date with the latest research findings by analyzing and summarizing relevant articles, making it easier to keep abreast of medical advancements.

The difference between these and the previously regulated AI-based medical technologies is that the tools described above are widely accessible, do not require any prior technical training, and are not dedicated to niche areas such as specific medical specialties.

Using AI was previously possible in research groups dedicated to this technology, in health care settings where certain tools were available, and for selected individuals who were fortunate enough to receive access to them. The widespread release of LLMs and other AI tools using generative AI has increased accessibility for medical professionals to test this breakthrough technology. It has also come with additional benefits.

Democratization of AI Technology

Open-source projects and accessible platforms have allowed developers and researchers to contribute to AI advancements, fostering a community-driven approach that has enhanced the growth and reach of AI tools in the medical field. AutoGPT represents one of the most popular directions [].

Cost-Effectiveness

The affordability of AI solutions has enabled medical professionals to adopt these tools in their practice without incurring prohibitive costs. This has been essential in bridging the gap between cutting-edge research and its practical application in health care settings.

Improved Data Processing and Analysis

LLMs excel at handling vast amounts of unstructured data, such as clinical notes and medical literature, which has facilitated the extraction of valuable insights and improved decision-making processes for medical professionals.

Enhanced Communication and Collaboration

AI-powered tools can streamline communication between health care providers, patients, and interdisciplinary teams, promoting a more collaborative environment and fostering better patient outcomes.

Continuous Learning and Adaptation

LLMs have the capacity to learn from new data and adapt to the evolving needs of medical professionals, thus offering more effective and targeted solutions in real time. Physicians can ask ChatGPT to provide summaries, explanations, and key points from recent medical literature and can present hypothetical patient cases or real anonymized cases and seek guidance on differential diagnoses, treatment options, and potential risks, further enhancing their clinical decision-making abilities. Physicians can even request ChatGPT to generate quiz questions or self-assessment exercises on specific medical topics to test their knowledge and identify areas for improvement.
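To make this interaction pattern concrete, here is a minimal sketch of how such a request might be assembled programmatically before being sent to an LLM. The function name, prompt wording, and example topic are hypothetical illustrations, not part of any specific product or API.

```python
# Illustrative only: a small helper that assembles the kind of prompt a
# physician might send to an LLM to generate self-assessment quiz questions.
def build_quiz_prompt(topic: str, num_questions: int = 5,
                      audience: str = "practicing physicians") -> str:
    """Return a prompt asking an LLM for quiz questions on a medical topic."""
    if num_questions < 1:
        raise ValueError("num_questions must be at least 1")
    return (
        f"Generate {num_questions} multiple-choice quiz questions on "
        f"{topic} for {audience}. For each question, list four answer "
        "options, mark the correct one, and add a one-sentence rationale "
        "citing the relevant guideline where possible."
    )

prompt = build_quiz_prompt("community-acquired pneumonia", num_questions=3)
print(prompt)
```

The same template-based approach extends to requests for literature summaries or hypothetical case discussions; the value lies in making the structure of the request explicit and repeatable.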

While it is simple to see the benefits of a generation of medical professionals obtaining real-life experience with a technology like AI, the journey ahead raises more questions than we can answer now. Here, I summarize those questions with the highest importance.

There are strict regulations on the way AI technologies designed for a medical or health care–related purpose can access patient and medical databases, but there are none for generative AI tools made for a general audience without any medical purpose. If physicians use such tools, they are left alone with no guidance about how to deal with patient privacy or legal responsibilities.

Moreover, as LLMs tend to fabricate not only responses but also the sources they claim those responses are based on, physicians need to verify every detail that comes from using generative AI.
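As a concrete illustration of that verification step, a short script can pull the DOIs out of model-generated text so that each one can be checked by hand (eg, by resolving it at doi.org). This is a minimal sketch; the regular expression, function name, and sample text are illustrative assumptions, not part of any clinical tool.

```python
import re

# Matches DOI strings such as 10.1001/jama.2020.1585 (illustrative pattern).
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")

def extract_dois(generated_text: str) -> list[str]:
    """Return the DOIs found in LLM-generated text, for manual verification."""
    # Strip trailing punctuation that often clings to inline citations.
    return [m.rstrip(".,;)") for m in DOI_PATTERN.findall(generated_text)]

sample = ("See Smith et al., J Clin Med 2021, doi:10.1000/xyz123. "
          "Also relevant: https://doi.org/10.1001/jama.2020.1585.")
print(extract_dois(sample))  # ['10.1000/xyz123', '10.1001/jama.2020.1585']
```

Extracting the identifiers is, of course, only the first step: each DOI must still be resolved and the cited paper read to confirm it actually supports the claim attached to it.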

Currently, no clinical guideline is available on their use and how to implement them in medical practice.

There is an ongoing debate about who owns copyright on AI outputs, such as text or images. If a physician uses an AI image generator to create a new logo for their website, in theory, anyone can use the same logo unless the user pays for the exclusive rights. The same issue stands for text made by generative AI for physicians to use in their research papers, as well as the presentations and videos created with these tools. There have been attempts from legal experts to provide solutions for this challenge, but there is no consensus yet [].

As AI has finally become a tool for the masses, and it has become increasingly hard for physicians to ignore its use whether it is for medical, research or personal purposes, properly preparing medical professionals for its advantages and risks has become a timely challenge of crucial importance. Medical curricula, guidelines of medical associations, and the general discussion about AI’s future role in our profession must adjust accordingly.

Medical curricula could involve teaching prompt engineering to provide knowledge, skills, and a mindset for medical professionals to become proficient users of generative AI. Medical associations should provide a path forward for using such AI tools in the practice of medicine while keeping the values of evidence-based medicine and the importance of patient design in mind. And even more importantly, policymakers today face the challenge of not only designing policies and regulations for the generative AI tools medical professionals can use today, but also for the next versions and iterations that might involve analyzing other types of data, from image and video to sound and documents.

This is certainly an unprecedented challenge that requires relatively quick turnaround from all decision-makers in health care.

Edited by G Eysenbach, T Leung; submitted 21.04.23; peer-reviewed by N Mungoli, G Sebastian; comments to author 31.05.23; revised version received 02.06.23; accepted 07.06.23; published 22.06.23

©Bertalan Mesko. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.06.2023.


Threat actor attacked MGM Resorts

September 24, 2023

Jason Rebholz

I'll help you learn cyber security | The TeachMeCyber Guy | CISO, Advisor, Speaker, Mentor

The threat actor behind the MGM Resorts attack had something to say yesterday...and they weren't short on their words. In a 1,100 word statement, the threat actor gave their…unique…perspective on the situation. I parsed through it all to pull out a possible timeline of activity.

🕜 Friday 9/8 - Saturday 9/9 🕜
- The threat actor gained initial access to MGM Resorts by socially engineering the IT help desk into resetting a user account.
- The threat actor gained privileges to access domain controllers and dumped credentials, which they then cracked. They also claim to have intercepted passwords syncing between Okta and, presumably, Active Directory.
- The threat actor also obtained Okta super user access and Azure Global Admin access. This would have given near-complete control of the environment.
- The threat actor stole data at some point, though the extent of that data theft is unclear.
- MGM Resorts appears to have taken initial containment steps, though they were not effective.

🕜 Sunday 9/10 🕜
- MGM Resorts implemented additional containment measures and attempted to kick the attacker out of the environment. This unfortunately was unsuccessful.

🕜 Monday 9/11 🕜
- The threat actor purportedly encrypted over 100 ESXi hypervisors (these run virtual machines, so the number of impacted servers is much higher).
- The threat actor provided a link to download (presumably) a sample of stolen data.

🕜 Tuesday 9/12 - Wednesday 9/13 🕜
- MGM continued their incident response and recovery efforts with the help of outside experts.
- The threat actor monitored user(s) lurking in their negotiation portal and were presumably upset that no one wanted to chit chat.

🕜 Thursday 9/14 🕜
- The threat actor posted a 1,101 word statement to “set the record straight” on the attack.
- The threat actor claims to still have access to the environment and is threatening to carry out additional attacks if MGM does not make contact with them.

/end timeline

Continued #hugops to the MGM team. Navigating an active attacker situation is never a straightforward affair, regardless of what people may say. And given the sophistication of this threat actor compared to your typical ransomware group, well, their job is that much harder. For the rest of us, as we watch and learn more about what happened, it's important to remember why this information is helpful. Understanding the techniques these groups use helps you update your security program to defend against them. A perfect security program only exists in rhetoric. A motivated attacker will find a way regardless of your defenses. Stay knowledgeable, stay kind.
------------------------------
🤓 Hi, I’m Jason, the “TeachMeCyber” guy
💡I simplify cyber security to help you learn faster
🔔 Follow me for daily cyber security posts #teachmecyber #cybersecurity #ransomware #mgm


Read More

Intel pitches the 'AI PC' at software developer event

September 24, 2023

SAN JOSE, California, Sept 19 (Reuters) - A new Intel (INTC.O) chip due in December will be able to run a generative artificial intelligence chatbot on a laptop rather than having to tap into cloud data centers for computing power, the company said on Tuesday.

The capability, which Intel showed off during a software developer conference held in Silicon Valley, could let businesses and consumers test ChatGPT-style technologies without sending sensitive data off their own computer. It is made possible by new AI data-crunching features built into Intel's forthcoming "Meteor Lake" laptop chip and from new software tools the company is releasing.

At the conference, Intel demonstrated laptops that could generate a song in the style of Taylor Swift and answer questions in a conversational style, all while disconnected from the Internet. Chief Executive Officer Pat Gelsinger said Microsoft's (MSFT.O) "Copilot" AI assistant will be able to run on Intel-based PCs.

"We see the AI PC as a sea change moment in tech innovation," Gelsinger said.

Intel shares were down 1.5% after the company's presentation.

Intel executives also said the company is on track to deliver a successor chip called "Arrow Lake" next year, and that Intel's manufacturing technology will rival the best from Taiwan Semiconductor Manufacturing Co (2330.TW), as it has promised. Intel was once the best chip manufacturer, lost the lead, and now says it is on track to return to the front.

Intel has struggled to gain ground against Nvidia (NVDA.O) in the market for the powerful chips used in data centers to "train" AI systems such as ChatGPT. Intel said on Tuesday that it was building a new supercomputer that would be used by Stability AI, a startup that makes image-generating software. China's Alibaba Group Holdings (9988.HK) is using its newest central processors to serve up chatbot technology, Intel said.

But the market for chips that will handle AI work outside data centers is far less settled, and it is there that Intel aimed to gain ground on Tuesday.

Through a new version of software called OpenVINO, Intel said that developers will be able run a version of a large language model, the class of technology behind products like ChatGPT, made by Meta Platforms (META.O) on laptops. That will enable faster responses from chatbots and will mean that data does not leave the device.

"You can get a better performance, a lower cost and more private AI," Sachin Katti, senior vice president and general manager of Intel's network and edge group, told Reuters in an interview.

Dan Hutcheson, an analyst with TechInsights, told Reuters that business users who are weary of handing sensitive corporate data over to third-party AI firms might be interested in Intel's approach.

If Intel Chief Gelsinger can make AI "so that anyone can use it, that creates a much bigger market for chips – the chips that he makes," Hutcheson said.

Reporting by Stephen Nellis in San Francisco and Max Cherney in San Jose, California; Editing by Peter Henderson, Lincoln Feast and Josie Kao

Our Standards: The Thomson Reuters Trust Principles.

Read More

Bill Gurley discusses 'Regulatory Capture' and Epic's dominance in healthcare

September 24, 2023

If you've ever wondered why Epic is so dominant in the healthcare industry, Bill Gurley does an AMAZING job of documenting it in this 36 min video on "Regulatory Capture"... The topic sounds dense, but trust me, he gives you everyday examples that affect your life... starting with why we don't have free telecom in cities, to the price and availability of COVID tests, and ending with a sharp warning about upcoming attempts to regulate BigTech and AI. [NOTE: this is not me endorsing zero regulation, but an undeniable warning about letting incumbents like Epic in the room when you draft that regulation. We should not take BigTech inviting DC to regulate them as a genuine motivation.] TL;DR: (Epic case study starts at min 11) - https://lnkd.in/g8Wcq_Gd
"[Epic CEO Judy Faulkner] was the ONLY corporate representative on Obama's Health IT council in 2009"
"They came up with a brilliant idea, and I have to assume she helped. Doctors would receive $44K each if they bought software. $38B, you can look it up."
"The second phase, they got paid $17K more to prove they were using it. It was called 'Meaningful Use'."
"The ONC decided the threshold that software would need to comply with this mandate, and I'm assuming they took Epic's feature set and plowed it into this spreadsheet."
"They had 3 record fines -- $155M, $57M, $145M -- against the lesser competitors of Epic."
"If you've studied the innovators' dilemma, the way startups disrupt is they come out with lower feature products, but one that's really meaningful to the user and then they grow... they put a brick wall there so they couldn't come up."
"You may ask, am I unhappy with Judith? I'm disgusted with it. BUT, if I were a judge in the Olympic Regulator Competition, I'd give her a 10!" Thanks to Kelly Gill for the link!

To view or add a comment, sign in

Read More

The ChatGPT (Generative Artificial Intelligence) Revolution Has Made Artificial Intelligence...

September 24, 2023

In November 2022, OpenAI publicly launched its large language model (LLM), ChatGPT []. It reached the milestone of having over 100 million users in only 2 months. In comparison, reaching the same milestone took TikTok and Instagram 9 months and more than 2 years, respectively. In March 2023, OpenAI had already released a new iteration called GPT-4 that was claimed to be 100 times better than the previous version.

LLMs have seen rapid advancements and practical applications in various industries from marketing and information technology to publishing and even health care [].

These models can generate human-like text, assist in diagnosing conditions based on medical records, and even suggest treatment options or plans. Given the potential implications for patient outcomes and public health, as well as the impact on the jobs of medical professionals, LLMs have attracted much attention.

LLMs have been shown to be used in a myriad health care–related tasks and processes. Examples include education and research [], analyzing electronic health records [], oncology [], cardiology [], writing discharge summaries [], and medical journalism []. The number of use cases has skyrocketed, as well as the challenges and issues around the use of this particular technology. The ethics of its use have been analyzed in depth []: LLMs have been hallucinating references; their role as a potential coauthor of research papers has been debated; it has been shown that such models can pose a cybersecurity threat []; and they can be biased [].

Nevertheless, in this paper, I argue that attention to, public access to, and debate about LLMs have initiated a wave of products and services using generative artificial intelligence (AI), which had previously found it hard to attract physicians.

There are events of a certain magnitude that can lead to significant changes in a field, especially regarding the attitude of people experiencing them. The COVID-19 pandemic has led to an unprecedented level of patients and physicians adopting technologies such as remote care and at-home laboratory testing []. Advances in cloud computing led to new discoveries and developments in AI in the 2010s. The release of LLMs can mark a similarly impactful event that leads to physicians gaining first-hand experience with AI-based tools and technologies.

No matter how many studies, clinical trials, regulatory approvals, and product announcements keep on becoming available about AI’s role in medicine and health care, adoption rates have been lagging behind [,]. One of the reasons for this has been limited access to AI technologies. There have been hundreds of AI applications with regulatory approval; however, they were only available in limited health care institutions to a select group of physicians [].

This paper describes what AI tools have become available since the beginning of the ChatGPT revolution and contemplates how they might change physicians’ perceptions of this breakthrough technology.

In principle, tasks in a physician’s job that are repetitive in their nature and are data-based are prone to being automated. Most of the AI-based medical technologies that have received regulatory approval are built on this idea and address one specific task, mostly in radiology, cardiology, and oncology.

Generative AI has brought this to a new level. Generative AI refers to a category of AI models that are capable of generating new data samples by learning the underlying patterns and structures of a given data set. These models, including generative adversarial networks (GANs) and generative pretrained transformer (GPT)-like architectures, have demonstrated prowess in a wide range of applications, including image synthesis, natural language processing, and drug discovery. As generative AI continues to advance, it holds the potential to revolutionize industries by enabling the creation of novel and customizable content, accelerating research, and expanding the scope of human creativity.

Since the end of 2022, basic AI tools and services using generative AI have become widely accessible. This might mark the beginning of AI becoming used in practice for the average medical professional.

Health care companies have also started to integrate LLMs into their services. Examples include Microsoft’s Nuance bringing ChatGPT into its medical note-taking tool; Nabla (Nabla SRL) using ChatGPT to have conversations with patients; and Epic introducing LLMs into its electronic health record software [-].

Building a Website With AI

There are AI-based website builders that can create a brand-new website in a few minutes. Users can choose between design themes and a number of functionalities. Some of the companies even offer more advanced frameworks, such as web shops and responsive sites that automatically adjust to the device—mobile or computer—the visitor uses [].

The output is currently not as versatile or customized as a site built by a team of professionals, and these services have limitations in search engine optimization, but a physician building a website for their practice in minutes is still an impressive feat.

Creating Videos With AI

LLMs are capable of coming up with ideas for video content and even writing the scripts (eg, with prompts such as “write a 5-minute script about the importance of the flu shot for the elderly”). Revoicer (Revoicer Ltd), a text-to-speech algorithm, can transform the script into audio content, and tools like Yepic (Yepic AI Ltd) and Synthesia (Synthesia Ltd) can generate a video featuring a synthetic human. While synthetic voices and presenters do not fully match their human originals, these tools still allow users to create informative video content without having a content production team at hand, for a fraction of the cost of hiring all these professionals [].

Designing Presentations With AI

Creating presentations for meetings and conferences can be cumbersome even if clinicians are experienced with presentation tools like PowerPoint (Microsoft Corp) or Prezi (Prezi Inc). There are AI presentation tools that design stylish presentations from minimal input. Physicians can choose a template and enter basic data, information, and facts, and the AI does the rest, formatting the slides and offering visuals, animations, or voice-overs. The results can be somewhat generic but, in most cases, are still more attractive than what most physicians could make by themselves [].

AI as a Medical Scribe

A growing number of algorithms claim to record and transcribe meetings or conversations automatically, analyze the content, and provide an easy-to-understand, searchable report. Some of these solutions are specifically designed for medical use, while others target a more general audience. Companies targeting health care users offer complex services, such as AI transcribing a consultation that is then reviewed by a human professional for accuracy. A clear limitation is that general transcription tools might pose a risk when used in medical settings, while those specifically designed for medical use tend to be expensive. However, using such tools might relieve burdens on medical professionals and reduce excessive administrative tasks, which are a major cause of physician burnout [].
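The “searchable report” part of such a pipeline can be sketched in a few lines. The transcript below is a hypothetical stand-in for the output of a speech-to-text model (an invented exchange, not a real patient), and real scribe products are far more sophisticated:

```python
import re
from collections import defaultdict

# Hypothetical speech-to-text output: (speaker, utterance) pairs.
transcript = [
    ("doctor",  "How long have you had the headache?"),
    ("patient", "About three days, mostly in the morning."),
    ("doctor",  "Any nausea or changes in vision?"),
    ("patient", "Some nausea, no vision changes."),
]

def build_index(lines):
    """Map each word to the utterances it appears in, for quick lookup."""
    index = defaultdict(set)
    for i, (_, text) in enumerate(lines):
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(i)
    return index

def search(lines, index, term):
    """Return every utterance mentioning the term, in order."""
    return [lines[i] for i in sorted(index.get(term.lower(), []))]

index = build_index(transcript)
for speaker, text in search(transcript, index, "nausea"):
    print(f"{speaker}: {text}")
```

A clinician reviewing the visit could then jump straight to every mention of a symptom instead of replaying the whole recording.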

Creating Social Media Posts, Frequently Asked Questions, and Other Informational Content for Patients With AI

Generative AI, especially LLMs and AI image generators such as DALL-E (OpenAI) and MidJourney (MidJourney Inc), can be used to provide written and image-based content for patients. ChatGPT can assist physicians in crafting social media posts, general informational materials, and brochures aimed at educating patients. While physicians need to verify every detail these generated posts contain, using AI can still save considerable time in the process.
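In practice, much of the time saving comes from reusable prompt templates. The function below is an illustrative sketch (the field names and wording are my assumptions, not any vendor’s API); the returned string would be sent to the LLM of choice, and whatever comes back still requires physician review:

```python
def patient_education_prompt(topic, audience, channel, reading_level="8th grade"):
    """Assemble a reusable prompt for drafting patient education content."""
    return (
        f"Write a {channel} post for {audience} about {topic}. "
        f"Use plain language at a {reading_level} reading level, "
        "avoid medical jargon, and end with a reminder to consult "
        "a physician for personal medical advice."
    )

# Example: a draft request matching the flu shot scenario above.
prompt = patient_education_prompt(
    topic="the importance of the flu shot",
    audience="patients over 65",
    channel="social media",
)
print(prompt)
```

Keeping the reading level and the consult-a-physician reminder inside the template means every generated draft starts from the same safety baseline.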

Designing Logos and Images

AI image generators like DALL-E and MidJourney can generate images based on text prompts. Such images can be used as logos for medical practices, visualizations for social media posts, and to help deliver messages on websites.

Medical Research and Literature Analysis

Tools like Semantic Scholar (Allen Institute for AI) use AI to help physicians stay up-to-date with the latest research findings by analyzing and summarizing relevant articles, making it easier to keep abreast of medical advancements.
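The simplest form of such literature triage, extractive summarization, can be sketched as follows. This toy scores sentences by word frequency; production tools such as Semantic Scholar use far more capable models, and the abstract text here is invented for illustration:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Return the n highest-scoring sentences, kept in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude short-word filter
    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in top]

abstract = (
    "Statins reduce cardiovascular events in high-risk patients. "
    "The trial enrolled 4000 participants. "
    "Cardiovascular events fell by 25% in the statin group versus placebo."
)
print(summarize(abstract, n_sentences=1))
```

Even this crude heuristic tends to surface the sentences carrying the paper’s recurring key terms, which is the intuition behind abstractive AI summarizers as well.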

The difference between these and the previously regulated AI-based medical technologies is that the tools described above are widely accessible, do not require any prior technical training, and are not dedicated to niche areas such as specific medical specialties.

Using AI was previously possible in research groups dedicated to the technology, in health care settings where certain tools were available, and for selected individuals fortunate enough to receive access to them. The widespread release of LLMs and other generative AI tools has made this breakthrough technology accessible for medical professionals to test. It has also come with additional benefits.

Democratization of AI Technology

Open-source projects and accessible platforms have allowed developers and researchers to contribute to AI advancements, fostering a community-driven approach that has enhanced the growth and reach of AI tools in the medical field. AutoGPT represents one of the most popular directions [].

Cost-Effectiveness

The affordability of AI solutions has enabled medical professionals to adopt these tools in their practice without incurring prohibitive costs. This has been essential in bridging the gap between cutting-edge research and its practical application in health care settings.

Improved Data Processing and Analysis

LLMs excel at handling vast amounts of unstructured data, such as clinical notes and medical literature, which has facilitated the extraction of valuable insights and improved decision-making processes for medical professionals.

Enhanced Communication and Collaboration

AI-powered tools can streamline communication between health care providers, patients, and interdisciplinary teams, promoting a more collaborative environment and fostering better patient outcomes.

Continuous Learning and Adaptation

LLMs have the capacity to learn from new data and adapt to the evolving needs of medical professionals, offering more effective and targeted solutions in real time. Physicians can ask ChatGPT to provide summaries, explanations, and key points from recent medical literature; they can also present hypothetical or anonymized real patient cases and seek guidance on differential diagnoses, treatment options, and potential risks, further enhancing their clinical decision-making. Physicians can even ask ChatGPT to generate quiz questions or self-assessment exercises on specific medical topics to test their knowledge and identify areas for improvement.

While it is easy to see the benefits of a generation of medical professionals obtaining real-life experience with a technology like AI, the journey ahead raises more questions than we can answer now. Here, I summarize the most important of these questions.

There are strict regulations on how AI technologies designed for a medical or health care–related purpose can access patient and medical databases, but there are none for generative AI tools made for a general audience without any medical purpose. If physicians use such tools, they are left without guidance on how to handle patient privacy or legal responsibilities.

Moreover, as LLMs tend to fabricate not only their responses but also the sources those responses are supposedly based on, physicians need to verify every detail that comes from using generative AI.

Currently, no clinical guideline is available on the use of these tools or on how to implement them in medical practice.

There is an ongoing debate about who owns the copyright on AI outputs, such as text or images. If a physician uses an AI image generator to create a new logo for their website, in theory, anyone can use the same logo unless the user pays for exclusive rights. The same issue applies to text made by generative AI for physicians to use in their research papers, as well as to the presentations and videos created with these tools. Legal experts have attempted to provide solutions for this challenge, but there is no consensus yet [].

As AI has finally become a tool for the masses, it has become increasingly hard for physicians to ignore, whether for medical, research, or personal purposes. Properly preparing medical professionals for its advantages and risks has therefore become a timely challenge of crucial importance. Medical curricula, the guidelines of medical associations, and the general discussion about AI’s future role in our profession must adjust accordingly.

Medical curricula could involve teaching prompt engineering to provide the knowledge, skills, and mindset medical professionals need to become proficient users of generative AI. Medical associations should provide a path forward for using such AI tools in the practice of medicine while keeping the values of evidence-based medicine and the importance of patient design in mind. Even more importantly, policymakers face the challenge of designing policies and regulations not only for the generative AI tools medical professionals can use today but also for the next versions and iterations, which might involve analyzing other types of data, from images and video to sound and documents.

This is certainly an unprecedented challenge that requires relatively quick turnaround from all decision-makers in health care.

Edited by G Eysenbach, T Leung; submitted 21.04.23; peer-reviewed by N Mungoli, G Sebastian; comments to author 31.05.23; revised version received 02.06.23; accepted 07.06.23; published 22.06.23

©Bertalan Mesko. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.06.2023.


Threat actor attacked MGM Resorts

September 24, 2023

Jason Rebholz, The TeachMeCyber Guy (CISO, Advisor, Speaker, Mentor):
The threat actor behind the MGM Resorts attack had something to say yesterday...and they weren't short on words. In a 1,100-word statement, the threat actor gave their…unique…perspective on the situation. I parsed through it all to pull out a possible timeline of activity.

🕜 Friday 9/8 - Saturday 9/9 🕜
- The threat actor gained initial access to MGM Resorts by socially engineering the IT help desk into resetting a user account.
- The threat actor gained privileges to access domain controllers and dumped credentials, which they then cracked. They also claim to have intercepted passwords syncing between Okta and, presumably, Active Directory.
- The threat actor also obtained Okta super user access and Azure Global Admin access. This would have given near-complete control of the environment.
- The threat actor stole data at some point, though the extent of that data theft is unclear.
- MGM Resorts appears to have taken initial containment steps, though they were not effective.

🕜 Sunday 9/10 🕜
- MGM Resorts implemented additional containment measures and attempted to kick the attacker out of the environment. This unfortunately was unsuccessful.

🕜 Monday 9/11 🕜
- The threat actor purportedly encrypted over 100 ESXi hypervisors (these run virtual machines, so the number of impacted servers is much higher).
- The threat actor provided a link to download (presumably) a sample of stolen data.

🕜 Tuesday 9/12 - Wednesday 9/13 🕜
- MGM continued their incident response and recovery efforts with the help of outside experts.
- The threat actor monitored user(s) lurking in their negotiation portal and presumably were upset that no one wanted to chit chat.

🕜 Thursday 9/14 🕜
- The threat actor posted a 1,101-word statement to “set the record straight” on the attack.
- The threat actor claims to still have access to the environment and is threatening to carry out additional attacks if MGM does not make contact with them.

/end timeline

Continued #hugops to the MGM team. Navigating an active attacker situation is never a straightforward affair, regardless of what people may say. And given the sophistication of this threat actor compared to your typical ransomware group, their job is that much harder. For the rest of us, as we watch and learn more about what happened, it's important to remember why this information is helpful. Understanding the techniques these groups use helps you update your security program to defend against them. A perfect security program only exists in rhetoric. A motivated attacker will find a way regardless of your defenses. Stay knowledgeable, stay kind.
🤓 Hi, I’m Jason, the “TeachMeCyber” guy
💡I simplify cyber security to help you learn faster
🔔 Follow me for daily cyber security posts #teachmecyber #cybersecurity #ransomware #mgm


© Copyright 2024 Health Lyrics All rights reserved