This Week Health

This Week Health is a series of IT podcasts dedicated to healthcare transformation powered by the community

In the News

Digital divide affecting low-income patients, Reid Health CEO says

September 24, 2023

Craig Kinyon, CEO of Richmond, Ind.-based Reid Health, said the digital divide is disproportionately affecting low-income households in both urban and rural areas. 

In an Aug. 30 LinkedIn post, Mr. Kinyon said that in today's digital world, it is vital to address the issue of digital inequities. 

"Did you know that approximately 19 percent of Americans do not own a smartphone?" he wrote. "Shockingly, 50 percent of households earning less than $30,000 per year have limited access to computers, while around 18 million households in the U.S. lack internet access."

He proposed that healthcare begin recognizing the impact of social determinants of health and health disparities, and assess how these factors hinder patients' access to care.

"This is why as leaders in the healthcare field, it is imperative for us to collaborate with community organizations, and policymakers in order to bridge this digital divide," he wrote. "By working together harmoniously, innovative solutions can be created that effectively address these challenges."

Read More

IBM trains its LLM to read, rewrite COBOL apps | CIO Dive

September 24, 2023

  • IBM trained its watsonx.ai large language model to ingest COBOL code and rescript business applications in Java, the company announced Tuesday.
  • The generative AI solution is designed to ease mainframe modernization, assisting developers in the arduous process of analyzing, refactoring and transforming legacy code and validating the results, Skyla Loomis, VP of IBM Z Software, said during a demonstration.
  • IBM intends to deploy the AI-enabled coding assistant’s new capabilities by the end of the year, the company said.

The specter of technical debt haunts organizations, often leaving critical business functions perched perilously atop layers of arcane code. Despite modernization efforts, businesses still run on-prem applications architected with COBOL, a programming language created in the 1950s.

IBM estimates individual clients at the average enterprise may have tens of millions of COBOL lines running in production. Globally, enterprise production systems run more than 800 billion lines of COBOL daily, according to a Vanson Bourne study commissioned last year by software company Micro Focus.

Several generative AI companies, including Anthropic and OpenAI, recently introduced coding assistants. In February, Microsoft released GitHub Copilot for Business, an AI-enabled developer tool for the enterprise, and saw user headcount double in the first half of the year.

While human language contains nuances and tonal variations that can outwit the best commercially available models, computer code consists of straightforward machine instructions with clearly articulated semantics.

Errors and hallucinations can occur in coding translations, but they are relatively easy to identify and resolve, said Kyle Charlet, IBM fellow and CTO of IBM Z Software.

“Code doesn't lie, so we can immediately highlight any hallucinations that have worked their way into the code and correct them,” said Charlet.

The company trained the LLM on its own COBOL data and tested it against IBM's CodeNet, a dataset of 14 million code samples spanning more than 55 common programming languages, including C++, Java, Python, FORTRAN and COBOL. IBM used CodeNet to measure accuracy in COBOL-to-Java translation.

The model was then tuned for two specific use cases: a coding assistant for Red Hat’s Ansible automation toolkit, and the new COBOL solution, which has now been trained on more than 80 languages and 1.5 trillion tokens of data, according to the company.

To mitigate risk, all of the model’s training data originated from licensed open source software, Charlet said.

The solution has four functional phases, detailed by Loomis.

  • Auto-discovery, when the tool analyzes the original script, identifies its data dependencies and provides a metadata overview of the application.
  • Refactor, which identifies the application’s business function and suggests modernization updates.
  • Transform, when the user triggers the generative AI’s COBOL-to-Java translation capabilities.
  • Validate, which tests the results to ensure that the new service is semantically and logically equivalent to the original script.
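The final Validate phase amounts to a behavioral-equivalence check: the legacy program and its generated replacement should produce the same outputs for the same inputs. A minimal sketch of that idea (our illustration, not IBM's actual tooling or API) might look like this:

```java
import java.util.function.IntUnaryOperator;

// Hypothetical sketch of the "Validate" idea: run the legacy and the
// modernized implementations over the same inputs and compare outputs.
// The harness and function names here are illustrative only.
public class EquivalenceCheck {
    static boolean behaviorallyEquivalent(IntUnaryOperator legacy,
                                          IntUnaryOperator modern,
                                          int[] inputs) {
        for (int x : inputs) {
            if (legacy.applyAsInt(x) != modern.applyAsInt(x)) {
                return false;  // outputs diverge: translation is not equivalent
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Example: legacy rounding logic vs. a modernized rewrite.
        IntUnaryOperator legacy = x -> (x / 10) * 10;  // truncate to tens
        IntUnaryOperator modern = x -> x - (x % 10);   // same behavior, new form
        int[] samples = {0, 7, 19, 23, 105};
        System.out.println(behaviorallyEquivalent(legacy, modern, samples)); // true
    }
}
```

Real validation of a COBOL migration would of course cover far richer data types and side effects, but the principle is the same: equivalence is judged by observed behavior, not by how the code looks.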

“What we are not doing is a line-by-line COBOL syntax translation to Java,” Loomis said. “When that happens, what you end up with is COBOL syntax expressed in largely unreadable, largely unmaintainable Java.”
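Loomis's distinction can be illustrated with a small, hypothetical example (not output from IBM's tool): a COBOL loop that sums a fixed table of ten amounts, rendered first as a literal line-by-line translation and then idiomatically.

```java
// Hypothetical illustration of literal vs. idiomatic translation.
// Original COBOL (shown as a comment):
//   PERFORM VARYING I FROM 1 BY 1 UNTIL I > 10
//       ADD AMOUNT(I) TO TOTAL
//   END-PERFORM
public class TranslationSketch {
    // A literal rendering preserves COBOL's control style
    // (assumes the array has at least 10 elements, like the COBOL table):
    static int literalTranslation(int[] amount) {
        int total = 0;
        int i = 1;
        while (!(i > 10)) {                 // UNTIL I > 10
            total = total + amount[i - 1];  // ADD AMOUNT(I) TO TOTAL
            i = i + 1;                      // VARYING I FROM 1 BY 1
        }
        return total;
    }

    // An idiomatic rendering expresses the intent directly:
    static int idiomaticTranslation(int[] amounts) {
        return java.util.Arrays.stream(amounts).sum();
    }

    public static void main(String[] args) {
        int[] amounts = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.println(literalTranslation(amounts));   // 55
        System.out.println(idiomaticTranslation(amounts)); // 55
    }
}
```

Both versions compute the same result, but only the second reads like Java a maintainer would want to own, which is the gap Loomis is describing.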

Expanding the watsonx toolkit is part of a broader business integration strategy, built around hybrid cloud, mainframe modernization, emerging AI capabilities and IT consulting services.

IBM previously partnered with Microsoft to ease mainframe modernization and deploy enterprise-grade generative AI solutions. The two companies introduced the IBM Z and Cloud Modernization Stack on the Microsoft Azure Marketplace in June and launched a generative AI managed service last week.

Correction: This article has been updated to reflect IBM trained the LLM using its COBOL data. The company used CodeNet to test it for accuracy.

Read More

Why Apple and Amazon Are Using AI Models, Not Selling Them

September 24, 2023

Who is winning Big Tech’s generative artificial intelligence (AI) war?

While some of the world’s largest companies, including Alphabet’s Google, Microsoft, Oracle, Meta and many more, are drawing up their battle lines against not just each other but also innovative and agile startups like Anthropic and OpenAI, the question of primacy will likely be answered by whichever firm cracks the enterprise solution first — and most effectively.

That’s because generating business sales from cash-rich enterprises offers the best and most scalable bulwark against the heady costs of AI’s computing power and data center needs.

Microsoft, which has been at the forefront of the generative AI race ever since the firm struck its landmark partnership with OpenAI to leverage the latter’s foundational large language models (LLMs), reportedly needs $4 billion of infrastructure for its suite of generative AI tools to do their job.

But while the who’s who of the tech sector continues to duke it out over AI, with upstart incumbent OpenAI reportedly on track to rake in more than $1 billion in revenue this year from its generative AI products, some observers are wondering — just where are the other tech sector giants, like Apple and Amazon?

Read also: It’s a ‘90s Browser War Redux as Musk and Meta Enter AI Race

A Better Use for AI

When it comes to winning this generation’s tech arms race, first mover advantage will be based on the way in which foundational AI models are applied to drive business value.

For its part, Google announced on Tuesday (Aug. 29) at its annual cloud conference in San Francisco that it will release to the general public a suite of generative AI-powered tools for its corporate Gmail and workplace software customers.

The tools will run enterprise customers an extra $30 per month, the same price as Microsoft’s equivalent enterprise AI co-pilot offerings, which are not yet available to the general business public beyond select testing partners.

Alphabet also used the event to unveil the latest iteration of its custom-built AI chips, along with a new tool that can watermark and help identify images generated by AI.

An early developer of generative AI’s core technical architecture (the T in GPT, which stands for Transformer) and a one-time home for many of AI’s most impactful researchers, Google has since suffered a substantial brain drain as its top AI researchers continue to leave and start their own ventures or join the top ranks of other firms.

“Writing a game-changing paper [in the AI space] is like a blank check,” Paul Lintilhac, a PhD researcher in computer science at Dartmouth College, told PYMNTS. 

Google’s announcement comes on the heels of OpenAI’s own launch of a generative AI service for enterprise customers. 

See more: 10 Insiders on Generative AI’s Impact Across the Enterprise

Different AI Strategies

But while the world’s biggest tech companies have been increasingly rolling out AI applications, there have been some notably absent names in the race.

For Amazon, that is by design.

The cloud and eCommerce giant is hoping to find an advantage in making LLM development as easy as possible for its clients, offering no- and low-code AI services not too dissimilar from a Build-A-Bear for AI models.

Amazon is also treating AI as a support technology for its own business priorities, rather than packaging its proprietary models into a new product for outside customers. 

In just one example, the tech giant is reportedly using AI to optimize the viewing experience for its Thursday Night Football streaming product on Prime Video.

Meanwhile, Apple has yet to bring to market its own “GPT” moment.

“We view AI and machine learning as core fundamental technologies that are integral to virtually every product that we build,” Apple CEO Tim Cook told investors on the company’s most recent third-quarter earnings call.

There were just six references to AI during Apple’s call, and they all took place during the same exchange referenced above. For their part, Apple’s tech sector peers Microsoft and Alphabet mentioned AI 73 and 90 times, respectively, during their latest 2023 earnings calls.

Read More

© Copyright 2024 Health Lyrics All rights reserved