Sam Altman returns as CEO of OpenAI. Yeah, says the crowd of adoring fans. But what just happened? Those who advocated for guardrails on the pace of AI development were stomped. No longer will we advance AI at the pace of research; now it will advance at the pace of capitalism. Is this a good thing?
The Ilya video I reference: https://youtu.be/9iqn1HhFJ6c?si=jllxMLFzmtt5KYhw
Today in Health IT: capitalism wins. OpenAI brings back Sam Altman as their CEO, and I'm wondering if this is a good thing. We have a short week; this is the last episode of the week. We're not going to put one out for Thursday or Friday, so I'm just going to do a stream-of-consciousness episode on how I'm processing this thing. There you go.
My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels and events dedicated to transforming healthcare, one connection at a time. We want to thank our show sponsors who are investing in developing the next generation of health leaders: SureTest, Artisight, Parlance, Certify Health, Notable, and ServiceNow. Check them out at thisweekhealth.com/today. All right, one last thing. Share this podcast with a friend or colleague. Use it as a foundation for daily or weekly discussions on topics that are relevant to you and the industry. They can subscribe wherever you listen to podcasts.
So Sam Altman is returning, and they're going to replace the board. It turns out that when your CEO and your president (or whatever Brockman's title was) leave and go to Microsoft, and 600 or so of your 700 or 800 employees threaten to leave with them, that creates a defining moment at an organization. That was probably unintended; no one anticipated this being the outcome.
If they had, I'm sure the board would have acted differently, and probably should have acted differently. This should have been an ongoing dialogue and conversation. It should not have been a "Hey, you're out" kind of thing. Regardless, he's out over the weekend, he gets hired at Microsoft, and now he's coming back to OpenAI as the CEO. What does this mean?
What does it look like? We had some interesting conversations over the last couple of days in interviews that I've done. Yesterday we had David Ting, Laura O'Toole, and Drex DeFord on the Newsday show, and we talked about this a little bit. It's interesting because, as I was talking on that show, I realized the show won't air for two weeks, and by the time you listen to it you'll think, oh my gosh, that's ancient history.
But just remember, it was recorded right in the middle of what was going on. As I was looking at it, something really struck me, which was what I talked about yesterday on the Today show. OpenAI was founded as an organization, almost a research project, that was going to progress us toward artificial general intelligence, which is where computers can do essentially anything humans can do, except better.
It's that old song: anything you can do, I can do better. Artificial general intelligence is where computers can do anything we can do, but better. And we were going to do that with guardrails, so we could make sure we did it in a safe way, in a way that things get developed without putting humanity in jeopardy. Now, there are several schools of thought on that. There are people who say this is silly, computers will never get there, or it's so far away. But I find that the most, let's call it informed, person on this today is probably Ilya Sutskever, the chief scientist and co-founder of OpenAI, and one of the people who they say architected the ouster. I would also call him the conscience of the original mission of OpenAI, which was to advance toward artificial general intelligence in a way that would be safe for humanity.
He paints a picture where, if we develop artificial general intelligence, we will essentially be able to create beings that go off on their own, with their own intelligence, and act accordingly. And if we don't figure out a way to instill those beings with human values, that's the danger.
Actually, I just watched a video that's worth looking at; I'll post it in the comments. It's a Guardian video, and in it Ilya talks about this progression toward artificial general intelligence and what it means. He describes it in terms of the way we treat animals, the way we treat our dogs, those kinds of things.
He says, we don't treat them poorly. We treat them with kindness and respect, but we also do not consult them when we're going to build a new road or a building. If it's in our best interest, or in the best interest of those around us, we just do it. And he says, when artificial general intelligence progresses to that point, it will do things like that. It will start taking actions that are in our best interest, but it will not consult us anymore. It's interesting to hear him talk about what a future with artificial general intelligence might look like. By the way, he also starts with the claim that we are a lot closer than people think. Maybe not just around the corner, but not far from where we sit.
And a lot of people believe that OpenAI and ChatGPT are the start of that. We are seeing it. In fact, I put a post out on LinkedIn this morning, and it's very interesting to me, because it asked, would you like to be rewritten by AI? We are seeing AI get integrated into everything.
I opened two applications last week, and there was a little box that hadn't been there before, asking, what would you like this application to do for you? I tell it in natural language what I want it to do, and it presents back the things I've asked for. We're going to see that permeate applications as we move forward.
And if you write applications and aren't including this kind of functionality, you're behind the curve today, as we stand. So it's really interesting to see. And by the way, from a healthcare perspective, we talked about this: the genie is out of the bottle, and this is progressing really fast. We think we're setting up governance to ensure that AI comes into our health systems in a reasonable and safe way. But in reality, what's happening is it's getting integrated into all these applications and just starting to show up. We go into these applications, and all of a sudden there's a box that asks us something, or a little button we can speak into, and it transcribes and does what it's going to do.
And I think we're going to wake up sometime around this point next year and realize we're not governing it anymore, that it's just showing up in our applications and our environment, and it's going to be pervasive. I think it's about a year out from being pervasive in healthcare, especially on the administrative side.
But I think we have to be careful, because it could start showing up on the clinical side as well. Not diagnostics and those kinds of things, which will require FDA approval, but maybe some of the more mundane things. And so it begins, and then it becomes pervasive. I'm going to share the video because I think it's worth watching; it's sobering to think about what artificial general intelligence could bring us and how close it is. Ilya starts the Guardian interview by talking about the really cool things it could bring about, and then the scary things. Let me see if I can pull this up.
"Now, AI is a great thing, because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty. But it will also create new problems. The problem of fake news is going to be a million times worse. Cyber attacks will become much more extreme. We will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships."
So you see amazing possibilities and amazing downsides. I think we do need to have a conversation about this. I think we need to be cognizant of it. I'm not sure we're going to be able to do anything at this point, because the genie is out of the bottle, as they say. And the events over the weekend, I think, dictate that the camp that said, hey, let's progress reasonably, has lost at OpenAI.
And the camp that won, they won't say these words, because it would be hard for them to say, but they're acting this way, and it is that capitalism has won. We are going to progress AI at the pace that capitalism requires, and the 600 people, or whatever the number was, voted in their self-interest
when they said, hey, we're going to leave here and go to Microsoft. Look, if the employees had said, hey, we're staying, it's no big deal, we believe in the mission of this organization and where it's going, then we wouldn't be having this conversation, and I don't think Sam Altman would be returning. I think they would have looked at it and said, hey, we're progressing, we're doing what we want to do, and it's mission-based. But I don't think that's what happened.
I think people have seen the money and the dollar signs. They've seen the fame and the fortune that come with it. And I think the 600-some-odd people who signed that letter essentially said, look, I want the stock options.
I want the money. I want to participate in this. I want to be able to afford my home. And look, I'm not looking down on them. I'm just saying they're acting like capitalists, quite frankly. And Sam Altman absolutely is a capitalist, regardless of what he says on stage. It's not what you say; it's what you do.
And what we're seeing right now is that capitalism is dictating the pace at which OpenAI moves. Is this a good thing? Is it a bad thing? I happen to think markets tend to correct themselves. This might be a little different, though. The progress of AI might be something we develop that gets outside of our control, and from that perspective we need to be a bit more careful with this than with other things. When I was watching the Ilya Sutskever video, I likened him to Oppenheimer, who developed the first atomic bomb with his group.
If you watch Oppenheimer, the movie, you see the whole story of how they developed it. And I think of AI in that same sort of context. Scientists research, they study, they invent new things. They bring new things into our consciousness. That's what they're supposed to do.
That's what they do. But then it's incumbent upon us to put guardrails around it. When nuclear weapons were created, they overshadowed everything else that ever happened with nuclear medicine and nuclear energy. There are a lot of positives that come from really understanding the atom and how it works, and there are also some horrific things that came out of Oppenheimer's research. The same could be true here with AI. There could be some really amazing things that happen as a result of AI, and there could also be some devastating things.
It's worth the conversation; it's worth getting in front of. I like the nuclear-AI analogy. I think it holds. I think what we've seen now is that capitalism will dictate how fast we move with AI. And the part of me that is using it loves that, because I want it to progress faster. And the part of me that's looking at it says, hey, do I really want to work for a computer someday, or some sort of artificial general intelligence? Not really.
I still like people. I like interacting with people. I like having conversations with people. I like knowing that there's a moral center of some kind with people, and I'm not sure we're going to be able to instill that. I know that's the direction Elon Musk is going, but it will be interesting to watch.
So anyway, this is what a Friday episode feels like at This Week Health. And this feels like a Friday episode because I'm taking the next two days off. I just thought I'd ramble a little about the things I'm reading and watching. I will share that video in the link; I think it's worth watching, just to give you a little perspective on what could go wrong.
I think we focus a lot on what could go right with AI, especially on the show. We focus a lot on what could go right and why we should develop this further. But we also need to be cognizant of what could go wrong with AI development. All right, that's all for today. Actually, that's all for this week.
Don't forget to share this podcast with a friend or colleague. Keep the conversation going. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest, Artisight, Parlance, Certify Health, Notable, and ServiceNow. Check them out at thisweekhealth.com/today. Thanks for listening. That's all for now.