AI Clinician Recommends Treatment for Sepsis Patients

AlphaAtlas

Using AI as a diagnostic aid is nothing new. Some systems are already helping radiologists spot tumors, but systems like that stop short of actually recommending treatments. Now, researchers have made an "AI Clinician" that looks at a patient's symptoms and actually recommends sepsis treatments. The AI was trained on over 17,000 cases, and by some metrics, it outperforms real doctors. Aldo Faisal, one of the paper's authors, told Spectrum, "It's really cognition that is captured here. We're not just making the AI see like a doctor, we're making it act like a doctor."

After AI Clinician experimented with the 17,000 cases in its training data set, it was tested on 79,000 cases it had never seen before. Overall, the AI recommended lower doses of IV fluids and higher doses of vasopressors than the patients actually received in those test cases. Patients who received doses similar to those recommended by AI Clinician had the lowest mortality. The researchers plan to test their system in a real hospital, though for safety's sake, the first trials won't affect patient care. Initially, the AI Clinician will get real-time data from the hospital's electronic medical record system and will issue recommendations, but the doctors won't see them or act on them. The researchers will observe patient outcomes and determine whether patients fare better when the doctors independently decide on the same course of treatment that the AI recommends. If the trials prove AI Clinician's worth, the researchers will work toward commercial software that can be deployed in hospitals across the world. "With sepsis, we've struggled to find new treatments," says co-author Gordon. "But optimizing our current therapies can have a big effect. Even if we can change mortality by just a few percentage points, we can save tens of thousands of lives."
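To make the evaluation concrete, here's a minimal sketch of how a retrospective comparison like the one described above might look: bucket patients by how far the administered doses were from the model's recommendations, then compare observed mortality across buckets. All field names and data below are made up for illustration; this is a sketch of the comparison's shape, not the paper's actual method.

```python
# Hypothetical sketch of the retrospective evaluation described above:
# bucket patients by how far the administered doses were from the model's
# recommendations, then compare observed mortality across buckets.
# All values here are synthetic, for illustration only.
import random

random.seed(0)

# Synthetic stand-in for the 79,000 held-out cases: each record holds the
# gap between the administered and recommended dose, plus the outcome.
cases = []
for _ in range(79_000):
    given = random.uniform(0, 10)        # e.g. vasopressor dose actually given
    recommended = random.uniform(0, 10)  # dose the model would have recommended
    gap = abs(given - recommended)
    # Toy outcome model: mortality risk grows with the dose gap (an
    # assumption mirroring the reported finding, not real patient data).
    died = random.random() < 0.10 + 0.02 * gap
    cases.append((gap, died))

# Bucket by dose gap and report mortality per bucket.
buckets = {}
for gap, died in cases:
    key = int(gap)  # 1-unit-wide buckets
    counts = buckets.setdefault(key, [0, 0])  # [survived, died]
    counts[died] += 1

for key in sorted(buckets):
    survived, died = buckets[key]
    total = survived + died
    print(f"dose gap {key}-{key + 1}: mortality {died / total:.1%} (n={total})")
```

In a real study the outcomes would come from the medical record rather than a toy risk model, but the bucket-and-compare shape is the same: if mortality is lowest in the smallest-gap bucket, the recommendations line up with better outcomes.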
 
Coming soon ... it will bill you like a doctor.

Sure, because the medical industry is a huge money racket. You won't be able to outright purchase one of these...they'll rent them out with contracts and feature upgrades.
 
I am all for new tools to help in managing treatment, but this should remain a tool and not replace human interaction and oversight of the treatment process. Also, given the data breaches and ransomware attacks already hitting the healthcare system, letting AI/technology take ever-greater roles in healthcare could bring significant risk.
 
The problem with most of these "AI" systems is that they can't tell you why they make the decisions they do. This makes it difficult for humans to trust them, even if statistically they do a better job than the average human trained in the same field of work. Keep in mind that the best humans also outperform the average, and may indeed outperform the "AI".
 
The problem with most of these "AI" systems is that they can't tell you why they make the decisions they do.

Scary thing is...seems a lot of doctors are becoming like robots themselves...so it may not be that big of a transition. Besides, it's an insurance/doctor relationship and not really much of a patient interaction. Sorry, just got the wife out of a major lumbar surgery...rods screwed in all the way up to the hipbones.


/soapbox
 
The problem with most of these "AI" systems is that they can't tell you why they make the decisions they do.

Who said they can't tell you why they made a certain decision? I see nothing about these AI systems that prevents them from educating us on how or why they came to a certain decision.
 
It's not AI (they sure lovvveee that word; easy to get grants, I guess. Every damn thing is called AI now).

It's just a program with live monitoring (something like the sketch after this post). What has to be administered, and at what dosage, was already discovered and analyzed in other clinical trials.

The tech was there decades ago, but implementing it would probably cost $100k in bills, considering the price gouging these days.
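For what it's worth, a plain rules-based monitor of the kind this post describes might look like the sketch below. The thresholds, field names, and rules are hypothetical, for illustration only; real sepsis protocols are far more involved.

```python
# A sketch of a plain rules-based monitor: fixed thresholds, no learning.
# The thresholds and rules here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Vitals:
    mean_arterial_pressure: float  # mmHg
    lactate: float                 # mmol/L

def recommend(v: Vitals) -> str:
    """Return a treatment suggestion from hard-coded rules."""
    if v.mean_arterial_pressure < 65 and v.lactate > 2.0:
        return "suggest vasopressor review"   # hypothetical rule
    if v.mean_arterial_pressure < 65:
        return "suggest IV fluid bolus"       # hypothetical rule
    return "no action"

print(recommend(Vitals(mean_arterial_pressure=58, lactate=3.1)))
```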
 
...even if statistically they do a better job than the average human trained in the same field of work. Keep in mind that the best humans also outperform the average, and may indeed outperform the "AI".

However, if you're an insurance company statistician and get a statistically significant malady, you want the statistically superior clinician - even if it's an autistic machine.
 
Coming soon ... it will bill you like a doctor.
That's only a problem in a few "backwater countries". The rest of the world realizes that health care is a right, not a means of profit. But you know, single-payer health care is perfectly fine for certain politicians in those countries, just not for their constituents. :)
 
Who said they can't tell you why they made a certain decision? I see nothing about these AI system that prevent them from educating us on how or why they came to a certain decision.
That's the way these learning systems work. They are not programmed in the conventional sense, but are fed lots of data to train them. They work more like instinct than reasoning. Pattern recognition in a very limited problem space. It's kind of like asking you to explain exactly how you recognize a face or balance on two legs. You can't; you just "do it".
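A quick illustration of that point: after training, a model's "knowledge" is just arrays of learned weights with no human-readable rationale attached. Here's a minimal sketch with a toy network on synthetic data (assuming scikit-learn is available; the task itself is arbitrary):

```python
# Illustration of model opacity: train a tiny neural net, then try to ask
# it "why". All you can get back are matrices of learned weights.
# Synthetic data; the classification task is arbitrary.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # 500 fake patients, 4 fake features
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # hidden rule the model must learn

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print("accuracy:", model.score(X, y))
# Asking "why?" gets you this: opaque weight matrices, not a rationale.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {w.shape}:")
    print(np.round(w, 2))
```

The model can score well on the task, but nothing in those weight matrices maps onto a reason a human would recognize, which is exactly the trust problem being described.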
 