Researchers Find A.I. Is Capable at Diagnosing Common Childhood Conditions

cageymaru

Researchers in the USA and China have published a paper in Nature Medicine that finds artificial intelligence (A.I.) is as capable as an experienced physician's assistant when it comes to automatically diagnosing common childhood diseases. Data from 600,000 Chinese pediatric patient records, covering an 18-month period, were analyzed to train the A.I. and validate the framework; 101.6 million data points from 1,362,559 pediatric patient visits were used. Other A.I. systems use machine learning classifiers (MLCs), which allow them to excel at image-based diagnosis, but analyzing diverse and massive electronic health record (EHR) data remains challenging for typical A.I. systems. The A.I. system developed by the researchers can automatically extract data from natural-language records, apply the hypothetico-deductive reasoning used by physicians, and unearth associations that previous statistical methods have not found.
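To give a rough feel for what a machine learning classifier over free-text records does, here is a toy bag-of-words sketch. This is purely illustrative and assumes nothing about the paper's actual model; the tiny invented notes and labels below are nothing like the 101.6 million real data points used in the study.

```python
# Toy sketch of a text-based diagnostic classifier using simple
# word-frequency overlap. All records and labels are invented for
# illustration; the paper's real system is far more sophisticated.
from collections import Counter

TRAINING = {
    "asthma": [
        "wheezing and shortness of breath worse at night",
        "dry cough with wheeze after exercise",
    ],
    "gastrointestinal": [
        "abdominal pain vomiting and diarrhea",
        "stomach cramps and loose stools for two days",
    ],
}

def train(records):
    """Count word frequencies per diagnosis label."""
    return {label: Counter(w for note in notes for w in note.split())
            for label, notes in records.items()}

def classify(model, note):
    """Score each label by how often it has seen the note's words."""
    words = note.split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in model.items()}
    return max(scores, key=scores.get)

model = train(TRAINING)
print(classify(model, "wheezing at night"))  # -> asthma
```

A real EHR system faces the hard parts this sketch skips entirely: messy free-text notes, lab values, vitals, and the hypothetico-deductive ordering of questions the paper describes.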

Because the research was done in China, it was much easier to accumulate the data. A new "American A.I. Initiative" was created by executive order to encourage federal agencies and universities to share data and create similar automated systems, but pooling health care data in America is much more complicated. The equipment systems aren't standardized in America, and getting permission to collect and use patient data can be hard, even if the data is anonymized.

When tested on unlabeled data, the software could rival the performance of experienced physicians. It was more than 90 percent accurate at diagnosing asthma; the accuracy of physicians in the study ranged from 80 to 94 percent. In diagnosing gastrointestinal disease, the system was 87 percent accurate, compared with the physicians' accuracy of 82 to 90 percent. The system was highly accurate, the researchers said, and one day may assist doctors in diagnosing complex or rare conditions.

Able to recognize patterns in data that humans could never identify on their own, neural networks can be enormously powerful in the right situation. But even experts have difficulty understanding why such networks make particular decisions and how they teach themselves. As a result, extensive testing is needed to reassure both doctors and patients that these systems are reliable.
 
Cool story, and I see a few angles with this. First, our robot overlords approve: more efficient ways of re-purposing humans are inbound. Second, I think this can lead to faster and more accurate diagnoses in the long run, because who doesn't like a 2+ hour visit for something basic? Lastly, paying full price for simple visits should become a thing of the past. Far-fetched, I know, but a man can dream.

Thanks Cagey!
 
"one day may assist doctors" "humans could never identify on their own" "extensive testing is needed to reassure"

Carefully worded to keep doctors from going on the offensive outright to block it.
 
"one day may assist doctors" "humans could never identify on their own" "extensive testing is needed to reassure"

Carefully worded to keep doctors from going on the offensive outright to block it.

Everyone needs to know that many jobs will be taken by AI in the coming 5-10 years, from truck/taxi/delivery drivers to farmers, secretaries to programmers, primary care physicians to cashiers, just to name a few.
 
Everyone needs to know that many jobs will be taken by AI in the coming 5-10 years, from truck/taxi/delivery drivers to farmers, secretaries to programmers, primary care physicians to cashiers, just to name a few.

It's going to be interesting for sure. What will happen to second opinions when your provider only has a single billion-dollar contract with IBM Watson or whatever, and it only spits out one treatment option? What happens when they decide to do a YouTube, and the first three pages of AI solutions are corporate-sponsor approved while the exact match is pushed out to page four or 4000?
 
Having an automated AI doing basic diagnoses is a good thing, IMO. In fact, the AI will probably still spend as much time as a doctor currently does looking over my chart, hahahaha.
 
These are just advanced pattern matching systems. While this is something that humans are very good at, it is hardly the basis for intelligence, so calling these systems "intelligent" is misleading, IMHO.
 
Because the research was done in China, it was much easier to accumulate the data. A new "American A.I. Initiative" was signed into law to encourage federal agencies and universities to share data and create similar automated systems, but pooling health care data in America is much more complicated. The equipment systems aren't standardized in America and getting permission to collect and use patient data can be hard; even if it is anonymized.

This is an overgeneralization. I am not sure what equipment standardization has to do with it. As for getting records, that actually is not that difficult, depending on how it is done; there are already programs for the DoD for this. The US is just a bit more methodical and not as carefree about information and findings as the Chinese are.
 
This is an overgeneralization. I am not sure what equipment standardization has to do with it. As for getting records, that actually is not that difficult, depending on how it is done; there are already programs for the DoD for this. The US is just a bit more methodical and not as carefree about information and findings as the Chinese are.
Well, HIPAA laws and the fact that every hospital system runs custom software make it a bit harder to create an AI that can easily read patient records. I think it is awesome that they are training A.I. to read unlabeled documents and figure out what the doctor meant, but it would be much nicer if the data were already stored in a standardized way.

In short, if China standardizes its data storage in the future so A.I. can easily read its documents, and we continue on the custom software / record-keeping path, Chinese A.I. programmers are going to have it easier than us.
 
Regardless of the privacy concerns, if a computer program can aid in the detection of ailments from medical scans to supplement a doctor, it can only be a good thing. I've had two family members misdiagnosed by separate PCPs who failed to identify obvious ailments on X-rays that were blatantly apparent to the naked eye; the specialists were dumbfounded at how they could have been missed. If there were an automatic computer-driven assessment report, perhaps the PCPs wouldn't miss so many obvious issues.

As far as privacy goes, models shouldn't need any specific personal information, just keep the data anonymous.
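In that spirit, one common technique is pseudonymization: replacing direct identifiers with a keyed hash before records ever reach a training pipeline. This is just a sketch of the general idea, not anything from the paper; the field names and salt value below are invented for illustration.

```python
# Minimal pseudonymization sketch: replace a patient identifier with a
# salted (keyed) hash before the record enters a training pipeline.
# The key must be kept secret so tokens can't be reversed by hashing
# known IDs; field names here are invented for illustration.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"

def pseudonymize(patient_id: str) -> str:
    """Deterministic, non-reversible token for a patient ID."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-0012345", "note": "wheezing at night"}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record["patient_id"])  # same real ID always maps to the same token
```

Determinism matters here: the same patient hashes to the same token, so a model can still link visits over time without ever seeing the real identifier.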
 
It's going to be interesting for sure. What will happen to second opinions when your provider only has a single billion-dollar contract with IBM Watson or whatever, and it only spits out one treatment option? What happens when they decide to do a YouTube, and the first three pages of AI solutions are corporate-sponsor approved while the exact match is pushed out to page four or 4000?

One way of looking at it is a neural network has already considered multiple opinions and picked the most likely out of all of them. :p Assuming no bias is placed on it afterwards to lean it more toward certain medicines or treatments based on outside agreements.
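That "multiple opinions inside one model" idea is easy to picture: most classifiers produce a score per diagnosis, which can be turned into a ranked probability list instead of a single answer. A hedged sketch (the diagnoses and raw scores below are made up for illustration):

```python
# Sketch: turning a model's raw per-diagnosis scores into a ranked
# probability list via softmax, so the "runner-up opinions" stay
# visible instead of being collapsed to one answer. Scores invented.
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

raw_scores = {"asthma": 2.1, "bronchitis": 1.4, "pneumonia": 0.3}
probs = softmax(raw_scores)
for diagnosis, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{diagnosis}: {p:.2f}")
```

A system that surfaces the full ranked list at least leaves the "second opinion" on the screen, whatever contracts sit behind it.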
 
Well, HIPAA laws and the fact that every hospital system runs custom software make it a bit harder to create an AI that can easily read patient records. I think it is awesome that they are training A.I. to read unlabeled documents and figure out what the doctor meant, but it would be much nicer if the data were already stored in a standardized way.

In short, if China standardizes its data storage in the future so A.I. can easily read its documents, and we continue on the custom software / record-keeping path, Chinese A.I. programmers are going to have it easier than us.

So, to point out where this is not exactly true: the military has standardized methods for maintaining patient records. HIPAA guidelines mostly govern human access to information, not machine access to it. Using ML and AI to parse patient records on a system is far easier than designing a patient search and archive system for human consumption. Also, realize that the Chinese program used anonymized data; that is actually an easier way to do things here as well. There are a lot of HIPAA regulations, but they aren't as binding for this kind of system, considering most of it is inaccessible to humans, and even where it is accessible, only to a very select few.

This is one of those areas, where R&D in the military can benefit the rest of society.
 