Imagine going to the doctor and finding out that the computer helping to diagnose you hasn’t been properly tested. Sounds scary, right? Well, some researchers at the University of North Carolina (UNC) have discovered that this might be happening with many new medical tools that use artificial intelligence (AI).
The big discovery
Sammy Chouffani El Fassi and Dr. Gail E. Henderson from UNC reviewed more than 500 artificial intelligence medical devices that the Food and Drug Administration (FDA) had authorized for use. They found that nearly half of these devices (43%) had no published clinical validation data showing how well they performed with real patients. It’s like having a new video game that hasn’t been play-tested – you wouldn’t know if it works properly or has bugs!
Why this matters: These tools are being used more and more in hospitals and clinics. They help doctors look at X-rays, analyze blood tests, and even suggest treatments. In 2016, there were only 2 of these AI-driven tools approved by the FDA. Now, there are about 69 new ones every year! That’s a big jump, and it means we need to make sure they’re all working correctly.
How do we test these tools?
The researchers talk about three main ways to check if an AI medical device works well:
- Retrospective validation (looking at old records): This is like studying history to predict the future. It’s easy to do but might not show how the AI performs in real life today.
- Prospective validation (testing in real time): This is when the AI is tested as it’s being used with patients right now. It’s more accurate but takes more time and effort.
- Randomized controlled trials: This is the gold standard of testing. It’s like a scientific experiment where some patients use the tool and others don’t, to see if it really makes a difference.
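For readers who like to see the idea in code, here is a minimal sketch of the first approach, retrospective validation: a device’s stored predictions are scored against historical patient outcomes. Everything here is hypothetical – the function name, the data, and the predictions are made up purely to illustrate the concept.

```python
def retrospective_validation(predictions, outcomes):
    """Score a device's past predictions against known historical outcomes.

    predictions, outcomes: lists of booleans (True = condition present).
    Returns sensitivity and specificity, two standard accuracy measures.
    """
    pairs = list(zip(predictions, outcomes))
    tp = sum(p and o for p, o in pairs)          # real cases the AI caught
    tn = sum(not p and not o for p, o in pairs)  # healthy patients correctly cleared
    fn = sum(not p and o for p, o in pairs)      # real cases the AI missed
    fp = sum(p and not o for p, o in pairs)      # false alarms
    sensitivity = tp / (tp + fn)  # share of actual cases detected
    specificity = tn / (tn + fp)  # share of healthy patients not flagged
    return sensitivity, specificity

# Fabricated historical records: did the AI flag the condition,
# and was the condition actually there?
preds = [True, True, False, True, False, False, True, False]
actual = [True, False, False, True, False, True, True, False]
sens, spec = retrospective_validation(preds, actual)
```

This is exactly why retrospective validation is considered the easiest method: the outcomes are already known, so no new patient data needs to be collected – but by the same token, it can’t tell you how the device behaves with today’s patients.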
Some companies are doing it right
While many tools haven’t been tested well enough, some companies are showing how it should be done. At HealthOrbit, for example, we make sure our tools are tested in real-life situations: our tools can transcribe what doctors and patients say with 90% accuracy and work with most hospital computer systems.
What HealthOrbit does differently
We have created AI that can:
- Automatically write notes from doctor-patient conversations
- Help with medical coding (that’s how hospitals bill for their services)
- Suggest diagnoses and medicines in real time
- Help nurses collect patient information before seeing the doctor
Why is this important?
As more and more AI tools are being used in healthcare, we need to make sure they’re safe and actually help patients. The study by the UNC researchers shows that we need to:
- Be more open about how these tools are tested
- Use AI tools that have been proven to work well in real hospitals and clinics
- Keep studying and improving how we test these tools
The takeaway
Just like you wouldn’t want to play a video game full of bugs, we don’t want to use medical tools that haven’t been properly tested. By making sure AI in healthcare is thoroughly checked and validated, we can make sure it helps doctors take better care of patients, rather than causing problems.
HealthOrbit’s commitment to validation shows that with the right approach, AI can be a trustworthy and helpful tool in healthcare. Careful testing like this is what ensures the AI tools used in medicine are safe, effective, and truly helpful for patients and doctors alike.