Artificial intelligence (AI) has been the buzzword and hot topic of late. It’s in every industry from retail to medicine. In recent years, advances in computing power and Big Data have shown real promise. Tedious tasks are becoming automated, lightening the load for many professionals and freeing them to focus on the more complex parts of their work. AI can outperform humans in many areas: when it comes to sifting through vast quantities of data, AI can do it faster and more accurately… usually. As more data comes in, it turns out that many of these claims about AI are, indeed, a little too good to be true. It’s important to bear in mind that AI is still a relatively new technology, and as we embrace it, we need to do so with caution.
The idea of AI replacing doctors and specialists is a common false narrative. While AI can catch things that doctors miss, it’s also true that doctors can catch things that AI systems miss. Each has its deficits, and they are best used together rather than relying on one or the other alone. It’s true that AI is changing medicine in the realm of diagnostics and triage, and before long, we’ll see tangible benefits to our health.
Google has developed a new algorithm for analyzing mammograms, and the results are promising. Researchers found that the AI system reduced false positives by 5.7 percent and false negatives by 9.4 percent. The AI was able to do this by looking solely at the mammograms, without any of the other health data that human doctors usually use. But reader beware: just because a system works well in a simulation doesn’t mean it will work as intended in doctors’ offices. In Google’s study, the AI system performed well retrospectively, which is to say that the patients’ final diagnoses were already known. According to a co-author of the study, Christopher Kelly, “Prospective studies are the only way you find out how these things perform in the real world. That’s a different program of research that we’re now excited to be exploring.” This isn’t to say that AI is untrustworthy; rather, there is still a lot to learn about AI before the technology is widely adopted.
Artificial intelligence has great potential, and I’ve covered the potential of AI in healthcare in detail in a previous blog. The Google breast cancer study is only one of many applications for AI. Google researchers have also developed an AI system that can accurately detect some types of eye disease and recommend the correct treatment approach for over 50 of them. Algorithms are helping us diagnose polyps on the colon during colonoscopies, detect cancer, and understand protein folding to aid drug development. All of these examples present AI as a boon to medicine, but the technology comes with inherent risks.
One of many concerns is how AI will affect the attention and confidence of a human doctor. I’ve alluded to the potential blind spots of AI, but if doctors grow used to relying on AI because it’s usually right, they might miss an instance of cancer that they normally would have spotted. While this worry is hypothetical for now, it is imperative that we address such concerns before they come at the cost of human life.
Algorithms learn from data sets, and data isn’t as neutral as people think. For example, if you train an algorithm on data from one population in one country, it might not work as effectively on another population in another country. It is also known that AI systems sometimes perform less well on minority groups, so researchers need to check for algorithmic bias.
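To make the bias check concrete, here is a minimal sketch (with entirely hypothetical data and function names) of the kind of subgroup comparison researchers can run: score a model’s predictions separately for each demographic group and look for gaps in performance.

```python
# Minimal sketch of a subgroup performance check (hypothetical data).
# Comparing accuracy across groups is one basic way to surface
# algorithmic bias before a model is deployed.

def accuracy_by_group(predictions, labels, groups):
    """Return the model's accuracy within each subgroup."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical evaluation set where the model does worse on group "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(preds, labels, groups))
# → {'A': 0.75, 'B': 0.5}
```

A gap like the one above (75 percent accuracy for one group, 50 percent for another) is exactly the kind of signal that should prompt a closer look at the training data before the system reaches patients.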
Oftentimes, AI companies treat their data and algorithms as proprietary, which means the source code is never made public. If the code were open source, other scientists could build on the work, and doctors could understand how and why an AI arrived at its decision. Few technology startups publish their research in peer-reviewed journals where other scientists can examine their work. Transparency would be preferable for scientists, doctors, and patients alike.
On top of all this, these questions sit alongside privacy concerns as tech giants like Google begin to acquire medical records. It is imperative that such data be de-identified, and as startups and tech companies expand into healthcare, these concerns will stay at the forefront of our minds.
The technology has great medical potential, but until we tackle the concerns around transparency, privacy, and consumer risk, AI is unlikely to replace doctors any time soon.