Let us first understand what medical imaging is before we delve into how deep learning and similar techniques can help medical professionals such as radiologists diagnose their patients.
This is how Wikipedia defines Medical Imaging:
Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually considered part of pathology instead of medical imaging.
In plain terms: a ton of devices are used in medicine to help doctors see what goes on inside us. This includes devices such as X-ray machines, which help visualize a wide range of tissue structures. Technologies such as CT (computed tomography) use a series of X-ray scans stitched together into a virtual slice of tissue, allowing a doctor to examine it without cutting the body open.
Because these devices are so widely used, there exists a vast number of annotated medical images, all primed and ready for data scientists (like us!) to train neural networks on.
If you have a tiny engine and a ton of fuel, you can’t even lift off. … The analogy to deep learning [one of the key processes in creating artificial intelligence] is that the rocket engine is the deep learning models and the fuel is the huge amounts of data we can feed to these algorithms. — Andrew Ng
I recently took part in the Kaggle competition Histopathologic Cancer Detection. The goal was to identify metastatic tissue in histopathologic scans of lymph node sections. My solution placed 46th out of 1157 teams. The final submission was an ensemble of my own model and a few top public kernels; without ensembling, my score was only 0.6–0.7% lower than the submitted result. I will be sharing my solution in this article, and the code will serve as the basis for my general methodology for tackling deep learning problems in medicine.
I will be breaking down my design methodology into a few easy steps:
- Know the task at hand and process your data accordingly: This is an extremely important step that many new data scientists skip during the prototyping phase. You must first understand your dataset and identify any issues that may come up during development. This includes spotting noisy data with artifacts as well as any outliers.
- Perform appropriate augmentation of images. If appropriate, consider test-time augmentation (TTA) too. Discussing image augmentation in depth is outside the scope of this article, but there exist many tutorials and guides on the topic online. It is up to the reader's discretion to pick a good resource to follow :)
- Define the model architecture. You could try already defined architectures such as any of the VGG/ResNet models, or something fancier like the NASNet model. If it is image segmentation you are after, try the U-Net. And of course, you are free to define any new architecture you would like to try out on the dataset. Just make sure you beat the state of the art ^_^
- Validate and verify your results. Once you are satisfied with what you have, publish your findings and spread the word.
And this is a general primer on how to perform medical image analysis using deep learning. Happy Coding folks!!
About me: I am a computer science student at Delhi Technological University. I am also an Intel AI ambassador.
A beginner’s guide to Deep Learning Applications in Medical Imaging was originally published in Hacker Noon on Medium.