
Automatically grading the severity of prostate cancer in scanned and digitized biopsies

The machine learning community Kaggle invites machine learning talent from around the world to tackle competitions posed by various companies and organizations. This time, two of our AI experts, Mikko Tukiainen and Joni Juvonen, are participating in a challenge that aims to improve diagnostics with computer vision. Together with a third machine learning expert, Antti Karlsson, their team rähmä.ai is now working to automatically grade the severity of prostate cancer.

The competition, organized by the Dutch Radboud University Medical Center and the Swedish Karolinska Institute, focuses on leveraging computer vision to automatically grade the severity of cancer in prostate tissue samples. Precise diagnostics are key in curing prostate cancer, which is the second most common cancer among men.

Computer vision is already making diagnostics more precise, but there’s still work to do. How familiar are you with using computer vision in this type of healthcare context?

Joni: I have built computer vision algorithms for reading point-of-care test results. Still, it’s entirely different to quantify a tracer signal from a test strip than to interpret biological tissue. However, we have also participated in other diagnostics competitions, such as the diabetic retinopathy and pneumothorax chest X-ray detection competitions, which had some similarities in terms of understanding the treatment and diagnostics process.

Mikko: In my previous job we studied the capabilities of computer vision in cancer diagnostics, recognizing different tumor types and even mutations in digitized tissue slide images. In addition, I have had several freelance projects aimed at automating the diagnostics and grading of various eye diseases. The Kaggle competitions that Joni mentioned above would also count as experience in the field.

What are some of the key learnings you’ve had so far in using computer vision to grade the severity of cancer?

Mikko: Although the competition is still ongoing, it already looks like we’re able to grade prostate cancer biopsies with machine learning, based on the current scores on the competition’s leaderboard. This board lists all the participating teams and ranks them according to their results. So far our own success has mainly been built on good data preprocessing.

At the beginning of the project, we had a call with a pathologist from Turku University Hospital (Tyks), who briefly explained the grading. This discussion gave us a lot of insight into what features a human pathologist looks for when grading the severity of a prostate cancer biopsy.

Understandably you can’t reveal your solution yet, but could you describe some of the methods you’re using?

Joni: We are still tweaking the architecture and trying different approaches for handling such giant tissue slides. The method that has worked decently so far is processing the slide in smaller tiles through a convolutional neural network (CNN) feature extractor and pooling the outputs from all tiles into a classification module, as sketched below.
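
To make the tile-pooling idea concrete, here is a minimal PyTorch sketch. It is only an illustration under our own assumptions (a ResNet-34 backbone, simple average pooling over tiles, and six ISUP grade classes); the team’s actual architecture is not public.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TilePoolingClassifier(nn.Module):
    """Encode each tile with a shared CNN, pool the tile features, then classify.

    Hypothetical sketch: the backbone, pooling scheme and head used by
    the team are assumptions, not the competition solution.
    """
    def __init__(self, num_classes: int = 6):
        super().__init__()
        backbone = models.resnet34(weights=None)                    # shared feature extractor
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
        self.head = nn.Linear(512, num_classes)                     # e.g. ISUP grades 0–5

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (batch, n_tiles, 3, height, width)
        b, n, c, h, w = tiles.shape
        feats = self.encoder(tiles.view(b * n, c, h, w))   # (b * n, 512, 1, 1)
        feats = feats.view(b, n, -1).mean(dim=1)           # average-pool over the tiles
        return self.head(feats)

# Example: a batch of 2 slides, each represented by 16 tiles of 128x128 pixels.
logits = TilePoolingClassifier()(torch.randn(2, 16, 3, 128, 128))  # shape (2, 6)
```

Because the encoder is shared across tiles and the pooling is order-invariant, the same model can handle slides with different numbers of tiles.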

Mikko: We both have some previously acquired domain knowledge of digital pathology. There are tricks, such as color deconvolution, that we can use to balance the color variations in the slides. These variations are typically caused by the different protocols and scanners that different laboratories might have for staining and scanning the biopsies. For a computer vision model, however, these variations might be confusing, which is why we try to eliminate them.
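
Color deconvolution unmixes an RGB image into per-stain channels, which can then be balanced before the image is fed to the model. Below is a small sketch using scikit-image’s built-in H&E-DAB deconvolution; the file name is a placeholder, and this is only one of several possible ways to approach stain handling.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed, hed2rgb

# Placeholder file name; any RGB tile from an H&E-stained slide will do.
tile = io.imread("biopsy_tile.png")[..., :3]

# Color deconvolution: unmix RGB into Haematoxylin, Eosin and DAB channels.
hed = rgb2hed(tile)

# For inspection, reconstruct an image containing only the haematoxylin
# stain by zeroing out the other two channels.
null = np.zeros_like(hed[..., 0])
haematoxylin_only = hed2rgb(np.stack((hed[..., 0], null, null), axis=-1))
```

Rescaling the separated stain channels to a common range before converting back to RGB is one simple way to reduce lab-to-lab color differences.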

Many machine learning challenges in healthcare involve a lot of data preprocessing. How do the particularities of the digitized biopsy data impact the training of the model?

Mikko: The (cancer) biopsy tissue slides, once scanned into digitized images, are usually very large, ranging from hundreds to thousands of megapixels. This is problematic when training the machine learning model. We have scripted different sampling methods to optimally extract smaller subimages from the tissue image. In addition, we have designed the machine learning architecture so that it can ingest a stack of these subimages, effectively covering the whole tissue slide for the analysis.
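
A simple form of such sampling is to cut the slide into fixed-size tiles and keep the ones that contain the most tissue. The sketch below shows this on a plain NumPy array; it is our own simplified illustration (real whole-slide images would typically be read at a chosen resolution level with a library such as OpenSlide), not the team’s actual script.

```python
import numpy as np

def top_tissue_tiles(slide: np.ndarray, tile_size: int = 256, n_tiles: int = 36) -> np.ndarray:
    """Cut a (H, W, 3) slide image into tiles and keep the n_tiles tiles
    that contain the most tissue (i.e. the least white background)."""
    h, w, _ = slide.shape

    # Pad with white so the image splits evenly into tiles.
    pad_h = (tile_size - h % tile_size) % tile_size
    pad_w = (tile_size - w % tile_size) % tile_size
    slide = np.pad(slide, ((0, pad_h), (0, pad_w), (0, 0)), constant_values=255)

    # Reshape into a flat list of (tile_size, tile_size, 3) tiles.
    tiles = slide.reshape(
        slide.shape[0] // tile_size, tile_size,
        slide.shape[1] // tile_size, tile_size, 3,
    ).transpose(0, 2, 1, 3, 4).reshape(-1, tile_size, tile_size, 3)

    # White background has high pixel values, so the darkest tiles hold the most tissue.
    scores = tiles.reshape(tiles.shape[0], -1).sum(axis=1)
    keep = np.argsort(scores)[:n_tiles]
    return tiles[keep]
```

The selected stack of tiles can then be passed as one input to a tile-pooling model like the one sketched earlier.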

The large size of the images also complicates annotating, in other words, giving a label or a description to the data. Pathologists simply can’t thoroughly annotate the data. In addition, the opinions of two pathologists can differ, which can lead to noisy or unclear labels. As mentioned before, there are variations in the color distributions between the different diagnostic laboratories, caused, for example, by different staining methods and scanners. If not properly taken into account when training the model, such discrepancies can lead to models that don’t generalize between the different laboratories and hospitals. Lastly, there may not always be enough data for meaningful training, since collecting sensitive clinical data is, understandably, strictly regulated.

Joni: I have trained models for detecting breast cancer from lymph node biopsy data in a previous Kaggle competition and for my master’s thesis. In my experience, CNN models tend to recognize cancer at least as well as trained pathologists. As Mikko mentioned, stain appearance can vary across scanners and preparation procedures, so it’s essential to make the solution robust against this variation through data augmentation or normalization.
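
Such augmentation can be as simple as randomly jittering the color properties of each training tile, so the model never sees exactly one laboratory’s color profile. A minimal torchvision example follows; the jitter strengths and the specific transforms are illustrative assumptions, not the team’s actual pipeline.

```python
from torchvision import transforms

# Randomly perturb brightness, contrast, saturation and hue of each training tile
# so the model does not overfit to one laboratory's staining and scanner profile.
# The jitter strengths below are illustrative, not values used by the team.
stain_augmentation = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.15, hue=0.05),
    transforms.ToTensor(),
])
```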

Let’s talk about teamwork in your team rähmä.ai. How have you divided the work between the three of you?

Mikko and Joni: We have split the roles in the team very efficiently: some of us are focused on data preparation, whereas others are more dedicated to model development and training. Because of the ongoing Covid-19 outbreak, we haven’t been able to have any face-to-face meetings. Instead, all our communication and brainstorming has happened in the instant messaging tool Slack and in occasional Hangouts meetings. Our code is shared on GitHub. Even so, it’s still been a blast to work together and keep our eyes on the big prize – not the money or Kaggle glory, but finding new ways to defeat this cancer!


Interested in knowing more about our smart health solutions? Get in touch with our VP of Business Development, Pertti Hannelin, at pertti.hannelin@silo.ai.


We are always interested in talking with curious computer vision experts like Joni and Mikko! Check out our open AI Scientist positions and apply directly at https://silo.ai/careers.

Author: Pauliina Alanen, Former Head of Brand, Silo AI
