Sen Yang

Why is the winner the best?

By Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu D. Tizabi, Fabian Isensee, Tim J. Adler, Sharib Ali, Vincent Andrearczyk, Marc Aubreville, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Veronika Cheplygina, Marie Daum, Marleen de Bruijne, Adrien Depeursinge, Reuben Dorent, Jan Egger, David G. Ellis, Sandy Engelhardt, Melanie Ganz, Noha Ghatwary, Gabriel Girard, Patrick Godau, Anubha Gupta, Lasse Hansen, Kanako Harada, Mattias P. Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Pierre Jannin, Ali Emre Kavur, Oldřich Kodym, Michal Kozubek, Jianning Li, Hongwei Li, Jun Ma, Carlos Martín-Isla, Bjoern Menze, Alison Noble, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Tim Rädsch, Jonathan Rafael-Patiño, Vivek Singh Bawa, Stefanie Speidel, Carole H. Sudre, Kimberlin van Wijnen, Martin Wagner, Donglai Wei, Amine Yamlahi, Moi Hoon Yap, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Dogu Baran Aydogan, Binod Bhattarai, Louise Bloch, Raphael Brüngel, Jihoon Cho, Chanyeol Choi, Qi Dou, Ivan Ezhov, Christoph M. Friedrich, Clifton D. Fuller, Rebati Raman Gaire, Adrian Galdran, Álvaro García Faura, Maria Grammatikopoulou, SeulGi Hong, Mostafa Jahanifar, Ikbeom Jang, Abdolrahim Kadkhodamohammadi, Inha Kang, Florian Kofler, Satoshi Kondo, Hugo Kuijf, Mingxing Li, Minh Luu, Tomaž Martinčič, Pedro Morais, Mohamed A. Naser, Bruno Oliveira, David Owen, Subeen Pang, Jinah Park, Sung-Hong Park, Szymon Plotka, Élodie Puybareau, Nasir Rajpoot, Kanghyun Ryu, Numan Saeed, Adam Shephard, Pengcheng Shi, Dejan Štepec, Ronast Subedi, Guillaume Tochon, Helena R. Torres, Helene Urien, João L. Vilaça, Kareem A. Wahid, Haojie Wang, Jiacheng Wang, Liansheng Wang, Xiyue Wang, Benedikt Wiestler, Marek Wodzinski, Fangfang Xia, Juanying Xie, Zhiwei Xiong, Sen Yang, Yanwu Yang, Zixuan Zhao, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein

2023-02-27

In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Abstract

International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their ranks as well as the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: the reflection of the metrics in the method design and a focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.


PAIP 2019: Liver cancer segmentation challenge

Abstract

The Pathology Artificial Intelligence Platform (PAIP) is a free research platform supporting the development of artificial intelligence (AI) in pathology. The main goal of the platform is to construct a high-quality, broadly accessible pathology learning dataset. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), was the first image analysis challenge to use PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for the automated detection of liver cancer in whole-slide images (WSIs). In addition, this year's PAIP attempted to address potential future problems regarding the applicability of AI in clinical settings. Participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms on two tasks: Task 1, liver cancer segmentation, and Task 2, viable tumor burden estimation. Performance on the two tasks was strongly correlated: teams that performed well on Task 1 also performed well on Task 2. After the evaluation, we summarized the algorithms of the top 11 teams and discussed the pathological implications of the images that were easy to predict for cancer segmentation and of those that were challenging for viable tumor burden estimation. Of the 231 participants who registered for the PAIP challenge datasets, 28 teams made a total of 64 submissions. The submitted algorithms automatically segmented liver cancer in WSIs, achieving a score of 0.78. The PAIP challenge was created to address the lack of research on liver cancer in digital pathology. It remains unclear how the AI algorithms developed during the challenge will affect clinical diagnosis; however, the dataset and evaluation metrics provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation methods.
