The Universal Lesion Segmentation '23 Challenge


🎯 Clinical Relevancy

In recent years, the number of CT exams conducted annually has continued to increase [1], resulting in higher workloads for radiologists [2]. With a predicted 47% rise in the global cancer burden by 2040 compared to 2020 [3], oncological radiology can be expected to be a major contributor to future workload increases, especially since patients with cancer often undergo multiple imaging exams over an extended period of time to track disease progression.

The quantification of disease progression and treatment response in longitudinal CT scans often relies on manual long- or short-axis measurements of lesions. Typically, these measurements are interpreted using the Response Evaluation Criteria In Solid Tumors (RECIST) guidelines [4], which were developed to standardize and speed up this process. The guidelines limit the number of lesions that need to be measured to a maximum of five "target lesions" across multiple organs or structures, from which the overall response is estimated.
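To make the response estimation concrete, the sketch below classifies the RECIST 1.1 response category from sums of target-lesion diameters. It is illustrative only: the function name is our own, and a real RECIST evaluation also accounts for non-target lesions and the appearance of new lesions, which are omitted here.

```python
def recist_response(baseline_sum_mm: float, nadir_sum_mm: float,
                    current_sum_mm: float) -> str:
    """Classify response from sums of target-lesion diameters (RECIST 1.1).

    Simplified sketch: ignores non-target lesions and new lesions.
    """
    if current_sum_mm == 0:
        return "CR"  # complete response: all target lesions disappeared
    # progressive disease: >=20% increase relative to the nadir (smallest
    # sum on study) AND an absolute increase of at least 5 mm
    if (current_sum_mm - nadir_sum_mm) >= 5 and \
            current_sum_mm >= 1.2 * nadir_sum_mm:
        return "PD"
    # partial response: >=30% decrease relative to the baseline sum
    if current_sum_mm <= 0.7 * baseline_sum_mm:
        return "PR"
    return "SD"  # stable disease


print(recist_response(100.0, 80.0, 65.0))   # 35% below baseline
```

For example, a drop from a baseline sum of 100 mm to 65 mm is a ≥30% decrease and therefore a partial response.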

To reduce the time burden of annotating lesions in oncological scans, automatic segmentation models can extract information with limited guidance from a radiologist. Guidance can consist of a single click inside the lesion by a radiologist or a bounding-box prediction from a detection model. Segmenting the lesion in 3D provides additional information that can be leveraged to calculate more informative measures, such as lesion volume or other lesion characteristics. Registration algorithms can also be used to propagate segmented lesions [5], enabling significant time savings during follow-up exams.
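As a minimal illustration of the volume measure mentioned above, the helper below converts a binary 3D lesion mask plus the voxel spacing from the CT header into a volume in millilitres. The function name and argument layout are our own assumptions, not part of the challenge code.

```python
import numpy as np


def lesion_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary 3D lesion mask in millilitres.

    `spacing_mm` is the (z, y, x) voxel spacing in mm, as typically
    read from the CT header; 1 ml = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0


# Toy example: a 4x4x4-voxel lesion at 1 mm isotropic spacing
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1
print(lesion_volume_ml(mask, (1.0, 1.0, 1.0)))  # 0.064 ml
```

Note that anisotropic spacing (e.g. 5 mm slice thickness) changes the result proportionally, which is why the spacing must always come from the image header rather than being assumed isotropic.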

🖥️  The ULS23 Challenge

Significant advancements have been made in AI-based automatic segmentation models for tumours. Medical challenges focusing on e.g. liver, kidney, or lung tumours have resulted in large performance improvements for segmenting these types of lesions. However, in clinical practice there is a need for versatile and robust models capable of quickly segmenting the many possible lesion types in the thorax-abdomen area. Developing a Universal Lesion Segmentation (ULS) model that can handle this diversity of lesion types requires a well-curated and varied dataset. Whilst there has been previous work on ULS [6-8], most research in this field has made extensive use of a single partially annotated dataset [9], containing only the long- and short-axis diameters on a single axial slice. Furthermore, the test set with 3D segmentation masks used for evaluation on this dataset by previous publications is not publicly available.

For these reasons, we are excited to host the ULS23 Challenge, which serves as a robust benchmark for ULS models. We are releasing novel, fully annotated 3D training data and combining already existing sources of data to reduce the barrier to entry for training ULS models. For evaluation, we provide a varied, multi-centre test set in a type II challenge format. We also release baseline ULS models and make these publicly available on Grand Challenge.

You can cite the ULS23 challenge, its dataset, or the baseline model using:

M. J. J. de Grauw, E. T. Scholten, E. J. Smit, M. J. C. M. Rutten, M. Prokop, B. van Ginneken, A. Hering, The ULS23 challenge: a baseline model and benchmark dataset for 3D universal lesion segmentation in computed tomography (2024). arXiv:2406.05231.

👨‍⚕️👩‍⚕️  Partners and Organizers

Organizers: Max de Grauw, Bram van Ginneken & Alessa Hering of the Diagnostic Image Analysis Group

Partners: Tejas Mathai, Pritam Mukherjee & Ronald Summers of the National Institutes of Health

Annotation team: Dr. Ernst Scholten, Dr. ir. Ewoud Smit, Pit van Halbeek, Suze Loomans, Romy van den Akker, Pieter Drijver, Noortje van Kempen, Eva Boldrini, Temke Kohlbrandt
Data providers: Radboudumc & Jeroen Bosch Ziekenhuis
Compute sponsored by:

📖  References

  1. Boland, G. W., Guimaraes, A. S., & Mueller, P. R. (2009). The radiologist’s conundrum: benefits and costs of increasing CT capacity and utilization. European radiology, 19, 9-11.
  2. McDonald, R. J., Schwartz, K. M., Eckel, L. J., et al. (2015). The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Academic radiology, 22(9), 1191-1198.
  3. Sung, H., Ferlay, J., Siegel, R. L., et al. (2021). Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: a cancer journal for clinicians, 71(3), 209-249.
  4. Eisenhauer, E. A., Therasse, P., Bogaerts, J., et al. (2009). New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). European journal of cancer, 45(2), 228-247.
  5. Hering, A., Peisen, F., Amaral, T., et al. (2021, August). Whole-body soft-tissue lesion tracking and segmentation in longitudinal CT imaging studies. In Medical Imaging with Deep Learning (pp. 312-326). PMLR.
  6. Cai, J., Tang, Y., Lu, L., et al. (2018). Accurate weakly-supervised deep lesion segmentation using large-scale clinical annotations: Slice-propagated 3D mask generation from 2D RECIST. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part IV 11 (pp. 396-404). Springer International Publishing.
  7. Tang, Y., Yan, K., Xiao, J., et al. (2020). One click lesion RECIST measurement and segmentation on CT scans. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part IV 23 (pp. 573-583). Springer International Publishing.
  8. Tang, Y., Yan, K., Cai, J., et al. (2021). Lesion segmentation and RECIST diameter prediction via click-driven attention and dual-path connection. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part II 24 (pp. 341-351). Springer International Publishing.
  9. Yan, K., Wang, X., Lu, L., et al. (2018). DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. Journal of medical imaging, 5(3), 036501.