Rules:
- All participants must form teams (even if the team is composed of a single participant), and each participant can only be a member of a single team.
- Any individual participating with multiple or duplicate Grand Challenge profiles will be disqualified.
- Anonymous participation is not allowed. To qualify for ranking on the validation/testing leaderboards, all participants must accurately display their real names and affiliations [university, institute or company (if any), and country] on verified Grand Challenge profiles.
- This challenge only supports the submission of fully automated methods in Docker containers. It is not possible to submit semi-automated or interactive methods.
- Algorithms will be evaluated in the cloud on AWS instances and must conform to the runtime, compute, and memory limits specified on the evaluation procedure page (a minimal, illustrative container entrypoint sketch is included after this list).
- To be eligible to win the challenge, participants must submit a short paper (2-3 pages) on their methodology, including the architecture type, model size, loss functions, and hyperparameters used. We take these measures to ensure the reproducibility of all proposed solutions and to promote open-source AI development.
- Participants competing in the challenge can use pre-trained AI models based on computer vision and/or medical imaging datasets. They can also use external datasets to train their AI algorithms. However, they must clearly state the use of external data on their algorithm page and in their short methodology paper with either a supporting publication or a brief description of the additional data.
- The best-performing algorithms on the leaderboard (from challenge participants only, not non-participants) will be made publicly available as Grand Challenge Algorithms once the challenge has officially concluded. This allows Grand Challenge users to apply these algorithms to novel data. Note that users will not be able to inspect the container, the code detailing the training procedure, or the network architecture; we encourage the algorithms' authors to disclose these in a publication.
- Participants affiliated with the organizing institution 'Radboudumc' are not eligible to win the challenge.
- The organizers of the challenge reserve the right to disqualify any participant or participating team, at any point in time, on grounds of unfair or dishonest practices.
- All participants have the right to drop out of the ULS23 challenge and forego any further participation. However, they will not be able to retract their prior submissions or any results published up to that point in time.
- Participants of the ULS23 challenge, as well as all non-participating researchers using the training data or test data for benchmarking purposes, may publish their own results separately at any time. When doing so, they are requested to cite the paper detailing the challenge and its dataset: (will be released soon).
- The results of the challenge will be announced in April 2024. The top 3 teams with an algorithm that beats the baseline model will be announced, in their respective positions, as the winners of the ULS23 challenge. Up to 3 authors from each of these winning teams will be invited to be co-authors on the challenge evaluation paper.
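For illustration, the sketch below shows the general shape of a fully automated container entrypoint, assuming the common Grand Challenge convention of reading inputs from /input and writing results to /output. The exact paths, file formats, and interface for ULS23 are defined on the evaluation procedure page; the directory names, the *.mha file pattern, and the predict placeholder here are assumptions, not the official specification.

```python
# Minimal sketch of a fully automated container entrypoint.
# Assumptions (not the official ULS23 interface): inputs are mounted at
# /input, outputs are collected from /output, and images arrive as *.mha.
from pathlib import Path
import json

INPUT_DIR = Path("/input")    # assumed read-only mount provided by the platform
OUTPUT_DIR = Path("/output")  # assumed location where results are collected


def predict(image_path: Path) -> dict:
    # Placeholder for the actual inference step; a real submission would
    # load a trained model and segment the lesion here.
    return {"input": image_path.name, "segmentation": None}


def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    # Process every input without any user interaction, since only fully
    # automated methods are accepted.
    results = [predict(p) for p in sorted(INPUT_DIR.glob("*.mha"))]
    (OUTPUT_DIR / "results.json").write_text(json.dumps(results))


if __name__ == "__main__":
    main()
```

A real submission would replace the predict placeholder with actual model inference and keep total runtime and memory within the limits on the evaluation procedure page.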
Notes:
We encourage challenge participants to contribute their relevant data to this project. This can be done by contacting the challenge organizers or by using the challenge forum. Since the winning solutions of the challenge will be made public, sharing data will benefit the entire research community.
Researchers and companies who are interested only in benchmarking their AI models or products, and not in competing in the challenge, can freely use private or unpublished external datasets to train their AI algorithms. They must clearly state their non-participant status in their submissions, e.g. ["ULS Model from Company-X (non-participant)"].