To evaluate submissions, our scoring system will compare each submitted catalogue to the ground-truth data embedded in the simulations.
The score, similar to that adopted for SDC1 (see Bonaldi et al. 2021 for details), will depend on the completeness and reliability of the catalogue and on the accuracy of the reported source properties.
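As a rough illustration of how these three quantities might combine, the sketch below computes completeness, reliability, and an accuracy-weighted total from a pre-computed cross-match between the submitted and true catalogues. It is a minimal sketch under simplifying assumptions: the flux-only accuracy term, the cap at one point per matched source, and the unit penalty per false detection are placeholders, not the official SDC scoring code.

```python
def score_submission(submitted, truth, matches):
    """Hypothetical SDC1-style score from a pre-computed cross-match.

    `submitted` and `truth` are lists of source records (dicts);
    `matches` pairs indices (i_sub, i_true) of cross-matched sources.
    The accuracy term and penalties are illustrative assumptions.
    """
    n_false = len(submitted) - len(matches)        # unmatched submissions count as false detections
    completeness = len(matches) / len(truth)       # fraction of true sources recovered
    reliability = len(matches) / len(submitted)    # fraction of submitted sources that are real

    per_source = []
    for i_sub, i_true in matches:
        # Fractional flux agreement stands in for the full set of
        # property accuracies (position, flux, size, ...), capped at 1.
        rel_err = abs(submitted[i_sub]["flux"] - truth[i_true]["flux"]) / truth[i_true]["flux"]
        per_source.append(max(0.0, 1.0 - rel_err))

    # Accuracy-weighted matches, penalised by one point per false detection.
    total = sum(per_source) - n_false
    return completeness, reliability, total


# Example: one correct match, one spurious detection.
truth = [{"flux": 1.0}, {"flux": 2.0}]
submitted = [{"flux": 1.1}, {"flux": 5.0}]
matches = [(0, 0)]
print(score_submission(submitted, truth, matches))  # (0.5, 0.5, -0.1)
```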
To be scored correctly, submitted catalogues must strictly abide by the adopted format. A scoring service will be available for the duration of the challenge so that teams can check their score and iteratively improve their results.
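Since a malformed catalogue cannot be scored, a submission-side sanity check of the column layout before each upload may be worthwhile. The sketch below is purely illustrative: the required column names are hypothetical placeholders, and the definitive format is the one specified in the challenge documentation.

```python
# Hypothetical pre-submission format check; the column names below
# are placeholders, not the officially adopted catalogue format.
REQUIRED_COLUMNS = ["id", "ra", "dec", "flux", "size"]  # assumed, not official

def check_catalogue(path):
    """Raise if the catalogue header is missing any required column."""
    with open(path) as f:
        header = f.readline().split()
    missing = [col for col in REQUIRED_COLUMNS if col not in header]
    if missing:
        raise ValueError(f"catalogue is missing required columns: {missing}")
```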
The final score will be that of each team's most recent submission at the close of the challenge.