A few remarks on the evaluation runs.
1) Runs are being processed even more slowly these days. This is due to a trade-off between runtime and accuracy for the probing evaluation; the prior setting was simply not accurate enough and thus potentially misleading for your algorithm design. We understand that slower evaluation is not great, but a faster evaluation setup might be coming soon.
For now, we have revised the setting again such that runtime improves a bit while accuracy should still be OK. Still, treat the probing evaluation results with caution as of now; the exposure numbers are all underestimated.
2) There was some confusion regarding the handling of exploitable regions w/ <20 continuous sites. After the start of the final round, we had adopted an evaluation strategy to iteratively search also for regions w/ <20 sites. We have reverted this now; the threshold of 20 sites applies again. (The prior related processing errors remain fixed nevertheless.)
Accordingly, the overall scoring is revised as follows:
overall = sec * des = ((ti + fsp_fi) / 2) * des
This way, any submission where the regions are fixed altogether won't end up with a trivial overall score of 0; see the sketch below. Note that, should we see submissions where the frontside probing and fault injection metrics are also optimized down to zero, we would have to revise the scoring further, e.g., to put more emphasis on design cost.
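For reference, here is a minimal sketch of how the revised overall score would be computed, assuming ti, fsp_fi, and des are the per-submission metric values reported by the evaluation (names as in the formula above; the helper functions below are hypothetical and not part of the official evaluation scripts):

```python
# Hypothetical sketch of the revised scoring; not the official evaluation code.
# ti and fsp_fi are the two security metrics from the formula above,
# des is the design-cost metric.

def security_score(ti: float, fsp_fi: float) -> float:
    """Combined security score: average of the ti and fsp_fi metrics."""
    return (ti + fsp_fi) / 2.0

def overall_score(ti: float, fsp_fi: float, des: float) -> float:
    """Overall score: sec * des = ((ti + fsp_fi) / 2) * des."""
    return security_score(ti, fsp_fi) * des

# Example: even with ti = 0 (all exploitable regions fixed), the overall
# score is not trivially 0 as long as fsp_fi is non-zero.
print(overall_score(ti=0.0, fsp_fi=0.4, des=1.2))  # -> 0.24
```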
3) As indicated, there are still issues w/ the sites of exploitable regions in general, and we'll look further into that. This means that, as of now, a ti=0 rating you achieve might still change/increase later on.
4) Given the recent change in the scoring, as well as some evaluation fixes that are still pending (those outlined above and others), we will re-run the final alpha-round scoring once everything is settled.
5) We're also considering providing intermediate and anonymized scores again for the final round. Let us know in case you prefer your team's score not to be published anonymously for the final round.