Update on evaluation runs

We hope this finds you all well! Please see the following remarks, and let us know your feedback on points 3) and 4). Thanks!

1) The constraint checks that had been pending for some time — PDN size/track coverage and pin placement — are now live. See Constraints for a reminder on the details if needed. In case your designs do not meet these constraints, you will see related notes in the errors.rpt files going forward.

2) We still have some issues pending for the evaluation scripts. As mentioned before, while the scripts work in principle, they do underestimate exploitable regions as well as the exposure of cells and nets. This means that you are *not* at a disadvantage here, and you should obtain good scores as long as you follow the security metrics and related objectives in general.

We will continue looking into this but unfortunately cannot promise to resolve it in time, as the team member who programmed those scripts is unavailable these days.

3) We would now be able to release the evaluation scripts. This would allow you to study the evaluation in more detail and also to run it locally as desired.

However, consider that a) we are nearing the end of the contest, b) the scripts are specific to Cadence Innovus, and c) the scripts would be provided only as is, without comments, documentation, or further support from our end. So, we understand that such a release is double-edged — probably only those skilled in Cadence Innovus scripting would benefit, if at all, and others might be at a loss here.

Thus, we would need unanimous approval from all participants of the final round for such a release. Please let us know, via email, whether your team would like the scripts to be released or not. Again, we feel it would only be fair to release the scripts if all final-round teams agree, and we hope this approach makes sense to you as well.

4) Please also let us know, via email, whether your team would like your intermediate scores for the final round to be released in anonymized form. To keep this fair, only those who agree would in turn be allowed to retrieve the others’ scores. (Thus, only if all agree would we update this on the website; if some disagree, we would share the scores only via email among those who agree.)

5) With the last pending constraints now live, and possibly some more fixes incoming for the evaluation, we will be re-running the evaluation of all final-round submissions soon, probably at the start of the coming week. There is nothing you need to do for that, and there will not be any impact/delay on the other ongoing runs, as we are preparing another backend server for these re-evaluation runs. (That server will thereafter remain online to work in parallel with the current ones, so that more evaluation bandwidth becomes available for the final stretch.)

Update on evaluation runs

A few remarks on the evaluation runs.

1) Runs are processing even more slowly these days. This is because of a trade-off between runtime and accuracy for the probing evaluation; the prior setting was simply not accurate enough and thus potentially misleading for your algorithm design. We understand that slower evaluation is not great, but there might be some faster evaluation coming soon.

For now, we have revised the setting again such that runtime becomes a bit better while accuracy should still be OK. Still, treat the probing evaluation results with caution for now — the exposure numbers are all underestimated.

2) There was some confusion regarding the handling of exploitable regions with fewer than 20 continuous sites. After the start of the final round, we had adopted an evaluation strategy that iteratively searches also for regions with fewer than 20 sites. We have reverted this now — the threshold of 20 sites applies again. (The prior related processing errors are fixed nevertheless.)

Accordingly, the overall scoring is revised as follows:
sec * des = (ti + fsp_fi)/2 * des

This way, any submission where the exploitable regions are fixed altogether will not have a trivial overall score of 0. Note that, the moment we see some submissions where the frontside probing and fault injection metrics are also optimized down to zero, we would have to revise the scoring further, e.g., to put more emphasis on design cost. (A small sketch of the site threshold and the revised scoring is given right below.)
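For illustration only, here is a minimal sketch in Python of the two aspects of this point: the 20-site threshold for exploitable regions and the revised overall scoring. The function names, the assumption that lower metric values are better, and the example numbers are our own illustrative assumptions; this is not the actual evaluation code.

```python
# Minimal sketch (Python), under illustrative assumptions: lower metric
# values are taken as better, and all names and numbers below are made up.

def count_exploitable_regions(site_runs, threshold=20):
    """Count runs of continuous exploitable sites; runs shorter than the
    20-site threshold are ignored again, per the reverted strategy."""
    return sum(1 for run_length in site_runs if run_length >= threshold)

def overall_score(ti, fsp_fi, des):
    """Overall score = sec * des, with sec = (ti + fsp_fi) / 2."""
    return (ti + fsp_fi) / 2.0 * des

# With the revised scoring, fixing all exploitable regions (ti = 0) no longer
# yields a trivial overall score of 0 as long as fsp_fi remains non-zero.
print(count_exploitable_regions([35, 19, 22, 5]))  # 2 (the 19- and 5-site runs fall below the threshold)
print(overall_score(ti=0.0, fsp_fi=0.8, des=1.2))  # ~0.48
print(overall_score(ti=0.5, fsp_fi=0.8, des=1.2))  # ~0.78
```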

3) As indicated, there are still issues with the sites of exploitable regions in general, and we will look further into that. This means that, as of now, if you achieve a ti=0 rating, this might change/increase again later on.

4) With the recent change in the scoring, as well as some fixes for the evaluation still pending (those outlined above and others), we will re-run the final alpha-round scoring once that is settled.

5) We are also considering providing intermediate, anonymized scores again for the final round. Let us know in case you do not want your team’s score to be published anonymously for the final round.

Start of final round

This link points to the ZIP bundle of the public final-round benchmarks, which comprise different crypto cores and a microprocessor. As with the alpha-round benchmarks, the designs cover varying ranges of complexity, size, layout density, timing constraints, number of assets, and available metal layers.

Note that all the benchmarks from the alpha round are included as is for the final round as well. Also note that, on/around March 27th, a few more blind benchmarks will be released, namely further crypto cores as well as microprocessor and SoC designs.

A few more remarks:

1) Your Google Drive is updated with a root folder “final” — please use only this folder going forward.

2) You might have noticed that the backend was offline today (namely since 01:00 AM EST). Thus, submission files you have uploaded since then have not been processed yet. Also, they will not be processed until you actually move them to the “final” folder.

3) We have fixed various issues for the backend, most of which are also outlined in the Q&A section.

4) There are still some issues with the evaluation of exploitable regions, and we will keep you posted throughout the coming week. Furthermore, checks on PG routing and pins are not live yet but will most likely go live in the coming week as well.

Wrapping up of alpha round

A reminder that the alpha round is wrapping up today. We’re still accepting submissions until midnight AOE, i.e., tomorrow noon UTC.

For those of you who have already submitted valid solutions for all the benchmarks, with scores returned (as also documented here), you are hereby qualified for the final round. Congratulations!

Thanks again for your interest and the great interaction so far. We are in the process of working on the reported evaluation issues, and we will update you again once the final round starts on Thursday.

Backend update for constraints checking

Various further remarks:

1) Filenames containing whitespace (including spaces, tabs, and newlines) are currently not supported; please do not use such names for submission files.

2) Important note on folder management: for all results folders, you need to keep “results” as a substring in the folder’s name. Otherwise, these results would be re-run over and over, as we currently see happening. (The alternative would be to disable the subfolders feature altogether.) A small local check covering points 1) and 2) is sketched right below.
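As a small aid, here is a hedged local pre-upload check in Python covering points 1) and 2) above. The staging path “final” and the folder names in the example are only assumptions for illustration, not prescribed by the backend.

```python
# Local sanity check before uploading (sketch only): flag names containing
# whitespace, and verify that results folders keep "results" in their name.
import re
from pathlib import Path

WHITESPACE = re.compile(r"\s")  # matches spaces, tabs, and newlines

def check_names(root):
    """Warn about any file or folder under 'root' whose name contains whitespace."""
    for path in sorted(Path(root).rglob("*")):
        if WHITESPACE.search(path.name):
            print(f"[warn] whitespace in name, not supported: {path}")

def is_valid_results_folder(name):
    """Results folders must keep 'results' as a substring in their name;
    otherwise the backend would re-run those results over and over."""
    return "results" in name

if __name__ == "__main__":
    check_names("final")  # assumed local copy of the "final" Drive folder
    print(is_valid_results_folder("results_AES_run3"))  # True
    print(is_valid_results_folder("AES_run3"))          # False
```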

3) In scores.rpt, “des_DRC” has been renamed to “des_issues” and now comprises all the checks summarized in checks_summary.rpt, not only DRC checks. Any issues are also flagged in warnings.rpt, with references to the related new report files provided there as well.

4) As indicated, (most of) these checks will not be handled as hard constraints, so issues reported there will only impact the score, not the further evaluation and acceptance of submissions. The only exceptions (hard constraints) are maintaining cell assets, net assets, and functional equivalence.

5) Note that there are some issues already present in the benchmarks’ baseline layouts. Specifically, there are DRC and module/top pin issues with AES_2, as previously indicated. We will keep them as is; if you can fix those in your submissions, your scores would improve, but if not, you will not be penalized either.

Here is an overview of the checks and commands performed, as summarized in checks_summary.rpt:

Equivalence issues — as reported by Conformal LEC, allowing for pipeline retiming but requiring cycle accuracy/equivalence
Unreachable points issues — same setup/command as above; relates to components with missing connectivity
Undriven pins issues — as reported by Conformal LEC during parsing; relates to open cell pins
Open output ports issues — same setup/command as above; relates to open/dangling cell output pins
Net output floating issues — same setup/command as above; relates to cell output pins that are trivially connected to floating dummy wires
Basic routing issues — as reported by the Innovus check_connectivity command; relates to routing issues like dangling wires, floating metals, etc.
Module pin issues — as reported by the Innovus check_pin_assignment command; relates to module/top-level pins
Unplaced components issues — as reported by the Innovus “check_design -type route” command
Placement and/or routing issues — same setup/command as above; relates to generic placement and/or routing issues
DRC issues — as reported by the Innovus check_DRC command; relates to routing DRC issues

Regarding the “Net output floating issues” and “Open output ports issues”, both relate to components having some output pin left open — think of FFs with only Q or QN used, but not both — or only trivially connected to dummy UNCONNECTED wires. These issues mainly arise from synthesis. As these are relevant for our notions of security, these issues will be maintained and will contribute to the scoring. A toy illustration of this check is sketched below.
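To illustrate the idea only (the actual checks are performed via Conformal LEC, as listed above), here is a small Python sketch over a made-up netlist structure; the instance names, pin names, and the “UNCONNECTED” net naming are assumptions for illustration.

```python
# Toy illustration: flag cell output pins that are left open or tied to
# dummy "UNCONNECTED" nets. The data structure and all names are made up.

# instance -> {output pin -> connected net, or None if the pin is open}
toy_netlist = {
    "ff_a": {"Q": "data_out", "QN": None},        # QN left open
    "ff_b": {"Q": "n42", "QN": "UNCONNECTED_7"},  # QN tied to a dummy net
    "ff_c": {"Q": "n43", "QN": "n44"},            # both outputs in use
}

def flag_open_outputs(netlist):
    """Yield (instance, pin, reason) for open or dummy-connected output pins."""
    for inst, pins in netlist.items():
        for pin, net in pins.items():
            if net is None:
                yield inst, pin, "open output pin"
            elif "UNCONNECTED" in net:
                yield inst, pin, f"output tied to dummy net {net}"

for issue in flag_open_outputs(toy_netlist):
    print(issue)
```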

While some checks are redundant in principle, in practice this depends on the kind of issues present (or not) in the layouts. Moreover, not all possible issues would be reported by some subset of these commands; hence, we employ this broader set of checks.