Update on evaluation runs

We hope this finds you all well! Please see the following remarks, and let us know your feedback on points 3) and 4). Thanks!

1) The constraint checks that had been pending for some time, namely PDN size/track coverage and pin placement, are now live. See Constraints for a reminder of the details if needed. If your designs do not meet these constraints, you will see related notes in the errors.rpt files going forward.

2) We still have some issues pending with the evaluation scripts. As mentioned before, while the scripts work in principle, they underestimate exploitable regions as well as the exposure of cells and nets. This means that you are *not* at a disadvantage here; you should obtain good scores as long as you follow the security metrics and related objectives in general.

We will continue looking into this but, unfortunately, cannot promise to resolve it in time, as the team member who programmed those scripts is currently unavailable.

3) We are now able to release the evaluation scripts. This would allow you to study the evaluation in more detail and also to run it locally as desired.
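As a rough sketch of what a local run could look like (all file and design names below are placeholders, and the actual entry points of the scripts may differ), a typical Innovus batch invocation would be:

    innovus -no_gui -init run_eval.tcl -log eval.log

where run_eval.tcl would be a small wrapper that loads your design and sources the released scripts, e.g.:

    # run_eval.tcl -- hypothetical wrapper; all names are placeholders
    restoreDesign my_design.dat my_top    ;# load your routed design database
    source eval_scripts.tcl               ;# source the released evaluation scripts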

However, consider that a) we are nearing the end of the contest, b) the scripts are specific to Cadence Innovus, and c) the scripts are provided strictly as is, without comments, documentation, or further support from our end. So, we understand that such a release is double-edged: probably only those skilled in Cadence Innovus scripting would benefit, if at all, and others might be at a disadvantage here.

Thus, we would need unanimous approval from all final-round participants for such a release. Please let us know, via email, whether your team would like the scripts to be released or not. Again, we feel that it would only be fair to release the scripts if all final-round teams agree, and we hope this approach makes sense to you as well.

4) Please also let us know, via email, whether your team would like your intermediate scores for the final round to be released in anonymized form. To keep this fair, only those teams that agree would in turn be able to see the other agreeing teams' scores. (Thus, only if all teams agree would we publish the scores on the website; if some disagree, we would share them only via email among those who agree.)

5) With the last pending constraints now live and possibly some more fixes incoming for the evaluation, we will be re-running the evaluation of all final-round submissions soon, probably at the start of the coming week. There is nothing you need to do for that, and there will not be any impact on or delay of the other ongoing runs, as we are preparing another backend server for these re-evaluation runs. (By the way, that server will thereafter stay online and work in parallel with the current ones, so that more evaluation bandwidth becomes available for the final stretch.)
