Backend update for constraints checking

The backend now supports equivalence checking as well as basic layout-validity checks. Going forward, the results folders will contain the corresponding additional reports. As indicated, these checks are part of the constraints to be met for acceptance/scoring. Also remember that, for the alpha round, you only need to submit some valid solutions; scores are not considered for passing the alpha round.

Note that the checks for the constraints on the PG network and IO pin placement are not working yet. Given these delays, we’ll skip these two constraints for the alpha round, but we will most likely bring them back for the final round.

Update and reminder on use of Google Drive; constraints checking

1) Here’s an update and reminder on the use of Google Drive. You may now use subfolders (level/depth 1 only) or zip archives when uploading files into the benchmark folders, or just continue uploading files as is.

The guidelines for using the Drive are also copied here again:

    • Please follow the provided folder structure for uploading submission files:
      • there is a dedicated subfolder for the alpha round (and later on also one for the final round); and
      • within it, there are subfolders for each benchmark; upload your submissions into those subfolders.
    • Each team may upload submission files at any time; these files are then automatically downloaded to our servers for evaluation.
      • Submission files are the DEF file and the post-layout netlist; no other files from the benchmark ZIP archives need to be uploaded again.
      • You can upload your submission files as follows:
        • directly as is (as you have done so far);
        • within a subfolder (use level/depth of 1 only, i.e., no further subfolders inside that subfolder; do not use folder names containing ‘results’);
        • or as zip archive.
      • DEF and netlist files that go together must use the same basename, e.g., trial1.def and trial1.v (see the pre-upload check sketched after this list). You may still upload multiple trials at once; they will be handled in separate runs.
        • This also applies to subfolder submission.
        • For zip submission, you cannot put multiple trials into one archive; each trial must go into a separate zip.
      • Re-uploaded submission files with the same name (the Drive option “Replace existing file”) are not re-evaluated. Thus, either select the Drive option “Keep both files” when re-uploading, or use different file names.
      • Renaming files once they are uploaded and processed will also not trigger re-evaluation.
    • Results will be returned/uploaded into the same benchmark subfolder; they include scores and report files as generated by our evaluation scripts.
    • Participants will receive an email notification once their results are available.
    • You may also keep any other files in the Drive; you’re free to put them in the home directory or organize them in folders separate from the provided folder structure. These other files won’t be downloaded for evaluation.
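To make the structure and naming rules above concrete, here is a minimal pre-upload check in Python. It is only a sketch and not part of our evaluation flow: the file extensions (.def, .v) and the rule that folder names containing ‘results’ are reserved follow from the guidelines above, while the script name and all other details are hypothetical.

#!/usr/bin/env python3
# Hypothetical pre-upload check; not part of the official evaluation flow.
# It mirrors the rules above: DEF and netlist share a basename, subfolders
# are at most one level deep and must not contain 'results' in their name,
# and each zip archive holds exactly one trial (one .def plus one .v).
import sys
import zipfile
from pathlib import Path

def check_pairing(def_names, net_names, where):
    """Every trial needs both <name>.def and <name>.v."""
    defs = {Path(n).stem for n in def_names}
    nets = {Path(n).stem for n in net_names}
    for stem in defs ^ nets:  # symmetric difference: unpaired trials
        print(f"[{where}] trial '{stem}' is missing its .def or .v counterpart")
    return defs == nets

def check_benchmark_folder(bench_dir):
    # Loose files directly in the benchmark folder.
    ok = check_pairing([p.name for p in bench_dir.glob("*.def")],
                       [p.name for p in bench_dir.glob("*.v")],
                       bench_dir.name)
    # Subfolders: depth 1 only, names containing 'results' are reserved.
    for sub in (p for p in bench_dir.iterdir() if p.is_dir()):
        if "results" in sub.name:
            continue  # reserved for returned results; do not use such names yourself
        if any(c.is_dir() for c in sub.iterdir()):
            print(f"[{sub.name}] nested subfolders are not allowed (depth 1 only)")
            ok = False
        ok &= check_pairing([p.name for p in sub.glob("*.def")],
                            [p.name for p in sub.glob("*.v")],
                            sub.name)
    # Zip archives: exactly one trial per archive.
    for zpath in bench_dir.glob("*.zip"):
        with zipfile.ZipFile(zpath) as zf:
            names = zf.namelist()
        defs = [n for n in names if n.endswith(".def")]
        nets = [n for n in names if n.endswith(".v")]
        if len(defs) != 1 or len(nets) != 1:
            print(f"[{zpath.name}] each zip must hold exactly one trial (one .def, one .v)")
            ok = False
        else:
            ok &= check_pairing(defs, nets, zpath.name)
    return ok

if __name__ == "__main__":
    bench_dir = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    sys.exit(0 if check_benchmark_folder(bench_dir) else 1)

For example, you could run it on a local copy of a benchmark subfolder before uploading, e.g., python3 check_submission.py PRESENT/ (the script name is made up).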

2) Constraints checking is not fully working yet. Thus, even if you receive a score without any processing errors, that does not mean your solution is valid. Please follow the constraints proactively; we will let you know as soon as constraints checking is fully functional.

Alpha benchmarks update and misc

1) We have revised the scoring. In short, we a) now apply streamlined weights for design metrics, b) added the total exposed area for cell assets, and c) dropped the limiting max_cost terms for timing metrics, so that any deterioration over the baseline will dominate scores. Please see the Scoring page for more details.
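As a rough illustration of point c), the sketch below contrasts a capped and an uncapped timing term. This is not our actual scoring script; the cap value, the metric numbers, and the normalization to the baseline are assumptions made purely for illustration.

# Illustration only; not the official scoring script. The metric values,
# the cap, and the normalization to the baseline are made up.

def timing_term_capped(value, baseline, max_cost=2.0):
    """Former style: the ratio to the baseline is clipped at max_cost."""
    return min(value / baseline, max_cost)

def timing_term_uncapped(value, baseline):
    """Revised style: deterioration over the baseline is fully reflected."""
    return value / baseline

baseline = 10.0
for value in (10.0, 20.0, 40.0):  # baseline, 2x, and 4x the baseline timing cost
    print(value,
          timing_term_capped(value, baseline),
          timing_term_uncapped(value, baseline))
# capped:   1.0, 2.0, 2.0 -> further deterioration is hidden once the cap is hit
# uncapped: 1.0, 2.0, 4.0 -> deterioration over the baseline dominates the score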

2) We have revised the alpha benchmark suite to v4.0. Please re-download from https://drive.google.com/file/d/15D5xDMWLrsUumDOcRTt-5PSkmRSR1uRe/view?usp=sharing

This v4.0 release comprises the following changes:
— Evaluation: metrics and scoring simplified and streamlined
— Backend: fix in timing and power evaluation (added clock propagation, set to post-route timing)
— All benchmarks: updated reports, scores, and plots (related to backend fix, revised metrics and scores, as well as revised accuracy for probing evaluation)
— gnuplot_exploit_regions.sh: minor updates

Alpha benchmarks update and misc

A few remarks and announcements:

1) Registration is now closed. We have 17 teams entering the competition, and we’re looking forward to your efforts! Going forward, we will post anonymized rankings every now and then, so that you know where you stand.

2) We should have all your team members in our email list. Please let us know if anyone is missing, or if you would like us to drop or update any email address currently in use.

3) The Q&A section is updated every now and then. Please check to see if any of your questions might already be answered there. If not, please feel free to reach out to us at any time.

4) We have revised the alpha benchmark suite to v3.0. Please re-download from https://drive.google.com/file/d/15D5xDMWLrsUumDOcRTt-5PSkmRSR1uRe/view?usp=sharing

This v3.0 release comprises the following changes:
— Minor updates to plot generation: added inkscape call for PNG generation, lighter grey shade for regular components
— All benchmarks: added free tracks in exploit_regions.rpt files
— All benchmarks: added scores.rpt files for baseline layouts
— All benchmarks: dropped unnecessary links from reports/ folders
— MISTY, PRESENT benchmarks: fixed redundant net assets, related updates
— Updated README files

5) The evaluation backend is now fully automated. This has gone through extensive testing and debugging, so it should be ready for your use, but we’ll also keep an eye on things and would appreciate feedback.

Alpha benchmarks and evaluation

We have released the alpha benchmarks now: https://wp.nyu.edu/ispd_22_contest/benchmarks/#alpha

In case you haven’t seen the details on the evaluation yet, please make sure to check them out, as they have been updated substantially since last week.

Also, we noticed that the DEF files of the sample benchmarks were missing the routing; this has been fixed.

Good luck to all registered participants (registration deadline extended until Jan 29th) for the alpha round!