It is an honor for me to write the inaugural post for the TCCA blog. I will take this opportunity to highlight interesting statistics that I learned from running the review process for the 26th IEEE International Symposium on High-Performance Computer Architecture (HPCA 2020) in San Diego, California.
General statistics. A total of 248 finalized submissions were received, plus 15 in the industry track. A total of 48 papers were accepted (including three with shepherding), for an overall acceptance rate of 19.4% (48 of 248). We recruited 54 program committee members and 53 external review committee members. A double-blind review process was in place throughout.
Reviewer Expertise. In managing the reviewing process, we strove to continue the tradition of fair and thorough reviewing of past HPCAs, while adding a philosophy of giving authors as much feedback as possible. The reviewing process had two rounds. In the first round, each paper was assigned at least four reviews. For papers that advanced to the second round, two additional reviews were solicited, for a total of at least six reviews. In all, we collected 1,151 reviews. The average self-declared expertise level was 2.8, and approximately two thirds of the reviews declared an expertise of Level 3 or higher.
Topic areas. The following table shows the number of papers submitted vs. accepted per self-declared topic area, sorted by the number of submitted papers. Note that authors could declare multiple topic areas per paper, so the columns sum to more than the number of submissions.
Topics | Submitted | Accepted |
--- | --- | --- |
Accelerators, domain-specific architectures | 109 | 17 |
Architecture/applications of machine learning | 59 | 7 |
GPUs | 56 | 3 |
Parallel/multicore architectures | 53 | 5 |
Emerging technologies | 52 | 4 |
Caches | 45 | 3 |
Power efficiency and management | 44 | 6 |
Hardware/software interactions/interface | 41 | 7 |
Memory – high level | 40 | 3 |
Performance characterization and modeling | 40 | 7 |
Security | 38 | 6 |
Memory – low level | 36 | 2 |
Networking, interconnects | 36 | 9 |
Cloud, datacenter, cluster/distributed system | 33 | 4 |
Reliability and fault tolerance | 28 | 2 |
ILP techniques | 21 | 3 |
Embedded, IoT | 20 | 2 |
FPGAs and reconfigurable | 17 | 1 |
IO, storage | 14 | 1 |
Quantum architectures | 4 | 0 |
A few observations arise. By far, accelerators and domain-specific architectures dominated the submissions, with nearly 14% of all topic declarations (the Submitted column sums to 786 declarations), followed by machine learning (7.5%) and GPUs (7.1%). If one also considers machine-learning architectures and GPUs as domain-specific, the three areas together constitute nearly three out of ten topic declarations (28.4%).
Sorted by acceptance rate (accepted divided by submitted papers from the table above), the data looks as follows.
Topics | Acceptance rate |
--- | --- |
Networking, interconnects | 25% |
Performance characterization and modeling | 18% |
Hardware/software interactions/interface | 17% |
Security | 16% |
Accelerators, domain-specific architectures | 16% |
ILP techniques | 14% |
Power efficiency and management | 14% |
Cloud, datacenter, cluster/distributed system | 12% |
Architecture/applications of machine learning | 12% |
Embedded, IoT | 10% |
Parallel/multicore architectures | 9% |
Emerging technologies | 8% |
Memory – high level | 8% |
Reliability and fault tolerance | 7% |
IO, storage | 7% |
Caches | 7% |
FPGAs and reconfigurable | 6% |
Memory – low level | 6% |
GPUs | 5% |
Quantum architectures * | 0% |
*four submissions
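For readers who want to double-check the arithmetic, below is a minimal Python sketch (my own illustration, not the actual conference tooling) that recomputes the overall acceptance rate, the topic shares quoted above, and the sorted acceptance-rate table directly from the submitted/accepted counts in the first table.

```python
# Submitted/accepted counts per self-declared topic, copied from the
# first table in this post.
topics = {
    "Accelerators, domain-specific architectures": (109, 17),
    "Architecture/applications of machine learning": (59, 7),
    "GPUs": (56, 3),
    "Parallel/multicore architectures": (53, 5),
    "Emerging technologies": (52, 4),
    "Caches": (45, 3),
    "Power efficiency and management": (44, 6),
    "Hardware/software interactions/interface": (41, 7),
    "Memory – high level": (40, 3),
    "Performance characterization and modeling": (40, 7),
    "Security": (38, 6),
    "Memory – low level": (36, 2),
    "Networking, interconnects": (36, 9),
    "Cloud, datacenter, cluster/distributed system": (33, 4),
    "Reliability and fault tolerance": (28, 2),
    "ILP techniques": (21, 3),
    "Embedded, IoT": (20, 2),
    "FPGAs and reconfigurable": (17, 1),
    "IO, storage": (14, 1),
    "Quantum architectures": (4, 0),
}

# Overall acceptance rate: 48 accepted out of 248 finalized submissions.
print(f"Overall acceptance rate: {100 * 48 / 248:.1f}%")  # 19.4%

# Topic shares are relative to all topic declarations, since each paper
# could declare multiple topics.
total_declared = sum(sub for sub, _ in topics.values())  # 786
for name in ("Accelerators, domain-specific architectures",
             "Architecture/applications of machine learning",
             "GPUs"):
    sub, _ = topics[name]
    print(f"{name}: {100 * sub / total_declared:.1f}% of declarations")

# Per-topic acceptance rate (accepted / submitted), sorted high to low;
# this reproduces the second table.
for name, (sub, acc) in sorted(topics.items(),
                               key=lambda kv: kv[1][1] / kv[1][0],
                               reverse=True):
    print(f"{name}: {100 * acc / sub:.0f}%")
```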
It is unclear what conclusions can be drawn from this data, as each topic area has its own dynamics, and acceptance rates depend on many factors (the novelty, impact, quality, etc. of the submitted papers). It will be interesting to collect such data over multiple years to see whether broader patterns emerge.
Yan Solihin
Program Chair of HPCA 2020
Director of Cyber Security & Privacy
Charles N. Millican Professor in Computer Science
University of Central Florida