Repeatability in Computer Science

Research is repeatable if we can re-run the researchers' experiment using the same method in the same environment and obtain the same results. A prerequisite for repeatability is that the research artifacts backing the published results be shared. Sharing for repeatability is essential: it ensures that colleagues and reviewers can evaluate our results on the basis of accurate and complete evidence.

In this study, we examine the extent to which Computer Systems researchers share their research artifacts (source code), and the extent to which shared code builds. We refer to this as weak repeatability. We examined 601 papers from ACM conferences and journals, attempted to locate the source code backing the published results, and, where found, tried to build that code.

For completeness, the first version of our results remains available, but it has been superseded by the second version. Shriram Krishnamurthi has coordinated a group of researchers who reviewed our build results.

Summary Graph

- More details are available in the Tech Report.
Legend

Classification
- BC: Paper whose results are backed by code.
- NC: Paper excluded because its results are not backed by code.
- HW: Paper excluded because replication requires special hardware.
- EX: Paper excluded due to overlapping author lists.

Code Location
- Article: Code is found via a link in the article itself.
- Web: Code is found through a web search.
- EMyes: Code is provided by the author after an email request.
- EMno: The author responds that the code cannot be provided.
- EMØ: The author does not respond to the email request within 2 months.

Build Results
- OK≤30: We succeed in building the system in ≤30 minutes.
- OK>30: We succeed in building the system in >30 minutes.
- OK>Author: We fail to build, but the author says the code builds with reasonable effort.
- Fails: We fail to build, and the author either does not respond to our survey or says the code may have problems building.
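The legend above is effectively a three-dimensional classification scheme applied to each paper. As a minimal sketch of how such a scheme might be encoded, the enums below mirror the legend's codes; the names, the `is_weakly_repeatable` helper, and the assumption that an `OK>Author` outcome counts toward weak repeatability are illustrative choices, not the study's own implementation.

```python
from enum import Enum

# Hypothetical encoding of the legend's three classification dimensions.
class Classification(Enum):
    BC = "results backed by code"
    NC = "excluded: results not backed by code"
    HW = "excluded: replication requires special hardware"
    EX = "excluded: overlapping author lists"

class CodeLocation(Enum):
    ARTICLE = "link in the article itself"
    WEB = "found via web search"
    EM_YES = "provided by author after email request"
    EM_NO = "author says code cannot be provided"
    EM_NONE = "no author response within 2 months"

class BuildResult(Enum):
    OK_LE_30 = "built in <= 30 minutes"
    OK_GT_30 = "built in > 30 minutes"
    OK_AUTHOR = "build failed, but author says it builds with reasonable effort"
    FAILS = "build failed; no response, or author reports possible build problems"

def is_weakly_repeatable(result: BuildResult) -> bool:
    """Assumption: any OK* outcome counts as weakly repeatable,
    including OK>Author (author-asserted buildability)."""
    return result in (BuildResult.OK_LE_30,
                      BuildResult.OK_GT_30,
                      BuildResult.OK_AUTHOR)

print(is_weakly_repeatable(BuildResult.OK_LE_30))  # True
print(is_weakly_repeatable(BuildResult.FAILS))     # False
```

Whether `OK>Author` should count is a judgment call; the study reports results under more than one such definition, so a real analysis would parameterize this choice.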