Repeatability is a cornerstone of the scientific process: only if my colleagues can reproduce my work should they trust its veracity. Barring special cases, reproducing published work in applied Computer Science should be as simple as going to the authors' website, downloading their code and data, typing "make", and checking whether the results match the published ones.
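The ideal workflow just described can be sketched as a small check script. The repository URL below is a hypothetical placeholder; to keep the sketch self-contained it stands in a tiny local artifact instead of downloading one, and the only question it asks is the one at the heart of the study: does "make" succeed with reasonable effort?

```shell
#!/bin/sh
# Sketch of the "download, build, compare" reproduction check.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# In practice one would fetch the authors' artifact, e.g.:
#   git clone https://example.org/paper-artifact.git .   (hypothetical URL)
# Here we create a minimal stand-in artifact so the sketch runs anywhere.
printf 'all:\n\t@echo "result: 42"\n' > Makefile

# The core reproducibility test: does the build succeed?
if make > build.log 2>&1; then
    build_status=OK
else
    build_status=FAIL
fi
echo "BUILD $build_status"
# A full check would then diff build output against the published results.
```

In the study's terms, an artifact whose build reaches "BUILD OK" clears only the first hurdle; the results must still correspond to those in the paper.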
To investigate the extent to which Computer Science researchers are willing to share their code and data, and the extent to which that code will actually build with reasonable effort, we performed a study of 613 papers in eight ACM conferences (ASPLOS'12, CCS'12, OOPSLA'12, OSDI'12, PLDI'12, SIGMOD'12, SOSP'11, VLDB'12) and five journals (TACO'9, TISSEC'15, TOCS'30, TODS'37, TOPLAS'34).
We originally posted the first version of these results so that the reviewers of our submitted paper could access our raw data, code, and technical report, should they wish to examine them. The site was never publicly announced; nevertheless, it became public knowledge, and we received feedback from the community. As a result, we made another pass over the data and are now conducting a survey. A second version of the technical report is in preparation, and preliminary results for it are available.
The first version of the technical report, "Measuring Reproducibility in Computer Systems Research", is available.