Reproduction report for SV-COMP 2023
M Gerhold, A Hartmanns - arXiv preprint arXiv:2303.06477, 2023 - arxiv.org
The Competition on Software Verification (SV-COMP) is a large computational experiment benchmarking many different software verification tools on a vast collection of C and Java benchmarks. Such experimental research should be reproducible by researchers independent from the team that performed the original experiments. In this reproduction report, we present our recent attempt at reproducing SV-COMP 2023: We chose a meaningful subset of the competition and re-ran it on the competition organiser's infrastructure, using the scripts and tools provided in the competition's archived artifacts. We see minor differences in tool scores that appear explainable by the interaction of small runtime fluctuations with the competition's scoring rules, and successfully reproduce the overall ranking within our chosen subset. Overall, we consider SV-COMP 2023 to be reproducible.
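The abstract attributes the minor score differences to the interaction of small runtime fluctuations with the competition's scoring rules. As a minimal sketch of that effect, assuming the published SV-COMP scoring schema (+2 for a correct "true" verdict, +1 for a correct "false", -32/-16 for wrong verdicts, 0 for unknown) and the standard 900-second CPU-time limit, a hypothetical task solved just below the limit flips between +2 and 0 when its measured runtime drifts across it; the task data below is invented for illustration:

    # Illustration (not from the paper): how SV-COMP-style scoring can turn
    # small runtime fluctuations into visible score differences. The point
    # values follow the published SV-COMP rules; the tasks are invented.

    TIME_LIMIT = 900.0  # seconds of CPU time per task in SV-COMP

    def score(expected: str, reported: str) -> int:
        """SV-COMP score for a single verification task."""
        if reported == "unknown":            # timeout, crash, or give-up
            return 0
        if reported == expected:
            return 2 if expected == "true" else 1   # correct safe / unsafe
        return -32 if reported == "true" else -16   # wrong safe / unsafe

    def verdict(expected: str, runtime: float) -> str:
        """A run that exceeds the time limit yields no verdict."""
        return expected if runtime <= TIME_LIMIT else "unknown"

    # A tool solving a safe task in ~899.8 s: a fluctuation of +-0.5 s flips
    # the outcome between "correct true" (+2) and "timeout" (0), a 2-point
    # swing without any change in the tool's behaviour.
    for runtime in (899.3, 900.3):
        print(runtime, score("true", verdict("true", runtime)))

Under this reading, such boundary effects shift individual scores slightly but leave the overall ranking stable, which is consistent with the report's conclusion that SV-COMP 2023 is reproducible.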