On Monday, September 18, 2023, three codecheckers — Daniel Nüst, Stephen Eglen, and Jeremy Cohen — teamed up with participants from TU Delft and across the Netherlands to share their expertise and insights on codechecking, an open and collaborative code review process.

The first half of the day was dedicated to introducing the concept of codecheck to participants, with Daniel expanding on the principles of this Open Science initiative, which aims to check the computational workflows in a research project and thereby enhance its reproducibility (see the video below for a little snippet!). The introductory presentation was followed by a live demo, in which a project submitted by a TU Delft participant was successfully codechecked live.

[Embedded video: a snippet from the introductory presentation on codechecking]

After a short break, participants divided into breakout groups, with each group codechecking a project previously submitted by TU Delft researchers, most of whom were also present at the event. The breakout groups gave participants hands-on experience of what codechecking entails, and of the skills it requires and builds. For participants who submitted their code, codechecking provided valuable feedback on how they could make their code more reproducible. An example codecheck from the workshop is available here.
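For readers curious about what the outcome of a codecheck looks like in practice: each check is typically summarised in a small codecheck.yml metadata file that accompanies the certificate. The sketch below is illustrative only; the field names follow the CODECHECK configuration file format as we understand it, but all of the values (the paper title, names, files, and certificate number) are hypothetical placeholders rather than details from the workshop projects.

```yaml
# Illustrative codecheck.yml; every value below is a hypothetical placeholder.
version: https://codecheck.org.uk/spec/config/1.0/

paper:
  title: "An example TU Delft research project"  # hypothetical title
  authors:
    - name: A. Researcher                        # hypothetical author
  reference: https://doi.org/10.xxxx/xxxxx       # placeholder reference

manifest:                                        # outputs the codechecker reproduced
  - file: figures/figure1.png
    comment: Figure 1 of the paper
  - file: results/summary.csv
    comment: Summary statistics reported in the paper

codechecker:
  - name: A. Codechecker                         # hypothetical checker

summary: >
  The codechecker was able to run the provided workflow and
  reproduce the outputs listed in the manifest.

check_time: "2023-09-18 14:00:00"                # the workshop date
certificate: 2023-xxx                            # placeholder certificate number
```

The manifest is the heart of the file: it lists exactly which output files the codechecker attempted to reproduce, which is what makes the "show me" ethos mentioned below concrete and verifiable.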

Some reflections from participants who attended the workshop:

“Early on, it was stated that it is hard to give generic advice for a coder that is in between the beginner and advanced level, and that makes a lot of sense to me. I am currently there and find it hard to get my hands on new tips, tricks and courses. This workshop actually hit the sweet spot, so I was impressed. I really appreciated and agreed with the sentiment that science should be a ‘show me’ world and that the code is equally, if not more, important than the paper itself is.”

“The hands-on experience of having someone else’s code and research project codechecked was incredibly beneficial. It allowed me to see firsthand the potential pitfalls and challenges in ensuring the reproducibility of my work. The constructive feedback and guidance provided by the codecheckers were instrumental in improving the quality and transparency of code and data.”

In a final reflection session that wrapped up our workshop, participants considered the concepts they had learned and whether codechecking could fit into their own research workflows. We also reflected collectively on the challenges of codechecking, including how to reward or recognise codechecking efforts, how to ensure there are enough codecheckers to make the practice more widespread, and how to make codecheck a more sustainable practice, including at TU Delft.