Niels Doorn, Ph.D. student

My research focuses on improving teaching and learning strategies for software testing in computer science education.

Blog

ORCID

My ORCID iD is 0000-0002-0680-4443.

Mastodon

I sometimes toot about my research on my Mastodon account @niels76@mastodon.online, but more often about other things that interest me, or that fill me with wonder.

GitHub

Some of my projects can be found at GitHub.com/nielsdoorn. Feel free to contribute.

Running / sports

Follow me on Strava! I like to go for a run now and then; it helps me think and clear my mind. I often get my best ideas while working out, but I also tend to forget most of them immediately.

Other interests

☕ 🧘 🌳 🐱 🐔 🥞 🚲 📷

TILE Repository

Came here for the Test Informed Learning with Examples (TILE) repository? Look no further! It can be found at tile-repository.github.io/.

20 June 2023

Presentation at OURsi event

by Niels Doorn

Understanding the Sensemaking Process in Test Case Design: Enhancing Software Testing Education

Software testing is a widely used quality assurance technique in the software industry. In computer science education, however, it is often neglected, and students struggle to test their software effectively. Teaching software testing is challenging because it requires students to draw on multiple cognitive resources at once. Despite various attempts to address this issue, progress in improving software testing education has been limited.
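
To illustrate what test case design asks of students (this example is my own, not taken from the study): even for a tiny function, a student has to reason about typical values, boundary values, and invalid input all at the same time. A minimal sketch in Python, using an invented grading function and pass mark:

    import unittest

    def passed(score):
        """Return True for a passing score; scores run from 0 to 100."""
        if not 0 <= score <= 100:
            raise ValueError("score must be between 0 and 100")
        return score >= 55

    class TestPassed(unittest.TestCase):
        def test_typical_passing_score(self):
            self.assertTrue(passed(80))

        def test_boundary_scores(self):
            self.assertTrue(passed(55))   # exactly on the pass mark
            self.assertFalse(passed(54))  # just below it

        def test_out_of_range_score_raises(self):
            with self.assertRaises(ValueError):
                passed(101)

    if __name__ == "__main__":
        unittest.main()

Choosing which of these cases to write, and noticing the boundary at 55, is exactly the kind of sensemaking the research looks at.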

To enhance pedagogical approaches in software testing, it is crucial to gain a deeper understanding of the sensemaking process that occurs when students design test cases. In an initial exploratory study, we identified four distinct sensemaking approaches employed by students during test model creation. Building upon these findings, we conducted a follow-up study with 50 students from a large university in Spain.

In this study, we provided the participants with a specialized web-based tool for modelling test cases. They were tasked with creating test models from given descriptions of test problems. We evaluated how well the models fit the test problems, examined the sensemaking processes the students employed, and gathered their perspectives on the assignment. To build a comprehensive picture, we collected textual, graphical, and video data, which we analysed using an iterative inductive process.

The insights gained from our study shed light on the sensemaking processes involved in test case modelling. We refined our previous findings and identified new sensemaking approaches. These results have significant implications for shaping the sensemaking process in software testing education: by addressing potential misconceptions and fostering the desired mental models for test case design, we can make software testing education more effective.

Our findings provide a foundation for further research in this domain. A deeper understanding of the sensemaking process will allow us to develop interventions and pedagogical strategies that strengthen software testing education and equip students with the skills they need to test software effectively.

tags: event - ou - presentation - sensemaking