Note-taking for Research on Games and Simulations with Jan Plass
Barab, S. A. (2014). Design-based research: A methodological toolkit for engineering change. In R. K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (2nd ed., pp. 233-270). Cambridge University Press.
Design-based research (DBR) is a subject close to my heart, because it was the basis of my master’s thesis, and it informs the work of the NYU Music Experience Design Lab. All of our tools are designed and tested in the context of messy, complex natural learning and creating environments: classrooms, bedrooms, studios, and public events. We evaluate our tools continuously, but the only purely empirical and “experimental” methods we use involve Google Analytics. We sometimes conduct user research in formal settings, but mostly we observe practice “in the wild” between regular iterations.
DBR follows Dewey’s model of praxis. It belongs not to the traditional positivist or ethnographic traditions of inquiry, but to a pragmatic one, “where theories are judged not by their claims to truth, but by their ability to do work in the world.” It entails the study of learning innovations inspired by theory and conducted in naturalistic contexts. The goal is to produce new tools and practices that can be generalized to other learning environments (typically schools, but not necessarily). DBR also aims to produce new theories, viewing them as intertwined with practice rather than as occurring prior to it.
DBR would appear to fail tests of rigor, freedom from bias, and reproducibility. Its advocates respond that for a learning intervention to be studied with full empirical rigor, its variables must be isolated to the point where they no longer resemble the reality we are supposed to be studying. We can draw an analogy from music psychology. A teacher of mine studies musical tension and release. She isolates the various musical parameters that might create a sense of tension and release – pitch, rhythm, timbre and so on – as synthesizer tones varying along only one parameter at a time, and measures listener responses. While her results are rigorous, her test material bears almost no resemblance to actual music. It may well be that musical tension and release are emergent properties of parameters interacting in a social and cultural context. Research in a more naturalistic setting might hopelessly confound the variables, but it might also be the only way to arrive at a valid explanation. As in music, the context of learning experiences is not simply a backdrop against which things occur, but “an integral part of the complex causal mechanisms that give rise to the phenomenon under study.”
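To make the isolation concrete, here is a minimal sketch of the kind of stimulus generation such a study might use. This is my own illustration in Python, with hypothetical frequency values, not my teacher’s actual materials: pure sine tones that differ only in pitch, with timbre, duration, and loudness all held constant.

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100  # CD-quality sample rate, in Hz
DURATION = 1.0       # every stimulus lasts exactly one second

def pure_tone(freq_hz: float) -> np.ndarray:
    """A sine tone at the given pitch; timbre (pure sine), duration,
    and loudness are held constant across all stimuli."""
    t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    return 0.5 * np.sin(2.0 * np.pi * freq_hz * t)

# Hypothetical stimulus set: only pitch varies,
# an ascending whole-tone scale starting at A4.
for freq in [440.0, 493.9, 554.4, 622.3, 698.5]:
    tone = pure_tone(freq)
    wavfile.write(f"stimulus_{int(freq)}hz.wav", SAMPLE_RATE,
                  (tone * 32767).astype(np.int16))
```

Stimuli like these are perfectly controlled, which is exactly the problem: no one would mistake a series of one-second sine tones for music.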
DBR is to traditional empirical research what agile methodologies are to waterfall-style software development. Rather than treating a study as a hypothesis that is tested and then written up, DBR is an ongoing cycle of design, testing, and further iteration. This makes it difficult to point to a single, unambiguous result. DBR findings are more likely to take the form of narratives: descriptions of the various iterations and tests. The challenge then becomes to present these narratives in ways that readers can generalize from, rather than limiting their relevance to the specific situations being described. While DBR is not ethnography, its results may well take the form of ethnographic “thick description” in Clifford Geertz’s sense. As in qualitative social science research, rigor “comes from principled accounts that provide logical chains of reasoning and prove useful to others.”
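To push the agile analogy one step further, here is a toy sketch (hypothetical, not from Barab) of what a DBR “result” looks like as data: an accumulating narrative of design changes and observations, rather than a single hypothesis test.

```python
from dataclasses import dataclass, field

@dataclass
class Iteration:
    """One design-test cycle; the 'finding' is the narrative itself."""
    version: int
    design_change: str
    observations: list[str] = field(default_factory=list)

# The study's output is the accumulated story of the iterations,
# not a lone hypothesis-test result. Example entries are invented.
history: list[Iteration] = [
    Iteration(1, "Initial chord palette",
              ["Students ignored the minor chords"]),
    Iteration(2, "Re-voiced minor chords more prominently",
              ["Minor chords now used, but tempo control confused users"]),
]

for it in history:
    print(f"v{it.version}: {it.design_change} -> {'; '.join(it.observations)}")
```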
In considering DBR, we need to draw a distinction between measuring outputs and outcomes. Outputs are the directly measurable results of an intervention, while outcomes are its larger-scale and longer-term consequences. For example, standardized test scores are an output of schooling, while learning is an outcome. Because outputs are so much more readily measured than outcomes, there is a danger that we will optimize around outputs rather than outcomes (as schools have done around standardized tests). To combat this danger, DBR must provide explanatory accounts of outcomes with the same persuasive power as quantitative measures of outputs. Research must combine “experience-near meanings” (the local story) with “experience-distant significance” (the more general implications and applications). Good DBR will produce grounded “petite generalizations” that can be built up into broader generalizations. Researchers will also need to be explicit about the assumptions and theoretical bases underlying the work, since these cannot be controlled for.