Advancing MOOCs for Development – The need for unbiased research

The “Advancing MOOCs for Development Initiative” is a collaborative effort by a number of academic and development institutions to promote MOOCs for professional development in the developing world. In the first stage of this project, a report was released on the current state of MOOC usage in Colombia, the Philippines, and South Africa. This blog post is a brief review of the report, along with some reflections on my own study. For an in-depth breakdown of the survey results against actual platform data, I highly recommend reading Willem van Valkenburg’s (TU Delft) critique of this study.

From its inception through a $1 million grant from USAID, this initiative has been of interest to me, as the parallels with my own research are obvious. “MOOCs for Development” features in the title of my PhD, and both projects are concerned with the usage of MOOCs in the developing world, albeit in different contexts. However, this is where the similarity ends.

Fundamentally, this initiative begins with the position that MOOCs are a good thing. The title “Advancing MOOCs” makes this clear – and any study that comes out of this initiative is inherently biased in this regard. On the other hand, my research title is, more accurately, “MOOCs for Development?” The question mark is subtle, but key, because it questions the narrative of MOOCs as the solution to the challenges facing the developing world – and begins with a neutral, if not sceptical, view of MOOCs. When the overwhelming weight of the literature suggests that MOOCs succeed largely for experienced, educated lifelong learners, there is a need to be cautious when making bold claims, particularly with respect to the developing world.

Given the inherent bias of this initiative, it was no surprise that the press release for this report emphasized “high MOOC completion rates in the developing world”, claiming completion rates of almost 80%. For anyone already engaged in MOOC research, it is clear that this sort of reporting is incredibly careless and misleading. What these headlines fail to mention is that these figures come from a survey of MOOC students (an incredibly biased sample) rather than from actual platform data. Having worked with FutureLearn data on Indian learners, I know that their completion rate is in fact far lower than the average completion rate, and similar findings are reported in the post from TU Delft. Indeed, even the report itself mentions the limitations of these findings, yet this does not figure anywhere in the press releases, leading to a perception that MOOCs are incredibly successful in the developing world.

PR spin aside, the research design is fairly sound. The survey is robust (and remarkably similar to my own survey design), and the comparison across the three contexts, together with interviews with policy makers, makes this quite a comprehensive study. As the focus was on professional development, only 18–35 year olds were surveyed to answer the specific research questions. This does miss some of the key demographics of MOOC learners, but the authors acknowledge it in their discussion of limitations, which addresses most of the broader concerns with this study as well. It is a shame, though, as a decent research study has been discredited through poor PR management and a general lack of objectivity.

More broadly, given my (and, I’m sure, the report’s authors’) desire to improve the state of higher education in the developing world, it can be tempting to set aside objectivity in favour of a positive, pro-technology narrative. The reality, however, is that there are systemic challenges to the broader adoption of MOOCs in the West, and these challenges are amplified further in a developing-world context.

As I begin to analyse my own survey data, this report prompts me to reflect on some of the limitations of my own study, and on the need to be cautious when reporting findings. Given the ways in which I recruited participants, I know for a fact that my sample is biased. Almost a quarter of my respondents claim to have taken 9+ MOOCs; this is not the norm. Further, as a large proportion of respondents were contacted through university and academic intermediaries, I know that the demographics will skew towards younger, college-going students. I am also aware that my respondents are not just educated, but also possess a high-speed internet connection and the digital literacies needed to find, enrol in, and likely complete MOOCs, making them almost certainly (economically) part of the top 10% of the Indian population. I have to be careful when generalizing from my study to the broader population, and must put the sample in context when considering the benefits (if any) for society. But of course, “Advancing MOOCs for the top 10% of the developing world” doesn’t make for as fundable a project.