by Collin Ruud
Monday, January 27, 2014
Undoubtedly, most who keep up with innovations in higher education have at least heard of massive open online courses (MOOCs) and, based on my conversations with many of them, have already reached conclusions about MOOCs' feasibility and their impact (or lack thereof) on higher and online education. What seems to be missing is a significant body of robust research to support or refute those conclusions.
This lack of significant research is understandable. After all, MOOCs have existed for only a few years, with most pointing to 2008 as the origin of both the term and the concept. MOOCs gained additional traction in 2011 and 2012, as courses grew out of online and distance education with the added features of being publicly accessible and reaching a wide, often international, audience. Since then, however, MOOCs have received a great deal of buzz, with highly mixed reactions. With that buzz has come a flurry of research aimed at better understanding this phenomenon and its potential impact on the status quo.
For the sake of full disclosure, I want to point out a couple of important considerations:
- I am currently serving as a postdoctoral research associate at the University of Illinois, where I participate in a project studying MOOCs at Illinois through a mixed-methods examination of engagement, expectations, and outcomes, with an eye to the potential issues that may confront the expansion of MOOCs to graduate and professional education; and
- In terms of opinions, I do not hold a strong position either way on whether MOOCs are an innovation that will positively affect higher education and access to it. I feel that not enough is yet known, and I am cautious about forming a strong opinion in either direction without further support from high-quality research.
What follows is a (hopefully) objective look at a couple of pieces of research that have come to the fore and at how the media have treated that research, regardless of the full set of findings or the quality of the methods. It illustrates how, when the public is eager to learn more about an innovation, the earliest-released studies are often used by the media to make opinions on an issue of import to higher education seem more polarized than they may in fact be.
What the Media is Reporting
If we are to believe what the media share regarding MOOCs, the outlook is dismal. Perhaps the most recent of these reports comes from the Chronicle of Higher Education on January 16, 2014, entitled “Attitudes on Innovation: How College Leaders and Faculty See the Key Issues Facing Higher Education.” Publicity for the report in the e-mail correspondence promises that the findings will show “the validity of the MOOCs business model for the future,” and reports that “60% [of] presidents think MOOCs will negatively impact the future of higher education.” Another recent Chronicle article, entitled “Doubts About MOOCs Continue to Rise, Survey Finds,” focuses on a report from the Babson Survey Research Group that showed “a growing skepticism among academic leaders about the promise of MOOCs,” namely that more of the leaders surveyed expressed concern about MOOCs’ sustainability and tended to think that “credentials for MOOC completion will cause confusion about higher education degrees.” Based on how the media presented these findings, the take-away would be that MOOCs are all but dead in the water. However, a closer look at the text of the reports suggests that limitations and important contextual considerations were overlooked.
Limits of the Research
Several points in the reports are either not prominently featured by the media or omitted entirely, whether for the sake of presenting a controversial story or to save time, even though these elements seem critical to understanding the findings. Information that would be helpful includes:
- Low response rates: The response rate to the president and faculty survey was only 8-10 percent, which calls into question the reliability and generalizability of the findings.
- Limited scope of MOOC availability: The Babson survey found that only 5.0 percent of the institutions surveyed had actually implemented any MOOCs. The surveys present only attitudinal data, much of which could come from faculty or administrators with no direct experience with these courses.
- High emphasis on uncertainty: The reports themselves detail a large amount of uncertainty, with opinions clustering near the center on questions of MOOC feasibility. That uncertainty seems lost when the articles present the findings.
- Missing the point on online education: The reports contain far more than the articles convey. The Babson survey included additional questions about online education in general and its comparison to face-to-face instruction, while the Chronicle study had interesting findings about perceived changes to higher education over the next 10 years. Neither received coverage, as the limited findings related to MOOCs dominated the conversation.
I do not intend to be overly critical of the media outlets that cover education; they engage in valuable conversations and highlight reports worth reading. In the case of an emerging innovation such as MOOCs, however, it appears to me that the media have extracted findings from reports with much more mixed results and presented a story that makes the innovation seem infeasible and met with wide resistance. While that makes for a good story, my experience suggests that far more individuals are on the fence, interested in watching how things develop and in helping to shape strong practices, than are completely pro- or completely anti-MOOC. That type of story has its own interest and is worth telling. Mixed results are still results.