Developing a Research Agenda for Ed-Tech
Last week, the Jefferson Education Accelerator, an ed-tech incubator at the University of Virginia’s Curry School of Education, announced plans to launch a large-scale project to research the “barriers that keep companies and their customers from conducting and using efficacy research when creating or buying ed-tech products.” In a Chronicle article announcing the project, Bart Epstein, CEO and managing director of the Jefferson Education Accelerator, explains that little rigorous research exists on the efficacy of ed-tech tools. Much of what does exist is funded by companies and, when conducted by an outside party, is often completed as a consulting engagement whose results remain proprietary.
Awash in marketing claims, sector hype about new tools or products, and a variety of institutional pressures, educators and administrators often have few reliable ways of discerning which tools work best to improve student learning and outcomes. Even more troubling—and the real problem the project seeks to address—is that most institutions have neither the capacity nor the incentives to facilitate or use the sort of efficacy research that might aid in this sort of evidence-based decision making.
It is for this reason that a research agenda for ed-tech tools must do more than ask “what works.” We also need to know under what conditions these tools are effective, for whom, and why, in the particular contexts in which they are implemented. An educational technology tool and its adoption can be grounded in sound and rigorous research, but if the tool is implemented in a context with little buy-in, among poorly trained faculty or staff, or for students with poorly matched levels of digital literacy, it has little chance of reaching its potential.
Last week, the Community College Research Center published a report that illustrates this point well. The report evaluated six institutions that had implemented a form of technology-mediated advising the Gates Foundation refers to as IPASS, or integrated planning and advising for student success. Though each of the institutions studied implemented a similar suite of technology tools to support advising, their success varied considerably. The authors found that how successfully institutions used the tools depended on contextual factors, including vendor relationships, organizational orientation toward student success, a sense of urgency, and aligned leadership. In other words, technology was a necessary but insufficient condition for change.
At Ithaka S+R, we’ve encountered similar findings in our own research. Nearly all of the institutions we’ve studied in our case studies have implemented multiple technology tools to support student learning and progression. We’ve found that one important determinant of whether a tool has an impact is how it is used and how it has been integrated into a larger student-success strategy. Similarly, in our ongoing evaluation of online and hybrid humanities courses at liberal arts institutions, we have found that, in the courses with the highest reported levels of engagement, instructors used tools creatively, thought deeply about how the tools supported their pedagogical goals, and had support from instructional designers, IT staff, administrators, and other faculty members.
None of this is to suggest that research into the efficacy of the tools themselves isn’t valuable and needed. However, a research agenda for these tools must acknowledge that efficacy cannot be isolated to features, design, or development. Rather, efficacy is deeply embedded in the organizational circumstances in which institutions and their stakeholders implement and use technology, and research into ed-tech tools should devote as much attention to context as it does to the tools themselves.