Research

Alex Kindel

Summary of Discussion
Student Data and Records in the Digital Era
Asilomar, CA
June 15–17, 2016

This memo synthesizes the discussions of the “research” strand of Asilomar II. This strand was convened by Tim McKay of the University of Michigan and included approximately 25 people representing a wide variety of education organizations, including research universities, broad-access colleges, education technology companies, and nonprofit organizations. The conference sought to explore the application of scientific norms as a basis for making policy about student data use.

Scientists across the academy are taking a renewed interest in the practices of teaching and learning, and are applying diverse methodological and conceptual tools to advance the study of postsecondary instruction. However, the systems that would enable researchers to perform this work widely and systematically are not yet in place. To date, much research in higher education has been organized through ad hoc partnerships and one-off research arrangements. Many providers in the higher education sector do not have the resources to routinely provide administrative data to researchers. Educational organizations are only now beginning to develop the technical and organizational infrastructure necessary to support ongoing systematic study of their activities. Researchers at the convening discussed at length the opportunities, challenges, and prospects of building this infrastructure.

Discussions in the research strand focused on three broad ideas:

  1. Technical infrastructures that undergird higher education’s basic activities offer untapped opportunities for systematic investigation.
  2. Relationships with third-party vendors should be organized in ways that accommodate and support scientific investigation.
  3. Academics and their institutions have an ethical responsibility to support research that advances basic science, informs instructional practice, and enhances educational opportunity.

Technical infrastructures

The collection and aggregation of information on students is not a new process. Colleges and universities have always had systems for routinely documenting their activities, and they have long maintained official records of student performance and progress. Organization-level statistics—completion rates, tuition fees, acceptance rates—are used by regional accreditation agencies, state and federal agencies, and media companies to evaluate and rank institutions. These public comparisons in turn shape how colleges and universities establish priorities, allocate scarce resources, and compete with peers. They also shape how students themselves plan and think about their own educational trajectories.

Recent developments in the use of technology by colleges and universities both complement and extend these legacy systems. Instructional software is increasingly instrumented to collect fine-grained data on student behavior. As these tools proliferate, researchers and administrators are beginning to see existing systems as amenable to data collection and analysis. The application of techniques from data visualization and human-computer interaction makes it possible to understand all of these data sources in new ways. Yet researchers remain uncertain about what kinds of data are appropriate for research use, particularly when research findings are intended to inform academic decision-making.

Participants in the research strand suggested that studying how students and administrators use data tools to make decisions is a critical step in developing routines and norms for scientific data access. Rather than relying on assumptions and traditions that dictate what kinds of resources are “appropriate” or “useful,” researchers called for systematic analyses of advising tools and their effects on academic trajectories.

Methodologically, researchers are interested in performing experiments on instructional interventions to understand how they affect student trajectories. Where intentionally designed experiments are not possible, causal inference methods can support similar analyses. In every case, researchers believe they can learn how these methods are best deployed only by designing research interventions and instructional resources simultaneously. Researchers also acknowledge that the results of these inquiries are likely to change the conditions under which academic judgments will be made by students, instructors, and administrators. Participants in the research strand recognized a professional responsibility to produce insights that clearly and robustly identify causal mechanisms, and to build software tools that create rather than foreclose educational opportunity.
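To make the contrast concrete, the sketch below simulates both situations: a randomized experiment in which a simple difference in means recovers an intervention’s effect, and an observational setting in which regression adjustment (one basic causal inference method) corrects for a confounder. The example is illustrative only and is not drawn from the convening; the simulated data, the effect size, and all variable names are hypothetical.

    # Illustrative sketch (hypothetical data): a randomized experiment
    # versus a covariate-adjusted observational estimate of an
    # instructional intervention's effect on a student outcome.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical confounder: each student's prior GPA, standardized.
    prior_gpa = rng.normal(0.0, 1.0, n)

    # Case 1: randomized assignment to the intervention (an experiment).
    treat_rct = rng.integers(0, 2, n)
    y_rct = 0.3 * treat_rct + 0.5 * prior_gpa + rng.normal(0.0, 1.0, n)
    # Under randomization, a simple difference in means is unbiased.
    ate_rct = y_rct[treat_rct == 1].mean() - y_rct[treat_rct == 0].mean()

    # Case 2: no experiment possible; stronger students opt into the
    # intervention more often, confounding a naive comparison.
    p_opt_in = 1.0 / (1.0 + np.exp(-prior_gpa))
    treat_obs = rng.binomial(1, p_opt_in)
    y_obs = 0.3 * treat_obs + 0.5 * prior_gpa + rng.normal(0.0, 1.0, n)
    naive = y_obs[treat_obs == 1].mean() - y_obs[treat_obs == 0].mean()

    # Regression adjustment recovers the effect when the confounder
    # (prior GPA) is observed and the outcome model is well specified.
    X = np.column_stack([np.ones(n), treat_obs, prior_gpa])
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)

    print(f"true effect: 0.30  experimental estimate: {ate_rct:.2f}")
    print(f"naive observational: {naive:.2f}  adjusted: {beta[1]:.2f}")

Run as written, the naive observational estimate is inflated by the confounder, while the experimental and adjusted estimates sit near the true effect of 0.30. This is the logic behind treating designed experiments as the first choice and causal inference methods as the fallback.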

Third-party vendor relationships

Relationships with third-party vendors are under increasing scrutiny. Colleges and universities have long relied on external service providers to build technical infrastructure for teaching and learning, both on and off campus. For example, commercial textbook publishers have long been third parties supporting instruction in generations of college classrooms. Additionally, as early as the 1950s, many schools worked with commercial radio and television firms to build systems linking faraway learners with educational resources. Since personal and networked computing became widely accessible in the 1990s, schools have actively experimented with commercial applications.

Different kinds of educational organizations have leveraged new technologies differently. Broad-access schools have depended on commercial software vendors to provide robust and reliable tools for supporting large enrollments. Moderately selective schools, especially state university systems, have formed consortia to bargain collectively with vendors. The most admissions-selective schools (which are often far wealthier than their broad-access peers) have often been able to afford active in-house development of their own software tools.

As these systems become ubiquitous, colleges and universities of all sorts are actively rethinking how vendor relationships should be framed and negotiated. Schools have begun to expect vendors to provide data access and analytic tools along with their instructional software services. Participants in the research strand, including both researchers and vendors, highlighted a need for institutional data agreements that explicitly address major obstacles to scientific inquiry. Commonly cited obstacles concerned data documentation, transfer processes, and privacy protections.

Institutional data agreements can provide researchers with systematic access to data on academic activities that rely on third-party software. Such agreements also make it possible to conduct experiments and causal-inference inquiries at departmental, divisional, or organizational levels. Where such agreements are not present, researchers must rely on ad hoc relationships to gain access to archived data or potential experimental environments. When data-sharing or experimental-access arrangements are negotiated one by one, researchers have no guarantee that data will be consistently documented or that received datasets will be available for future replication. Without institutional support to maintain vendor relationships, it is difficult for individual researchers to obtain reliable and timely information about changes to the software that might affect the validity of ongoing experiments or previously collected data. When left implicit, the fiduciary status of student data inhibits data transfers between higher education institutions and third-party vendors.

Institutional support for research

Research and data analytics can both challenge and strengthen professional evaluations of instructional quality. As with technical infrastructure and vendor relationships, instructional evaluation is by no means a new activity in higher education. Colleges, universities, and other academic organizations may offer a wide range of services to assist faculty with curriculum development and course evaluation. However, these activities have rarely been linked to the training of new instructors or to institutional data collection efforts. While resources are made available to interested and self-motivated instructors, only a few institutions have made commitments to systematically and routinely evaluate instructional quality or student outcomes.

New technical tools make it possible to think of institutional data collection, instructional practice, and academic evaluation as parts of a continuous cycle of evidence-based improvement. Participants in the research strand concurred that research on postsecondary instruction cannot advance without becoming more closely linked to both teacher training and ongoing instructional practice. For example, robust experimental designs can rarely be implemented without allowing researchers some say in how educational interventions are designed and delivered. Participants also suggested that linking these activities may make it easier to discern and measure relevant outcomes, rather than relying solely on the expertise and discretion of instructors or on student self-reports.

The prospect of systematic instructional design and evaluation also presents a challenge to the autonomy of academic professionals. Instruction has historically been treated as an art or tradition to be honed over time, rather than as an activity that might be gradually improved on the basis of evidence. Postsecondary instruction has no widely accepted norms of continuous improvement, peer review, or malpractice. Instructors are typically not taught to look for mistakes or missteps in their own teaching practice, or to question the assumptions they bring to teaching. This stands in sharp contrast to the norms of scientific research—a culture of collective questioning and systematic improvement with clearly defined ethical boundaries—norms to which many postsecondary instructors already subscribe in other aspects of their professional lives.

Higher education leaders, including researchers, are actively considering whether everyone involved in higher education may in fact have an ethical obligation to participate in intramural research on teaching and learning. Scientists working in this domain understand student data as both a personal good and a public good, to be used for the benefit of the individual student’s own education and for the generation of knowledge that can improve the education of others. The participants in the research strand envisioned academic environments where students and instructors alike are active partners in building knowledge.