In June 2020, Ithaka S+R conducted 18 synchronous exploratory virtual focus groups with 46 undergraduate students and 37 faculty to help a higher education institution learn from its emergency transition to online learning during the spring 2020 term due to Covid-19. We will publish our findings, a time capsule of a university community’s rapid adaptation in response to the pandemic, in a public report later this year. To conduct these focus groups in the midst of the pandemic, we, like the students and faculty we interviewed, needed to shift our work online swiftly. Collecting qualitative data online posed challenges, some of which mirrored faculty and student experiences during the fully remote semester. Like our focus group participants, we confronted technological access and equity issues and labored to engage participants and replicate human connection online. We present here a short methodology case study to share our process of transitioning to strictly online qualitative research, so that our focus group series can benefit not only higher education institutions but also other social science researchers navigating the emergency online context.

Although the pandemic necessitated that our focus group series be conducted completely online, virtual focus groups are not a new phenomenon, and in fact they offer numerous advantages for many research studies. For instance, they permit researchers to assemble participants who are geographically dispersed and for whom travel may not be physically or financially feasible, and they do not require large meeting spaces. Of course, virtual focus groups have limitations, the most serious being the exclusion of individuals who lack adequate access to devices and reliable internet connections. Nonetheless, it is reasonable to assume that virtual data collection, including focus group interviews, will increase and persist in the near future, particularly as higher education continues to be at least partially remote. Below, we highlight existing methodological recommendations from the field, describe what did and did not work for us, and reflect on how our online focus group procedures aligned with recommendations from older peer-reviewed publications and more contemporary practice blogs.

Prior to commencing our study, we reviewed the literature pertaining to virtual focus groups to identify recurring strategies and recommendations. When sorted by relevance, search results for “virtual focus group” and “online focus group” in Google Scholar yield articles primarily from the early 2000s, before widespread access to video-conferencing software and the corresponding technological fluency. Much of the peer-reviewed literature on the topic is dated by its exclusive focus on synchronous or asynchronous text-based focus groups conducted via email or discussion boards. Given the near-ubiquity of video-conferencing software in 2020, we turned to newer scholarly literature and to practice blogs published by MDRC, Rev, and Research Design Review at the onset of the Covid-19 pandemic for more contemporary recommendations. Recommendations across peer-reviewed and blog publications coalesced around four core topics: recruitment, technology, moderating, and privacy. Each topic includes a few distinct themes and best practices, which we attempted to adhere to in our original focus group design. Over time, however, we jettisoned some suggestions and prioritized those that supported our ultimate goals. From these experiences, described below, we drew some preliminary conclusions about existing recommendations for conducting virtual focus groups for qualitative research in the present context.

Participant Recruitment

To ensure inclusivity, the ways in which participants are recruited, grouped, and engaged must shift to meet the needs of the online context. The literature identifies a few best practices for shaping the online group, including over-recruiting, limiting group size, and mitigating the exclusion of participants with limited tech access. Multiple researchers warn that online focus groups tend to have higher attrition rates than in-person focus groups, which makes over-recruiting preferable. At the same time, the resources we reviewed suggested that each group remain relatively small, with recommended group sizes converging around four to five participants.
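To make the over-recruiting guidance concrete, the following back-of-the-envelope sketch in Python shows how a team might size its sign-up target; the group count, seats per group, and attrition rate here are illustrative assumptions, not our study’s actual planning figures.

import math

def signups_needed(groups, seats_per_group, expected_attrition):
    # Total seats to fill, inflated by the share of registrants
    # expected not to show up on the day of the session.
    needed = groups * seats_per_group
    return math.ceil(needed / (1 - expected_attrition))

# Illustrative figures only: nine groups of five, anticipating ~40% attrition.
print(signups_needed(9, 5, 0.40))  # 75 sign-ups to yield roughly 45 attendees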

In the midst of the pandemic, we sought to minimize the burden of recruitment on the higher education institution and affiliated participants alike by conducting an open call with both students and faculty. To recruit students, an email was sent to a random sample of 2,500 undergraduates who had been enrolled during the spring 2020 term and were at least 15 credits away from graduation. To recruit faculty, an email was sent to all members of an institutional advisory board, who in turn forwarded the invitation to their fellow faculty constituents. Both groups were asked to sign up for specific pre-set focus group time slots and were then sent final invitations with relevant details. As predicted, attrition was relatively high: 41% among students and almost 25% among faculty. Over-recruitment offset these losses in our student sample. Recruitment efforts garnered a final sample of 46 undergraduate students and 37 full-time and contingent faculty members. Students were compensated with a $20 online gift card. Faculty were not compensated for their participation, though the invitation to discuss their experiences with others appeared to be a strong incentive in itself: there was a waitlist for participation, and most faculty expressed sincere thanks for the opportunity, with one even calling it “free therapy.”
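As a rough consistency check (our own arithmetic, not figures reported above), the final counts and attrition rates imply the approximate number of sign-ups each recruitment effort yielded:

# signed_up = attended / (1 - attrition)
print(round(46 / (1 - 0.41)))  # roughly 78 student sign-ups implied
print(round(37 / (1 - 0.25)))  # roughly 49 faculty sign-ups implied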

Participating students were divided into three rounds of three focus groups, with four to seven participants each, based on major concentration and/or class standing. Faculty were divided into nine discipline-based groups, with two to six participants each, conducted across five days. Consistent with the literature, the ideal size for both our student and faculty groups seemed to be four to five participants, which allowed for robust conversation while ensuring everyone had time and space to contribute. Because our recruitment strategy was exclusively online, our sample included only those with sufficient technological access to receive emails, complete online forms, and access instructions for when and how to join their group.

Technology

Technological considerations were at the forefront of our minds. To ensure smooth use of the technology and prevent mid-session disruptions, the literature recommends that the project team conduct a preparatory trial run of the software and send participants emails detailing how best to prepare (e.g., testing their technology and reviewing technology FAQs) in advance of attending. Existing publications also recommend allocating time at the beginning of the session to testing participant technology, reviewing technology features and etiquette, and describing what to do if issues arise during the interview. In response to these suggestions, we put a number of failsafes in place to avoid technological pitfalls mid-session.

Prior to our first student focus group, the project team met on the software platform to practice sharing screens, sizing PowerPoint slides, and dividing the large group into smaller interview pods. In our full-group opening, we provided a brief overview of tech features, best practices, and troubleshooting procedures, as recommended, and displayed a corresponding informational slide. We also encouraged students to experiment with their own settings to optimize the layout before the focus groups began. After our first round of student groups, during which a few students faced technical challenges, we assigned a non-facilitator to remain in the conferencing platform’s “waiting room” for the duration of the session to provide technical assistance as needed. Students could “step out” of their group to address technology concerns while minimizing disruption to the groups. These two strategies were highly effective in mitigating the technical issues that arose over the course of the student interviews. Because students and faculty had already been through at least half of a semester fully online and had experienced the rapidly increasing prevalence of videoconferencing, we anticipated basic tech literacy and familiarity with online etiquette. Indeed, positive virtual group behaviors, such as remaining muted when not speaking, pausing before responding, and watching for others unmuting themselves, were not a concern and required no further adjustments.

In addition to technological use, we thought extensively about technological access and equity disparities. By conducting synchronous groups, we presumed some level of tech access, while understanding that not all participants would have the same capabilities. We attempted to include participants who struggled to access the internet and/or video by offering a call-in option and encouraging participants to use the chat function. The importance of both options was reinforced during certain student focus groups, when participants who experienced technical difficulties or lost internet access were able to reconnect via phone. We also offered follow-up email communication as another potential remedy for mid-session technology or internet problems. Technical issues arose primarily among students rather than faculty. Later findings from the faculty focus groups suggest that a lack of reliable internet access was indeed a widespread problem among students, possibly impacting our student sample and attrition rate more than we anticipated. It is not clear whether the faculty we were unable to reach in this study also experienced digital access challenges. Future researchers might find it worthwhile to set up an asynchronous, chat-based group for students, much like those described in the early peer-reviewed articles mentioned above, to garner more representative samples.

Moderating the Groups

The virtual context can make it more difficult for participants to engage with the group, as the social norms that usually facilitate in-person conversation (e.g., body language and eye contact) are not in place. Accordingly, the literature suggests additional supports to make it easier for participants to engage with the questions and with each other. In addition to reviewing technical functionalities, the literature recommends opening the session by reviewing participation guidelines and the session’s purpose. This framing prepares participants to engage with each other as well as with the content of the interview. Soliciting round-robin responses to each question (each participant answers or passes) before opening the question to further discussion was also recommended as a way to counteract the increased hesitation to contribute that comes with online conversations. Similarly, past authors like Lewis and Muzzy caution, “expect and encourage a slower rhythm to the conversation than in an in-person focus group.” Using text slides to offer a written correlate to verbal questions was also suggested across resources. Finally, the pieces we reviewed encouraged incorporating multi-modal engagement strategies, including pushing out survey questions, polling, and inviting text responses in the chat, in order to collect data and give participants time to reflect before discussion.

In our first student groups, we applied each piece of advice that seemed most relevant or that appeared across multiple resources. We also made time to build rapport through a fifteen-minute video “warm-up,” in which facilitators introduced themselves in a casual, friendly environment; this seemed to put the students at ease once the interview portion began. We found that inviting a round-robin response to an open-ended introductory question, as recommended, helped to establish a baseline for participation and gave us a sense of the group. In the first two rounds of student groups, we shared slides as a visual supplement to verbal questions, but after struggling to perfect the screen settings needed to both show the slides and see students’ non-verbal communication cues, we concluded that providing this support was doing more harm than good. In the future, however, we may consider posting written questions in the chat for participants to reference as needed. We also dropped the introductory poll questions after two rounds of student groups: they took a significant amount of time to administer, and we felt this time would be better used allowing participants to engage with each other.

We adopted fewer supports for the faculty groups, but we found that certain techniques, like opening with a round-robin question, worked across both student and faculty focus groups. Moderators often referred both students and faculty to the chat. Some individuals enthusiastically embraced the chat function, and it proved a good way for big talkers to stay engaged without dominating the conversation. Anticipating a slower rhythm of conversation, moderators consistently allowed “wait time” between questions and answers. When conversation was stilted, facilitators manufactured dialogue in the “unnatural” space of the online platform by cold-calling on a student or faculty member to ask whether their experiences resembled or differed from those voiced already. The cold-call process was made easier by having participant names displayed, a luxury usually not afforded by in-person focus groups. Participants were told they could rename themselves to protect their privacy, a practice on which we elaborate in the section below.

Privacy

Virtual focus groups introduce new privacy and confidentiality considerations that require planning and foresight. Specifically, existing literature recommends that researchers ensure informed consent by displaying consent form language on a slide prior to asking interview questions, protect participant identity by assigning anonymous or abbreviated usernames, and store audio-video recordings securely.

Ithaka S+R adhered to these privacy and ethics best practices across groups while taking two slightly different approaches to obtaining student-participant and faculty-participant consent. During the student recruitment process, we informed prospective participants of privacy risks and secured signed student consent forms as part of the registration process. Because student-participants signed the consent form digitally and well in advance of their focus group sessions, we reminded them of its contents at the start of the interview and visually reinforced this oral reminder with a slide. While faculty entered sessions aware of the general goals of the study and knowing that they would be joining their colleagues in groups, they did not pre-submit consent forms. Instead, we offered faculty the opportunity to ask researchers questions ahead of the session, via email, before obtaining their verbal consent to participate at the start of the session. This consent procedure worked well in most faculty focus groups, but one faculty member felt rushed to consent at the time of the group and asked many questions about how the data would be used. Because these valid questions took up a significant portion of the session, in the future we recommend sending all participants a consent form to review, at minimum, prior to joining the group.

We took extra precautions to safeguard participant privacy in the online forum. The videoconferencing software we employed, like many other text- and video-based technologies, assigns auto-generated usernames based on participants’ real names. To protect participants’ anonymity, we reminded them at the beginning of each session that they could rename themselves. We also informed participants that their responses would be fully anonymized in notes and findings, and we obtained each participant’s verbal consent prior to beginning the audio recording for each session. Although our video-conferencing platform was equipped with video-recording capabilities, we saved only audio files to protect participants’ images and usernames. Because we conducted sessions on cloud-based video-conferencing software, we adjusted the settings to ensure that the audio files were saved to our secure, GDPR-compliant storage drives rather than to the software’s cloud. A few participants told our project team that the reassurance of confidentiality and security encouraged them to share candidly and enthusiastically.

Conclusion

Since the earliest articles discussing digital focus group methodologies were published at the turn of the twenty-first century, video-conferencing software has narrowed the gap between in-person and online focus group modalities. Our faculty and student participants were processing a monumental, life-altering experience, so their engagement and within-session bonding may have been stronger than might otherwise be expected. Still, we believe that conducting synchronous audio-video interviews in 2020 requires only simple, common-sense adaptations to in-person norms and procedures, with a special focus on fostering human connections with and among participants. Qualitative data collection online, as in person, succeeds when researchers place participants’ experiences and well-being at the center of the process while maintaining a flexible disposition to field unexpected challenges as they arise.

Internet connectivity poses the most daunting challenge to researchers working in this modality, and substantial disparities in access to broadband internet (and, by extension, video-conferencing software) persist by age, race, income, education, and geography. Establishing clear procedures for call-in participation strengthened our groups’ engagement and reduced attrition due to connectivity challenges. However, we are keenly aware of the constraints that the Covid-19 pandemic placed on group composition and participation. We regret that our fully online format may have excluded the students and faculty who were least digitally equipped and therefore possibly most negatively impacted by the university’s transition online. Once global circumstances allow, we encourage future researchers working primarily online to recruit and conduct one or more complementary in-person focus groups to advance equity and study quality.