Introduction

The rise in technology-facilitated assessments has created a paradigm shift in employer talent acquisition. Traditionally, the process of assessing candidates’ skills has focused on resumes composed of credentials signifying successful participation in or completion of an experience (like a degree, a training program, or a prior job) and candidates’ own claims of competencies.[1] Such a process favored intermediaries like higher education institutions, industry associations, governmental agencies, and former employers, who awarded credentials for successful program completion or could vouch for candidates’ competencies. By contrast, a large and increasing share of today’s job candidates are assessed directly on their job-related competencies through various technology-assisted means, allowing employers to supplement—and sometimes forego—the traditional criteria. In fact, a 2017 survey of more than 800 human resources professionals in the Americas found that 63 percent are using pre-hire assessments as part of the hiring process, and 89 percent of these assessments are delivered online.[2]

This has significantly altered the ecosystem’s circuitry. Some of the largest employers are developing and deploying in-house, technology-facilitated assessment solutions when considering a candidate for a new employment opportunity. At the same time, a new crop of third-party assessment providers is creating customized solutions for corporate clients as well as stand-alone assessment platforms that employers and candidates access independently of one another. The talent acquisition technology and services market is estimated to reach $113.9 billion in 2021,[3] and assessment technology in particular is primed to be one of the biggest areas for innovation in this sector. Traditional intermediaries haven’t disappeared; instead, an increasing number of higher education providers, industry associations, and other groups that have historically validated candidates’ skills are adopting new assessment technologies and adapting to the new ecosystem.

In short, pre-employment assessment in the United States is in the midst of a wave of rapid innovation. Our aim for this paper is to document and evaluate this uncharted territory, describing how the dynamic landscape has changed to date and where it seems to be heading. In addition to discussing the most promising emerging trends and platforms in the employment assessment ecosystem, we identify some potential concerns and barriers to scale, as well as opportunities to address them.

Several major themes emerged from our research:

  • Traditional pathways from education to the workforce are starting to be supplemented and circumvented in the assessment ecosystem. A perceived skills gap between candidates’ competencies and employers’ needs has emerged. Employers are increasingly challenged to find workers with the skills they need and are beginning to distrust the traditional signaling credentials such as academic degrees, industry association endorsements, and state licensures. Although the most popular pre-hire screening tool is still the resume review,[4] a developing trend among employers is to bypass the historical pathways for verifying candidate competencies, especially among entry-level positions. Innovative employers, equipped with online “always on” testing portals, cloud storage, and big data analytics, are turning to novel approaches for assessing job readiness.
  • Third-party providers are rapidly entering this new technology-driven ecosystem, providing new assessment methods. A new and growing crop of assessment startups promise to help employers connect verified talent to real jobs through both pull and push models of skills signaling. Employers have historically outsourced their assessment needs to test publishers, consultants, and headhunters. However, these new technology providers leverage the capture of big data around talent acquisition and the cutting-edge technologies that analyze it—such as machine learning, social media scraping, gamification, and digital interviews—in order to “pull” a diverse set of competencies from the talent pool. Intermediaries including higher education and industry associations are also turning to technology providers for skills assessment and signaling by adopting various badging and ePortfolio platforms that “push” competencies to hirers. Similarly, technology providers with candidate-focused testing platforms allow individuals to directly assess their hard and soft skills and push them to employers without the need for traditional intermediaries’ verification.
  • The marketplace is flooded; assessment technology selection can be a burden for all players in the ecosystem. This new combination of provider-generated pathways isn’t always easily navigable. In fact, the hyper-fast pace of assessment technology product development is so difficult to follow that during the relatively short preparation of this paper alone, a number of new assessment providers have entered the space, converged through mergers, and shuttered their doors. This “wild west” marketplace has caused several issues. Employers have limited information with which to decide among assessments and assessment providers, or with which to parse the hundreds of thousands of “verified” digital credentials candidates present to them. Additionally, black box validation measures used by third-party providers that claim high predictive capabilities and verified job matching abilities are not only confusing for employers, but legally risky. Candidates face the same problem when choosing how to validate and signal their skills and fear wasting time and money on tools that employers might not even use to screen talent.
  • Incompatibility—of both content and software—across assessments and employers’ human resources systems presents a barrier to broad-based and efficient use of direct pre-hire assessment. Without a proverbial “Rosetta Stone” for competing skills assessment technologies, employers and candidates alike will struggle with adoption. The ecosystem needs platforms that promote the comparison of disparate alternative credentials and, most importantly, that give transparency into which skills these credentials verify and how these competencies were assessed. Similarly, human resources professionals at the top of the hiring funnel need effective, validated tools that scientifically thin the applicant herd without causing additional work and translation. Thus, technology-facilitated assessments, whether standalone tests or part of a digital badge program, should be integrated into enterprise applicant tracking systems (ATS), software that a majority of employers use to manage the hiring process.
  • Intermediaries in the ecosystem are out of touch with employers’ and providers’ new methodologies. Despite their growth and adoption in the private sector, intermediaries are not tracking the market innovations for analyzing candidates’ skills. Pull model technology-facilitated assessments are not developing in a vacuum. Yet key players in the ecosystem—including higher education administrators and industry association officials—seem to ignore many of the specific competencies today’s employers are measuring as well as the advanced methods by which their new assessments are capturing skills. These invested stakeholders should review, develop, and implement the very processes employers are actually using to hire candidates. In so doing, intermediaries such as higher education can better serve their students and guide them toward employability.
  • Emerging partnerships that leverage several players in the ecosystem to provide integrated, multimethod assessment strategies are best equipped to successfully measure and develop candidates’ skills. Technology-facilitated assessment is not a panacea that will magically match qualified candidates to jobs, nor is there a single tool that will act as a talent divining rod for identifying skilled candidates. These technologies are still very much in their infancy, so their adoption by employers requires phased checks-and-balances processes in order to fine tune and adjust their predictive effectiveness. Multimethod assessment strategies that continuously report strengths and areas for development directly to candidates have the best potential to bridge the skills gap and help candidates build the competencies they need to enhance employability.

This paper uncovers these themes by taking a landscape view of the complex technology-facilitated assessment ecosystem. It begins with a history of the ecosystem’s shift from analog to computerized delivery. It then takes a closer look at today’s assessment ecosystem—its new technologies, its new providers, and the new relationships that are being shaped among traditional players. (A detailed table that inventories the growing marketplace’s notable third-party providers is linked in the paper’s appendix.) Next, we identify emerging trends and promising platforms within this new assessment ecosystem as well as the unique concerns posed by technology-facilitated assessments. We end by discussing the opportunities offered by responsible adoption and close with a set of specific recommendations:

  • Research validity and impact. There is much work to be done regarding the scientific validity of new assessments, especially advanced assessment approaches utilizing black box machine learning algorithms. Similarly, research should be undertaken now in order to begin tracking the impact of adopting technology-facilitated assessments on the labor market. This will not only offer a clearer picture of how employers are using these tools, but also identify the consequences—intended and unintended—brought about by this evolution in employee screening procedures.
  • Develop ethical and legal frameworks. The assessment marketplace needs marshalling. New techniques for evaluating and measuring candidates’ competencies involve generating large datasets containing personal information that could infringe upon employees’ and applicants’ right to privacy. Ethical guidelines must be established to govern this data. Legal policies regarding pre-hire screening must also be adapted for new assessment tools, ensuring that widespread adoption of advanced assessments does not have an adverse impact on protected groups.
  • Pilot multiplayer programs with third-party auditors. To open the pathways for successful collaboration between employers, candidates, and traditional intermediaries in the new assessment ecosystem, pilot programs should be designed that implement skills assessments throughout candidates’ career and learning pathways. Instead of assessing students after graduation, the assessment process should begin while learners are still training for careers, a practice that could have enormous potential to help close the skills gap. By providing transparent feedback on assessment performance to both individual learners and higher education institutions, candidates can better use their education to ramp up the skills needed to land their first job—and educators can better tailor their curriculum to impart the competencies for which employers are hiring.

The Assessment Ecosystem’s Technology-Facilitated Shift

This section provides a background of the traditional assessment ecosystem, including how employers historically screened candidates for job opportunities and how credential-issuing intermediaries aided candidates in the signaling of their competencies to employers. It then details the ways in which first computerized and then internet-based assessment technologies complemented and changed analog assessment approaches.

Traditional Approaches to Employment Assessment

This section discusses the history of both the “pull” and “push” modes of candidate assessment. It begins with a historical perspective of how employers have assessed candidates’ skills through a pull model of direct screening. It then discusses the simultaneous methods by which candidates pushed their applicable competencies to employers by signaling skills through a set of credentialing intermediaries.

History of Employers’ Pre-Hire Screening

While pre-employment tests have recently re-entered the national conversation, it is worth noting that these tools are in fact a century old. The U.S. military pioneered the field during World War I, when millions of drafted American soldiers necessitated an efficient and effective way to sort and place personnel. The success of these early assessments, paired with the growth of applied psychology and its sub-field of industrial and organizational (I-O) psychology in the 1920s, spurred the rapid adoption of civilian workplace assessment.[5]

By the mid-twentieth century, paper-and-pencil psychometric testing was a popular employee screening tool across many industries, from entry-level jobs to managerial positions. The majority of these analog personnel selection tests measured cognitive skills through standardized general aptitude tests developed by industrial psychologists.[6] During this national rise in testing, it was not uncommon for large U.S. organizations to employ I-O psychologists to develop proprietary assessments,[7] or for enterprises to outsource candidate testing to consulting firms in order to predict candidates’ fit. For example, the Wonderlic Personnel Test was designed in 1936 to measure general intelligence and is still administered through Wonderlic Inc., having assessed more than 200 million individuals’ cognitive ability as a measure for job performance.[8] Psychometric personnel assessments such as Wonderlic were used not only for hiring new candidates, but also for identifying the potential of current employees for upskilling.

Candidates’ competencies were first screened directly by employers through written and oral examination and evaluated based on standardized test scoring methods. In the 1950s, employers began looking for more effective ways to measure candidates by incorporating behavioral observation and developing assessment centers.[9] Assessment centers include a mix of individual and group exercises that simulate the work conditions of the position for which the candidate is being tested. Applicants’ behaviors are observed and rated by trained assessors, typically subject area experts for that role, as they demonstrate their effectiveness in completing the tasks presented by the simulation exercises.[10] These assessment centers foregrounded standalone candidate simulations, which became an important tool within the analog assessment ecosystem by the 1970s.[11] Simulation tests typically measure specific job-related skills and require the successful demonstration of a given task, for example, drafting a press release, installing a telephone line, or welding a pipe.[12] In the healthcare industry, in-person simulations have long been used to assess technical skills, such as in the United States Medical Licensing Exam.[13]

History of Skills Signaling to Employers through Intermediary Credentialers

Alongside the development of pre-hire assessments, employers began to rely on the credentials offered by a set of intermediaries to assess candidates’ skills: higher education, industry associations, and government agencies. The value of these credentials as screening tools depended heavily on the reputation of the credentialing intermediary in the field. In theory, employers could trust that candidates’ competencies were thoroughly vetted by the credentialing organization through the process of earning the credential.

The twentieth century’s education revolution caused the academic credential—from the high school diploma to advanced graduate degrees—to signify occupational prestige.[14] By the 1970s, economic theorists saw the degree as a key employment-filtering device.[15] For certain fields, industry associations and government agencies began to standardize workers’ required skills and administer exams that verified individuals’ mastery of a skillset. These intermediaries continue to regulate the field in order to provide the public with dependable mechanisms for identifying qualified practitioners.[16] For example, trade associations such as the National Association of Manufacturers, the American Welding Society, and the National Institute of Metalworking Skills each award certification for workers in these representative fields through assessment.[17] Similarly, state licensure is granted to candidates for positions that are regulated by specific governing boards or agencies. This includes the licensing of nurses, teachers, social service workers, nutritionists, real estate brokers, and cosmetologists, among other professions. Criteria for licensure are dependent upon individual state regulation and vary by profession, but most licensing boards require direct competency assessment through qualifying exams. In many cases, higher education institutions partner with industrial organizations and governmental boards to ensure that learning outcomes are aligned to industry certification and state licensure; in fact, many vocational programs build this into the curriculum.[18]

Computerized and Internet-Based Testing

This section discusses the move from paper-based assessments to first computer- and then internet-based pre-hire testing and skills credentialing. It argues that while computerized assessments certainly changed the ease and accuracy of exam delivery and scoring, many analog typologies (e.g., assessment structure or types of skills measured) remain the same despite having moved online. However, technological advances in collecting, storing, and analyzing assessment data have revolutionized employers’ ability to leverage assessments at a scale far beyond that of analog methods, building a new technology-facilitated ecosystem in which employers develop and deploy cutting-edge pre-hire assessments.

Computerized Assessments Increase Efficacy of Analog Approaches

Beginning in the 1960s, assessment testing moved to computerized delivery. This shift allowed for cost-effective assessments through the automated scoring of tests, accelerating turnaround from 14 days to 30 minutes or less.[19] Another key technological advantage in moving to machine-delivered assessments is computer adaptive testing (CAT). CAT assessments draw on large electronic question banks for a given test to tailor forced-choice exams in two ways: first, by algorithmically adapting question order and difficulty based on the accuracy of the individual tester’s previous answers; and second, by auto-generating forced-choice answers from a set of pre-made selection options. CAT provides shorter and more accurate assessments that better engage candidates, and it curbs test fraud by providing each user with a unique set of test questions and response choices.
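
To make the adaptive logic concrete, the following minimal sketch selects each next question from a bank based on the test taker’s running ability estimate. It assumes a simple one-parameter (Rasch) item response model and a crude shrinking-step update in place of a full maximum-likelihood estimate; the item bank, scoring rule, and answer callback are hypothetical and not drawn from any particular CAT vendor.

```python
import math
import random

def prob_correct(theta, difficulty):
    """Rasch (one-parameter IRT) probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def run_cat(item_bank, answer_fn, n_items=8):
    """Minimal adaptive loop: pick the unused item whose difficulty is closest
    to the current ability estimate, then nudge the estimate up or down
    depending on whether the response was correct."""
    theta = 0.0   # start from average ability
    step = 1.0    # shrinking step size stands in for a full MLE update
    used = set()
    for _ in range(n_items):
        # Under the Rasch model, the item nearest theta is the most informative.
        item = min((i for i in item_bank if i["id"] not in used),
                   key=lambda i: abs(i["difficulty"] - theta))
        used.add(item["id"])
        correct = answer_fn(item)   # deliver the item, observe the response
        theta += step if correct else -step
        step *= 0.7
    return theta

# Illustrative run with a simulated test taker whose true ability is 1.2
bank = [{"id": k, "difficulty": d}
        for k, d in enumerate([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])]
simulate = lambda item: random.random() < prob_correct(1.2, item["difficulty"])
print(round(run_cat(bank, simulate), 2))   # rough ability estimate
```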

Many of today’s computerized assessments still present more or less traditional testing items on a computer screen.[20] In fact, computerized pre-hire assessments use the same typologies of assessment as their analog predecessors. In terms of assessment content (the “what” of assessment tests), candidates’ hard skills are still typically tested by either general mental ability assessments or specialized skills assessments. General mental ability assessments test cognitive skills such as verbal fluency, verbal comprehension, quantitative reasoning, and logical reasoning, whereas specialized skills assessments examine specific job knowledge.[21] Soft skills assessments on the other hand evaluate candidates’ strengths and weaknesses regarding communication, teamwork, leadership, flexibility, adaptability, problem solving, creativity, persuasiveness, interpersonal skills, time management, and work ethic.[22]

In terms of assessment structure (the “how” of assessment tests), candidates’ hard and soft skills are assessed through either forced-choice or performance-based tests. In forced-choice assessments—such as those utilized by CAT tests—candidates select from a list of supplied answers, e.g., multiple choice, true or false, and ranking formats. Unlike forced-choice questions, performance-based assessments ask candidates to generate their own responses. These include assessments that evaluate candidates by demonstration, such as essays, short answers, or even the presentation of a skill or job task through onscreen simulation. Often, soft skills are tested using an assessment structure referred to as situational judgment tests (SJTs), which measure how applicants would handle typical on-the-job situations.[23] SJTs can be administered in either a forced-choice or performance-based structure.

Online Assessments Enable Big Data Capture and Widen the Talent Pool

Internet-based assessment has become the primary method of pre-hire testing of hard and soft skills, with thousands of test publishers providing a wide range of online assessments that measure anything from firefighter job knowledge to legal typing speed to communication effectiveness. Whereas offline computerized assessments are held in proctored testing centers, the move to online assessments allows for remote access and scoring. This widens the talent search to a globally diverse applicant pool whose fluency in a skillset can be quickly, easily, and cheaply assessed.

Internet-based approaches are attractive to employers for several reasons: the assessments themselves are effortlessly delivered through always-on testing portals; the data they generate are seamlessly stored on the cloud; and the results are automatically measured. However, simply hosting an assessment online does not leverage the full potential of today’s technology surrounding big data capture. As detailed in the next section, the game-changing innovation in assessment derives from the ability to harness the interconnected data and computing power of internet- and machine-enabled analytics.

The Technology-Facilitated Assessment Ecosystem

This section focuses on the new technology-facilitated assessment ecosystem by listing the major innovations being adopted by employers. The third-party providers that assist in developing these tools are also introduced as new intermediaries in the ecosystem. It ends with a short evaluation of how traditional players’ roles are changing given the new marketplace of assessment technology.

Assessment Enabling Technologies

As outlined in the previous section, employers’ desire to scientifically evaluate candidates’ competencies is not a new phenomenon. What is new, however, is their increasing adoption of technologies that drive better assessment delivery, data capture, storage, and the algorithmic analysis of assessment results. In the technology-facilitated assessment ecosystem, the core technologies that enable this are applicant tracking systems and artificial intelligence.

Applicant Tracking Systems

For most large enterprises, the applicant tracking system (ATS) is the principal technology employers use in the screening of candidates. ATS vendors provide software as a service (SaaS) applications, many of which centrally host applicant data on cloud servers accessible to employers through the internet. While ATS vendors own the application code, employers own the data collected in the system. The ATS centralizes the application process for employers by collecting, sorting, and organizing candidates’ application materials. An employer’s ATS is so integral to the hiring process that it is often referred to as the core element of the recruiting process, much to the chagrin of HR professionals who voice frustration with their often outdated, inherited legacy systems that have trouble incorporating the software application programming interfaces (APIs) of the new assessment technologies described below.[24] However, ATS providers enable employers to capture the big data needed for many predictive talent analytics and assessment technologies. The top ATS providers are Taleo, Greenhouse Software, iCims, and Jobvite, and many companies develop their own systems in-house.[25] However, new disrupters are entering the ATS marketplace. In 2017 Google launched its affordable Hire platform, which targets small- to medium-sized businesses, integrates with the Google suite of products using an intuitive and user-friendly interface for ease of scheduling and communication, and incorporates Google’s artificial intelligence technology to automate many HR recruiting tasks such as candidate discovery.[26] Such adaptability to integrating third-party APIs with ATS software is integral to the widespread adoption of new assessment technology, as will be discussed later in this paper.
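
As a rough illustration of what such an integration involves, the sketch below shows how an assessment vendor’s service might push a completed test score into a candidate’s ATS record over a REST API. The endpoint, token, and field names are hypothetical placeholders, not the actual API of Taleo, Greenhouse, or any other vendor named above.

```python
import os
import requests

ATS_BASE = "https://ats.example.com/api"          # hypothetical ATS endpoint
API_TOKEN = os.environ.get("ATS_API_TOKEN", "")   # credential issued by the employer

def post_assessment_result(candidate_id: str, assessment: str, score: float) -> dict:
    """Attach a completed assessment score to the candidate's ATS record."""
    response = requests.post(
        f"{ATS_BASE}/candidates/{candidate_id}/assessments",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"assessment": assessment, "score": score, "status": "completed"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example: record a coding-simulation score against a (hypothetical) candidate ID
# post_assessment_result("12345", "coding_simulation_v2", 87.5)
```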

Artificial Intelligence and Machine Learning

The proliferation of captured and analyzable personnel data stored within a company’s ATS has paved the way for technology-facilitated assessment solutions that leverage artificial intelligence (AI). AI refers to any system that mimics human cognitive functions; in this context, it essentially performs predictive analytics in real time using advanced algorithms and large datasets.[27] Machine learning is an important subfield of AI. In the field of talent assessment, machine learning uses training data to algorithmically “learn” how to classify information and make decisions based on what it has learned. The goal of these algorithms is automated, predictive decision-making about candidate selection. For example, the provider HiredScore, which advertises itself as “The Recruiter’s AI Assistant,” uses big data capture and proprietary data analytics to create automated, customized algorithms that instantly send high-priority candidates to HR professionals by integrating with their ATS. Some examples of AI-enabled tools include natural language processing, sentiment analysis, facial recognition, speech recognition, and web scraping.[28] Many new assessment technology providers in the marketplace use artificial intelligence and machine learning algorithms.
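
A stripped-down sketch of this workflow, under toy assumptions: a classifier is trained on features of past applicants labeled by hiring outcome, then used to rank new applicants by predicted fit so that the highest-priority candidates surface first. The features, labels, and model choice are illustrative only and do not represent HiredScore’s or any other vendor’s proprietary approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per past applicant
# [years_experience, skills_test_score, certifications_held]
X_train = np.array([[1, 62, 0], [4, 78, 1], [7, 85, 2], [2, 55, 0],
                    [5, 90, 1], [8, 70, 3], [3, 66, 1], [6, 88, 2]])
y_train = np.array([0, 1, 1, 0, 1, 0, 0, 1])   # 1 = hired and rated successful

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score incoming applicants and surface the highest-probability matches first,
# mirroring the "send high-priority candidates to HR" workflow.
X_new = np.array([[3, 80, 1], [1, 50, 0], [6, 92, 2]])
fit_probability = model.predict_proba(X_new)[:, 1]
ranking = np.argsort(fit_probability)[::-1]
print(ranking, np.round(fit_probability[ranking], 2))
```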

Innovative Assessment Technologies

Numerous assessment providers are simply delivering analog assessment tests online without leveraging today’s technological innovations in data capture and analysis. This section provides a scan of the cutting-edge developments in pre-hire assessment and skills signaling that have altered the traditional assessment ecosystem, several of which would not be possible without the foundational technologies of ATS software and AI-powered algorithms.

Resume Filtering

Software that automatically parses and filters candidates’ resumes at the moment of online submission is the most common use of technology by employers in the new assessment ecosystem, especially because it is a function included within ATS software bundles. This “low effort assessment” technique allows for quick, inexpensive candidate screening,[29] but there are major disadvantages to thinning applicants in this manner. Many skilled candidates are instantly eliminated because of diction rather than qualification; in fact, many online career message boards and blog posts help frustrated applicants “game” these resume filters. There are even third-party providers—such as Jobscan, which boasts a client pool of more than half a million job seekers—that help candidates optimize their resume and LinkedIn profiles in order to get past dreaded resume robots.[30] Employers recognize the two major risks of resume filtering—unqualified false positives that are advanced simply because of keyword matches and qualified false negatives that are cut because of dictional misalignment—and are interested in moving away from reliance on resume filtering. Yet because of the sheer mass of applicants and ease of application, the resume review remains the top assessment tool for all levels of candidates at the top of the hiring funnel.[31]
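
The toy sketch below, using assumed job-posting keywords and an invented scoring rule, illustrates both failure modes at once: a qualified candidate scored low because of diction (writing “ML” and “Postgres” instead of the posted terms) and a keyword-stuffed resume scored high on matches alone.

```python
REQUIRED_TERMS = {"python", "machine learning", "sql"}   # taken from the job posting

def keyword_score(resume_text: str) -> float:
    """Fraction of required terms that appear verbatim in the resume."""
    text = resume_text.lower()
    return sum(term in text for term in REQUIRED_TERMS) / len(REQUIRED_TERMS)

qualified_candidate = "Built ML pipelines in Python; administered Postgres databases."
keyword_stuffer = "python machine learning sql python machine learning sql"

print(round(keyword_score(qualified_candidate), 2))  # 0.33: cut for diction, not skill
print(round(keyword_score(keyword_stuffer), 2))      # 1.0: advanced on matches alone
```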

Talent Analytics

Talent analytics, also called people analytics, refers to technology that measures and evaluates a company’s vast collection of captured employee data. This big data analysis includes everything from cost-per-hire analyses and turnover reports to, most pertinent to the focus of this paper, performance history on pre-hire assessments. The global adoption of talent analytics is in its early phase, but interest is growing; 34 percent of recruiting professionals in the Americas name talent analytics as a top priority in 2018. Currently, an estimated 23 percent of global human resources professionals already use this technology, and an additional 30 percent plan to use talent analytics in the near future.[32]

In terms of recruitment, talent analytics help employers build competency-based success profiles by analyzing current top-performing employees’ tracked data. This includes job performance reviews, biodata that details individuals’ skills and background experience, and, when available, other quantitative metrics such as sales figures. When candidates apply for a position within the organization, their information is assessed against these analytics-generated success profiles,[33] which are in turn fine-tuned by machine learning algorithms as more hires are made. Patheer, for example, uses talent analytics to build competency models and success profiles for client companies to identify both skills gaps in the organization and high-potential employees for internal recruiting.[34] Part of Aon’s talent analytics approach is to guide clients through meta-analyses of their already captured assessment data, helping them perform more accurate and predictive assessments for talent acquisition.[35]
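
The general idea can be sketched under toy assumptions: a “success profile” is built as the average of top performers’ tracked metrics, and a new applicant is scored by similarity to that profile. The metrics, the centroid-plus-cosine-similarity approach, and the numbers are hypothetical simplifications, not Patheer’s or Aon’s actual models.

```python
import numpy as np

# Hypothetical tracked data for current top performers:
# [tenure_years, sales_per_quarter, assessment_score, peer_review]
top_performers = np.array([[4.0, 120.0, 88.0, 4.5],
                           [6.0, 135.0, 92.0, 4.7],
                           [3.0, 110.0, 85.0, 4.2]])

# In this toy version, the "success profile" is simply the centroid
# of the top performers' data.
success_profile = top_performers.mean(axis=0)

def fit_score(candidate: np.ndarray) -> float:
    """Cosine similarity between a candidate's biodata/assessment vector
    and the analytics-generated success profile (1.0 = identical direction)."""
    return float(candidate @ success_profile /
                 (np.linalg.norm(candidate) * np.linalg.norm(success_profile)))

applicant = np.array([2.0, 95.0, 90.0, 4.0])
print(round(fit_score(applicant), 3))
```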

Online Simulations and Virtual Reality

Simulation-based assessments are increasingly popular among employers as the use of rich multimedia can deliver realistic job previews of on-the-job tasks. These can be classified as low-fidelity and high-fidelity. Online low-fidelity simulations are used to assess soft skills by using situational judgment exams; users are asked to select an appropriate response from a set of supplied courses of action that might be taken during various hypothetical business scenarios. Low-fidelity simulations are also commonly used to validate candidates’ hard skills, for example online Microsoft Excel simulations, specific coding assignments that candidates perform in a specified programming language, or the use of touchscreen technology to assess users’ ability in performing mechanical tasks like changing brake pads.[36] High-fidelity simulations are performance-based assessments that ask candidates to work through a series of complex tasks that demonstrate specific competencies through digitized on-the-job scenarios. An example of such simulation technology would be asking a candidate to prove leadership ability within a simulated organizational unit by responding to a specific challenge given a collection of emails, news updates, and text messages.[37] Simulations are beneficial to hiring organizations because they are able to observe and assess behaviors directly rather than hearing anecdotal accounts of experience.

The third-party provider Authess delivers high fidelity simulations using AI-powered scenario- and performance-based assessments and learning activities, which are aligned to actual professionals’ skills and responsibilities. Complex tasks such as reenacting a bank foreclosure for candidates applying to jobs in the financial services sector are built using professionals’ input to create digital case studies and scenarios that mimic the very challenges the candidate would face in the workplace. Authess also offers a novel option called benchmarking, whereby industry professionals take the assessment to create data patterns for how individuals already in the role solve the real-world problems they face in their jobs. These expert benchmarks can be used as a baseline profile for the assessment. During screening, training, and onboarding, both candidates and employees work through the activity, and Authess uses machine learning algorithms to not only compare their behavior to the baseline profiles, but to also aggregate and analyze the various data points—such as process and behavior—that are automatically captured during the assessment in order to present users with customized performance reports.[38]

Virtual reality technology, on the verge of adoption, will allow for an even more immersive simulated assessment delivery. Already firms such as Osso VR are using virtual reality to train surgeons in cutting-edge orthopedic and spine therapies, and using these simulations to verify skills for surgical job opportunities does not seem far off.[39] Assense, mixed reality simulation software in development by the startup Actiview, immerses candidates into real-life, on-the-job simulations and uses machine learning to analyze the sensory feedback based on psychometric models customized to employers’ specific needs.[40]

Cybervetting and Social Media Scraping

Although many employers use candidates’ social media information to assess them during the hiring process—a 2010 survey of 825 recruiters found that 73 percent of them used Facebook in talent searches—one empirical study found that there was no correlation between recruiter ratings of Facebook profiles and job performance.[41] However common cybervetting is as an assessment tool, there are not only privacy concerns but also questions about authenticity. Candidates can easily manicure their public social media profiles, so employers might not get an accurate picture of the candidate but rather a tailored version that they self-present online.[42]

On the other hand, the automated social media scraping of Facebook activity (especially candidates’ “likes”) can accurately predict scores on well-established psychometric tests. This approach predicts job fit by comparing candidates’ activity against big data trends (e.g., those who “like” Mozart are more likely to be highly intelligent). TalentBin and Entelo are companies that use this approach to identify and target the 70 percent of adults who are passive job seekers: not actively on the job market, but open to new employment opportunities. For example, Entelo claims to be able to scrape 200 million candidate profiles from more than 50 online sources to assess talent matches for its clients.[43]

Digital Interviews

Currently, 65 percent of Fortune 500 companies use live video interviews to screen candidates, with an additional 14 percent planning to adopt this assessment technology in the near future. Asynchronous or pre-recorded digital interviews are also on the rise; 20 percent of Fortune 500 companies use this technology with an additional 17 percent planning to add it to their assessment strategy.[44] Advanced technologies that record digital interviews allow assessors to analyze tone, speech pattern, and facial reactions at the microexpression level. AI-powered tools such as machine learning, sentiment analysis, and speech recognition are then able to examine the thousands of data points automatically generated during a short, recorded interview to assess candidate fit.[45] The biggest player in this space is HireVue, which performs advanced video assessment algorithms for clients that automatically capture and measure more than 20,000 visual and audio data points. This data is compared to that already captured by HireVue during previously recorded video interviews, which the firm has been collecting and analyzing for 13 years. New applicants’ performances during their interviews are then algorithmically measured against that of successfully hired candidates in similar roles.[46]

Gamified Assessments

Gamified assessments introduce the element of online game playing to skills testing. This can include earning points in a scaffolded learning and assessment environment (such as in the collection of digital badges), participating in interactive videos that deliver simulated job previews through avatars, or playing online games that assess cognitive and problem-solving skills. Some types of gamified assessments are controversial, as many instances of the technology tend to blur fantasy and reality. Many games also necessarily provide users with feedback during play so that they can advance through the levels, and the effect of ongoing performance feedback during skills assessment has not yet been studied. There are ethical concerns, too; some providers “disguise” gamified assessments as pure entertainment, posting them on social media platforms and then sending the top-ranking players to hiring agencies without candidates’ direct knowledge that they are being assessed and recruited.[47]

Gamified assessment provider pymetrics has made significant waves in business journals concerning its approach to gamification. The firm gamifies existing scientifically validated and peer-reviewed psychometric assessments and utilizes them to build bespoke hiring models for its clients. After candidates complete the series of games, pymetrics uses machine learning algorithms to then score and compare users’ game performance to successful employees in the same role for which the candidate is applying, comparing results to customized success profiles. While candidates are scored in terms of their similarity to the success profile for a particular position at a specific company, the feedback each candidate receives after completing the 12 games is specific to their performance on the bi-directional social, emotional, and cognitive traits that pymetrics assesses. When candidates underperform against the success profile of the role for which they are applying—yet match to a success profile generated by another client—pymetrics will suggest that those candidates apply to the better-fitted role for which they have already been assessed.[48]

Badging and Digital Credentialing

Digital badges promise the delivery of authenticated skills that curb candidate fraud. They securely and instantly verify a candidate’s competencies, which reduces employee turnover and the cost of training an under-skilled hire. They are especially relevant in IT, where technical skills matter most for hiring fit.[49] Digital badges have potential in the ecosystem because they are portable, verified, easy to share, and provide metadata that tag the specific skills assessed by the badge; a digital emblem acts as a symbol or placeholder for the actual achievement assessed and earned.[50] In addition to signaling applicants’ competencies, badge-issuing technology vendors are increasingly partnering with employers to develop certificate courses for upskilling their workforces. Coursera, for example, is already working with 18 companies to develop workplace courses that train, assess, and signal employees’ competencies.[51] Similarly, when Udacity designed its Android Developer Nanodegree program, the credentialing vendor spent thousands of hours working with Google to ensure that the curriculum trained and assessed the competencies Google desired for its developers.[52]
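
The sketch below shows, in simplified form, the kind of machine-readable metadata a digital badge can carry. It is loosely modeled on the Open Badges format, but the field names are abridged and the issuer, criteria, and verification URL are hypothetical; it is not an exact rendering of the specification or of any vendor’s badges.

```python
# Simplified, illustrative badge metadata (loosely modeled on the Open Badges
# format; field names abridged, issuer and URLs hypothetical).
badge_assertion = {
    "recipient": "candidate@example.com",
    "issuedOn": "2018-06-01",
    "badge": {
        "name": "Cloud Application Developer",
        "issuer": "Example Credentialing Vendor",
        "criteria": "Passed a proctored, scenario-based coding assessment",
        "skills": ["REST APIs", "containerization", "unit testing"],
    },
    "verification": "https://badges.example.com/assertions/8c41",  # hosted proof
}

# An employer's screening system can read the tagged skills directly instead
# of inferring them from the credential's title alone.
print(badge_assertion["badge"]["skills"])
```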

ePortfolios

Higher education institutions are increasingly working with technology vendors that host detailed student portfolios online. These platforms highlight actual student work through projects and assignments and integrate with badge providers for the signaling of verified competencies. Although the space is growing fast, these platforms are not yet fully recognized by employers. HR professionals have difficulty understanding what they are looking at when they are sent links to ePortfolios and currently view them as extraneous, especially since they do not integrate into ATS software.[53] However, vendors such as Portfolium are moving fast to bring employers onboard by matching real job descriptions to the verified skills and competencies listed within candidates’ ePortfolios.[54]

The New Assessment Marketplace

Technology-facilitated assessment allows employers to measure and verify candidates’ skills with more precision and efficiency than ever, and with the new technology and opportunity come new players.[55] Joining the ranks of I-O psychologists and human resources professionals, user experience designers, data scientists, software engineers, media producers, web developers, and online learning specialists have entered the technology-facilitated assessment ecosystem to deploy innovative, media-rich solutions.

Similarly, the assessment marketplace has grown to introduce a series of new assessment providers. These vendors do not directly hire candidates seeking employment, but rather they develop the new technologies that screen and qualify candidates for employers.[56] We can think of these providers by their intended clients. “Pull” providers offer solutions for companies wishing to assess and hire the best fitting candidates among the talent pool; “push” providers empower candidates to broadcast their competencies to employers. The following is a sample of new technology-facilitated providers, categorized by the type of service they offer. It should be noted, however, that many of these firms provide suites of services and are not necessarily limited to one type of product.

Pull Providers

Pull providers allow employers to assess candidates’ skills through either off-the-shelf or customized assessment solutions. These vendors typically cater to employers directly for hiring new talent and upskilling their workforce. Pull vendors also target intermediaries such as higher education, industry associations, and government agencies to assist in credentialing and certification assessments. This is especially true of off-the-shelf pull providers.

Off-the-Shelf Assessment Platforms

Off-the-shelf platforms package assessment solutions that can be adapted to clients’ specific needs. These providers sell products that can be tailored, but are not custom-made or bespoke approaches. These large-volume providers typically have vast libraries of test batteries to cater to specific employer needs, but the delivery, scoring, and analysis of results follows the same method from client to client.

With their decades of specialization in analog, computerized, and internet-based assessment solutions—both in proctored and unproctored testing environments—PSI, Pearson VUE, and SHL remain the biggest third-party providers in the technology-facilitated assessment ecosystem.[57] These global businesses specialize in creating off-the-shelf assessment delivery platforms that secure exam integrity and ensure validated results, especially for high-stakes testing. While these market leaders’ core products are part of the traditional assessment ecosystem, they are adopting new tools to compete in the new technology-facilitated assessment ecosystem. PSI, for example, has developed a soft skills platform for college graduates called “Am I Job Ready,” which implements online simulations to train and assess skills such as teamwork, communication, and innovation. When users successfully complete the program, they are awarded PSI’s Professional Skills Certificate to signal their strengths to employers.[58] Similarly, SHL is beginning to apply talent analytics to their assessment scoring, and Pearson VUE has become integral to IBM’s Open Badge Program by administering the assessment exam that verifies users’ competencies, as discussed later in this paper.

Other off-the-shelf pull providers—such as Mettl, Criteria Corp, Hundred5, and Interview Mocha—specialize less in the secure delivery of high-stakes examinations and more in publishing and developing extensive libraries of online assessments for employers. Criteria Corp, for example, has a library of more than a thousand assessment tests covering specialized job skills and general cognitive ability, whereas provider Hundred5 sells prebuilt, 10-minute skills challenges to administer at the top of the hiring funnel as an alternative to resume filtering. Many of these assessment developers focus on specific industries; for example, Codility, Devskiller, and Qualified offer coding simulation assessments for technical positions, whereas FurstPerson provides a suite of assessment simulations for different positions within call centers.

Customized Assessment Solutions

Firms such as WebAssess, Plum, Amberjack, HireVue, and pymetrics offer proprietary psychometric assessments that are delivered online, scored automatically, and analyzed through extensive performance statistics across the candidate pool. What differentiates their approach from off-the-shelf providers is that these vendors engage in heavy consultation with employers to build assessments tailored to the hiring organization. This includes first collecting extensive data on current employees, building success profiles within the organization, and piloting assessment tests on employees. Once the assessments are fine-tuned and tested, these customized solutions are then delivered to applicants within varying stages of the hiring funnel.

Push Providers

Push providers allow candidates to signal their skills to employers—either directly or through an intermediary that verifies candidates’ competencies. Push providers give candidates more ownership of the assessment process; candidates select the skills to signal and the method for sharing them with employers. This is especially true of providers that give candidates a community-based platform on which to compete and broadcast their competencies.

Training and Digital Credential Platforms

Providers such as Coursera, Udacity, and edX are essentially training platforms that impart and assess candidates’ skills online. Sometimes this is done through a partnership with higher education intermediaries. For example, edX partners with universities such as MIT and the University of California, Berkeley to issue its self-branded MicroMasters credentials, which award digital badges that employers can use to screen for specialized skills. These credentials are signaling devices that supplement—and increasingly circumvent—traditional credentials. For example, the Project Management Institute is complementing its industry-recognized Project Management Professional certification by issuing an accompanying digital badge. IBM’s New Collar Program is using Coursera to deliver programs that issue an IBM professional certificate in place of an academic degree, allowing the company to hire skilled workers with verified competencies.

While new providers are issuing credentials, another crop of vendors is supplying the platforms on which these credentials are hosted. Credly and Badgr offer solutions for displaying, sharing, and managing credentials earned through online training and assessment. These platforms promise to make the skills and competencies a badge verifies transparent to issuers, earners, and employers searching for talent. For companies wishing to upskill and reskill their workforce through employee training, these platforms also help employers identify verified talent for internal promotions. For example, Badgr is now the native badging solution for Canvas, an enterprise learning management system.[59]

Candidate User Communities

The growth in employer adoption of technology-facilitated assessments has spurred another set of players in the ecosystem: candidate-targeted assessment providers. These vendors allow communities of candidates to “own” their assessment process: instead of candidates being directed to online assessments as part of the application process or being assessed as part of earning a digital credential, third-party providers give them direct access to assessments that evaluate their skills and signal desirable competencies to employers. Providers such as HackerRank, Degreed, and SquarePeg are leaders in this space, and even use these assessments to match candidates to jobs. For example, HackerRank allows users to compete in online coding challenges, and business clients pay to access opted-in top performers. Some university professors are even using HackerRank’s user-driven assessments as a means for students to earn extra credit in coursework.[60]

Traditional Players in the New Assessment Ecosystem

Though the previous section focused on the technologies and providers in today’s technology-facilitated assessment ecosystem, traditional intermediaries such as higher education, industry associations, and government organizations still have important roles to play. However, the relationships between intermediaries and employers have become precariously misaligned in the new ecosystem, as siloes continue to deepen at the speed of new technology entering the marketplace.

The idea of a problematic skills gap between employers’ competency needs and candidates’ competency attainment has long been addressed in both business and higher education publications. The rise in technology-facilitated assessments converges with a growth in employers’ dissatisfaction with traditional methods for skills signaling, especially among candidates just entering the workforce. For example, a 2013 survey of 700 employers found that more than half reported having trouble recruiting qualified job candidates, and one-third felt that colleges did only a fair to poor job of preparing students to become successful employees.[61] A similar Gallup poll indicated even grimmer figures: while 96 percent of college chief academic officers think that their institutions are very or somewhat effective at preparing their students for entry into the workforce, only 11 percent of business leaders think college graduates are actually prepared for employment.[62]

As we have seen, employers are shifting from anecdotal signals provided by resumes and interviews to hard data and analytics when it comes to hiring across the organization—from temporary appointments to entry-level jobs to executive searches. Research shows that these efforts are paying off; a 2015 study by the Aberdeen Group found that companies using technology-based pre-hire assessments had a 39 percent lower turnover rate.[63] This trend is carrying over to professional staffing agencies, which are increasingly assessing candidates before sending them to clients, as evidenced by Korn Ferry’s multimethod approach, ManpowerGroup’s new Learnability Quotient product, and Randstad’s recent blog post “Welcome to the New World of Pre-Employment Screening,” which details employer assessment tools for its clients.[64] Though these consultants have traditionally vetted candidates for employers, now innovative assessment providers leveraging new big data and AI assessment technologies are marketing directly to staffing firms, as noted in the paper’s Appendix.

Yet other players in the ecosystem are less adaptive to the changes in today’s hiring practices. There is a large area of opportunity for traditional intermediaries in the assessment ecosystem to leverage the technology employers are actually using to select candidates for jobs. The disconnect between higher education and employers when it comes to assessing candidates’ competencies for employment cannot be emphasized enough. Higher education institutions’ learning outcomes do not clearly correlate to employers’ desired competencies; to help remedy this, institutions could feed the assessment criteria by which candidates are being evaluated back into the curriculum. Similarly, industry associations could use applicable assessments to better prepare member candidates for the skills employers in their field are seeking during the hiring process.

Emerging Trends and Promising Partnerships

Partnerships among multiple players within the ecosystem offer promising assessment approaches. The following examples showcase integrated platforms and methods for successful multimethod assessment implementation. These examples are unique in that they widen the talent pool to include “nontraditional” candidates, eliminating much of the subjective bias implicit in most traditional assessments.

IBM New Collar and Talent Match Programs

At any given time, IBM has thousands of job openings in the U.S., and about 15 percent of IBM’s U.S. hires don’t have a four-year degree. IBM refers to these roles as “new collar.” To tap into a larger, international candidate pool for these roles, IBM built a certificate program to skill and assess these workers. The program begins with a reverse-engineered pre-hire employment assessment—called a career fit assessment—designed by assessment publisher WebAssess that evaluates cognitive skills and personality traits to ensure job fit and increase job satisfaction. Candidates then complete online training through Coursera, using real-world simulations and experiential online learning that is structured with on-the-job evaluation, performance feedback, and exams. When candidates demonstrate their competencies in their respective learning pathways, they are issued a digital certificate that is hosted on Credly. Last, candidates can opt into IBM’s Talent Match database, where employers can search credentials earned through digital badges that signal candidates’ verified skills.[65]

Unilever Future Leaders Program

Unilever uses a multimethod approach for hiring college interns and graduates. It leverages new assessment technologies developed by third-party providers who collaborated to provide a streamlined process for candidates. The company hired global experts in future talent and assessments, Amberjack, to build success profiles for different functions within the organization. When applications go live, candidates apply through a Taleo-hosted online form to assure that they meet basic requirements. The candidate can import their information from LinkedIn to make the application process short and simple. Next, eligible candidates are invited to a suite of 12 gamified assessments hosted by pymetrics, which takes less than 25 minutes to complete. Within 24 hours, candidates are informed if they successfully meet the cognitive, emotional, and social trait profiles in order to proceed to the next stage, a digital interview hosted by HireVue, where they answer a series of situational questions and solve real-world problems unique to Unilever. HireVue’s underlying AI technology measures them against the Unilever-specific, function-specific profile. Based on their success in the digital interview, the candidate may proceed to a final on-site assessment called a Discovery Center. This round of assessments uses an onsite approach including team exercises, one-on-ones, and day-in-the-life scenarios, which are evaluated through observation. All of this data is fed back into the Taleo ATS, which allows Unilever to hire the candidates who are the best fit for the company and the function.[66]

Texas State Technical College’s SkillsEngine

When Texas State Technical College, a public two-year multi-campus institution focusing on advanced technical training, turned to a completely outcomes-based funding model, the college quickly adopted a competency-based curriculum founded on skills and job demands supplied directly to the institution by practitioners in the field. TSTC recognized immediately that traditional methods of designing curriculum—involving the laborious process of convening industry advisory groups with faculty and curriculum designers—were inefficient, very often did not get the right experts in the room, and could not keep pace with the job market’s changing skill demands. To streamline this process, and to back up the college’s “Get a Job or Get a Refund” campaign, TSTC designed, developed, and implemented its own skills assessment platform: SkillsEngine.

SkillsEngine creates job profiles through its Calibrate product that correspond to the needed competencies for specific occupations. The Calibrate process enables educators and workforce professionals to collect and confirm required skills through direct employer feedback. At the core of Calibrate are customizable job profiles that serve as intermediaries between the changing skill needs of employers and actual course content. These AI-informed profiles are living documents that use algorithms to customize and adapt to changing competencies within the roles they describe. Profiles are then validated by industry experts within minutes online, bypassing lengthy in-person job analysis sessions. The customizable validation platform easily allows users to invite hiring managers into the process as well, bridging industry associations, employers, and higher education providers in the creation of competency-based outcomes. Thus, educators are able to identify employer-validated marketable skill requirements in an online process that guides curriculum development and eliminates potential skill gaps.[67]

Challenges for Adoption

Though new assessment technologies have the potential to verify candidates’ mix of hard and soft skills using validated predictive analytics, the technology-facilitated ecosystem is still very much in its infancy. This section outlines and describes certain preconditions and barriers to a more streamlined and comprehensive ecosystem of assessments, with an eye toward specific challenges for enabling collaborative assessment approaches that efficiently, accurately, and transparently match candidates to well-fitted job opportunities.

Employer Assessments Are Not Transparent to Candidates

A survey of 52,000 North American job candidates found that 71 percent had been given a pre-employment assessment in 2017. However, fewer than 4 percent reported having received feedback on their test performance.[68] This is a huge missed opportunity to help candidates build their skillsets for future careers.

New Tools Lack Rigorous Validation Measures

While certain providers such as pymetrics employ scientifically validated, peer-reviewed assessment technologies,[69] many innovative assessment tools have not yet demonstrated the validity of traditional assessment methods, and, even more problematically, they seem to eschew the grounded theory backing analog tests. Predictive validity is king in the assessment marketplace, yet there is little to no peer-reviewed evidence for the predictive power of many of these new tools.[70]
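For reference, predictive validity in traditional selection research is usually expressed as the correlation between candidates’ assessment scores and a later criterion such as supervisor-rated job performance. The sketch below computes that coefficient on fabricated numbers simply to make the concept concrete; it implies nothing about any particular vendor’s tools.

# Predictive validity as a correlation between test scores and later performance
# ratings; the numbers are fabricated for illustration only.
from statistics import correlation  # available in Python 3.10+

assessment_scores = [62, 71, 75, 80, 88, 90]           # pre-hire test scores
performance_ratings = [2.9, 3.1, 3.4, 3.3, 4.2, 4.0]   # later supervisor ratings

validity_coefficient = correlation(assessment_scores, performance_ratings)
print(f"predictive validity (r) = {validity_coefficient:.2f}")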

In this regard, AI-based assessments are especially tricky. AI algorithms require training data, which is typically drawn from a company’s current employees. In essence, this technology can bake previous poor hiring decisions into the machine learning process. In addition, AI requires transparency, so that assessors can see how predictions are made, as well as flexibility, so that practitioners can adjust algorithms for accuracy. Unfortunately, most vendors’ AI solutions do not yet provide this, and because human assessors are not privy to why the system makes a particular decision or measurement, they cannot evaluate whether a given criterion is accurately weighted.[71]
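The mechanism is easy to see in miniature: when a model’s training labels are yesterday’s hiring decisions, the model learns whatever patterns those decisions contained, job-related or not. The sketch below uses a fabricated dataset and a naive frequency-based “model” to show how a resume feature that merely correlated with past rejections ends up penalized.

# Fabricated example: a naive model trained on historical hiring decisions
# simply reproduces them, including any bias those decisions contained.
from collections import defaultdict

# Each record: (resume feature, was the candidate historically hired?) -- invented data.
history = [
    ("keyword_present", False), ("keyword_present", False), ("keyword_present", True),
    ("keyword_absent", True), ("keyword_absent", True), ("keyword_absent", True),
    ("keyword_absent", False),
]

# "Training": estimate P(hired | feature) from the historical outcomes.
counts = defaultdict(lambda: [0, 0])  # feature -> [times hired, times seen]
for feature, hired in history:
    counts[feature][0] += int(hired)
    counts[feature][1] += 1

learned_rates = {f: hired / seen for f, (hired, seen) in counts.items()}
print(learned_rates)
# The model now scores candidates with the keyword lower (1/3) than those without (3/4),
# not because the keyword reflects skill but because past decisions disfavored it.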

The Risk of Adverse Impact Limits Adoption

Many of the new technologies on the market are not validated in compliance with Equal Employment Opportunity Commission (EEOC) regulations. Since the mid-20th century in the U.S., the EEOC has regulated the use of pre-hire assessments, in particular by scrutinizing assessments that have an adverse impact on members of protected groups (e.g., women, individuals over 40, racial minorities, and persons with disabilities).[72] Companies using pre-hire assessments can avoid potential EEOC liability through two time-consuming and costly methods: 1) showing a lack of adverse impact through well-documented impact analyses performed across all candidates assessed; and 2) proving that the assessment tests skills directly related to the job through rigorous validation studies. While most vendors claim that their algorithms solve for bias, they have not performed rigorous adverse impact studies.[73]
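The adverse impact screen referenced here is commonly operationalized as the four-fifths (80 percent) rule described in endnote 72: compare the protected group’s selection rate to the comparison group’s, and treat a ratio below 0.8 as evidence of adverse impact. A minimal sketch, using the same illustrative numbers as that endnote (30 of 50 women hired versus 45 of 50 men):

# Four-fifths (80 percent) rule check on the illustrative numbers from endnote 72.
def adverse_impact_ratio(protected_hired, protected_assessed, comparison_hired, comparison_assessed):
    protected_rate = protected_hired / protected_assessed      # 30/50 = 0.60
    comparison_rate = comparison_hired / comparison_assessed   # 45/50 = 0.90
    return protected_rate / comparison_rate

ratio = adverse_impact_ratio(30, 50, 45, 50)
print(f"selection-rate ratio = {ratio:.2f}")  # 0.67
print("evidence of adverse impact" if ratio < 0.8 else "passes the four-fifths rule")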

AI approaches are particularly problematic in this respect. As previously noted, most of these processes run unsupervised, so visibility into potentially biased outcomes is obscured by black box machine learning algorithms. A recent example comes from an Amazon AI screening tool, which the company abandoned after discovering that it was biased against female candidates. Because the tool was trained on a decade’s worth of resume data from Amazon’s mostly male recruits, it penalized any direct mention of the word “women,” including downgrading graduates of women’s colleges in the hiring funnel.[74] Well-publicized cases like this expose AI assessments’ potential to produce algorithmic bias, a pressing concern for adoption that must be monitored. As a result, there is still significant pushback from compliance and HR legal teams regarding the adoption of tools from technology providers selling advanced algorithmic solutions, even when validation and adverse impact studies are performed. This is a major barrier to adoption and could negatively affect American workers: experts in the field report that companies are turning to foreign labor markets to find and hire skilled candidates, since other legal systems do not place such stringent limitations on pre-hire assessments.[75]

Pull and Push Assessment Adopters Lack Selection Guidance

Although technology-facilitated assessments enable cheap, quick hiring decisions, employers are left unguided in the selection process because these tools lack rigorous evaluation. Credible evidence for innovative assessment technologies that harvest big data is scarce because high-tech talent identification practices are evolving faster than the I-O psychology research needed to bolster them.[76] In fact, the Society for Human Resource Management has directly warned HR professionals that new pre-hire assessment innovations are under-researched and could undermine rather than aid better hiring decisions.[77]

The burden of selection is equally frustrating for intermediaries and candidates signaling their skills to employers through push-based platforms and technologies. Digital badges, though stackable and cheaper than traditional college courses, are not always recognized by employers as properly training and verifying candidate competencies, and they can still be quite costly, running hundreds of dollars on average. For certain industries such as IT, several trusted skills signaling platforms and programs have emerged, but for candidates applying within other fields, the landscape is much less clear. For example, PSI’s “Am I Job Ready” platform aims to train and verify candidates’ soft skills for workplace readiness. Even though this is a highly desired skillset among employers, it is still unclear how such skills signaling platforms will be used by employers in the hiring process. ePortfolios face the same challenge. Thus, the lack of assurance that employers will adopt these signals makes it difficult for candidates to choose among costly, time-consuming push models of skills assessment.

The Flooded Marketplace Lacks Software Integration

Perhaps the chief barrier to adoption of both pull and push skills assessments is that these new technologies do not always integrate into employers’ ATSs, the centralized “command center” for talent acquisition. While some cutting-edge assessment providers advertise ATS integration through their APIs, the process is not always as streamlined as promised. Even employers such as Unilever, who are at the cutting edge of new assessment approaches, bemoan having to feed all of the data collected back into their legacy ATS software, and not all technologies can connect with the multitude of ATS options equally.[78]
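In practice, “integration” usually means pushing each assessment result into the candidate’s record over an HTTP API exposed by the ATS. The sketch below is entirely hypothetical: the endpoint, payload fields, and token are invented placeholders, because every ATS exposes a different interface, which is precisely the integration burden described above.

# Hypothetical sketch of pushing an assessment result to an ATS over HTTP.
# The URL, payload fields, and token are invented placeholders, not a real ATS API.
import json
import urllib.request

def push_result_to_ats(candidate_id: str, assessment: str, score: float) -> int:
    payload = json.dumps({
        "candidateId": candidate_id,
        "assessment": assessment,
        "score": score,
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://ats.example.com/api/v1/candidates/results",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # would only succeed against a real ATS
        return response.status

# push_result_to_ats("12345", "digital_interview", 0.82)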

Assessment Content Is Hard to Compare Within the Ecosystem

Candidates’ skills are increasingly measured across a variety of criteria and through an abundance of new technologies. However, there is no common language around these new approaches and the actual competencies they assess. Guidelines informing employers, intermediaries, and candidates are needed so that skills can be efficiently signaled to employers without arduous translation. Such a common language is especially needed to bridge the problematic disconnect between higher education and employers.

Opportunities and Recommendations

Whether utilizing AI-powered gamified assessments, digital interviews scored by success profiles, or online simulations that provide realistic job previews, technology-facilitated assessments promise to streamline the hiring process, reduce employee turnover, and find well-fitted career matches for candidates. However, as detailed above, there are still major challenges for implementing collaborative assessment approaches. This section concludes our landscape view of the new assessment ecosystem by highlighting promising opportunities and offering specific recommendations.

Opportunities

Although there are many barriers to adopting technology-facilitated assessments, the potential positive impacts for candidates, intermediaries, and employers are manifest. Below are some of the key opportunities uncovered by our research.

Credentialing Aggregators Have the Potential to Map and Standardize Skills

Organizations such as Credential Engine are working to make the competencies that online, traditional, and alternative credential issuers assess and verify more transparent for all players in the ecosystem. From Microsoft-issued badges to Stanford-awarded degrees, its Credential Registry and Credential Finder (beta) capture, archive, and share information gathered on more than 5,000 credentials, including the quality assurance processes that verify them.[79] Efforts like this still have a long way to go: the registry represents only a fraction of the hundreds of thousands of credentials available in the US, many of its records lack information on competencies, and much of the information is unverified. Nevertheless, creating searchable, standardized indices of assessments and the competencies they test will create a more harmonious, symbiotic ecosystem. Using common skills vocabularies can also bridge the widening disconnect between higher education and employers. By looking to the competencies for which employers assess and building them into competency-based learning models, higher education has the opportunity to produce more employable graduates.
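Once credentials and their assessed competencies are captured in a common, machine-readable index, matching becomes a straightforward query. The sketch below assumes a deliberately simplified record structure (not Credential Engine’s actual registry schema) to show how such an index could be searched by competency.

# Simplified, assumed record structure for a searchable credential index;
# not Credential Engine's actual registry schema.
credentials = [
    {"name": "Data Analyst Badge", "issuer": "ExampleTech", "competencies": ["SQL", "data visualization"]},
    {"name": "BS Computer Science", "issuer": "Example University", "competencies": ["algorithms", "SQL", "software design"]},
    {"name": "Welding Certificate", "issuer": "Example College", "competencies": ["MIG welding", "blueprint reading"]},
]

def find_credentials(index, competency):
    """Return the credentials whose assessed competencies include the requested skill."""
    needle = competency.lower()
    return [record["name"] for record in index
            if any(needle == skill.lower() for skill in record["competencies"])]

print(find_credentials(credentials, "sql"))  # ['Data Analyst Badge', 'BS Computer Science']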

Iterative, Multimethod Approaches Can Best Match Candidates to Jobs

All testing approaches contain some selection flaws. To curb these issues, employers look to multimethod approaches, which were expensive and time-consuming in the traditional assessment landscape. Today’s new technologies geared toward automation allow multiple assessment types to be delivered and scored across many phases of the hiring process. Mixing assessment methods minimizes selection risk because the flaws associated with any one type of test are balanced out by bundling different structures and content types. This reduces testing bias, increases candidate diversity, and enhances predictive capability.[80]
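One simple way to operationalize this bundling is a weighted composite in which no single instrument dominates the hiring signal. The weights and scores below are invented purely for illustration; real programs would set them through validation studies.

# Illustrative weighted composite across assessment methods (invented weights and scores).
method_weights = {
    "cognitive_test": 0.30,
    "work_sample": 0.30,
    "structured_interview": 0.25,
    "gamified_traits": 0.15,
}

def composite_score(scores, weights=method_weights):
    """Blend normalized (0-1) scores so that no single method's flaws dominate the decision."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[m] * scores[m] for m in weights)

candidate = {"cognitive_test": 0.72, "work_sample": 0.85, "structured_interview": 0.60, "gamified_traits": 0.90}
print(round(composite_score(candidate), 3))  # 0.756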

Technology-Facilitated Assessments Can Democratize the Hiring Process

With technology-facilitated assessments, employers are able to assess a global talent pool’s competencies directly and automatically. Through new technologies and providers, candidates can quickly and easily validate their level of fluency in a desired skillset, which enables them to apply for roles for which they might not previously have been considered due to geographical, biographical, or educational constraints. Reducing personal interview bias and degree bias would be a major step toward opening career pathways to a wide array of candidates, especially those from underrepresented backgrounds.

Recommendations

New technologies and new providers are disrupting the traditional hiring process in an attempt to help employers and candidates move past the skills awareness gap toward platforms that can bridge it through the screening and signaling of candidates’ competencies. That said, much work remains to be done in this area. Below we suggest a set of recommendations to help both tame and harness the power of today’s largely unregulated, wild-west assessment ecosystem.

Research Validity and Impact

Research must be done on the scientific validity claimed by many new players in the ecosystem for their predictive approaches to matching candidates to employers’ open roles. Proprietary technology in its earliest stages of development, especially assessments using black box AI algorithms, has yet to be peer-reviewed by experts. Academic studies, such as those I-O psychologists have performed on assessments in the traditional ecosystem, must be conducted before these new tools can themselves be fully evaluated.

Similarly, studies that analyze the broader impact of assessments in the new technology-facilitated ecosystem should be performed in order to evaluate the short- and long-term consequences of these tools. Because private companies tend to own and protect the data surrounding their hiring practices, and thus any information regarding the effects of assessment implementation, we have little knowledge of how the adoption of technology-facilitated assessments might affect the labor market. Using quantitative research methods involving, for example, employer surveys, US Census demographics, and other available labor market data, researchers could develop a thorough analysis of the impacts technology-facilitated assessments are having over time. In so doing, one could trace causal relationships and distinguish the intended and unintended consequences of these new assessment processes.

Develop Ethical and Legal Frameworks

The technology-facilitated ecosystem hosts a marketplace of providers and employers using advanced computational and algorithmic tools to analyze people’s personal data. Ethical frameworks governing the use of these technologies and their sensitive datasets must be developed to protect both candidates and current employees. Thus far, despite the implementation of the EU’s General Data Protection Regulation (GDPR), there has been little discussion about the ethics of the personal data being amassed by new assessment approaches. We recommend establishing standards for the capture, use, and storage of data generated during the assessment process, as well as setting policies that hold third-party providers and employers responsible for the protection of personal information.

Legal frameworks must also be adapted to the new ecosystem. It is still unclear how the EEOC will regulate AI-powered screening tools and their potential for adverse impact and biased hiring. Legal scholars are beginning to take note of the adverse impact of certain algorithmic biases and the use of machine learning in hiring practices,[81] but thus far adoption in the marketplace has been almost haphazard, and experts agree that many new third-party providers are putting their clients into legally risky territory.

Pilot Multiplayer Programs with Third-Party Auditors

A sizable opportunity within the ecosystem is to involve higher education practitioners in the iterative assessment process. Instead of assessing candidates post-graduation, the assessment process can begin while learners are still training for careers as students. Evaluating learners in, for example, a particular major and using applicable employers’ actual assessment tools for that field has huge potential to help close the skills gap. This process should be iterative; it must provide transparent feedback on student assessment performance to both the higher education institution and individual learners, and assess their competencies as they progress. In so doing, candidates can better use their education to ramp up the skills needed to land their first job, and educators can better tailor their curriculum to impart the competencies for which employers are hiring. Designing a series of pilot programs in different fields and at different institutions, all implemented under the administration of an impartial third-party auditor that monitors quality assurance across participants, would not only test this theory, but also provide meaningful collaboration between higher education institutions, industry experts, employers, and third-party assessment providers.

Appendix: Notable Third-Party Providers

For a select list of third-party assessment providers in the new technology-facilitated ecosystem, please visit http://sr.ithaka.org/technology-facilitated-assessment-providers/.

Special Acknowledgements

We would like to thank the following people for their contribution to this work:

  • Alexander Alonso, Chief Knowledge Officer, Society for Human Resource Management
  • Michael Bettersworth, Vice Chancellor and Chief Policy Officer, Texas State Technical College; Executive Director, Center for Employability Outcomes; Founder, SkillsEngine
  • Marie Cini, President, Council for Adult and Experiential Learning
  • Ryan Craig, Co-Founder and Managing Director, University Ventures
  • Paul Crockett, Chief Executive Officer and Co-founder, Authess
  • Sean Gallagher, Founder and Executive Director, Northeastern University’s Center for the Future of Higher Education and Talent Strategy; Executive Professor of Educational Policy, Northeastern University
  • Tatiana Goldberg, Senior Human Resources Director, Marketing, Unilever
  • Denise Hartsoch, President, GreatBizTools
  • Michael Horn, Senior Partner, Entangled Solutions
  • Natasha Jankowski, Director, National Institute for Learning Outcomes Assessment and Research Assistant Professor, University of Illinois
  • David Leaser, Senior Program Executive, Innovation and Growth Initiatives, IBM
  • Rory McCorkle, Senior Vice President, Certification and Education Services, PSI
  • Robert Sheets, Research Professor, George Washington University
  • Chris Small, Vice President, Corporate Talent Measurement, PSI
  • Kelly Trindel, Head of IO Science and Diversity Analytics, pymetrics
  • Jason Tyszko, Vice President, Center for Education and Workforce, U.S. Chamber of Commerce Foundation

Endnotes

  1. This paper provides a landscape view of employment assessments that measure candidates’ competencies. Out of scope for this analysis are personality tests, such as the popular Myers-Briggs Type Indicator (MBTI). While many pre-hire assessments take into account candidates’ cultural fit and behavioral characteristics, we only look at assessments that use these factors in tandem with evaluations of hard and soft skills.
  2. Tracy M. Kantrowitz, Kathy A. Tuzinski, and Justin M. Raines, “2018 Global Assessment Trends Report,” SHL, 2018, https://www.shl.com/en/assessments/trends/global-assessment-trends-report/.
  3. Kyle Lagunas, “Worldwide Talent Acquisition Technologies and Services Forecast, 2017–2021,” IDC, 2017, https://www.idc.com/getdoc.jsp?containerId=US42578317.
  4. A global survey of 3,135 human resources professionals found that 85 percent use resume reviews in pre-hire screening. See Tracy M. Kantrowitz, Kathy A. Tuzinski, and Justin M. Raines, “2018 Global Assessment Trends Report,” SHL, 2018, https://www.shl.com/en/assessments/trends/global-assessment-trends-report/.
  5. For a detailed history of pre-employment testing, see Andrew J. Vinchur, “A History of Psychology Applied to Employee Selection,” in Historical Perspectives in Industrial and Organizational Psychology, ed. Laura L. Koppes (New York: Psychology Press, 2007), 193-218.
  6. Frank L. Schmidt and John E. Hunter, “Employment Testing: Old Theories and New Research Findings,” American Psychologist 36, no.10 (1981): 1128-1137.
  7. Laura Koppes Bryan, “History of Industrial and Organizational Psychology in North America,” in The SAGE Encyclopedia of Industrial and Organizational Psychology, ed. Steven G. Rogelberg (Thousand Oaks: SAGE Publications, 2016), 637-642.
  8. “History of Wonderlic,” Wonderlic, 2018, https://www.wonderlic.com/about-wonderlic/history/ and “Frequently Asked Questions,” Wonderlic, 2018, https://www.wonderlic.com/faq/.
  9. Andrew J. Vinchur and Laura L. Koppes Bryan, “A History of Personnel Selection and Assessment,” in The Oxford Handbook of Personnel Assessment and Selection, ed. Neal Schmitt (New York: Oxford University Press, 2012), 9-30.
  10. Dennis Joiner, “Assessment Centers in the Public Sector,” Public Personnel Management 13, no.4 (1984): 435-450.
  11. This became especially true after the 1964 Civil Rights Act established the Equal Employment Opportunity Commission, which began regulating job selection procedures. See Lawrence O’Leary, “Fair Employment, Sound Psychometric Practice, and Reality: A Dilemma and a Partial Solution,” American Psychologist 28, no. 2 (1973): 147-150.
  12. “Types of Employment Tests,” SIOP, 2018, http://www.siop.org/workplace/employment%20testing/testtypes.aspx.
  13. Fadi Munshi, Hani Lababidi, and Sawsan Alyousef, “Low- Versus High-Fidelity Simulations in Teaching and Assessing Clinical Skills,” Journal of Taibah University Medical Sciences 10, no.1 (2015): 12-15.
  14. David P. Baker, “Credentialing in the Schooled Society,” in The Schooled Society: The Educational Transformation of Global Culture (Stanford: Stanford University Press, 2014), 156-183.
  15. Kenneth J. Arrow, “Higher Education as a Filter Device,” Journal of Public Economics 2, no.3 (1973): 193-216.
  16. John A Weiner and David Foster, “Licensing and Certification,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018), 36-70.
  17. “MSSC Board-Suggested Quality Assurance Guidelines and Rating System for Industry-Recognized Credentials,” MSSC, 2017, http://files.constantcontact.com/7b7d1c26101/d754737c-207d-41f1-ab55-cf8f1df29fdf.pdf.
  18. Gordon Stanley, “Accreditation and Assessment in Vocational Education and Training,” in The Oxford Handbook of Skills and Training, ed. Chris Warhurst, Ken Mayhew, David Finegold, and John Buchanan (New York: Oxford University Press, 2017), 124-142. See for example NewSchool of Architecture and Design’s Integrated Path to Architectural Licensure, “Integrated Path to Architectural Licensure (IPAL),” NewSchool of Architecture and Design, 2018, https://newschoolarch.edu/academics/school-of-architecture/graduate-architecture-programs/integrated-path-to-architectural-licensure/.
  19. Michael J. Burke and Jacques Normand, “Computerized Psychological Testing: Overview and Critique,” Professional Psychology Research and Practice 18, no.1 (1987): 42-51.
  20. Nancy T. Tippins, “Technology and Assessment in Selection,” The Annual Review of Organizational Psychology and Organizational Behavior 2: (2015): 551-582.
  21. Seymour Adler, Anthony S. Boyce, and Pat M. Caputo, “Employment Testing,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018), 3-35.
  22. Gary Buswell, “100 Soft Skills Assessment Questions for Hiring Top Talent,” Hundred5, 2017, https://hundred5.com/blog/100-soft-skills-questions-to-help-you-hire-top-talent.
  23. “Situational Judgment Tests,” Wisconsin Personnel Partners, 2015, http://wpp.wi.gov/section.asp?linkid=277&locid=16.
  24. Dave Zielinski, “Applicant Tracking Systems Evolve,” SHRM, 2011, https://www.shrm.org/resourcesandtools/hr-topics/technology/pages/atsevolves.aspx.
  25. Rob Kelly, “The Top 100 Applicant Tracking Systems in 2018,” The Magnet, 2018, https://blog.ongig.com/recruiting-software/top-100-applicant-tracking-systems-in-2018; Paige Garner, “The Top 25 Best Applicant Tracking Systems (Updated For 2018),” Proven, 2018, https://blog.proven.com/top-25-best-applicant-tracking-systems.
  26. Matt Charney, “Google Moves from Search to Sourcing as Hire by Google Launches Candidate Discovery,” RecruitingDaily, 2018, https://recruitingdaily.com/google-moves-search-sourcing-hire-launches-candidate-discovery.
  27. Josh Bersin, “HR Technology Disruptions for 2018,” Deloitte, 2017, http://marketing.bersin.com/rs/976-LMP-699/images/HRTechDisruptions2018-Report-100517.pdf.
  28. Charles Handler, “Are We Ready for Self-Driving Talent Assessments?” ERE, 2017, https://www.ere.net/are-we-ready-for-self-driving-talent-assessments/.
  29. Elaine Pulakos and Tracy Kantrowitz, “Choosing Effective Talent Assessments to Strengthen Your Organization,” SHRM, 2018, https://www.shrm.org/hr-today/trends-and-forecasting/special-reports-and-expert-views/documents/effective-talent-assessments.pdf.
  30. “Boost Your Interview Chances,” Jobscan, 2018, https://www.jobscan.co/.
  31. Tracy M. Kantrowitz, Kathy A. Tuzinski, and Justin M. Raines, “2018 Global Assessment Trends Report,” SHL, 2018, https://www.shl.com/en/assessments/trends/global-assessment-trends-report/.
  32. Ibid.
  33. Sean Gallagher, The Future of University Credentials: New Developments at the Intersection of Higher Education and Hiring (Cambridge: Harvard Education Press, 2016).
  34. “Talent Analytics,” Patheer, 2017, https://patheer.com/talentanalytics.html.
  35. “Predictive Talent Analytics,” Aon, 2018, https://assessment.aon.com/consulting/predictive-analytics/.
  36. John A. Weiner and David Foster, “Licensing and Certification,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018) 36-70.
  37. Seymour Adler, Anthony S. Boyce, and Pat M. Caputo, “Employment Testing,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018) 3-35.
  38. Conversation with Authess Chief Executive Officer and Co-founder Paul Crockett on August 7, 2018.
  39. “Virtual Surgery. Real Results,” Osso VR, 2018, http://ossovr.com/.
  40. “Assense: Beyond Assessment,” Actiview, 2018, https://www.actiview.io/product/.
  41. Chad H. Van Iddekinge, Stephen E. Lanivich, Philip L. Roth, and Elliott Junco, “Social Media for Selection? Validity and Adverse Impact Potential of a Facebook-Based Assessment,” Journal of Management 42, no.7 (2016): 1811-1835.
  42. Tomas Chamorro-Premuzic, Dave Winsborough, Ryne A. Sherman, and Robert Hogan, “New Talent Signals: Shiny New Objects or a Brave New World?” Industrial and Organizational Psychology 9, no. 3 (2016): 621–640.
  43. Ibid.
  44. Tracy M. Kantrowitz, Kathy A. Tuzinski, and Justin M. Raines, “2018 Global Assessment Trends Report,” SHL, 2018, https://www.shl.com/en/assessments/trends/global-assessment-trends-report/.
  45. Seymour Adler, Anthony S. Boyce, and Pat M. Caputo, “Employment Testing,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018) 3-35.
  46. Alexia Elejalde-Ruiz, “The End of the Resume? Hiring Is in the Midst of a Technological Revolution with Algorithms, Chatbots,” Chicago Tribune, 2018, http://www.chicagotribune.com/business/ct-biz-artificial-intelligence-hiring-20180719-story.html.
  47. Seymour Adler, Anthony S. Boyce, and Pat M. Caputo, “Employment Testing,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018) 3-35.
  48. Interview with pymetrics Head of IO Science and Diversity Analytics Kelly Trindel on August 30, 2018.
  49. Brandye Barrington, “Digital Badges Are Now an Essential Tool for Employers and Candidates Alike,” Oracle, 2017 https://blogs.oracle.com/certification/digital-badges-are-now-an-essential-tool-for-employers-and-candidates-alike.
  50. David Leaser, “Future Workplace Summit 2018 Keynote,” LinkedIn, 2018, https://www.slideshare.net/DavidLeaser.
  51. Michael Bernick, “Coursera’s Bet on the Upskilling of American Workers,” Forbes, 2018, https://www.forbes.com/sites/michaelbernick/2018/02/21/courseras-bet-on-the-upskilling-of-american-workers/#3836649b5eb2.
  52. John K. Waters, “How Nanodegrees Are Disrupting Higher Education,” Campus Technology, 2015, https://campustechnology.com/articles/2015/08/05/how-nanodegrees-are-disrupting-higher-education.aspx.
  53. Sean Gallagher, The Future of University Credentials: New Developments at the Intersection of Higher Education and Hiring (Cambridge: Harvard Education Press, 2016).
  54. “Connect Your Students Directly with Employers,” Portfolium, 2018, https://portfolium.com/solutions/talentmatch.
  55. John C. Scott, Dave Bartram, and Douglas H. Reynolds, “Preface,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018), xv-xx.
  56. Chris Mohr and Ryan Craig, “Who’s Playing Matchmaker between Students and Employers?” EdSurge, 2016, https://www.edsurge.com/news/2016-07-29-who-s-playing-matchmaker-between-students-and-employers.
  57. PSI is privately held. SHL was bought by private equity firm Exponent for $400 million in March 2018. Pearson VUE is part of the publicly traded British conglomerate Pearson PLC, which reported 2017 revenue of £4.513 billion. See “SHL,” Exponent, 2018, http://www.exponentpe.com/our-portfolio/shl and “Pearson 2017 Results,” Pearson PLC, 2018, https://www.pearson.com/corporate/news/media/news-announcements/2018/02/pearson-2017-results.html.
  58. “Am I Job Ready?” PSI, 2018, https://www.amijobready.com/personal.
  59. Becky Frost, “Canvas Announces Badges Powered by Badgr for All Canvas Users,” Market Insider, 2018, https://markets.businessinsider.com/news/stocks/canvas-announces-badges-powered-by-badgr-for-all-canvas-users-1027400509.
  60. “Turn Your Class into a Programming Power House,” HackerRank, 2018, https://www.hackerrank.com/products/school/.
  61. Amy Scott, “What do employers really want from college grads?” Marketplace, 2013, https://www.marketplace.org/2013/03/01/education/what-do-employers-really-want-college-grads.
  62. Jaimie Francis and Zac Auter, “3 Ways to Realign Higher Education with Today’s Workforce,” Gallup, 2017, https://news.gallup.com/opinion/gallup/212522/ways-realign-higher-education-today-workforce.aspx.
  63. Sean Gallagher, The Future of University Credentials: New Developments at the Intersection of Higher Education and Hiring (Cambridge: Harvard Education Press, 2016).
  64. See “Talent Assessments,” Korn Ferry, 2018, https://www.kornferry.com/solutions/products/talent/talent-assessments; “Learnability Quotient,” ManpowerGroup, 2018, https://www.manpowergroup.com/workforce-insights/expertise/learnability-quotient; and “Welcome to the New World of Pre-Employment Screening,” Randstad, 2018, https://www.randstadusa.com/jobs/career-resources/career-advice/welcome-to-the-new-world-of-pre-employment-screening/593.
  65. Interview with IBM Senior Program Executive, Innovation and Growth Initiatives David Leaser on August 14, 2018. See also “New Collar,” IBM, 2018, https://www.ibm.com/newcollar and “Welcome to WebAssess,” WebAssess, 2018, https://www.webassess.com/.
  66. Interview with Unilever Senior Human Resources Director, Marketing Tatiana Goldberg on August 24, 2018. See also “Application Process,” Unilever Future Leaders Program, 2018, https://www.unileverusa.com/careers/graduates/application-process.
  67. Interview with SkillsEngine Founder Michael Bettersworth on October 5, 2018. See also “Calibrate: Teach What Matters,” SkillsEngine, 2018, https://app.hubspot.com/documents/2410624/view/29604544?accessId=db0243.
  68. Tomas Chamorro-Premuzic, Dave Winsborough, Ryne A. Sherman, and Robert Hogan, “New Talent Signals: Shiny New Objects or a Brave New World?” Industrial and Organizational Psychology 9, no. 3 (2016): 621–640.
  69. Pymetrics utilizes machine-learning approaches to proactively remove race, ethnicity, and gender bias from its models prior to deployment. The firm then back-tests the performance of its models in the real world via adverse impact, return-on-investment, and predictive validity testing for each client. It has also open-sourced its bias-checking technology, available through GitHub: https://github.com/pymetrics/audit-ai.
  70. Tomas Chamorro-Premuzic, Dave Winsborough, Ryne A. Sherman, and Robert Hogan, “New Talent Signals: Shiny New Objects or a Brave New World?” Industrial and Organizational Psychology 9, no. 3 (2016): 621–640.
  71. Josh Bersin, “AI in HR: A Real Killer App,” Forbes, 2018, https://www.forbes.com/sites/joshbersin/2018/06/18/ai-in-hr-a-real-killer-app/#dac3d6748f1e.
  72. Adverse impact is usually calculated using the 4/5 or 80 percent rule. For example, if 50 women are assessed and 30 are hired, that means 60 percent were hired post-assessment. If the same assessment is given to 50 men and 45 are hired, that means 90 percent were hired post-assessment. The ratio of the women’s selection rate to the men’s is 67 percent, and a ratio below 80 percent is evidence of adverse impact. See Elaine Pulakos and Tracy Kantrowitz, “Choosing Effective Talent Assessments to Strengthen Your Organization,” SHRM, 2016, https://www.shrm.org/hr-today/trends-and-forecasting/special-reports-and-expert-views/documents/effective-talent-assessments.pdf.
  73. Sean Gallagher, The Future of University Credentials: New Developments at the Intersection of Higher Education and Hiring (Cambridge: Harvard Education Press, 2016).
  74. Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women,” Reuters, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
  75. Interview with University Ventures Co-Founder and Managing Director Ryan Craig on August 2, 2018. See also Ryan Craig and Monica Herk “Degree Requirements Make Hiring Less Diverse. Here’s How to Fix That,” Fortune, 2018, http://www.fortune.com/2018/09/17/hiring-college-degree-labor-market/.
  76. Tomas Chamorro-Premuzic, Dave Winsborough, Ryne A. Sherman, and Robert Hogan, “New Talent Signals: Shiny New Objects or a Brave New World?” Industrial and Organizational Psychology 9, no. 3 (2016): 621–640.
  77. Elaine Pulakos and Tracy Kantrowitz, “Choosing Effective Talent Assessments to Strengthen Your Organization,” SHRM, 2018, https://www.shrm.org/hr-today/trends-and-forecasting/special-reports-and-expert-views/documents/effective-talent-assessments.pdf.
  78. Interview with Unilever Senior Human Resources Director, Marketing Tatiana Goldberg on August 24, 2018.
  79. “Credential Registry,” Credential Engine, 2018, https://www.credentialengine.org/credentialregistry.
  80. Seymour Adler, Anthony S. Boyce, and Pat M. Caputo, “Employment Testing,” in Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, ed. John C. Scott, Dave Bartram, and Douglas H. Reynolds (New York: Oxford University Press, 2018) 3-35.
  81. See for example: Solon Barocas and Andrew D. Selbst, “Big Data’s Disparate Impact,” California Law Review 104 (2016): 671-732, http://dx.doi.org/10.2139/ssrn.2477899 and Paul Ohm and David Lehr, “Playing with the Data: What Legal Scholars Should Learn About Machine Learning,” University of California, Davis Law Review 51 (2017): 653-717, https://lawreview.law.ucdavis.edu/issues/51/2/Symposium/51-2_Lehr_Ohm.pdf.