Emotion Recognition

STUDYING THE ROLE OF FACIAL EXPRESSION ANALYSIS IN HIRING


Context

At UC Berkeley, I am conducting research on the use of emotion detection and facial expression analysis in one-way video interview technologies, alongside MIDS student Chase Inguva and PhD students Mehtab Khan and Ji Su Yoo. Our advisor for the project is Professor Niloufar Salehi.

This project is ongoing; a summary of the background and our methodological approach follows.

Research Overview

Through this work, we aim to make empirical contributions to the ongoing conversation around the risks of algorithmic hiring. Specifically, we investigate how algorithms are used to filter out candidates during the sourcing and selection phases of the hiring process, and the risks such technology creates.

Our study is designed to emulate common video interview platforms. We plan to recruit 100 participants via Amazon’s Mechanical Turk in March 2020; each participant will answer two questions routinely asked during hiring interviews and complete a short demographic survey. We will then analyze these videos using OpenFace, an open-source toolkit that provides “facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation”. Next, we will create a ‘top performer’ profile from the OpenFace features of a randomly selected participant, mimicking the way employers seek candidates who match their top performer profile. The resulting analysis will examine how the recommended candidates compare to that top performer profile.
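The candidate-matching step above can be sketched in a few lines of code. This is only an illustration of the general idea, not our actual analysis pipeline: the feature names mirror OpenFace’s action-unit intensity columns (e.g. AU06_r), the values are made up, and the choice of cosine similarity as the matching metric is an assumption for the sake of the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-video mean action-unit intensities
# (named after OpenFace's "*_r" intensity columns).
AUS = ["AU06_r", "AU12_r", "AU25_r"]

# 'Top performer' profile: features of one randomly selected participant.
top_performer = {"AU06_r": 1.8, "AU12_r": 2.4, "AU25_r": 0.9}

# Remaining candidates, with made-up feature values.
candidates = {
    "cand_01": {"AU06_r": 1.6, "AU12_r": 2.1, "AU25_r": 1.0},
    "cand_02": {"AU06_r": 0.2, "AU12_r": 0.4, "AU25_r": 2.5},
}

# Rank candidates by similarity to the top performer profile,
# mimicking how a hiring platform might "recommend" candidates.
ref = [top_performer[au] for au in AUS]
ranked = sorted(
    candidates.items(),
    key=lambda kv: cosine_similarity([kv[1][au] for au in AUS], ref),
    reverse=True,
)
for name, feats in ranked:
    sim = cosine_similarity([feats[au] for au in AUS], ref)
    print(f"{name}: similarity {sim:.3f}")
```

A real pipeline would read these feature vectors from OpenFace’s per-video CSV output and average them over frames, but the ranking logic would look much the same.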

METHODS

  1. Experiment

  2. Legal Analysis

OUTCOME

We are working towards a CSCW paper and a law review journal article, with target deadlines of June and August 2020, respectively.