
How Emotion AI Overcomes the Challenges of Gender Bias

Many AI products and services still in use today were tested and validated with data from Western, English-speaking, Caucasian and/or male populations, as those were the predominant sample demographics available at the time. While this may have sufficed in the past, we now need to broaden our horizons and think about how to make technology applicable to all.

We still use these products and services, yet rarely consider the repercussions of such skewed datasets. In recent years, articles have appeared discussing “algorithm bias” or “AI bias” in code, where programs do not account for people of all genders, ethnicities, ages and circumstances. Literature focusing specifically on systems built on the Facial Action Coding System (FACS) has found that FACS databases can be heavily skewed, and that faces of women, older individuals and people with darker complexions are more likely to be inaccurately detected and measured.

This year, the theme for International Women’s Day is “Breaking the Bias.” In honor of this theme, we want to show how AI companies like Affectiva, a Smart Eye company, are proactively working to break biases with Emotion AI. We are always eager to show people how our technology works; we are proud that our data is diverse and our models well-trained, and we collaborate with our research partners to push for data improvement. It is because of our database that we are able to build algorithms that minimize bias across gender, age and ethnicity.

Affectiva’s role in breaking the bias

Our mission of humanizing technology drives us to make our technology as inclusive as possible. By working with some of the largest companies in the market research space, we deploy projects all over the world, which allows our team to keep improving our Emotion AI technology.

In 2021, we deployed a major update to our Emotion AI. This release included enhanced facial emotion metrics and the rollout of our new mental state measurements, Sentimentality and Confusion. We also improved our face tracker so that facial expression data can be captured across varied lighting conditions and camera angles, yielding higher usable-data rates.

To do so, we trained our algorithms on over 50,000 unique face videos drawn from our dataset of more than 12 million videos of people from 90+ countries around the globe. Over 50% of the training set came from female face videos, and over half came from non-Caucasian subjects. When comparing overall model performance summary metrics across men and women, performance on female faces is on par with performance on male faces. This means our face trackers can accurately detect and classify male and female faces and their facial expressions with little to no issue.
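To make the idea of a per-group performance comparison concrete, here is a minimal sketch in Python. It assumes a simple evaluation set where each prediction carries a demographic label; the function, labels and toy data are illustrative assumptions, not Affectiva’s actual tooling or results.

```python
# Minimal sketch of a per-group performance audit: given model predictions
# and demographic labels, compare a summary metric (here, accuracy) across
# subgroups. All names and data are illustrative.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, predicted in records:
        total[group] += 1
        if predicted == truth:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy evaluation set: (gender, true expression, predicted expression).
evaluation = [
    ("female", "smile", "smile"),
    ("female", "neutral", "neutral"),
    ("male", "smile", "smile"),
    ("male", "neutral", "smile"),
]

for group, accuracy in sorted(accuracy_by_group(evaluation).items()):
    print(f"{group}: accuracy = {accuracy:.2f}")
```

A gap between subgroup scores in an audit like this is the signal that a model needs more representative training data for the underperforming group.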

We also saw notable improvements in the algorithm’s ability to detect and analyze facial expressions in older populations and in people of African and South Asian descent. By actively adding more face videos to our database and improving our AI systems, our team continues to reduce potential gender, age and ethnicity biases in our Media Analytics solutions and to support our research partners with robust data.

Continuing to work towards positive change

Overcoming algorithm bias is not something that can be done overnight; it is an iterative process of constant refinement. What is important to remember is that code, like any other language, has no inherent bias. The bias comes not from the code itself but from the person writing it and the datasets used to train the AI algorithm.

It is unrealistic to suggest that we can simply discard all our preconceived notions to overcome bias, as our brains naturally encode information and create heuristics to consolidate memories and make our thinking efficient. What matters is being aware of the potential for bias and making a proactive effort to mitigate prejudice. For those in AI, two great ways to minimize algorithm bias are working with diversified data and having a diverse team classify that data. We approach this by continuing to build upon our existing global face video database and by maintaining an evenly split Annotation team of men and women who work with our data.
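As a concrete illustration of the first of those practices, here is a minimal sketch of a dataset composition audit in Python: it reports the share of each demographic group in a training set so that skew is visible before training. The record schema and field names are hypothetical, not Affectiva’s actual data model.

```python
# Minimal sketch of a dataset composition audit. Field names and the
# toy records below are hypothetical.
from collections import Counter

def composition(samples, field):
    """Return the share of each value of `field` across the samples."""
    counts = Counter(sample[field] for sample in samples)
    n = sum(counts.values())
    return {value: count / n for value, count in counts.items()}

# Toy face-video metadata; real records would carry many more fields.
training_set = [
    {"video_id": 1, "gender": "female", "region": "APAC"},
    {"video_id": 2, "gender": "male", "region": "EMEA"},
    {"video_id": 3, "gender": "female", "region": "LATAM"},
    {"video_id": 4, "gender": "male", "region": "APAC"},
]

for field in ("gender", "region"):
    print(field, composition(training_set, field))
```

Running an audit like this on each candidate training set makes imbalance a measurable quantity rather than a guess, and points to which groups need more collection effort.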

By taking these actions, we not only break pre-existing personal biases but also build datasets that reduce stereotyping based on a person’s gender and/or identity. As we strive to build technology that is stronger, more efficient, and more accurate, we need to provide our AI algorithms with diverse data points. This is how we can continue to minimize gender bias in algorithms and improve research methodologies and outcomes.

___

This post was originally published on the Affectiva blog.
