The women problem in AI


On 28 and 29 November, for the second year in a row, I had the pleasure and privilege of participating in the European Women in Technology conference, which took place at the RAI in Amsterdam. The event featured talks, presentations and workshops by (mostly) women working in different tech-oriented roles. What I really liked about the conference is that it also had a focus on soft skills, and grappled with issues such as (gender) diversity and equality, as well as strategies and lessons learned from different companies in addressing these issues.

“At ABN AMRO, we are doing well when it comes to meeting our goals of gender diversity at high positions in Innovation and Technology (I&T).” — Violeta Misheva, Data Scientist

I personally work in the area of machine learning and artificial intelligence (AI), a field that is lagging behind the broader tech sector in hitting its diversity targets. At ABN AMRO, we are doing well when it comes to meeting our goals of gender diversity at high positions in Innovation and Technology (I&T). Currently, almost 33% of roles at scale 14 and above in I&T are filled by women, as are about 22% of roles at levels 12 and 13.

However, when it comes to machine learning and artificial intelligence, the picture changes dramatically in almost any company. In the summer, Wired magazine published an article stressing that only 12% of the authors of papers submitted to the top machine learning and AI conferences are women. And while technical roles at the tech giants have gender diversity proportions similar to those at our bank, in AI and machine learning roles specifically the figure drops to around 10–15%. The evidence, uncovered in the Wired article as well as in the AI Now report for 2018, shows that machine learning and AI are less diverse fields than computer science as a whole.

Why is this problematic?

Increasingly, machine learning models will be used to automate processes, make decisions, and take action on our behalf. When these tools are designed by a team of predominantly white males (or any other homogeneous group, for that matter), they are more likely to be skewed, and might even have harmful effects. Examples of this are abundant. Facial recognition systems have performed poorly in recognizing people of different ethnicities and skin colours. Systems used to help judges decide sentences in the US, and to help police decide how to allocate their resources, have been shown to disproportionately target people from lower socio-economic backgrounds and from certain ethnic groups. A few weeks ago, Amazon withdrew an algorithm it had designed for recruiting purposes because the algorithm disproportionately rejected women candidates, even penalizing candidates for having the word ‘woman’ in their CVs. What this means for us is that we, too, might build inferior solutions and products for our clients, sometimes inadvertently. We might fail to realize it in time, before the product has been released and done its damage, simply because there was no one to question its functionality.

What can be done to address the issue?

Companies have started to tackle the problem. What makes things especially difficult is not only the lower proportion of women selecting into the field, but also lower retention rates: women are more likely to leave a field such as AI quickly. The same Wired article illustrates some alarming cases of a hostile atmosphere towards women in tech companies, as well as at tech conferences. There isn’t a single solution yet, and no single company has nailed it so far. Some have introduced diversity boards, KPIs they want to achieve in the near future, and communities and support groups that promote and give a voice to diverse groups. There were several worker walkouts and protests at companies such as Google and Amazon in 2018, and these have been somewhat successful. Companies are also starting to realize the importance and potential broader societal impact of the tools they are designing. Many are therefore investing in building ethical, fair and unbiased machine learning models. The first step in solving the problem, though, is being aware that the problem exists and why it is important to address it. Having diversity strategies and goals in place, and supporting and sponsoring diversity-oriented programs and conferences, is a good first step. But we still have a long way to go.
