Twitter’s image-cropping algorithm marginalizes the elderly, disabled people, and Arabic speakers

An AI bias bounty contest exposed numerous potential harms

Twitter’s algorithmic biases

Bogdan Kulynych, who bagged the $3,500 first-place prize, showed that the algorithm can amplify real-world biases and social expectations of beauty.

Kulynych, a grad student at Switzerland’s EPFL technical university, investigated how the algorithm predicts which region of an image people will look at.

The researcher used a computer-vision model to generate realistic pictures of people with different physical features. He then compared which of the images the model preferred.
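The comparison described above can be sketched roughly as follows. This is not Kulynych’s actual code: `saliency_score` is a hypothetical stand-in for Twitter’s cropping model, which assigns each image a saliency value that determines the crop. Here the model is stubbed out so the pairwise comparison logic can run on its own.

```python
def saliency_score(image):
    """Placeholder for a real saliency model. A real model would return
    the peak saliency the cropping algorithm assigns to this image;
    this stub just uses the mean pixel value for illustration."""
    return sum(image) / len(image)

def preference(base, variant):
    """Return which of two image variants the model would favor,
    i.e. which one the algorithm would keep in the crop."""
    if saliency_score(variant) > saliency_score(base):
        return "variant"
    return "base"

# Toy grayscale "images" as flat pixel lists. Kulynych's experiment
# generated realistic face variants instead, varying traits such as
# skin tone, age, and perceived gender.
base = [100, 120, 110, 130]
lightened = [p + 20 for p in base]
print(preference(base, lightened))
```

Repeating this comparison over many generated variants reveals which physical traits systematically raise the model’s score.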

Kulynych said the model favored “people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits.”

The other competition entrants exposed further potential harms.

The runners-up, HALT AI, found the algorithm sometimes crops out people with grey hair, dark skin, or wheelchairs, while third-place winner Roya Pakzad showed the model favors Latin scripts over Arabic.

The algorithm also has a racial preference when analyzing emoji. Vincenzo di Cicco, a software engineer, found that emoji with lighter skin tones are more likely to be kept in the crop.

Bounty hunting in AI

The array of potential algorithmic harms is concerning, but Twitter’s approach to identifying them deserves credit.

There’s a community of AI researchers that can help mitigate algorithmic biases, but they’re rarely incentivized in the same way as white-hat security hackers.

“In fact, people have been doing this sort of work on their own for years, but haven’t been rewarded or paid for it,” Twitter’s Rumman Chowdhury told TNW before the contest.

The bounty-hunting model could encourage more of them to investigate AI harms. It can also operate more quickly than traditional academic publishing. Contest winner Kulynych noted that this fast pace has both flaws and strengths.

He added that there are also limitations in the approach. Notably, algorithmic harms are often a result of design rather than mistakes. An algorithm that spreads clickbait to maximize engagement, for instance, won’t necessarily have a “bug” that a company wants to fix.

“We should resist the urge of sweeping all societal and ethical concerns about algorithms into the category of bias, which is a narrow framing even if we talk about discriminatory effects,” Kulynych tweeted.

Nonetheless, the contest showcased a promising method of mitigating algorithmic harms. It also invites a wider range of perspectives to investigate the issues than one company could incorporate (or would want to).


Story by Thomas Macaulay

Thomas is a senior reporter at TNW. He covers European tech, with a focus on AI, cybersecurity, and government policy.
