What if we just asked AI to be less biased?

Last week I published a story about a new tool developed by AI startup Hugging Face and researchers at the University of Leipzig that lets people see for themselves what inherent biases AI models have about different genders and ethnicities.

Although I’ve written a lot about how our biases are reflected in AI models, it was still jarring to see just how pale, male, and stale the people generated by AI are. That was especially true of DALL-E 2, which produced white men 97% of the time when given prompts like “CEO” or “director.”

And the bias problem runs even deeper than you might think, into the vast world created by AI. These models are built by American companies and trained on North American data, so when they are asked to generate even mundane everyday items, from doors to houses, they create objects that look American, says Federico Bianchi, a researcher at Stanford University.

As the world becomes flooded with AI-generated images, we will increasingly see images that reflect American biases, culture, and values. Who knew that AI could become a major tool of American soft power?

So how can we solve these problems? Much work has gone into correcting biases in the data sets on which AI models are trained. But two recent research papers offer interesting new approaches.

Instead of making the training data less biased, what if you just asked the model to give you less biased answers?

A team of researchers at the Technical University of Darmstadt in Germany and AI startup Hugging Face developed a tool called Fair Diffusion that makes it easier to tweak generative AI models to produce the kinds of images you want. For example, you can generate stock photos of CEOs in different settings, then use Fair Diffusion to swap out the white men in the images for women or people of different ethnicities.

As the Hugging Face tool shows, AI models that generate images on the basis of image-text pairs in their training data have very strong default biases about professions, gender, and ethnicity. The German researchers’ Fair Diffusion tool is based on a technique called semantic guidance, which lets users guide how the AI system generates images of people and edit the results.
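For readers who want to try this, semantic guidance is implemented in Hugging Face’s open-source diffusers library. Below is a minimal sketch, assuming the library’s SemanticStableDiffusionPipeline, a CUDA GPU, and the runwayml/stable-diffusion-v1-5 checkpoint; the editing prompts and guidance parameters here are illustrative values, not the researchers’ exact settings.

```python
# Minimal sketch of semantic guidance with Hugging Face diffusers.
# Assumes: diffusers' SemanticStableDiffusionPipeline, a CUDA GPU, and the
# "runwayml/stable-diffusion-v1-5" checkpoint. Edit parameters are illustrative.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    prompt="a photo of the face of a CEO",
    guidance_scale=7.0,
    # Steer the generation away from one concept and toward another,
    # without changing the base prompt itself.
    editing_prompt=["male person", "female person"],
    reverse_editing_direction=[True, False],  # suppress the first, amplify the second
    edit_guidance_scale=[4.0, 4.0],           # strength of each semantic edit
    edit_warmup_steps=[10, 10],               # diffusion steps before each edit kicks in
    edit_threshold=[0.99, 0.99],              # restrict edits to the relevant image regions
)
out.images[0].save("ceo_edited.png")
```

Because the editing prompts only nudge the denoising process in particular semantic directions rather than replacing the base prompt, the overall composition of the image is preserved while the targeted attributes change.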

The edited image stays very close to the original, says Christian Kersting, a professor of computer science at TU Darmstadt who took part in the work.
