How to create, release and share generative AI


“If we really want to address these issues, we have to get serious,” Farid said. For example, Amazon, Microsoft, Google, and Apple, which are all part of the PAI, should block cloud services and app stores that let people use deepfake technology to create nonconsensual sexual imagery, he said. Watermarks on all AI-generated content should also be mandatory, not voluntary, he added.

Another important missing element is how the AI systems themselves could be made more responsible, says Ilke Demir, a senior research scientist at Intel who leads the company’s work on the responsible development of generative AI. That could include more detail about how an AI model was trained, what data went into it, and whether a generative AI model has any biases.

The guidelines say nothing about ensuring that the datasets used to train generative AI models are free of toxic content. “It’s one of the most significant ways these systems cause harm,” said Daniel Leufer, a senior policy analyst at the digital rights group Access Now.

The guidelines include a list of harms these companies want to prevent, such as fraud, harassment, and misinformation. But a generative AI model that only ever produces images of white people is also doing harm, and that is not currently on the list, Demir added.

Farid raises a more fundamental issue: why aren’t these companies asking, “Should we be doing this in the first place?”

