It's been a week plagued by privacy issues, from grief algorithms to image-generating AI.


– Tate Ryan-Mosley, senior technology policy reporter

I’ve always been a super-Googler, coping with uncertainty by trying to learn as much as I can about whatever comes up. That included my father’s throat cancer.

From an app on my iPhone, I began intentionally and unintentionally consuming people’s experiences of grief and bereavement: Instagram videos, various news feeds, Twitter testimonials about the stages of grief, and books and academic studies about loss.

Yet with every search and click, I unwittingly spun a sticky web of digital grief. Ultimately, it proved nearly impossible to detach myself from what the algorithms served me. I escaped eventually. But why is it so hard to unsubscribe and opt out of content we don’t want, even when it’s harmful to us? Read the full story.

AI models spit out photos of real people and copyrighted images

The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and copyrighted work by artists, new research suggests.

How they did it: The researchers prompted Stable Diffusion and Google’s Imagen many times over with captions for images, such as a person’s name. They then analyzed whether any of the generated images matched the originals in the model’s database, and were able to extract more than 100 replicas of images from the AI’s training set. (A rough sketch of this test appears below.)

Why it matters: The finding could strengthen the case of artists currently suing AI companies for copyright infringement, and it points to potential threats to people’s privacy. It may also have implications for startups looking to use generative AI models in health care, as it shows these systems risk leaking sensitive personal data. Read the full story.
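For readers who want a concrete picture of the extraction test described above, here is a minimal Python sketch. It assumes the open-source diffusers and transformers libraries; the caption, the training image file, the use of CLIP cosine similarity as the match metric, and the 0.95 threshold are all illustrative assumptions, not the paper's exact method.

# Hedged sketch: repeatedly prompt a diffusion model with a caption
# suspected to be in its training data, then flag generations that are
# near-duplicates of the original training image.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(image: Image.Image) -> torch.Tensor:
    """Return a unit-normalized CLIP image embedding."""
    inputs = processor(images=image, return_tensors="pt").to(device)
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Hypothetical placeholders: a caption paired with an image in the
# training set, and the training image itself.
caption = "Portrait of Jane Doe"
original_emb = embed(Image.open("training_image.png"))

matches = []
for seed in range(100):  # sample the model many times, as in the study
    generator = torch.Generator(device=device).manual_seed(seed)
    generated = pipe(caption, generator=generator).images[0]
    similarity = (embed(generated) @ original_emb.T).item()
    if similarity > 0.95:  # illustrative near-duplicate threshold
        matches.append((seed, similarity))

print(f"{len(matches)} of 100 samples resemble the training image")

The real study used a more careful notion of near-duplication than a single embedding threshold, but the loop structure, many samples per training caption followed by a similarity check against the original, is the core of the attack.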


