Google Image Scan Reveals How Tech Companies Punish the Innocent | John Naughton


Here is a hypothetical situation. You are the parent of a toddler, a little boy whose penis is swollen and painful because of an infection. You phone the GP surgery and eventually get through to the practice nurse. She suggests that you take a photo of the affected area and email it to her, so that she can consult one of the doctors.

So you pull out your Samsung phone, snap a couple of photos and send them. A few minutes later, the nurse calls back: the doctor has prescribed antibiotics, which you can collect from the surgery's pharmacy. You drive over, pick them up, and within a few hours the swelling is going down and your little boy is perking up. Panic over.

Two days later, you get a message from Google on your phone. Your account has been disabled because of content that was "a severe violation of Google's policies and might be illegal". You click the "Learn more" link and find a list of possible reasons, including "child sexual abuse and exploitation". Suddenly the penny drops: Google thinks the photos you sent constituted child abuse!

Never mind – there's a form on which you can describe the circumstances and ask Google to reverse its decision. At this point you discover that you no longer have Gmail, but fortunately you have an older email account that still works, so you use that. Now, though, you no longer have access to your diary, address book and all the work documents you kept on Google Docs. Nor can you access any photo or video you've ever taken with your phone, because they all reside on Google's cloud servers – to which your device helpfully (and automatically) uploaded them.

Shortly afterwards, you receive a response from Google: the company will not reinstate your account. No explanation is given. Two days later, there's a knock at the door. Outside are two police officers, one male, one female. They're here because you're suspected of possessing and transmitting illegal images.

Nightmare, eh? But at least it's hypothetical. Except that it isn't: it's a British adaptation of what happened to "Mark", a father in San Francisco, as recently revealed in the New York Times by the formidable tech journalist Kashmir Hill. And, at the time of writing this column, Mark still hasn't got his Google account back. Being in America, he has the option of suing Google – much as he has the option of digging up his garden with a teaspoon.

The background to this is that the tech platforms have, thankfully, become much more diligent in scanning their servers for child abuse images. But because of the unimaginable number of images held on these platforms, scanning and detection has to be done by machine-learning systems, aided by other tools (for example, the cryptographic fingerprinting of known illegal images, which makes them instantly detectable worldwide).
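The fingerprinting idea can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `KNOWN_HASHES` database; real systems (such as Microsoft's PhotoDNA, whose hashes are distributed by bodies like NCMEC) use *perceptual* hashes that survive resizing and re-encoding, not the exact cryptographic hash shown here.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images.
# (This entry is just the SHA-256 of the string "test", for demonstration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a cryptographic fingerprint (SHA-256 hex digest) of the image."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_image(image_bytes: bytes) -> bool:
    """Flag an upload if its fingerprint appears in the known-hash database."""
    return fingerprint(image_bytes) in KNOWN_HASHES

# An exact hash only catches byte-identical copies of a known image;
# a fresh photo taken by a parent would never match such a list, which
# is why platforms add machine-learning classifiers on top.
print(matches_known_image(b"test"))
```

Note the design trade-off this implies: hash-matching is essentially error-free but only finds *known* images, so the classifiers needed to catch new material are precisely where the false positives discussed below come from.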

Which is all well and good. The trouble, though, is that automated detection systems invariably throw up large numbers of "false positives" – images that trigger an alert but are in fact innocuous and legal. This is often because machines are hopeless at understanding context, something that, at the moment, only humans can do. In researching her report, Hill viewed the photos Mark had taken of his boy. "The decision to flag them was understandable," she wrote. "They are photos of a baby's genitalia. But context matters: they were taken by a parent worried about a sick child."
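The scale of the false-positive problem follows from simple base-rate arithmetic. The sketch below uses entirely hypothetical numbers (not figures from Hill's reporting or from Google) to show why even a very accurate classifier, applied to billions of uploads, buries the genuine alerts under innocent ones:

```python
# Illustrative base-rate arithmetic; every number here is an assumption.
daily_uploads = 1_000_000_000   # assumed images scanned per day
abuse_rate = 1e-6               # assumed fraction that is actually illegal
false_positive_rate = 0.001     # assumed: 0.1% of innocent images get flagged
true_positive_rate = 0.99       # assumed: 99% of illegal images get caught

illegal = daily_uploads * abuse_rate          # 1,000 illegal images
innocent = daily_uploads - illegal            # ~999,999,000 innocent ones

true_alerts = illegal * true_positive_rate    # ~990 genuine alerts
false_alerts = innocent * false_positive_rate # ~999,999 false alarms

# Under these assumptions, fewer than 1 in 1,000 alerts is genuine -
# which is why platforms put human reviewers behind the classifier.
print(f"{false_alerts:,.0f} false alerts vs {true_alerts:,.0f} true alerts")
print(f"precision: {true_alerts / (true_alerts + false_alerts):.4f}")
```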

Accordingly, most platforms employ people to review problematic images in their context and determine whether they warrant further action. The striking thing about the San Francisco case is that the images were reviewed by a human, who decided they were innocent – as did the police, to whom they were also referred. Yet, despite this, Google stood by its decision to suspend the account and rejected the appeal. It can do this because it owns the platform, and anyone who uses it has clicked to accept its terms and conditions. In this respect, it is no different from Facebook/Meta, Apple, Amazon, Microsoft, Twitter, LinkedIn, Pinterest and the rest.

This arrangement works well as long as users are happy with the services and the way they are provided. But the moment a user decides they've been wronged by a platform, they fall into a legal black hole. If you're an app developer who feels gouged by the 30% levy Apple charges, you have two choices for selling in its marketplace: pay up or shut up shop. Similarly, if you've been selling profitably on Amazon's marketplace and suddenly discover that the platform is now selling a cheaper, comparable product under its own label, well… tough. You can of course complain or appeal, but ultimately the platform is judge, jury and executioner. No other sphere of life in a democracy tolerates this. Why are tech platforms exempt? Isn't it time they weren't?

What I read

Too big a picture?
There's a fascinating critique by Ian Hesketh in Aeon magazine, titled What Big History Misses, of how Yuval Noah Harari and co compress the history of humanity.

1-2-3, gone…
David GW Birch has a good note on bypassing passwords on his Digital Identity Substack.

Warning
Gary Marcus has written a fine critique of Google's new robot project on his Substack.


