You’re not stupid for caring about bots


Once upon a time, there was a virtual assistant named Ms. Dewey, played by Janina Gavankar: a beautiful librarian who helped you with your queries on Microsoft’s first attempt at a search engine. Ms. Dewey debuted in 2006, complete with over 600 lines of recorded dialogue. She was ahead of her time in a few ways, but one particularly overlooked example was captured by researcher Miriam Sweeney in her 2013 doctoral dissertation, which detailed the gendered and racialized implications of Dewey’s replies. Those included lines like, “Hey, if you can get inside your computer, you can do whatever you want to me.” Or how searching for “blow jobs” caused a clip of her eating a banana to play, while entering terms like “ghetto” made her perform a rap featuring such gems as “No, no, gold tooth, ghetto-fabulous mutha-fucker BEEP steps to this piece of [ass] BEEP.” Sweeney’s analysis makes the obvious point: Dewey was designed to cater to a straight white male user. After all, blogs at the time praised Dewey’s flirtatiousness.

There was outrage on Reddit when Microsoft engineers revealed that they had programmed Cortana to firmly rebuff sexual requests or advances. One popular post read: “Are these people serious?! ‘Her’ purpose is to do what people tell her! Hey, bitch, add this to my calendar… The day Cortana becomes an ‘independent woman’ is the day that software becomes useless.” Criticism of such behavior abounded, including from your humble correspondent.

Now, with the pushback against ChatGPT and its ilk, the pendulum has swung back hard, and we’re warned against caring about these things. It’s a point I made last year in the wake of the LaMDA AI fiasco: a bot doesn’t need to be sentient to do us harm, and that fact is exploited by profiteers. I stand by that warning. But some might point out that the criticism leveled at people who abused their virtual assistants in years past looks naive in retrospect. Maybe the people who kept calling Cortana a “bitch” were on to something!

You may be shocked to learn that this is not the case. Past criticisms of AI abuse were not only correct; they anticipated the more dangerous digital landscape we face now. The real reason the criticism has shifted from “people are too cruel” to “people are too kind” is that the political economy of AI has suddenly and dramatically changed, and with it, the sales pitches of tech companies. Where bots were once sold to us as perfect servants, they’re now being sold to us as our best friends. But in each case, each generation of bots has been pitched in a way that implicitly asks us to humanize them. Bots have always been tools for our worst and best impulses.

A paradoxical truth about violence is that, however dehumanizing it may be, it actually requires the perpetrator to recognize you as a human being. It’s a grim reality, but everyone from war criminals to barroom creeps thrives on the idea that their victims feel pain on some level. Dehumanization is not the inability to see someone as human; it’s the act of seeing someone as human and wanting to treat them as less than that anyway. So, to some degree, it’s precisely the extent to which people mistake their virtual assistants for real humans that encourages them to abuse those assistants. It wouldn’t be fun otherwise. That brings us to the present.

The previous generation of AI was sold to us as perfect servants: a sophisticated PA, or perhaps Majel Barrett’s computer on the Starship Enterprise. Obedient, omniscient, ever ready to serve. The new chatbot search engines carry some of the same associations, but as they evolve, they’re being sold to us as our new confidants, even our new therapists.
