Undercover in the Metaverse


The second aspect of preparation is related to mental health. Not all players behave the way you want them to behave. Sometimes people just come to be mean. We prepare by looking at the different types of situations you may encounter and how best to handle them.

We also monitor everything. We keep track of which game we are playing, which players have joined, what time we started, and what time we finished. What was the conversation during the game? Is a player using bad language? Is a player being abusive?

Sometimes we get borderline behavior, like someone using bad words out of frustration. We still monitor it, because there may be children present. And sometimes the behavior goes beyond a certain limit, like if it’s getting too personal, and we have more options for dealing with that.

If someone says something really racist, for example, what are you trained to do?

Well, based on our monitoring, we create a weekly report and provide it to the client. If a player repeatedly behaves badly, the client may decide to take action against them.

And if the behavior is really bad in real time and violates the policy guidelines, we have different controls we can use. We can mute the player so that no one can hear what he is saying. We can remove the player from the game and report the player [to the client] with footage of what happened.

What do you think people don’t know about this work?

It’s very interesting. I still remember the feeling I had when I put on the VR headset for the first time. Not all jobs allow you to play.

And I want everyone to know that it is important. I was reviewing content once [not in the metaverse], and we got this report from a boy: “Someone kidnapped me and hid me in the basement. My phone is about to die. Someone please call 911. And he’s coming, please help me.”

I was skeptical about it. What was I supposed to do? This was not a forum for asking for help. Still, I escalated it to our legal team, and the police went to the scene. A few months later, we got feedback that when the police arrived, they found the boy tied up in the basement with injuries all over his body.

That was a life-changing moment for me personally, because I had always thought of this job as just a stopgap, something you do before you figure out what you really want to do. And that’s how most people treat this job. But that event changed my life and made me realize that what I do here affects the real world. I literally saved a child. Our team literally saved a child, and we are all proud of it. That day, I decided to stay in the field and let everyone know that this work really matters.

What I’m reading this week

  • Analytics company Palantir has built an AI platform meant to let the military make strategic decisions through a chatbot, akin to ChatGPT, that can analyze satellite imagery and generate plans of attack. The company promises to do so ethically, even though…
  • Twitter’s removal of blue checks is starting to have real-world consequences, making it difficult to know what and who to trust on the platform. Misinformation is spreading: in the 24 hours after Twitter removed previously verified blue checks, at least 11 new accounts started impersonating the Los Angeles Police Department, according to the New York Times.
  • Russia’s war on Ukraine has contributed to the collapse of its tech industry, Masha Borak wrote in a great feature for MIT Technology Review published a few weeks ago. The Kremlin’s push to monitor and control the data on Yandex has squeezed the search engine.

What I learned this week

User reports of false information online may matter more than previously thought. A new study published in Stanford’s Journal of Online Trust and Safety shows that user reports of false news on Facebook and Instagram can be fairly accurate in flagging misinformation when they are sorted by certain characteristics, such as the type of feedback or content. The study, the first of its kind to quantitatively assess the accuracy of user reports of misinformation, signals some promise that crowdsourced content moderation can be effective.
