How hidden content moderators keep the metaverse in check

Meta would not say how many content moderators it employs or contracts for Horizon Worlds, or whether the company intends to increase that number under the new age policy. But the change puts a spotlight on the people tasked with enforcement in these new online spaces — people like Yekanti — and how they do their jobs.

Yekanti has worked as a moderator and training manager in virtual reality since 2020 and came to the role after doing traditional moderation work on text and images. He is employed by WebPurify, a company that provides content moderation services to internet companies such as Microsoft and PlayLab, and works with a team based in India. His work is done on major platforms, including those owned by Meta, though WebPurify declined to confirm which ones, citing client confidentiality agreements.

Yekanti, a longtime Internet enthusiast, says he enjoys wearing a VR headset, meeting people from around the world and advising creators on how to improve their games and “worlds.”

He is part of a new class of workers who protect safety in the metaverse, acting as private security agents and interacting with the avatars of real people to spot nefarious behavior in virtual reality. He does not publicly disclose that he is a moderator. Instead, he works more or less undercover, presenting as an average user to better witness violations.

Because traditional moderation tools, such as AI-enabled filters that flag certain words, do not translate well to real-time immersive environments, moderators like Yekanti are the primary way to ensure safety in the digital world, and the work is becoming more important every day.

The metaverse's safety problem

The safety problem in the metaverse is complex and opaque. Journalists have reported instances of abusive comments, scams, sexual assaults, and even a kidnapping orchestrated through Meta's Oculus. The biggest immersive platforms, like Roblox and Meta's Horizon Worlds, keep their statistics about bad behavior quiet, but Yekanti says he encounters reportable offenses every day.

Meta declined to comment on the record, but pointed to the safety tools and policies it has in place. A Roblox spokesperson said the company “has a team of thousands of moderators who monitor inappropriate content 24/7 and investigate reports submitted by our community,” and that it also uses machine learning to review text, images, and audio.

To deal with safety issues, tech companies have turned to volunteers and employees such as Meta's community guides, undercover moderators like Yekanti, and, increasingly, platform features that let users manage their own safety, like a personal boundary line that keeps other users from getting too close.
