This article is from The Technocrat, MIT Technology Review’s weekly technology policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
If you use Google, Instagram, Wikipedia, or YouTube, you’ll start seeing changes in content moderation, transparency, and security features over the next six months.
Why? It’s the result of some major tech legislation that passed in the EU last year but hasn’t gotten enough attention (IMO), especially in the US. I’m referring to a pair of laws called the Digital Services Act (DSA) and the Digital Markets Act (DMA), and, as they say, this is your cue to get familiar.
The acts are revolutionary, setting the global gold standard for tech regulation of user-generated content. The DSA addresses digital safety and transparency from tech companies, while the DMA deals with antitrust and competition in the industry. Let me explain.
A few weeks ago, the DSA hit a major milestone: by February 17, 2023, all major tech platforms in Europe were required to self-report their size, which will be used to group the companies into different tiers. The largest companies, those with more than 45 million monthly active users in the EU (roughly 10% of the EU’s population), are creatively dubbed “Very Large Online Platforms” (VLOPs) or “Very Large Online Search Engines” (VLOSEs) and will be held to the strictest standards of transparency and oversight. Smaller online platforms face far fewer obligations, part of a policy designed to encourage competition and innovation while still holding Big Tech to account.
“If you ask [small companies] to hire 30,000 moderators, for example, you will kill the small companies,” Henri Verdier, France’s ambassador for digital affairs, told me last year.
So what does the DSA actually do? To date, at least 18 companies have declared that they qualify as VLOPs or VLOSEs, including prominent players such as YouTube, TikTok, Instagram, Pinterest, Google, and Snapchat. (If you want a full list, London School of Economics law professor Martin Husovec has a great Google doc that lays out all the major players, along with an accompanying explainer he has written.)
The DSA requires these companies to assess risks on their platforms, such as illegal content or election manipulation, and to make plans for mitigating those risks, with independent audits to verify safety. Smaller companies (those with fewer than 45 million users) will also have to meet new content moderation requirements, which include removing illegal content “promptly” once it is flagged, notifying users of that removal, and stepping up enforcement of existing company policies.
Supporters of the legislation say it will help end the era of tech companies’ self-regulation. “I don’t want the companies to decide what’s prohibited and what’s not with no separation of powers, no accountability, no reporting, no possibility to contest,” Verdier says. “It’s very dangerous.”
That said, the law makes clear that platforms are not liable for illegal user-generated content unless they are aware of the content and fail to remove it.
Perhaps most important, the DSA requires companies to significantly increase transparency, through reporting obligations for “terms of service” notices and regular, audited reports on their content moderation. Regulators hope this will have a broad impact on public conversations about the societal risks of big tech platforms, such as hate speech, misinformation, and violence.
What will you notice? You’ll be able to take part in the content moderation decisions that companies make and formally contest them. The DSA effectively outlaws shadow banning (the practice of deprioritizing content without notice), curbs cyber violence against women, and bans targeted advertising for users under 18. There will also be a lot more public data about how recommendation algorithms, advertising, content, and account management work on these platforms, shedding new light on how the biggest tech companies operate. Historically, tech companies have been very reluctant to share platform data with the public or even with academic researchers.
What’s next? Now the European Commission (EC) is reviewing the reported user numbers, and it has time to object to them or request more information from the tech companies. One notable issue is the exclusion of porn sites from the “very large” category, which Husovec called “shocking.” He told me he thinks their reported user counts should be challenged by the EC.
Once the size groupings are confirmed, the largest companies will have until September 1, 2023, to comply with the rules, while smaller companies will have until February 17, 2024. Many experts expect that companies will roll out at least some of these changes to all users, not just those living in the European Union. With Section 230 reform looking unlikely in the US, many US users will effectively benefit from a safer internet mandated from abroad.
What else I’m reading
More chaos, and layoffs, at Twitter.
- Elon made big news again after laying off another 200 people, or 10% of Twitter’s remaining workforce, over the weekend. These workers were presumably part of the “hardcore” group who had agreed to abide by Musk’s harsh working conditions.
- NetBlocks has reported four major site outages since early February.
Everyone is trying to make sense of the generative-AI hoopla.
- The FTC issued a statement warning companies against lying about their AIs’ capabilities. I’d recommend reading this helpful piece from my colleague Melissa Heikkilä about how to use generative AI responsibly, as well as this Tech Policy Press explainer of 10 legal and business risks.
- The technology’s dangers are already making news: this reporter broke into his own bank account using an AI-generated voice.
2022 saw more internet shutdowns than ever before, continuing a trend of authoritarian censorship.
- This week, Access Now published its annual report tracking shutdowns around the world. India once again topped the list with the most shutdowns.
- Last year, I spoke with Dan Keyserling, who worked on the 2021 report, to learn more about how shutdowns are orchestrated. During our interview, he told me, “Internet shutdowns are increasing. More governments are restricting internet access as a tool to influence citizens’ behavior. The costs of internet shutdowns are increasing, both because governments are becoming more sophisticated about how they go about them and because we live more and more of our lives online.”
What I learned this week
A new report from the Duke Cyber Policy Program finds that data brokers are selling mental health information online. The researcher asked 37 data brokers for mental health data, and 11 responded willingly. The report details how those data brokers offered to sell information on people’s depression, ADHD, and insomnia with minimal restrictions. Some of the data was linked to people’s names and addresses.
“There are a variety of companies that are not covered by the narrow health privacy regulations we have,” project lead Justin Sherman said in an interview with PBS. “They are completely legally free to collect and even sell this kind of health data, which enables a whole range of companies that couldn’t normally get at it (advertising firms, Big Pharma, even health insurance companies) to buy this data and do things like run ads, profile consumers, and make determinations about health plan pricing. And the data brokers enable these companies to get around health regulations.”
On March 3, the FTC announced an order barring online mental health company BetterHelp from sharing people’s information with other companies.