She wanted to know if I had any suggestions and asked me what I thought all the new developments meant for legislators. I’ve been thinking about this for a few days, reading and talking to experts, and my answer turned into this newsletter. So here goes!
Although GPT-4 is the standard-bearer, it’s only one of several high-profile generative AI releases in the past few months: Google, Nvidia, Adobe, and Baidu have all announced their own projects. In short, generative AI is what everyone is talking about. And although the technology is not new, its policy implications are months, if not years, from being understood.
GPT-4, released by OpenAI last week, is a multimodal large language model that uses deep learning to predict words in a sentence. It generates remarkably fluent text, and it can respond to image-based prompts as well as text-based ones. GPT-4 now powers ChatGPT for paying customers, and it’s already being built into commercial applications.
The new iteration made a big splash: Bill Gates called it “revolutionary” in a letter this week. But OpenAI has also drawn criticism for a lack of transparency about how the model was trained and evaluated for bias.
Despite all the excitement, generative AI comes with serious risks. The models are trained on the toxic repository that is the internet, which means they often produce racist and sexist output. They also regularly make things up and state it with convincing confidence. That’s a nightmare from a misinformation standpoint, and it could make scams more persuasive and effective.
Generative AI tools are also a potential threat to people’s security and privacy, and they show little regard for copyright law. Companies that make generative AI have already been sued for stealing other people’s work.
Alex Engler, a research fellow in governance studies at the Brookings Institution, has considered how policymakers should think about this and sees two main categories of risk: harms from malicious use and harms from commercial use. Malicious uses of the technology, such as disinformation, automated hate speech, and fraud, “have a lot in common with content moderation,” Engler told me in an email, “and the best way to mitigate these risks may be through platform governance.” (If you want to learn more about this, I recommend listening to this week’s Sunday Show from Tech Policy Press, where Justin Hendrix, an editor and a lecturer on technology, media, and democracy, talks with a panel of experts about whether generative AI systems should be regulated in the same way as search and recommendation algorithms. Hint: Section 230.)
Policy discussions about generative AI have so far focused on that second category: concerns about commercial uses of the technology, such as coding or advertising. So far, the US government has taken small but notable steps, primarily through the Federal Trade Commission (FTC). The FTC issued a statement last month warning companies not to overstate what their AI can do. This week, on its Business Blog, the agency used stronger language about the risks companies should consider when using generative AI.