AI may not steal your job, but it could change it.


(This article is from The Technocrat, MIT Technology Review’s weekly technology policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.)

Advances in artificial intelligence tend to be followed by anxieties about jobs. This latest wave of AI models, such as ChatGPT and OpenAI’s new GPT-4, is no different. First came the launch of the systems. Now we’re seeing the predictions of automation.

In a report released this week, Goldman Sachs predicted that AI advances could cause 300 million jobs, roughly 18% of the global workforce, to be automated in some way. OpenAI also recently released its own study with the University of Pennsylvania, which claimed that ChatGPT could affect more than 80% of jobs in the US.

The numbers sound scary, but the wording of these reports can be frustratingly vague. “Affect” can mean a whole range of things, and the details are murky.

People whose jobs deal with language could be particularly affected by large language models such as ChatGPT and GPT-4. Let’s take one example: lawyers. I’ve spent the past couple of weeks looking at the legal industry and how it is likely to be affected by the new AI models, and what I found is as much cause for optimism as for alarm.

The archaic, slow-moving legal industry has been a candidate for technological disruption for some time. In an industry with a labor shortage and a need to deal with reams of complex documents, a technology that can quickly understand and summarize text could be immensely useful. So how should we think about the impact these AI models might have on the legal industry?

First, recent AI advances are particularly well suited to legal work. GPT-4 recently passed the Uniform Bar Exam, the standard test required to license lawyers. However, that doesn’t mean AI is ready to be a lawyer.

The model could have been trained on thousands of practice tests, which would make it an impressive test-taker but not necessarily a great lawyer. (We don’t know much about GPT-4’s training data because OpenAI hasn’t released that information.)

Still, the system is very good at parsing text, which is of the utmost importance for lawyers.

“Language is the coin of the realm in the legal industry and in the field of law. Every road leads to a document. Either you have to read, consume, or produce a document … that’s really the currency that folks trade in,” says Daniel Katz, a law professor at Chicago-Kent College of Law who conducted GPT-4’s exam.

Second, legal work has lots of rote tasks that could be automated, such as searching for applicable laws and cases and pulling relevant evidence, according to Katz.

Pablo Arredondo, one of the researchers on the bar exam paper, has been secretly working with OpenAI since this fall to use GPT-4 in its legal product, Casetext. Casetext uses AI to do “document review, legal research memos, deposition preparation and contract analysis,” according to its website.

Arredondo says he has grown more and more enthusiastic about GPT-4’s potential to assist lawyers as he has used it. He says the technology is “incredible” and “nuanced.”

But AI in law isn’t a new trend. It has already been used to review contracts and predict legal outcomes, and researchers have recently explored how AI could help get laws passed. Recently, the consumer rights company DoNotPay considered arguing a case in court using an argument written by AI, the so-called “robot lawyer,” delivered through an earpiece. (DoNotPay did not go through with the stunt and is being sued for practicing law without a license.)

Despite these examples, such technologies still haven’t achieved widespread adoption in law firms. Could that change with these new large language models?

Third, lawyers are accustomed to reviewing and editing work.

Large language models are far from perfect, and their output would have to be closely checked, which is burdensome. But lawyers are very used to reviewing documents produced by humans or by machines. Many are trained in document review, meaning that the use of more AI, with a human in the loop, could be relatively easy and practical compared with adopting the technology in other industries.

The big question is whether lawyers can be convinced to trust a system rather than a junior attorney who spent three years in law school.

Finally, there are limitations and risks. GPT-4 sometimes produces very convincing but incorrect text, and it will misuse source material. At one point, Arredondo says, GPT-4 had him doubting the facts of a case he had argued himself. “I told it, You’re wrong. I argued this case. And the AI said, You can sit there and brag about the cases you worked on, Pablo, but I’m right and here’s proof. And then it gave a URL to nothing.” Arredondo adds, “It’s a little sociopath.”

Katz says it’s essential that humans stay in the loop when using AI systems, and he highlights lawyers’ professional obligation to be accurate: “You should not just take the outputs of these systems, not review them, and then give them to people.”

Others are more skeptical. “This is not a tool that I would trust to make sure important legal analysis is up to date and appropriate,” says Ben Winters, who leads the Electronic Privacy Information Center’s projects on AI and human rights. Winters characterizes the culture of generative AI in the legal field as “overconfident and unaccountable.” It has also been well documented that AI is plagued by racial and gender bias.

There are also long-term, high-level considerations. If lawyers get less practice doing legal research, what does that mean for expertise and oversight in the field?

But we are a while away from that, for now.

This week, my colleague David Rotman, Tech Review’s editor at large, wrote a piece analyzing the new AI era’s impact on the economy, specifically on jobs and productivity.

“The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.”

What I’m reading this week

Some big names, including Elon Musk, Gary Marcus, Andrew Yang, Steve Wozniak, and more than 1,500 others, have signed a letter sponsored by the Future of Life Institute calling for a moratorium on big AI projects. Quite a few AI experts agree with the proposition, but the reasoning (avoiding AI armageddon) has come in for plenty of criticism.

The New York Times announced that it will not pay for Twitter verification. It’s yet another blow to Elon Musk’s plan to make Twitter profitable by charging for blue checks.

On March 31, Italian regulators temporarily banned ChatGPT over privacy concerns. Specifically, the regulators are investigating whether the way OpenAI trained the model on user data violates GDPR.

Lately I’ve been drawn to some long culture reads. Here’s a sampling of my recent favorites:

  • My colleague Tanya Basu wrote a great story about people who platonically sleep together in VR. It’s part of a new virtual social scene that she calls “cozy but creepy.”
  • Steven Johnson of the New York Times came out with a beautiful, if shocking, profile of Thomas Midgley Jr., the inventor of two of the most climate-damaging inventions in history.
  • And Wired’s Jason Kehe interviews the most popular sci-fi author you’ve never heard of in this sharp and insightful look into the mind of Brandon Sanderson.

What I learned this week

“News snacking,” or skimming online headlines and teasers, appears to be a poor way to learn about current events and political news. A peer-reviewed study by researchers at the University of Amsterdam and the Macromedia University of Applied Sciences in Germany found that “users who consume more ‘snacks’ than others gain little from their high levels of exposure” and that “snacking” results in “significantly less learning” than more dedicated news consumption. That means the way people consume information is more important than the amount of information they see. The study gives further credence to earlier research showing that while the number of encounters people have with news each day is increasing, the time they spend on each encounter is decreasing. Turns out … that’s not great for an informed public.


