(This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.)
Advances in artificial intelligence tend to be followed by anxieties around jobs. This latest wave of AI models, like ChatGPT and OpenAI's new GPT-4, is no different. First we had the launch of the systems. Now we're seeing the predictions of automation.
In a report released this week, Goldman Sachs predicted that AI advances could cause 300 million jobs, representing roughly 18% of the global workforce, to be automated in some way. OpenAI also recently released its own study with the University of Pennsylvania, which claimed that ChatGPT could affect over 80% of the jobs in the US.
The numbers sound scary, but the wording of these reports can be frustratingly vague. "Affect" can mean a whole range of things, and the details are murky.
People whose jobs deal with language could, unsurprisingly, be particularly affected by large language models like ChatGPT and GPT-4. Let's take one example: lawyers. I've spent time over the past two weeks looking at the legal industry and how it's likely to be affected by new AI models, and what I found is as much cause for optimism as for concern.
The antiquated, slow-moving legal industry has been a candidate for technological disruption for some time. In an industry with a labor shortage and a need to deal with reams of complex documents, a technology that can quickly understand and summarize texts could be immensely useful. So how should we think about the impact these AI models might have on the legal industry?
First off, recent AI advances are particularly well suited for legal work. GPT-4 recently passed the Uniform Bar Exam, which is the standard test required to license lawyers. However, that doesn't mean AI is ready to be a lawyer.
The model could have been trained on thousands of practice tests, which would make it an impressive test taker but not necessarily a great lawyer. (We don't know much about GPT-4's training data because OpenAI hasn't released that information.)
Still, the system is very good at parsing text, which is of the utmost importance for lawyers.
"Language is the coin in the realm of the legal industry and in the field of law. Every road leads to a document. Either you have to read, consume, or produce a document … that's really the currency that folks trade in," says Daniel Katz, a law professor at Chicago-Kent College of Law who conducted GPT-4's exam.
Second, legal work involves lots of repetitive tasks that could be automated, such as searching for applicable laws and cases and pulling relevant evidence, according to Katz.
One of the researchers on the bar exam paper, Pablo Arredondo, has been secretly working with OpenAI since this fall to use GPT-4 in its legal product, Casetext. Casetext uses AI to conduct "document review, legal research memos, deposition preparation and contract analysis," according to its website.
Arredondo says he's grown more and more enthusiastic about GPT-4's potential to assist lawyers as he's used it. He says that the technology is "incredible" and "nuanced."
AI in law isn't a new trend, though. It has already been used to review contracts and predict legal outcomes, and researchers have recently explored how AI might help get laws passed. Recently, the consumer rights company DoNotPay considered arguing a case in court using an argument written by its AI, known as the "robot lawyer," and delivered through an earpiece. (DoNotPay did not go through with the stunt and is being sued for practicing law without a license.)
Despite these examples, these kinds of technologies still haven't achieved widespread adoption in law firms. Could that change with these new large language models?
Third, lawyers are used to reviewing and editing work.
Large language models are far from perfect, and their output would have to be closely checked, which is burdensome. But lawyers are very used to reviewing documents produced by someone (or something) else. Many are trained in document review, meaning that the use of more AI, with a human in the loop, could be relatively easy and practical compared with adoption of the technology in other industries.
The big question is whether lawyers can be convinced to trust a system rather than a junior attorney who spent three years in law school.
Finally, there are limitations and risks. GPT-4 sometimes makes up very convincing but incorrect text, and it will misuse source material. One time, Arredondo says, GPT-4 had him doubting the facts of a case he had worked on himself. "I said to it, You're wrong. I argued this case. And the AI said, You can sit there and brag about the cases you worked on, Pablo, but I'm right and here's proof. And then it gave a URL to nothing." Arredondo adds, "It's a little sociopath."
Katz says it's essential that humans stay in the loop when using AI systems, and he highlights lawyers' professional obligation to be accurate: "You should not just take the outputs of these systems, not review them, and then give them to people."
Others are even more skeptical. "This is not a tool I would trust with making sure important legal analysis was updated and appropriate," says Ben Winters, who leads the Electronic Privacy Information Center's projects on AI and human rights. Winters characterizes the culture of generative AI in the legal field as "overconfident, and unaccountable." It's also been well documented that AI is plagued by racial and gender bias.
There are also the long-term, high-level considerations. If attorneys have less practice doing legal research, what does that mean for expertise and oversight in the field?
But for now, we are still a while away from that.
This week, my colleague David Rotman, Tech Review's editor at large, wrote a piece analyzing the new AI age's impact on the economy, in particular on jobs and productivity.
"The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth."
What I am reading this week
Some bigwigs, including Elon Musk, Gary Marcus, Andrew Yang, Steve Wozniak, and over 1,500 others, signed a letter sponsored by the Future of Life Institute that called for a moratorium on big AI projects. Quite a few AI experts agree with the proposition, but the reasoning (avoiding AI Armageddon) has come in for plenty of criticism.
The New York Times has announced it won't pay for Twitter verification. It's yet another blow to Elon Musk's plan to make Twitter profitable by charging for blue ticks.
On March 31, Italian regulators temporarily banned ChatGPT over privacy concerns. Specifically, the regulators are investigating whether the way OpenAI trained the model with user data violated GDPR.
I've been drawn to some longer culture stories as of late. Here's a sampling of my recent favorites:
- My colleague Tanya Basu wrote a great story about people sleeping together, platonically, in VR. It's part of a new age of virtual social behavior that she calls "cozy but creepy."
- In the New York Times, Steven Johnson came out with a lovely, albeit haunting, profile of Thomas Midgley Jr., who created two of the most climate-damaging inventions in history.
- And Wired's Jason Kehe spent months interviewing the most popular sci-fi author you've probably never heard of in this sharp and deep look into the mind of Brandon Sanderson.
What I learned this week
"News snacking" (skimming online headlines or teasers) appears to be quite a poor way to learn about current events and political news. A peer-reviewed study conducted by researchers at the University of Amsterdam and the Macromedia University of Applied Sciences in Germany found that "users that 'snack' news more than others gain little from their high levels of exposure" and that "snacking" results in "significantly less learning" than more dedicated news consumption. That means the way people consume information is more important than the amount of information they see. The study builds on earlier research showing that while the number of "encounters" people have with news each day is increasing, the amount of time they spend on each encounter is decreasing. Turns out … that's not great for an informed public.