Understanding the Hype Around Artificial Intelligence
The buzzword in technology today is “Artificial Intelligence.” For some, it’s a beacon of hope, promising a future where information is more accessible, mundane tasks are replaced, and we see improvements in health, wealth, and creativity. Yet, others see it as a looming threat, so much so that tech giants, including those at the forefront of AI technology, have signed an open letter stating that curbing the existential risks posed by AI should be a global priority, alongside other significant risks like pandemics and nuclear war.
Perhaps the real question is not whether AI could lead to our extinction, but how it already impacts our day-to-day lives, particularly in the workplace. A federal lawsuit in New York City exemplifies the potential pitfalls of substituting AI for human judgment.
A Not-So-Special Civil Lawsuit: Mata v. Avianca Airlines
The civil lawsuit of Mata v. Avianca Airlines is not especially unusual – it’s one of the thousands of cases that course through federal courts each year. In this case, Roberto Mata, a passenger on one of Avianca’s flights from New York City to El Salvador in 2019, sued the airline for “severe personal injuries” he sustained when a flight attendant allegedly hit his left knee with a serving cart.
Avianca sought to dismiss the case on the grounds that the statute of limitations had expired. In response, Mata’s lawyers argued that the Montreal Convention, an international treaty on air travel, permitted Mata to file the case in state court. Additionally, they maintained that the statute of limitations was paused or “tolled” during Avianca’s bankruptcy proceedings. They cited around a dozen similar rulings across the U.S. to support their claims.
However, Avianca’s lawyers found many of these cases untraceable, and those they could locate did not support the cited propositions. A significant period of inactivity followed, which ended when the court ordered Mata’s lawyer, Peter LoDuca, to provide copies of the cited rulings or risk automatic dismissal of the case. LoDuca complied, but some rulings remained untraceable.
Avianca subsequently expressed doubts about the authenticity of many of these cases and, after an exhaustive search in several databases, including the federal court’s electronic docket system (PACER) and legal research databases like Westlaw, found no evidence of their existence.
A Courtroom Drama Unveiled: AI vs. Human Judgment
The court lost patience and ordered LoDuca to defend himself against possible sanctions for the “unprecedented circumstance.” LoDuca admitted that the cases were sourced from ChatGPT, the generative AI program developed by OpenAI. Steven Schwartz, another lawyer at LoDuca’s firm, had been consulting ChatGPT for legal research. Schwartz asserted that the AI assured him of the authenticity of the cases, and he was unaware of its potential to produce false content.
This situation brings to light a critical problem with generative AI: it does not truly “know” anything. Tools like ChatGPT operate by finding patterns in vast amounts of pre-existing text and producing plausible-sounding answers; they cannot think, reason, imagine, or learn. They can only make sophisticated guesses.
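To build intuition for “guessing from patterns,” here is a deliberately simplified sketch – a toy bigram model, nothing like ChatGPT’s actual architecture. It records only which word tends to follow which in its training text, then strings together statistically likely continuations with no understanding of whether the result is true:

```python
import random
from collections import defaultdict

# Tiny "training corpus" (a hypothetical example, chosen for illustration)
corpus = "the court cited the case and the court dismissed the case".split()

# Learn which words follow which: pure pattern statistics, no knowledge
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Emit plausible-looking text by repeatedly guessing the next word."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # a statistical guess, not a fact
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is fluent-sounding recombination of the training text; whether any generated “sentence” is accurate is entirely accidental. Scaled up enormously, the same dynamic explains how a model can produce convincing but nonexistent case citations.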
Misconceptions and Limitations of Generative AI
The question of AI taking over human jobs would be more pressing if generative AI could truly live up to the hype. However, there are substantial doubts. While researching this piece, professionals attempted to replicate Schwartz’s interaction with ChatGPT. Although they couldn’t perfectly recreate his prompts, they were struck by what happened when they entered “cases on tolling Montreal convention”: the AI tool gave them a significant disclaimer about its limitations in legal research.
ChatGPT correctly pointed out, “As an A.I. language model, I don’t have real-time access to current case law or the ability to browse legal databases.” It went on to suggest that consulting a legal professional or conducting research using up-to-date legal resources would be the best way to obtain specific and accurate information on recent cases.
AI and the Future of the Legal Profession
While AI has proven to be a powerful tool in many respects, this case study shows that it has significant limitations regarding legal research and application. AI cannot understand the nuanced complexities of the law or the depth of human experience. Thus, it cannot replace the judgment and expertise of a human lawyer.
Despite the hype around AI, it’s crucial to understand that the technology is a tool to assist, not replace, human intelligence and expertise. The idea of AI replacing a wide variety of white-collar jobs is premature and, as of now, unrealistic.
At Alvarez Technology Group, we believe in leveraging the best of both worlds – technology’s speed, precision, and efficiency with professionals’ experience, judgment, and human touch. We urge all to see AI as a powerful complement to human intelligence, not a replacement. By understanding its capabilities and limitations, we can use AI to enhance our productivity and innovation while maintaining the essential human elements that make our work truly valuable.