There is a new hype: ChatGPT. First launched on 30 November 2022, this publicly available natural language processing tool, built on the GPT-3.5 model, immediately caught the attention of the media and the public. It lets users have high-quality, human-like conversations with a chatbot, which I must admit is pretty cool. Until then, conversations with chatbots were invariably cumbersome, frustrating and stupid.
Natural Language Processing consists of two main elements: the language and the content. ChatGPT is impressively good when it comes to the language. The algorithm has been trained to predict which words are most likely to come next and in what order. The tool's language and grammar could indeed be mistaken for a real person's. That in itself is a major technological achievement.
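To make that idea concrete, here is a deliberately simplistic sketch of next-word prediction in Python. ChatGPT of course uses a large neural network trained on vast text collections, not frequency counts over a single sentence, but the underlying objective is the same: given the words so far, predict the most likely next word.

```python
from collections import Counter, defaultdict

# Toy training text (my own example, purely for illustration).
corpus = "the court ruled that the contract was void because the contract lacked consideration"

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after 'word' in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))       # -> 'contract' ('contract' follows 'the' twice, 'court' once)
print(predict_next("contract"))  # -> 'was' ('was' and 'lacked' tie; first seen wins)
```

Getting this prediction machinery right, at enormous scale, is the part ChatGPT has genuinely mastered.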
The second element is the content. This is where things get a bit obscure. Even though ChatGPT responds like a human, unlike a real person it does not have any knowledge or understanding. ChatGPT was trained on text scraped from the internet, and that spells trouble. It runs a high risk of producing bullshit in a very convincing and coherent manner. ChatGPT will always sound plausible, but it could be totally wrong. So while the Natural Language Processing part is groundbreaking, the content part is far from perfect.
Training AI is not straightforward
Algorithms need to be trained with vast amounts of data, and the quality of these datasets has a huge impact on the end product. It is tempting to use the internet as a source, but that is invariably a bad idea. In December 2022 Melissa Heikkilä published an article in MIT Technology Review. She tried the viral AI avatar app Lensa to make an avatar of herself. For her male colleagues at MIT Technology Review, the app generated realistic and flattering avatars: think astronauts, warriors, and electronic music album covers.
For Melissa, however, being a woman, Lensa created tons of nudes. Out of 100 avatars she generated, 16 were topless, and another 14 showed her in extremely skimpy clothes and overtly sexualized poses. Only after she told the algorithm she was male did Lensa produce flattering avatars like those it had made for her colleagues.
This is a great example of how the data used to train an algorithm affects the results later on. Lensa relies on a free-to-use machine learning model called Stable Diffusion, which was trained on billions of image-and-text combinations scraped from the internet. One need only look at Instagram to see that women often post ‘sexy’ images of themselves. The internet is not a good source to train AI on. This is as true for Lensa as it is for ChatGPT (or any other tool).
ChatGPT in the legal industry
Lawyers use and process language a lot. Perhaps with the exception of writers, journalists and academics, no profession has language at its core like the legal profession does. With that in mind, the question arises whether ChatGPT will disrupt the legal industry. Personally, I think it will not. Let me explain:
1. Where will the content come from?
Obviously, lawyers at a law firm will not use Google as a source, so in order to use ChatGPT or a derivative, law firms need to feed it with their own datasets. These datasets need to be clean, meaning that every bit of data must be vetted beforehand. The data must also be kept up to date on a permanent basis. For many law firms this will be a hurdle they cannot clear.
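What might ‘clean’ mean in practice? Below is a hypothetical sketch of the kind of vetting gate a firm would need before a document enters its dataset. The field names and rules are my own assumptions for illustration, not any actual product's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FirmDocument:
    text: str
    reviewed_by_lawyer: bool        # has a qualified lawyer vetted this document?
    last_reviewed: date             # when was it last checked against current law?
    contains_client_confidential: bool

MAX_AGE = timedelta(days=365)       # assumed policy: re-vet at least yearly

def admit_to_training_set(doc: FirmDocument, today: date) -> bool:
    """Only vetted, current, non-confidential documents may enter the dataset."""
    if not doc.reviewed_by_lawyer:
        return False
    if today - doc.last_reviewed > MAX_AGE:
        return False                # stale law is worse than no law
    if doc.contains_client_confidential:
        return False
    return True

doc = FirmDocument("Precedent indemnity clause ...", True, date(2022, 6, 1), False)
print(admit_to_training_set(doc, date(2023, 1, 15)))  # True: vetted, recent, not confidential
```

Even this trivial gate implies ongoing lawyer time for review and re-review, which is precisely the hidden cost many firms underestimate.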
2. Will there be a return on the investment?
Purchasing the AI tool and preparing the dataset will require a substantial investment. Licensing fees and maintenance will add further to the cost. Such an investment only makes sense if it helps the law firm make more money. The question is: will it?
In this context I would like to point you towards an analysis by Lexoo which was recently shared by its CEO Daniel van Binsbergen. Lexoo analyzed how AI and machine learning could speed up contract review. It started with the assumption that such tools could save 50% of a lawyer’s time. In reality it turned out to be 5%! So AI and machine learning would have very little real-world impact. Side note: this is why in-house teams that have adopted machine learning have not magically freed up 50% of their time…
Even in the unlikely event that employing ChatGPT would lead to substantial time savings for lawyers, this still wouldn’t mean there is a business model. If lawyers spend less billable time, this needs to be compensated for by higher rates or a larger volume of work; otherwise it will be a loss-making operation. The third option would be to employ fewer lawyers, but that assumes the ‘idle time’ could be pinpointed to one or more individuals, which will not be the case.
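A back-of-the-envelope calculation makes the point. The billable hours and hourly rate below are purely illustrative assumptions on my part; only the 5% saving comes from the Lexoo analysis cited above.

```python
# Illustrative figures (my assumptions, not Lexoo's data):
billable_hours = 1500      # hours billed per lawyer per year
hourly_rate = 300          # in EUR
time_saved = 0.05          # the ~5% real-world saving Lexoo observed

revenue_before = billable_hours * hourly_rate
revenue_after = billable_hours * (1 - time_saved) * hourly_rate
shortfall = revenue_before - revenue_after
print(f"Revenue before: {revenue_before:,.0f} EUR")        # 450,000 EUR
print(f"Revenue after:  {revenue_after:,.0f} EUR")         # 427,500 EUR
print(f"Shortfall per lawyer: {shortfall:,.0f} EUR")       # 22,500 EUR

# To break even on revenue, the rate (or work volume) must rise by the same factor:
required_rate = hourly_rate / (1 - time_saved)
print(f"Required hourly rate: {required_rate:.2f} EUR")    # ~315.79 EUR
```

Even a modest 5% saving leaves a per-lawyer shortfall that the firm must recover through higher rates or more work, which is exactly why ‘faster’ does not automatically mean ‘more profitable’.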
3. The Technology Paradox
Once technology is introduced to simplify our lives, we humans accumulate less experience. Today we have to drive our own cars. In the future there may be fully autonomous cars that do the driving for us. This will make us less experienced as drivers, so when something goes wrong, it will go dramatically wrong, because the human behind the wheel lacks the experience to intervene.
The legal profession runs the same risk. The present generation of lawyers already has the experience and will probably save little or no time using a ChatGPT-based tool. The future generation of lawyers using such a tool will have less experience and will likely struggle to assess the results and identify the issues, omissions or mistakes the tool produces.
Parting Shot
While lawyers and the legal industry are well advised to remain open to change and to new (technological) developments – maybe even more so than they are today – technology by itself is highly unlikely to fundamentally disrupt the legal industry. Its impact will always be marginal.
The legal profession is first and foremost a human profession. Law firms are well advised to prioritize training and development opportunities for lawyers and partners over investment in technology. We do not need faster lawyers; we need ‘better humans’.