AI Explained: Navigating AI in Arbitration – The SVAMC Guideline Effect

Rebeca E. Mosquera and Benjamin Malek | Reed Smith

Arbitrators and counsel can use artificial intelligence to improve service quality and lessen their workload, but they must also grapple with its ethical and professional implications. In this episode, Rebeca Mosquera, a Reed Smith associate and president of ArbitralWomen, interviews Benjamin Malek, a partner at T.H.E. Chambers and former chair of the Silicon Valley Arbitration and Mediation Center (SVAMC) AI Task Force. They share insights and experiences on the current and future applications of AI in arbitration, the potential risks around bias and transparency, and best practices and guidelines for the responsible integration of AI into dispute resolution. The duo discusses how AI is reshaping arbitration and what that means for arbitrators, counsel and parties.

Transcript:

Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith’s Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. 

Rebeca: Welcome to Tech Law Talks and our series on AI. My name is Rebeca Mosquera. I am an attorney with Reed Smith in New York focusing on international arbitration. Today, we focus on AI in arbitration: how artificial intelligence is reshaping dispute resolution and the legal profession. Joining me is Benjamin Malek, a partner at T.H.E. Chambers and chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. Ben has extensive experience in commercial and investor-state arbitration and is at the forefront of AI governance in arbitration. He has worked at leading institutions and law firms, advising on the responsible integration of AI into dispute resolution. He’s also founder and CEO of LexArb, an AI-driven case management software. Ben, welcome to Tech Law Talks. 

Benjamin: Thank you, Rebeca, for having me. 

Rebeca: Well, let’s dive into our questions today. So artificial intelligence is often misunderstood, or, to put it another way, there are a lot of misconceptions surrounding AI. How would you define AI in arbitration? And why is it important to look beyond just generative AI? 

Benjamin: Yes, thank you so much for having me. AI in arbitration has existed for many years now, but it hasn’t been until the rise of generative AI that big question marks have started to arise. And that is mainly because generative AI creates or generates output, whereas up until now, AI produced relatively mild output. I’ll give you one example. Looking for an email in your inbox requires a certain amount of AI. Your spellcheck in Word has AI, and it has been used for many years without raising any eyebrows. It wasn’t until ChatGPT put an AI tool in the hands of the masses that questions started arising. What can it do? Will attorneys still be held accountable? Will AI start drafting for them? What will happen? And it’s that fear that started generating all this talk about AI. Now, to your question on looking beyond generative AI, I think that is a very important point. In my function as the chair of the SVAMC AI Task Force, while we were drafting the guidelines on the use of AI, one of the proposals was to call them the guidelines on the use of generative AI in arbitration. And I’m very happy that we stood firm and said no, because there are many forms of AI that will arise over the years. Now we’re talking about predictive AI, but there are many other forms, such as NLP, automation, and more. And we use AI not only in generating text per se, but in legal research and, to a certain extent, in case prediction. Whoever has used LexisNexis is using a new tool now where AI is leveraged to predict certain outcomes, as well as in document automation, procedural management, and more. So understanding AI as a whole is crucial for responsible adoption. 

Rebeca: That’s interesting. So you’re saying, obviously, that AI in arbitration is more than just ChatGPT, right? I think the reason people rely on ChatGPT, as we’ll see in some of the questions I have for you, is that it sounds natural. It sounds like another person texting you, providing you with a lot of information. And I can understand, I can see, why people might believe that it produces the correct outcome. And you’ve given examples of how AI is already being used in ways people might not realize. All of that is very interesting. Now, tell me, as chair of the SVAMC AI Task Force, you’ve led significant initiatives in AI governance, right? What motivated the creation of the SVAMC AI guidelines? And what are their key objectives? Before you dive into that, though, I want to take a moment to congratulate you and the rest of the task force on being nominated once again for the GAR Awards, which will be unveiled during Paris Arbitration Week in April of this year. That’s an incredible achievement. And I really hope you’ll take pride in the impact of your work and the well-deserved recognition it continues to receive. So good luck to you and the rest of the team. 

Benjamin: Thank you, Rebeca. Thank you so much. It really means a lot, and it also reinforces the importance of our work, seeing that we’re nominated for the GAR Award not just once, but for the second year in a row. I will be blunt, I haven’t kept track of many nominations, but I think it may be one of the first times that one initiative gets nominated twice, one year after the other. So that in itself is something we pride ourselves on. And it may potentially even mean more than the award itself. It really is a testament to the work we have provided. So what led to the creation of the SVAMC AI guidelines? It’s a very straightforward and, to a certain extent, a little boring answer by now, because we’ve heard it so many times. But the crux was Mata v. Avianca. I’m not going to dive into the case. I think most of us have heard of it. Who hasn’t? There are many sources to learn about it. The short version is that in a court case, an attorney used ChatGPT and submitted the output without verifying it, and it caused a lot of backlash, not only from the opposing party but also from the judge, who chastised him. Now, when I saw that case, and I saw the outcome, and I saw that there were several tangential cases throughout the U.S. and worldwide, I realized that it was only a question of time until something like this could happen in arbitration. So I got on a call with my dear friend Gary Benton at the SVAMC, and I told him that I really thought this was the moment for the Silicon Valley Arbitration and Mediation Center, an institution that is heavily invested in tech, to shine. So I took it upon myself to say, give me 12 months and I’ll come up with guidelines. Up until then at the SVAMC, there were a lot of think tank-like groups discussing many interesting subjects. But the scope here, especially AI-related, was to produce something tangible. So the guidelines to me were intuitive. I will be honest, I don’t think I was the only one with the idea. I might have just been the first mover, but there we were. We created the idea. It was vetted by the board. And we came up first with the task force, then with the guidelines. And there’s a lot more to come. I’ll leave it there. 

Rebeca: Well, that’s very interesting. And I just wanted to draw from something you mentioned, the Mata case. You explained a bit about what happened there. And I think that was, what, 2023? Is that right? 2022, 2023? And just recently we had another one, in federal court in Wyoming. I think the order came out from the judge about two days ago, and the attorneys involved were fined about $15,000 because of hallucinations in the case law they cited to the court. So, you know, I see that happening anyway. And this is a major U.S. law firm we’re talking about. So it’s interesting how we still don’t learn, I guess. That would be my take on that. 

Benjamin: I mean, I will say this. Learning is a relative term, because to learn, you also need to fail. You need to make mistakes to learn. I guess the crux and the difference is that up until now, no law firm or anyone working in law would ever entrust a first-year associate, a summer associate, or a paralegal to draft arguments or certain parts of a pleading by themselves without supervision. However, now, because AI sounds sophisticated, because it has unlimited access to words and dictionaries, people assume that it is right. And that is where the problem starts. So, personally, I am obviously no one to judge a case, no one to say what to do. And in my capacity as chair of the SVAMC AI Task Force, we also take a backseat and say these are soft law guidelines. However, submitting documents with information that has not been verified has, in my opinion, very little to do with AI. It has to do with the ethical duty of candor. And if a court wants to fine attorneys for that, it is more than welcome to do so. But it is also something that should definitely be referred to the bar association so it can take measures. Again, these are my two cents as a citizen. 

Rebeca: No, very good. Very good. So, you know, drawing from that point, and because of the cautionary tales surrounding these cases and many others we’ve heard, many see AI as a double-edged sword, right? It offers efficiency gains on the one hand while raising concerns about bias and procedural fairness on the other. What do you see as the biggest risks and benefits of AI in arbitration? 

Benjamin: So it’s an interesting question. To a certain extent, we tried to address many of the risks in the AI guidelines. For whoever hasn’t looked at the guidelines yet, I highly suggest you take a look at them. They’re available on svamc.org, and I’m sure they’re widely available on other databases; Jus Mundi has them as well. I invite everyone to take a look. There are several challenges, though we don’t believe those challenges would justify not using AI. To name a few: we have bias. We have lack of transparency. We also have the issue of over-reliance, which is the one we were talking about just a minute ago, where the output seems so sophisticated that we as human beings, having worked in the field, cannot conceive how such an eloquent answer could be anything but true. And there’s the black box problem and so many others. But quite frankly, there are so many benefits that come with it. AI is an unlimited knowledge tool that we can use. As of now, AI is what we know it is. It has hallucinations. It does have some bias. There is this black box problem. Where does the output come from? Why? What’s the source? But quite frankly, we can triage the issues and really look at what the advantages are and what it is we want to get out of it. I’ll give you a brief example. Let’s say you’re drafting an RFA, a request for arbitration. If you know the case, you know the parties, and you know every aspect of the case, AI can draft everything head to toe, and you will always be able to tell what is from the case and what’s not. If we over-rely on AI and allow it to draft without verifying all the facts, without making sure we know the transcript inside and out, without knowing the facts of the case, then we will always run into certain issues. Another issue we run into a lot with predictive AI is its reliance on data that already exists. Compared to generative AI, predictive AI takes data that already exists and predicts an outcome from it, so there’s a lower likelihood of hallucinations. The issue with that, of course, is bias. Just a brief example, and you’re the president of ArbitralWomen, so you will definitely understand: it has only been in the last 30 years that women have had more of a presence in arbitration, specifically sitting as arbitrators. So if we rely on data that goes back beyond those 30, 40, 50 years, it will reflect a lot of decisions taken by men, and potentially even laws that applied back then that were not very gender neutral. So we as people need to triage, to understand where the good information is and where the information may carry bias, and to counterbalance it. As of now, we will need to counterbalance it manually. However, as I always say, we’ve only seen a glimpse of what AI can do. So as time progresses, the challenges you mentioned will become smaller and smaller, and the knowledge that AI has will become wider and wider. As of now, especially in arbitration, we are really taking advantage of the fact that there is still scarcity of knowledge. But it is really just a question of time until AI picks up. So we need to get a better understanding of what we can do to leverage AI and make ourselves indispensable. 

Rebeca: No, that’s very interesting, Ben. And as you mentioned, yes, as president of ArbitralWomen, bias is something I pay close attention to. We’re talking about bias. You mentioned bias. And we all have conscious or unconscious biases, right? And you mentioned laws passed in the past with little input from women or other members of our society. Do you think AI can be trained, then, to be truly neutral, or will bias always be a challenge? 

Benjamin: I wish I had the right answer. I actually truly believe that bias is a very relative term. In certain societies, bias has a very firm, black-and-white standing, whereas in other societies, it does not. Especially in international arbitration, where we deal not only with cross-border disputes but with different cultures, different laws, the law of the seat, the law of the contract, I think it’s very hard to point to one set of biases that we will combat or that we will set as a principle for everything. I think it is exactly these types of issues that ensure there is always human oversight in the use of AI, especially in arbitration. So we can, of course, try to combat bias, gender bias, and others. But I don’t think it is as easy as we say, because even nowadays, in normal proceedings, we are still dealing with bias on a human level. So I think we cannot ask machines to be less biased than we humans are. 

Rebeca: Let me pivot here a bit. Earlier, we mentioned the GAR Awards, and now I’d like to shift our focus to the recent GAR Live on technology that took place here in New York last week, on February 20th. To give our audience some context, GAR stands for Global Arbitration Review, a widely read journal that not only ranks international arbitration practices at law firms worldwide but also, among other things, organizes live conferences on cutting-edge topics in arbitration across the globe. I know you were a speaker at GAR Live, and there was an important discussion about distinguishing generative AI, predictive AI, and other AI applications. How do these different AI technologies impact arbitration, and how do the SVAMC guidelines address them? 

Benjamin: I was truly honored to speak at the GAR Live event in New York, and I think the fact that I was invited to speak on AI is a testament to how important AI is and how widely interested the community is in its use, which is very different from 2023, when we were drafting the guidelines on the use of AI. I think it is important to understand that ultimately, everything in arbitration, specifically in arbitration, needs human oversight. But in using AI in arbitration, we need to differentiate how the use of AI in arbitration differs from other parts of the law, and specifically how it differs from how we would use it on a day-to-day basis. In arbitration specifically, arbitrators are given a personal mandate, which is very different from how courts work in general, where you have a lot of judges who let their assistants draft parts of the decision or parts of the order. Arbitration is a little different, and for a reason. Specifically in international arbitration, there are certain sensitivities when it comes to local law, and to international standards versus local standards. Arbitrators are held to a higher standard. Using AI as an arbitrator, for example, which could technically be put at the same level as using a tribunal secretary, has its limits. So I think that AI can be used in many aspects, from drafting for attorneys and counsel, to helping prepare graphs, to preparing documents, accumulating documents, etc. But it does have its limits when it comes to arbitrators using it. As we have tried to reiterate in the guidelines, arbitrators need to be very conscious of where their personal mandate starts and ends. In other words, our recommendation, and again, these are soft law guidelines, is that arbitrators not use AI in any decision-making process. What does that mean? We don’t know. And neither does the law. Every jurisdiction has its own definition of what that means. It is up to the arbitrator to define what a decision-making process is and to decide whether the use of AI in that process is adequate. 

Rebeca: Thank you so much, Ben. Since we’ve been talking a little more about the guidelines, I want to ask you a few questions about them. They were created with a global perspective, right? So what initiatives is the AI Task Force pursuing to ensure the guidelines remain relevant worldwide? You’ve been talking about different legal systems and local laws, and how practitioners or regulators in certain jurisdictions might treat certain things differently. So what is the AI Task Force doing to remain relevant, and maybe to create some sort of uniformity? What can you tell me about that? 

Benjamin: So we at the SVAMC AI Task Force continue to gather feedback, of course, and we’re working toward global adoption. We will continue to work closely with practitioners, institutions, lawmakers, and governments to ensure that when it comes to arbitration, AI is given a space and used adequately, and, if possible, and preferably for us, that the SVAMC AI guidelines are used. That’s why they were drafted: to be used. When we presented the guidelines to different committees, law sections, and bar associations, it struck us that in jurisdictions such as the U.S., and more specifically in New York, where both you and I are based, the community was not very open to receiving them as guidelines. The suggestion was actually made to create a white paper instead. As much as that seemed like a shutdown at an early stage, when we thought about it, and I was very blessed to have seven additional, very bright members in the Guidelines Drafting Committee whom I learned a lot from during this process, it became clear to us that jurisdictions such as New York have very high ethical standards, and that guidelines such as ours could be seen there as duplicating ethical rules. So although we advocate that they are not ethical guidelines whatsoever, because we don’t believe they are, we strongly suggest that local and international ethical standards be upheld. With that in mind, we realized that there is a global aspect that needs to be addressed, beyond the law associations of the U.S. or the U.K. or, now, Europe. Up-and-coming jurisdictions that until now did not have a lot of exposure to artificial intelligence, and maybe even to technology as a whole, are rising, and they may need more guidance than jurisdictions where technology is second nature. So what the AI Task Force has created, and is continuing to recruit for, are regional committees tracking AI usage in different legal systems and jurisdictions. Our goal is to track AI-related legislation and its potential impact on arbitration. These regional committees will also provide jurisdiction-specific insights to refine the guidelines. And, we anticipate, these regional committees will help bridge the gap between AI’s global development and local legal frameworks. There will be a dialogue. We will continue, obviously, to be present at conferences, to have open dialogue, and, of course, to recruit for these committees. But the next step is definitely to focus on these regional committees and to see how we, as the AI Task Force of the Silicon Valley Arbitration and Mediation Center, can impact the use of AI in arbitration worldwide. 

Rebeca: Well, that’s very interesting. So you’re using committees in different jurisdictions to keep you apprised of what’s happening in each one. And with that, you continue evolving the guidelines and gathering information on how this rapidly changing field develops. 

Benjamin: Absolutely. Initially, we were thinking of just having a small local committee analyze different jurisdictions, their laws, their court cases, and so on. But we soon came to realize that it’s much more than tracking judicial decisions. We need people on the ground who are part of a jurisdiction, part of that local law, to tell us how AI impacts their day-to-day, how it may differ from yesterday to tomorrow, and what legislation may be enacted to either allow or disallow the use of certain AI. 

Rebeca: That’s very interesting. I think it’s something that will keep the guidelines up to date and relevant for a long time. So kudos to you, the SVAMC, and the task force. Now, I know the guidelines themselves are a very short paper, with the commentary on them in the back. I’m not going to dissect all of the guidelines, but I want to talk about one in particular that I think created a lot of discussion around the guidelines themselves. For full disclosure, I was part of the reviewing committee of the AI guidelines. And I remember that one of the most debated aspects of the SVAMC AI guidelines is guideline three, on disclosure. Should arbitrators and counsel disclose their AI use in proceedings? That generated a lot of debate, and it’s the reason guideline three is drafted the way it is. Can you give us a little more insight into what happened there? 

Benjamin: Absolutely. I’d love to. Guideline three was very controversial from the get-go. We initially had two options. One was a two-pronged test that parties would either satisfy or not, and then disclosure was necessary. The other option, which the community could vote on, left it up to the parties to decide whether their AI-aided submission could impact the outcome of the case and, depending on that, whether to disclose that AI was used. Quite frankly, that was a debate we had in 2023, and a lot changed from November 2023 until April, when we finally published the first version of the AI guidelines. A lot of courts implemented obligatory disclosure. I think people have also gotten more comfortable with using AI day-to-day. And we ultimately came to the conclusion to opt for a flexible disclosure approach, which can now be found in the guidelines. The reason for that was relatively simple, or relatively simple to us who debated it. A disclosure obligation for the use of AI will very easily become inefficient, because a blanket disclosure of the use of AI serves nobody. It really boils down to one question: if the judge, or in our case in arbitration, the arbitrator or tribunal, knows that AI was used for a certain document, now what? How does that knowledge transform into action? And how does that knowledge lead to a different outcome? In our analysis, it turned out that a blanket disclosure of AI usage, or in general an over-disclosure of the use of AI in arbitration, may actually lead to adverse consequences for the parties who make the disclosure. Why? Because not knowing how AI impacted a submission leaves arbitrators not knowing what to do with the disclosure. So ultimately, it’s really up to the parties to decide: how was AI used? How can it impact the case? What is it I want to disclose? How do I disclose it? It’s also important for arbitrators to understand what they would do with a disclosure before saying everything needs to be disclosed. During the GAR event in New York, the issue was raised whether documents prepared with the use of AI should be disclosed, or whether there should be a blanket disclosure. Quite frankly, the debate went back and forth, but ultimately it comes down to cross-examination. It comes down to the expert or the party submitting the document being able to back up where the information comes from, rather than to knowing that AI was used. And to put that in perspective, we received a very interesting question: why should we continue using AI, knowing that approximately 30% of its output consists of hallucinations and needs revamping? This was compared to a summer associate or a first-year associate, and the question was very simple: if I have a first-year associate or a summer associate whose output has a 30% error rate, why would I continue using that associate? Quite frankly, there is merit to the question, and it has a very simple answer. The answer is time and money. Using AI makes it much faster to get output than using a first-year or summer associate, and it’s way cheaper. For that, it’s worth having a 30% error margin. I don’t know where they got the 30% from, but we just went along with it. 

Rebeca: I was about to ask you where they got the 30% from. Well, for the first-year associates or summer associates who are listening, I think the main thing is to become very savvy in the use of AI so they can stay relevant to the practice. There’s always that question about whether AI will replace all of us, the entire world, and we’ll head into a machine apocalypse. I don’t see it that way. In my view, if we train ourselves, if we’re not afraid of using the tool, we’ll very much be in a position to pivot and understand how to use it. And then there’s the saying: garbage in, garbage out. If you have a bad input, you will have a bad output. You need to know the case. You need to know your documents to understand whether the machine is hallucinating or giving you information that is not real. I like to play around and ask ChatGPT certain questions here and there. Sometimes I ask things I obviously know the answer to, and then I say, ChatGPT, this is not accurate. Can you check on this? And it replies, oh, thank you for correcting me. You have to try it and understand it so you know where to make improvements. But that doesn’t mean the tool, because it is a tool, will come and replace your better judgment as a professional, as an attorney. 

Benjamin: Absolutely. One of the things we say is that it is a tool. It does nothing of its own volition. So what you’re saying is 100% right. This is what the SVAMC AI guidelines stand for. Practitioners need to accustom themselves to the proper use of AI. AI comes in paid and unpaid versions; we just need to understand what an open AI system is and what a closed-circuit AI system is. Again, for whoever’s listening, feel free to look up the guidelines. There’s a lot of information there, and there are tons of articles written at this point. Just be very mindful if you’re using an open AI system, such as an unpaid ChatGPT version. That does not mean you cannot use it. But first, check with your firm to make sure you’re allowed to use it. I don’t want to get into any trouble. 

Rebeca: Well, we don’t want to put confidential information on an open AI platform. 

Benjamin: Exactly. Once your firm or your colleagues allow you to use ChatGPT, even if it’s an open version, just be very smart about what you put in. No confidential information, no potential conflict checks, no potential cases. Just be smart about what you put in. Another aspect we were actually debating is hallucination. Just an example: let’s say it’s an ISDS case, so we’re talking about something a little more public, and you ask ChatGPT, hey, show me all the cases against Costa Rica. And it hallucinates. It might actually be that somebody input information for a potential case against Costa Rica, or a theoretical case against Costa Rica, and ChatGPT, being on the open end, takes that as one more potential case. So just be very smart. Be diligent, but also don’t be afraid of using it. 

Rebeca: That’s a great note to end on. AI is here to stay, and as legal professionals, it’s up to us to ensure it serves the interests of justice, fairness, and efficiency. For those interested in learning more about the SVAMC AI guidelines, you can find them online at svamc.org by searching for “guidelines.” I tried it myself, and you will go directly to them. And if you’d like to stay updated on developments in AI and arbitration, be sure to follow Tech Law Talks and join us for future episodes, where we’ll continue exploring the intersection of law and technology. Ben, thank you again for joining me today. It’s been a great pleasure. And thank you to our listeners for tuning in. 

Benjamin: Thank you so much, Rebeca, for having me and Tech Law Talks for the opportunity to be here. 


