Current Legal Issues on Adopting Artificial Intelligence in Legal Judgment
Abstract
With the development of artificial intelligence (AI) and its great advantages in data volume and processing speed, AI is now expected to replace not only routine, simple jobs but also jobs that require specialized expertise. Among these, there is heated controversy over whether AI could be used in the legal professions. According to one study by Oxford that considered various automation risk factors, judges fall into a category of relatively high automation risk. Some argue that with its speed, low cost, and accuracy, AI could make a better judge. Despite the rapid development of machine learning techniques and their accuracy, however, many still insist that there are limitations to replacing judges with artificial intelligence. The first limitation is the inability to actively understand a situation: artificial intelligence has difficulty grasping the details of a case and uncovering hidden meanings. It also lacks creativity, so it is hard for AI to handle new cases and arrive at new judgments without human help. Finally, due to practical problems such as systemic defects, complete job automation is difficult. It is true that AI could be very helpful in the legal field given its ability to process vast amounts of data at high speed. However, considering these limitations, completely replacing human judges with artificial intelligence seems unlikely to succeed for the foreseeable future.
Ⅰ. Introduction
After the impressive Go match between Google DeepMind's AlphaGo and the Korean Go player Lee Sedol, the world began to pay attention to the development of artificial intelligence. Through this event, it became widely known that artificial intelligence is not a mere search engine but a learning machine with a great capacity for handling countless data points. It can not only count and classify data, but also 'learn' and 'develop' through its own processes.
With its great advantages in data capacity and processing speed, artificial intelligence (AI) is now expected to replace many human jobs, including some areas that require human expertise. Among them, there are various controversies over whether AI could be used in the legal professions. According to Carl Benedikt Frey and Michael A. Osborne, who considered automation risk factors such as social perceptiveness, negotiation, originality, finger dexterity, manual dexterity, cramped work spaces, fine arts, and caring for others, judges face a relatively high automation risk of approximately 40%.[1] Some argue that with its speed, low cost, and accuracy, AI could replace judges, whereas others point out the limitations of such automation. Can an AI system successfully carry out the work of legal professionals?
Ⅱ. Current AI Systems in the Legal Field
ROSS Intelligence is an artificial intelligence system built on IBM Watson and designed for use in the legal field. ROSS users can ask questions the same way a client might, for instance: "If an employee has not been meeting sales targets and has not been able to complete the essentials of their employment, can they be terminated without notice?" The system then sifts through its database of legal documents and returns an answer paired with a confidence rating.[2][3]
The system also continues to learn, because a human can approve an answer with a thumbs-up button or reject it with a thumbs-down button. Since ROSS uses a cognitive computing platform, it learns from past interactions, meaning that the more it is used by lawyers, the more accurate it becomes. According to an IBM Watson report,[4] such cognitive systems are probabilistic: they generate not just answers to numerical problems, but hypotheses, reasoned arguments, and recommendations about more complex and meaningful bodies of data. What is more, cognitive systems can make sense of the 80 percent of the world's data that computer scientists call "unstructured," which enables them to keep pace with the volume, complexity, and unpredictability of information and systems in the modern world.
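The details of ROSS's implementation are not public; the sketch below only illustrates the general pattern described above, in which a system retrieves the best-matching passage for a question, reports a similarity score as a rough confidence, and nudges its ranking with thumbs-up/down feedback. The corpus, function names, and weighting rule are assumptions made up for this example, not ROSS's actual design.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of legal passages (purely illustrative).
documents = [
    "An employer may dismiss an employee without notice only for serious misconduct.",
    "Failure to meet sales targets alone is generally not grounds for summary dismissal.",
    "Notice periods for termination are governed by the employment contract or statute.",
]
weights = [1.0] * len(documents)  # per-document weights nudged by user feedback

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def answer(question: str):
    """Return (document index, text, weighted similarity used as a rough confidence)."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    idx = max(range(len(documents)), key=lambda i: sims[i] * weights[i])
    return idx, documents[idx], sims[idx] * weights[idx]

def give_feedback(doc_index: int, helpful: bool):
    """Thumbs up/down: slightly raise or lower the weight of the returned passage."""
    weights[doc_index] *= 1.1 if helpful else 0.9

idx, text, confidence = answer(
    "Can an employee be terminated without notice for missing sales targets?")
print(f"{text}  (confidence ~ {confidence:.2f})")
give_feedback(idx, helpful=True)  # the lawyer approves the answer
```

A production system would use far more sophisticated retrieval, confidence estimation, and learning components, but the division of labor is the same: answer, score, and feedback.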
Furthermore, courts in the state of Wisconsin in the United States have used an algorithmic risk-assessment system called 'COMPAS' to predict the likelihood that offenders in criminal trials will reoffend. In one case, a defendant named Loomis was sentenced to six years in prison after the system rated his risk of recidivism as high. Loomis appealed, arguing that using an AI system in sentencing violated the principles of legal proceedings, but the Wisconsin Supreme Court rejected the appeal. In this way, AI technologies have been steadily settling into the legal field.
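COMPAS is a proprietary tool whose internals are not described in the sources above, so the following is only a minimal sketch of the general idea of a statistical recidivism risk score: a model fitted to coded offender features outputs a probability, which is then binned into a coarse risk band. Every feature, label, and threshold here is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic offender features: [age, number of prior offenses, age at first offense]
X = np.column_stack([
    rng.integers(18, 70, 300),
    rng.integers(0, 15, 300),
    rng.integers(12, 40, 300),
])
# Synthetic reoffense labels loosely tied to prior offenses (illustration only).
y = (X[:, 1] + rng.normal(0, 2, 300) > 5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def risk_category(features):
    """Map the model's reoffense probability onto coarse risk bands."""
    p = model.predict_proba([features])[0, 1]
    return ("high" if p > 0.7 else "medium" if p > 0.4 else "low"), round(p, 2)

print(risk_category([25, 8, 17]))  # e.g. ('high', 0.83) -- synthetic, not a real score
```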
Ⅲ. Positive Sides of AI-Based Judgments
Given that AI takes far less time to find data and has far greater data capacity, using AI could help legal professionals avoid overlooking precedents or minor legal provisions buried in a vast amount of information. In 2004, Theodore Ruger, Andrew Martin, and other collaborators built a statistical model of Supreme Court outcomes based on a set of case factors. Not only did the statistical model outperform several experts in predicting Supreme Court outcomes, it also highlighted relationships in the underlying data that may not have been fully understood by human experts.[5] According to Harry Surden,[6] the Supreme Court hears appellate cases originating from many different appellate circuits. Many experts had deemed the circuit of origin of a lower-court opinion (e.g., the Ninth or Sixth Circuit) less important than other factors. However, the analysis of the data showed a stronger correlation between specific circuits of origin and case outcomes than the experts had expected. This model did not use feedback-based 'learning' technology, so if such models are further strengthened with feedback, their accuracy should improve.
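A statistical model of this kind can be sketched with a simple classification tree over coded case variables. The data, feature names, and outcome rule below are synthetic and invented purely to illustrate how a fitted model can surface the importance of a feature such as circuit of origin; they do not reproduce the Ruger et al. study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic coded case features: circuit of origin (1-13), issue area, lower-court direction.
X = np.column_stack([
    rng.integers(1, 14, 400),
    rng.integers(1, 15, 400),
    rng.integers(0, 2, 400),
])
# Synthetic outcomes (1 = reverse) tied mainly to circuit of origin, mimicking a
# stronger-than-expected correlation that a human expert might overlook.
y = ((X[:, 0] % 3 == 0) ^ (rng.random(400) < 0.1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
for name, imp in zip(["circuit_of_origin", "issue_area", "lower_court_direction"],
                     tree.feature_importances_):
    print(f"{name}: {imp:.2f}")  # circuit_of_origin should dominate in this toy setup
```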
In addition, considering that legal judgment should be objective and fair, AI could perform well in terms of fairness. Human judges have limited time to sift through vast amounts of data, and they are subject to problematic customs such as granting preferential treatment to lawyers who are former judges or prosecutors (jeon-gwan-ye-u), as well as their own unconscious biases. An AI system operating on given standards would hardly be swayed by such problems and could therefore render fairer judgments.[7]
Ⅳ. Limitations of AI-Based Judgments
- Problems in active understanding
Despite the rapid development of machine learning techniques and their accuracy, many insist that there are still limitations to fully replacing legal professionals with artificial intelligence. The first limitation is the inability to actively understand a situation. Active understanding here means that judges must not only recognize and find the right data, but also actively interpret situations and the relations between laws.[8]
The legal system is not so straightforward. It sometimes contains abstract concepts that must be interpreted by judges, and even judges' interpretations differ from one another. According to Harry Surden, if certain types of salient information are both difficult to quantify as data and require nuanced analysis to assess, such important considerations may be beyond the reach of current machine learning predictive techniques.
There are also situations in which opinions vary. In such cases, all an AI system can do is present what it has found, not a decisive answer,[9] because it cannot decide which opinion is right or wrong with full understanding. It is true that AI can handle far more data than human judges, but it is still hard for it to synthesize, individually analyze, and apply that data the way a human brain does. Defining ambiguous concepts is hard for humans as well, but the fact that humans also struggle does not mean we should simply trust computers. Even though humans cannot always find as much data as AI, the processes of linking, dividing, analyzing, and weighing the value of particular matters come naturally to the human brain, whereas the artificiality of AI can cause errors when it handles massive data in an attempt to understand it well.
Moreover, in the case of legal AI, the underlying data is based on the legal field. It is therefore inevitable that, in order to understand a concept such as 'political criminal,' the system would have to draw on other data sources such as philosophy or politics. However, inserting information from these other fields into a legal database raises its own problems, which are discussed in the later parts of this paper.
One might argue that humans could encode the data in legal terms so that AI can understand it and find the right data. However, even if we describe a situation in legal terms, common sense inevitably comes into play when deciding a particular incident. Perhaps we could build another program to handle common-sense data and have it work collaboratively with the legal system. Even then, there are two problems. First, it is hard to define common sense in literal terms, because it must be applied in a wide variety of situations and it is impossible to enumerate every instance of it. There are countless pieces of common sense in our lives, and they often lie beneath the surface, making it difficult to reflect all of them in a database. Second, common sense can change. If we fix the definition of common sense from the outset, it becomes hard to reflect its changing values. Even if we introduce a feedback system to apply such changes, it is ambiguous when the definition should actually change, so the change cannot easily be reflected in the system right away.
- Problems in dealing with new judgments
Since AI in the legal field would work with laws and judicial precedent data, it is obvious that this data has limitations. Not all cases are the same, and not all judgments will be the same. Can artificial intelligence deal with new cases and draw new conclusions?
First of all, can machines deal with completely new cases? All cases are different, and sometimes there are entirely new types of misconduct, breach of contract, or illegal acts, so it is not easy to reach an agreeable judgment based only on the general cases in the system's data.[10] One might say that handling such new cases is also very hard for humans. However, humans can recognize that an act is somehow illegal even when no exact legal provision covers that case; in this sense, humans can interpret the situation and find where the new case fits. For machines that have no relevant data, by contrast, it would be hard to predict the appropriate judgment, and they may produce bizarre results.
Secondly, can AI generate new conclusions? Legal professionals must also be skeptical of their own legal system.[11] Can artificial intelligence doubt its own system and reflect the world's changing values?[12] For example, in Korea, marital rape was not punished until 2013; in earlier cases, rape between a married couple was not considered illegal. If that 2013 judgment[13] had not yet been made, artificial intelligence would handle such a case based only on the previous cases. Therefore, artificial intelligence cannot arrive at a judgment that is both new and 'right.'
Could we then input other information, such as sociology or philosophy, to correct the legal system's errors? This is also problematic for two reasons. First, we cannot simply insert such data at random; each theory must be carefully verified and selected before being put into the legal system. In this procedure a judge's decision is needed, so subjectivity is inherently involved. Second, once such data enters the legal system, it is treated like law and carries almost the same authority as law. That goes beyond a judge's authority, because creating new laws is the job of the legislature.
- Problems in practice
According to a study by Harry Surden,[14] even when artificial intelligence comes up with the right answer, certain problems remain. Today's artificial intelligence based on neural networks tends to represent situations through inscrutable formulas or paths. Although neural networks can make accurate judgments through self-learned methods, figuring out how an answer was produced inside the system is complicated even for AI programmers. This is undesirable given the nature of the law, which requires clear explanations of the reasons and principles that led to a conclusion. Surden also notes that the uninterpretable mechanisms of artificial intelligence could create a distorted judgment structure that produces the correct result in an unsound way. Thus, although it is possible for artificial intelligence to make legal judgments, letting it do so on its own without human consideration can be dangerous.
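As a hedged illustration of this opacity problem (with synthetic data and a toy model, not any real legal system), the sketch below trains a small neural network that can output a prediction, while the only "explanation" available from it is a stack of learned weight matrices that do not translate into legally meaningful reasons.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((200, 4))                   # four coded case features (synthetic)
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # synthetic "outcome" labels

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=1).fit(X, y)
print("prediction:", net.predict(X[:1])[0])  # the system gives an answer...
for i, layer in enumerate(net.coefs_):       # ...but its internals are just numbers
    print(f"layer {i} weight matrix shape: {layer.shape}")
```

By contrast, a written judgment must state which facts and which legal rules led to the conclusion, which is exactly what these raw weights cannot provide.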
Secondly, since artificial intelligence operates on computers, we must also consider security issues. Hacking is discussed as one of the serious problems of AI-operated self-driving cars because they are directly related to human life. Legal judgment is not free from the problem of hacking either, as it is equally essential to guaranteeing basic human rights. Suppose artificial intelligence makes judgments and completely replaces the legal profession: if there is any possibility that the system could be manipulated to make decisions favoring a particular party, this would be a violation of the law.
AI learns through feedback, and various examples have shown that this feedback process is efficient and highly accurate: AlphaGo and Google's autonomous cars also use feedback to improve the accuracy of their systems. In the case of judging, however, there is the problem of who gives the feedback. For ROSS Intelligence, an answer that was inappropriate or inconsistent with the court's decision would receive negative feedback. But how can we give feedback when the system itself is used as a judge? Judges are the ultimate authority in making a judgment, so deciding who provides feedback is an important problem. Who could determine whether the artificial judge's decision was legally and morally right? Suppose the system itself gives the feedback: can it be objective? It could also cause a hypertrophy of the judiciary. What about the president or the national assembly? If they gave the feedback, it would violate the independence of the judiciary and break the separation of the three powers: administrative, legislative, and judicial.[15] Would it be better if the public offered feedback? That too is problematic, since it is hard to know whether the public is qualified and objective enough to evaluate legal judgments. And if we established an independent organization to give the feedback, its members would be little different from judges, which again would not be a full replacement of human judges.
Finally, whether people are prepared to accept artificial intelligence as a judge is the most important part of implementing AI in legal judgment. Some may welcome the legal judgments of artificial intelligence, praising their objectivity and transparency, but so far many people are wary of the limitations of artificial intelligence and prefer to receive a more sympathetic judgment from humans.
Ultimately, it may be a question of whether the guardian of the law is a person or a machine, and this will lead to debate over whether artificial intelligence can govern the legal system. It is most likely wrong to leave legal judgment, which is still a product of social consensus, to machines that bear no legal or ethical responsibility. If people's trust in the legal profession falls, more and more of them may seek what they see as more accurate and fair judgment from artificial intelligence. Still, given the practical limitations, even though artificial intelligence can perform much of the work of the legal profession, it seems hard for it to completely replace legal professionals without human supervision.
Ⅴ. Conclusion
The outstanding data-processing speed and accuracy of artificial intelligence are hardly in doubt, and it is hard to argue that humans can do these tasks better than machines. However, although AI could be helpful in supplementing judges' work, it is hard to say that it could fully replace human legal judgment.
It is still very hard for AI to actively understand and interpret what lies beneath the text of the law, to deal with new cases, and to generate new judgments. Furthermore, practical problems arise when such a replacement is actually attempted. These limitations are not trivial; they are crucial to preserving justice, democracy, and the authority of the legal system. It is true that the judgments of human judges are sometimes criticized for not being objective. However, we cannot blindly believe that machines are objective either, since they work with human-generated data.
Nevertheless, it is very hard to predict the future, especially how future machines will work, so these limitations may someday be overcome through technological development. In that case, the question would be whether people are willing to be judged by AI judges. Although many people accept the authority of the current legal system, if it loses its credibility for delivering fair judgments, it is possible that one day machine judges would hold that authority. For now and in the near future, however, various limitations suggest that a complete replacement of human judges' work will not be possible.
References
Carl Benedikt Frey, Michael A. Osborne, The Future of Employment: How Susceptible Are Jobs to Computerisation?, Oxford Martin School (Sep. 17, 2013), 16-26.
ROSS Intelligence Homepage, http://www.rossintelligence.com/lawyers (last visited Jul. 15, 2018).
Davey Alba, "Your Lawyer May Soon Ask This AI-Powered App for Legal Help", Wired (Aug. 7, 2015), https://www.wired.com/2015/08/voice-powered-app-lawyers-can-ask-legal-help/ (last visited Jul. 15, 2018).
John E. Kelly III, Computing, Cognition and the Future of Knowing: How Humans and Machines Are Forging a New Age of Understanding, IBM Research and Solutions Portfolio, Vol. 28, No. 8 (Sep. 2016), 2-11.
Theodore A. Dunn, Jr., Exploring the Possibility of Using Intelligent Computers to Judge Criminal Offenders, Florida Department of Law Enforcement (1995), 1-9.
Harry Surden, Machine Learning and Law, Washington Law Review, Vol. 89, No. 1 (Mar. 2014), 1-29.
Kim Sung-Ryong, The Current Situation and the Suggestions of AI & Law Research in Legal Reasoning, Kyungbook University IT & Law Review, Vol. 5 (2011), 319-346.
Lee Sang Jik, "The Age of Artificial Intelligence, Can We Let Kids Be Legal Professionals?", Dong-A Science (Mar. 22, 2016), http://dongascience.donga.com/news.php?idx=10995 (last visited Jul. 15, 2018).
Jung Jung Hoon, "Artificial Intelligence and Judges", Hankyoreh (Mar. 15, 2016), http://www.hani.co.kr/arti/opinion/column/735139.html (last visited Jul. 15, 2018).
Seo Min Seok, "Artificial Intelligence Judges", Lawtimes (Mar. 10, 2016), https://www.lawtimes.co.kr/Legal-Opinion/Legal-Opinion-View?serial=99109 (last visited Jul. 15, 2018).
Pamela N. Gray, Artificial Legal Intelligence, Harvard Journal of Law & Technology, Vol. 12, No. 1 (Fall 1998), 241-261.
Harry Surden, Values Embedded in Legal Artificial Intelligence, Colorado Law Legal Studies Research Paper No. 17, University of Colorado Law School (Mar. 13, 2017), 1-6.
Supreme Court of Korea, 2012Do1478, decided May 16, 2013.
[1] Carl Benedikt Frey, Michael A. Osborne, The Future of Employment: How Susceptible Are Jobs to Computerisation?, Oxford Martin School, 16-26, (Sep. 17, 2013).
[2] ROSS Intelligence Homepage, http://www.rossintelligence.com/lawyers (last visited Jul. 15, 2018).
[3] Davey Alba, "Your Lawyer May Soon Ask This AI-Powered App for Legal Help", Wired (Aug. 7, 2015), https://www.wired.com/2015/08/voice-powered-app-lawyers-can-ask-legal-help/ (last visited Jul. 15, 2018).
[4] John E. Kelly III, Computing, Cognition and the Future of Knowing: How Humans and Machines Are Forging a New Age of Understanding, IBM Research and Solutions Portfolio, Vol. 28, No. 8, 2-11, (Sep. 2016).
[5] Theodore A. Dunn, Jr., Exploring the Possibility of Using Intelligent Computers to Judge Criminal Offenders, Florida Department of Law Enforcement, 1-9, (1995).
[6] Harry Surden, Machine Learning and Law, Washington Law Review, Vol. 89, No. 1, 1-29, (Mar. 2014).
[7] Theodore A. Dunn, Jr., Exploring the Possibility of Using Intelligent Computers to Judge Criminal Offenders, Florida Department of Law Enforcement, 1-9, (1995).
[8] Harry Surden, Machine Learning and Law, Washington Law Review, Vol. 89, No. 1, 21, (Mar. 2014).
[9] Kim Sung-Ryong, The Current Situation and the Suggestions of AI & Law Research in Legal Reasoning, Kyungbook University IT & Law Review, Vol. 5, 319-346, (2011).
[10] Lee Sang Jik, "The Age of Artificial Intelligence, Can We Let Kids Be Legal Professionals?", Dong-A Science (Mar. 22, 2016), http://dongascience.donga.com/news.php?idx=10995 (last visited Jul. 15, 2018).
[11] Jung Jung Hoon, "Artificial Intelligence and Judges", Hankyoreh (Mar. 15, 2016), http://www.hani.co.kr/arti/opinion/column/735139.html (last visited Jul. 15, 2018).
[12] Pamela N. Gray, Artificial Legal Intelligence, Harvard Journal of Law & Technology, Vol. 12, No. 1, 241-261, (Fall 1998).
[13] 2012Do1478 (Supreme Court of Korea, May 16, 2013).
[14] Harry Surden, Values Embedded in Legal Artificial Intelligence, Colorado Law Legal Studies Research Paper No. 17, University of Colorado Law School, 1-6, (Mar. 13, 2017).
[15] Seo Min Seok, "Artificial Intelligence Judges", Lawtimes (Mar. 10, 2016), https://www.lawtimes.co.kr/Legal-Opinion/Legal-Opinion-View?serial=99109 (last visited Jul. 15, 2018).
csh435@gmail.com