Risks and Benefits of Artificial Intelligence in Courts

In recent years, artificial intelligence (AI) has started seeping into government services, aiding or replacing human labor. Estonia’s chief data officer, Ott Velsberg, is currently working towards introducing AI into the country’s courts. It comes as no surprise that Estonia is the first to pursue AI judges, given how highly digitalized its public sector already is. A national ID card system, e-voting, digital tax filing, digital signatures, online healthcare records, and an e-Residency program, which allows people to start businesses within the European Union remotely, are all part of Estonia’s national e-governance project.

AI technology in Estonian courts will relieve judges of ruling on small disputes of up to 7,000 euros (about 8,000 USD) by sorting and processing information at a much faster rate. This will give human judges more time to focus on larger cases. Decisions made by the AI system will be legally binding, but the element of human decision-making will be maintained: appeals can still be filed with human judges.

Installing AI judges also raises the question of who is accountable for court rulings. Estonia’s Ministry of Economic Affairs and Communications is considering regulating the legal status of AI judges by defining them as a hybrid: partly a separate legal personality, like a corporation, and partly the personal property of the individuals or groups liable for them. Algorithms are only as good as their programmers, which leaves AI systems vulnerable to human error. Programmers and operators of AI technology may therefore be held liable for mistakes that lead to unintended or unpredicted outcomes. If AI judges were defined solely as a separate legal personality, however, the human actors behind the technology would escape responsibility. Legal frameworks determining liability and negligence will therefore need to be defined more clearly in the future.

Developing technology for government services is meant to support and speed up judicial processes. The human component, however, can undermine impartiality in legal matters, since the quality of an AI system rests on the skill of its developers. AI systems could collect and process personal data in an inherently biased way and disadvantage individuals based on factors such as race. Crime prediction software such as PredPol, already used by police in several US states, has been found to be fed racially prejudiced data by its developers and operators. If AI in courts is to gain greater relevance in the future, these issues will have to be eliminated.

In 2015, the United Nations Interregional Crime and Justice Research Institute (UNICRI) set up a subdivision focusing on AI, the Centre for Artificial Intelligence and Robotics. The newly established institution collaborated with the International Criminal Police Organization (INTERPOL) in 2018, organizing a global event on the use, safety, and responsibility issues surrounding AI in law enforcement. If the judiciary is also to be equipped with AI systems on an international scale, questions of human error and algorithm design will need to be addressed further in order to ensure a transparent, reliable, and impartial legal system.

While the use of technology in courts offers benefits, it also exposes them to threats such as cybercrime. In 2007, Russian hackers disrupted services across Estonian e-governance networks. Although no severe damage was inflicted, the attack demonstrated that heightened security will have to become a priority if AI systems are to be established in courts. AI systems in the judiciary could become a weak point for criminals, terrorist groups, and governments with malicious intent to exploit.

Additionally, the digitalization of legal services could reduce the number of civil servants. According to estimates, the use of AI in workplaces could save 1.2 billion working hours and 41.1 billion USD annually. These prospects point to positive outcomes such as more efficient labor and more free time for workers, with a shift from mundane manual tasks to more mentally demanding work. What dims this outlook is the possibility that such a shift could also create an employment vacuum for countless civil servants: extensive parts of administrative and organizational work could be handed over to AI, leaving civil servants without jobs. At the same time, a more optimistic view suggests that moving towards an AI-oriented society will expand employment opportunities in the technology sector over the long term.

About the Author

Yasemin Zeisl

Yasemin Zeisl earned her MSc in International Relations and Affairs from the London School of Economics and Political Science (LSE). Yasemin is fluent in German and English and possesses advanced Japanese language skills.
