
Artificial intelligence is a rapidly developing field that will change our lives, and computer ethics plays a crucial role in guiding it. Ethics helps us fill the policy vacuums this technology creates, and as the technology advances we will need "robot ethics" to build beneficial ethical impact agents. AI has evolved to the point where it is beginning to shape our very definition of justice: we will see it not only in our new Teslas, but also in our courts. As Moor states, "any robot is a potential ethical impact agent to the extent that its actions could cause harm or benefit to humans" (Moor 2014). If the power of AI in courts is misused, it can corrupt the entire justice system; but if we can develop a beneficial ethical impact agent from this AI, we can combat the corruption that already exists in courts. This paper explores both the benefits of this AI and the computer ethics issues associated with it.

Current AI in US courts "use algorithms to determine a defendant's 'risk', which ranges from the probability that an individual will commit another crime to the likelihood a defendant will appear for his or her court date" (Tashea). The AI outputs decisions about bail, sentencing, and parole. This raises many questions: Will a time come when judges step down and AI does their work? Would this be safe, or will collaboration between humans and machines always be necessary to make ethical decisions? These are the questions we should ask ourselves in order to create an ethical system that respects core values. Core values in human societies include life, freedom, knowledge, ability, resources, and security. We should respect others and their core values, and we should avoid harming others without justification. Moor believes "the core values provide standards with which to evaluate the rationality of our actions and policies. They give us reasons to favor some courses of action over others. They provide a framework of values for judging the activities of others as well" (Bynum 33). Moor also states that computers are "logically malleable," meaning "they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs and connecting logical operations" (Bynum 18).

The AI described in this paper is an explicit ethical agent, an "agent that can identify and process ethical information about a variety of situations and make sensitive determinations about what should be done. When ethical principles are in conflict, these robots can work out reasonable resolutions" (Moor 2014). In Wisconsin v. Loomis, "defendant Eric Loomis was found guilty for his role in a drive-by shooting." Loomis answered "a series of questions that were then entered into Compas, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections," and he received a longer sentence partly because of the score this AI produced (Tashea). One machine-learning policy simulation concluded that "such programs could be used to cut crime up to 24.8 percent with no change in jailing rates or reduce jail populations by up to 42 percent with no increase in crime rates" (Johnston). A ProPublica report on Compas, however, found that "black defendants in Broward County, Florida were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism" (Johnston). In the European Court of Human Rights, an AI "judge" has reached the same verdicts as human judges in almost four in five cases involving torture, degrading treatment, and privacy (Johnston). The AI "judge" is an explicit ethical agent that can bring many benefits to human courts: it can protect communities from dangerous offenders and deliver justice to the guilty.
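To make the mechanics concrete, the sketch below shows the general shape of such a risk-assessment tool: defendant features go in, a probability comes out, and the probability is bucketed into the risk tiers a judge would see. Compas itself is proprietary, so every feature name, training example, and threshold here is a hypothetical stand-in, not the actual algorithm.

```python
# A minimal sketch of the kind of risk-assessment model the sources
# describe. Compas is proprietary, so every feature and cutoff here is
# a hypothetical stand-in, not the real tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical defendant features: [age, prior_arrests, failed_appearances]
X_train = np.array([
    [22, 4, 1],
    [35, 0, 0],
    [19, 2, 2],
    [48, 1, 0],
    [30, 6, 3],
    [55, 0, 0],
])
# 1 = defendant later reoffended or failed to appear, 0 = did not
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

def risk_score(defendant):
    """Return the model's estimated probability of recidivism (0.0-1.0)."""
    return model.predict_proba([defendant])[0][1]

# A court system might bucket the probability into the tiers judges see.
score = risk_score([25, 3, 1])
tier = "high" if score > 0.7 else "medium" if score > 0.4 else "low"
print(f"risk score: {score:.2f} -> {tier} risk")
```

Even this toy version makes the policy question visible: the cutoffs separating "low" from "high" risk are design choices made by the tool's authors, not facts about the defendant.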

Humans rely on "inherently biased personal experience to guide their judgments." Professionals in the criminal justice system "have a seemingly impossible task." They must examine the "probability that a criminal defendant will show up to trial, whether they are guilty, what the sentence should be, whether parole is deserved and what type of probation ought to be imposed." These decisions "require immense wisdom, analytical prowess, and evenhandedness to get right" (Watney). Once a human court makes its decision, it changes the course of people's lives forever. Because human judgment carries this bias, empirically grounded questions of predictive risk analysis play to the strengths of machine learning and other forms of AI. Explicit ethical agents like Compas can "identify and process ethical information about a variety of situations and make sensitive determinations about what should be done," and when ethical principles conflict they "can work out reasonable resolutions" (Moor 2014). As the research cited above suggests, such an AI could cut crime with no change in jailing rates, or reduce jail populations with no increase in crime rates. Yet although court AI can bring many benefits, there are underlying issues with this technology that we must face.

This AI should respect its society's laws and every human's core values. Some might say that it fails to do both, and that it lacks the empathy of a human being. The algorithms behind AIs like Compas are hidden and owned by private companies. Neural networks, "a deep learning algorithm meant to act as the human brain, cannot be transparent because of their very nature." These algorithms are not explicitly programmed; "a neural network creates connections on its own. This process is hidden and always changing, which runs the risk of limiting a judge's ability to render a fully informed decision and defense counsel's ability to zealously defend their clients" (Tashea). It can be very dangerous to trust these algorithms, because a neural network constantly changes and learns on its own. The legal community "has never fully discussed the implications of algorithmic risk assessments." Now, professionals are "grappling with the lack of oversight and impact of these tools after their proliferation" (Tashea). Clearly, we cannot let unchecked algorithms blindly drive the criminal justice system. Not every conviction in the US justice system involves this AI, and the AI creates a policy vacuum of its own: how can we use an unchecked algorithm to convict citizens? There is currently no direct policy for when this AI should be used or how much it can be trusted. When we feed historic data to train the AI, how can we expect every historic case to have been just? Does this AI value human life, or does it simply encode the biases of previous cases? This area of AI poses a huge risk to human life because there are not enough policies surrounding it.
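The worry about biased historic data can be demonstrated directly. The sketch below trains a classifier on synthetic records in which one group's recorded arrests are inflated by heavier past policing; the group label itself is never shown to the model, yet that group's innocent members are flagged high-risk far more often, echoing the disparity ProPublica reported. All numbers are invented for illustration; this is not an audit of Compas.

```python
# A minimal sketch of how biased historic data leaks into a model's
# errors. The data is synthetic: "prior_arrests" stands in for any
# feature inflated by uneven past policing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)        # 0 or 1; never given to the model
reoffend = rng.random(n) < 0.3       # true outcome, same rate in both groups

# Historic policing inflates recorded arrests for group 1 regardless of
# actual behaviour, so the proxy feature carries the group signal.
prior_arrests = reoffend * 2 + group * 2 + rng.poisson(1, n)

X = prior_arrests.reshape(-1, 1)
model = LogisticRegression().fit(X, reoffend)
predicted_high_risk = model.predict(X).astype(bool)

for g in (0, 1):
    innocent = (~reoffend) & (group == g)
    fpr = (predicted_high_risk & innocent).sum() / innocent.sum()
    print(f"group {g}: false positive rate = {fpr:.2f}")
# Group 1's innocent members are flagged high-risk far more often,
# even though the group label itself was never an input.
```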

We must focus on combining the power of AI in courts with our existing judges. AIs like Compas are still very far from being able to make decisions on their own. We should let court professionals make the decisions while using tools like Compas to improve justice. As Johnston reports, "this sort of tool would improve efficiencies of high-level, in-demand courts, but, to become a reality, we need to test it against more articles and the case data submitted to the court" (Johnston). AIs can help judges and lawyers rapidly identify patterns in cases that lead to certain outcomes, as the sketch below illustrates. Finding the right balance between AI and human judgment in the justice system will be difficult, but we must keep working to reduce its flaws. To reach that goal, we will need systems and institutions that ensure proper transparency and due process.
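As one illustration of the assistive role argued for above, the sketch below ranks past cases by textual similarity to a new one, so a judge or lawyer can pull up relevant precedent faster while keeping the decision itself. The tiny case corpus and the TF-IDF approach are assumptions made for the example, not a tool described by any of the sources.

```python
# A minimal sketch of assistive pattern-finding: surface past cases
# similar to a new one for human review. The "case files" are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "armed robbery of a convenience store, first offense, plea deal",
    "drive-by shooting, defendant drove the vehicle, prior record",
    "failure to appear for a scheduled court date on a theft charge",
]
new_case = "defendant accused of driving the car in a drive-by shooting"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(past_cases + [new_case])

# Compare the new case against every past case and rank by similarity.
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
for score, case in sorted(zip(scores, past_cases), reverse=True):
    print(f"{score:.2f}  {case}")
```

The design point is that the ranking only suggests where to look; the judgment about what a precedent means remains with the human professional.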

Works Cited

Bynum, Terrell Ward, and Simon Rogerson, editors. Computer Ethics and Professional Responsibility. Blackwell Publishing, 2004.

Johnston, Chris. "Artificial Intelligence 'Judge' Developed by UCL Computer Scientists." The Guardian. Guardian News and Media, 23 Oct. 2016. Web. 9 Oct. 2017. <https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists>.

Moor, James. "Four Kinds of Ethical Robots." Philosophy Now: A Magazine of Ideas. Philosophy Now, Mar.-Apr. 2014. Web. 9 Oct. 2017. <https://philosophynow.org/issues/72/Four_Kinds_of_Ethical_Robots>.

Tashea, Jason. "Courts Are Using AI to Sentence Criminals. That Must Stop Now." Wired. Conde Nast, 2 June 2017. Web. 9 Oct. 2017. <https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/>.

Vincent, James. "AI Program Able to Predict Human Rights." The Verge. N.p., 24 Oct. 2016. Web. 11 Nov. 2017. <https://www.theverge.com/2016/10/24/13379466/ai-judge-european-human-rights-court-prediction>.

Watney, Caleb. "It's Time for Our Justice System to Embrace Artificial Intelligence." Brookings. Brookings, 20 July 2017. Web. 9 Oct. 2017. <https://www.brookings.edu/blog/techtank/2017/07/20/its-time-for-our-justice-system-to-embrace-artificial-intelligence/>.