There is currently a flurry of interest in AI ethics. The Pentagon is wrestling with establishing AI ethics for military use. Know-it-all Google is offering its advice on dealing with the tricky ethics of AI.
When examining AI ethics problems, we can distinguish between design ethics and end-user ethics. Design ethics concerns the final performance of an AI product. End-user ethics concerns how the designed AI is used.
Assuring that AI weapons work as desired is the aim of design ethics. Moral concerns, such as opposition to autonomous AI weapons that kill, are examples of end-user ethics. Design ethics is always the responsibility of the AI programmers and system testers; end-user ethics is the responsibility of those who decide how the technology is deployed.
Legal language can be applied to the vetting of AI design standards. Some AI makes frequent mistakes, such as Alexa misinterpreting voice requests to play Spotify tunes. But these are mistakes we can live with. A legal standard here might be that there is a “preponderance of evidence” that Alexa works. Self-driving cars are a more serious matter. Before considering riding in a self-driving car, I would like to know that the vehicle operates as intended “beyond a reasonable doubt.” Total certainty is, unfortunately, never possible.
Measuring the different levels of design assurance is the task of those making regulatory policy and standards. How are “preponderance of evidence” and “beyond a reasonable doubt” quantified? There is much from the field of reliability engineering that can be applied here.
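One standard tool from reliability engineering is the confidence bound on a failure rate estimated from testing. As a sketch of how “beyond a reasonable doubt” might be made quantitative, the snippet below computes a one-sided Clopper-Pearson upper bound on the true failure rate given the number of failures observed in a test run. The function names and the self-driving scenario in the comment are illustrative assumptions, not an established regulatory standard.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def upper_failure_bound(failures, trials, confidence=0.95):
    """One-sided Clopper-Pearson upper bound on the true failure
    rate: the p at which observing this few failures becomes
    implausible at the given confidence, found by bisection."""
    alpha = 1 - confidence
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisect to high precision
        mid = (lo + hi) / 2
        if binom_cdf(failures, trials, mid) > alpha:
            lo = mid  # still plausible; true bound is higher
        else:
            hi = mid
    return hi

# Hypothetical example: a test fleet logs zero critical failures in
# 10,000 trips. The 95% upper bound on the per-trip failure rate is
# still about 3 in 10,000, so "beyond a reasonable doubt" for a
# tighter rate would demand far more testing.
print(upper_failure_bound(0, 10_000))
```

This is the statistical core behind the familiar “rule of three”: with zero failures in n trials, the 95% upper bound on the failure rate is roughly 3/n. Where regulators draw the line between a “preponderance of evidence” bound and a “beyond a reasonable doubt” bound is exactly the policy question raised above.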
Is biased AI ethical? A headline from Wired reads “AI Is Biased. Here’s How Scientists Are Trying to Fix It.” In the most basic sense, all software, including AI, is infused with bias. How can a computer add numbers without being biased toward accepting the fundamentals of arithmetic? Without the guiding bias of the programmer, computer programs can do nothing. AI without bias is like ice cubes without cold.