
Ethics scholar gives insight on artificial intelligence and role of ethics

March 10, 2020


The subject of artificial intelligence and the role of ethics is gaining news coverage in recent stories such as “Pentagon to adopt detailed principles for using AI” and “AI ethics backed by Pope and tech giants in new plan.”

Auburn University ethics scholar Dr. O.C. Ferrell comments on concerns, policies, societal benefits and challenges to implementation. He is the James T. Pursell Sr. Eminent Scholar in Ethics and director of the Center for Ethical Organizational Cultures in Auburn’s Harbert College of Business.

Dr. Ferrell on Ethics of AI

What should be our key concerns with AI as its use grows in prominence?

While AI tech firms such as Google, Facebook, Amazon, IBM and Microsoft are embracing AI in their operations, the key risks of simulating the cognitive functions associated with humans are only now being addressed. AI systems that think like humans through machine learning will have to make ethical decisions. While ethics relates to principles, values and norms, the algorithms, or sets of rules simulating human intelligence, are developed by programmers who may have limited ethical knowledge. At our current stage of development, AI cannot internalize human principles and values. The result has been discrimination, bias and intrusive surveillance in some cases.

Most fully enabled AI can result in unanticipated outcomes. For example, in one case two fully enabled AI systems used machine learning to develop their own language and started communicating with each other in a manner not understood by humans. As AI systems learn from experience, develop solutions to problems, make predictions and take actions, human oversight is needed to disengage or deactivate systems that produce unethical outcomes. The mass media reports traditional misconduct by humans every day; machines, likewise, have to be governed by organizational ethics programs tailored to their risk areas. At this stage of development, human control and oversight systems must be in place.

How important is it to develop organizational policies for how AI will be developed and implemented?

AI is transforming decision making in the private sector, public services and the military. The Department of Defense (DOD) recognizes the importance of developing principles and policies to address AI ethics. There are no standardized values or core practices for building decision-making systems involving machine learning. The Defense Innovation Board developed a set of principles for the ethical use of AI for the DOD. While various professions such as engineering and medical associations have developed ethical principles, AI safety, security and robustness require principles as a first step in opening a dialogue about how to address risks. As a starting point, the DOD believes the principles should reflect the values and principles of the American people as well as uphold all international laws and treaties related to the conduct of armed forces. This approach could be used by the private sector, based on existing ethical values and accepted core practices that are applied to behavior not enabled by AI. The AI principles developed by the Defense Innovation Board address responsible, equitable, traceable, reliable and governable actions.

Robert Bosch GmbH, a German engineering firm, is taking this approach with an ethics-based AI training program for 16,000 executives and developers. Part of the training includes a new code of ethics emphasizing principles including human control. The principles include: “invented for life” with social responsibility; AI as a tool for people; safe, robust and explainable AI products; trust as a key value; and legal compliance. There are almost 100 private-public initiatives to guide AI ethics, but most are designed for humans, not machines. While these principles may not be programmed into algorithms, they can be understood by humans. Machines and humans need to work together.

Do you see the growing use of AI as more of a benefit to society, a detriment or perhaps a bit of both?

AI is a technology system that is not inherently good or bad. It is basically an enabling technology that can allow robots and drones to carry out operations. It can take big data and, through predictive analytics, make decisions and implement operations and actions. Therefore, AI should not be viewed as a threat any more than other technologies, like computers. The risk of using this technology relates to appropriate implementation and to its power to make decisions and learn from experience, which allows it to go beyond human decision makers. For example, in medicine, machine learning can find statistical significance across millions of features, examples or data points. AI can therefore exceed human ability in performing tasks quickly, learning about the nature of complex relationships, and making it possible for clinicians to provide reliable information to patients. But AI in some medical fields has been found to create biases in the data, and racial biases could possibly become part of the algorithms.

In the military, AI would have to be reviewed so that control of a weapon would not cause unnecessary death and destruction. While AI can help ensure the safety and reliability of weapons systems, there will need to be human controls for disengagement if necessary. At this stage of development, society should not fear AI, because it has the potential to improve the quality of life and operational efficiency. On the other hand, until the issues of privacy and bias, as well as other ethical issues, are addressed, AI should augment rather than replace humans.

What do you believe will be society’s biggest challenge in successfully implementing AI to its fullest and best use?

AI systems are not capable of mastering some of the strongest attributes of human intelligence. AI is a system of algorithms, or rules, programmed for a specific task. In other words, AI does not have the creativity and common sense that humans use to take knowledge and apply it to a completely different context. While AI has the ability to develop predictive analytics, learn from big data and make decisions, its capabilities are different from human decision making. AI works through algorithms, a series of rules or steps, to construct a desired outcome. Humans are better positioned to apply principles and values to ambiguous situations, and this creates a dilemma for incorporating ethics into AI decisions.

Principles are pervasive, rule-based boundaries for behavior. Unfortunately, there are no proven ways to translate principles into algorithms with legal and professional accountability. Values, on the other hand, are general beliefs that are used to develop norms that are socially enforced. There is always the possibility of ethical conflict, even when using the same set of values, so highly difficult ethical decisions may need an organizational mechanism for resolving questionable issues. At this stage of AI development, one of the biggest challenges will be incorporating values into AI decisions. Some are turning to philosophical theories to resolve ethical decision making, but machines cannot take philosophical theories such as social justice and consequentialism and apply them to outcomes. Ethical decisions involve many judgments that will be very difficult to program into a series of algorithms. Developing AI for the common good of society will require integrating machine learning with the innate ability of humans to use their cognitive abilities and values to achieve desired outcomes.
