
        The Director's Activities

        Questions About Changes in Business Ethics Related to COVID-19

        How is the 2020 COVID-19 pandemic different from the 1918 Spanish Flu pandemic? How has it impacted firms’ integrity and responsibilities to stakeholders?

        The Spanish Flu infected about one-third of the world’s population and killed about 675,000 Americans. People were asked to wear masks, and, much as in the COVID-19 pandemic of 2020, schools, theatres, and businesses were closed for some time. What has changed is medical knowledge, technology, advanced communication, and the global connectedness of the world. Polio vaccine tests started in 1935, and a vaccine developed by Jonas Salk was licensed for public use in 1955.

        Today, we have the capability to develop vaccines in 1–2 years. In addition, our communication systems allow social and work environments to shift to virtual interaction. For example, Zappos transitioned its Las Vegas, Nevada, staff to remote work across all departments and provided instructional videos to teach team members how to set up a home office.

        The current pandemic has provided an opportunity for firms to contribute to helping both employees and the public by taking socially responsible actions and conducting business virtually and online. Apple Inc. donated millions of dollars in personal protective equipment (PPE) and other support to both the United States and China.

        Many other businesses stepped up by making masks, hand sanitizers, and ventilators to support the health care community and patients. Budweiser produced and distributed more than 500,000 bottles of hand sanitizer. Others—such as Wendy’s, Dunkin’, and Taco John’s—provided free food to health care workers and first responders, those at higher risk due to operating on the front line, and individuals in need. 

        How has business ethics changed during the pandemic? What will be the long-term effect on employees? 

        Business ethics involves and affects every employee in an organization, while social responsibility decisions about how to make a positive impact are often made by top management. The COVID-19 pandemic has created many new ethics issues. It has become more important than ever to navigate the risks associated with health, safety, and privacy.

        It has become the responsibility of each employee to be accountable and to comply with policies that protect coworkers and customers. Wearing masks and maintaining six feet of distance require discipline and respect. Monitoring employees working from home creates privacy issues, just as tracking COVID-19 contacts via smartphones can.

        In a distributed workforce, creating and maintaining an ethical organizational culture will become increasingly challenging. In a traditional organization, there is proximate, supervisory oversight and interaction. What will this look like in our evolving and transforming teleworkforce?

        There will be long-term changes in the importance of responsible and authentic care for both employees and customers. The CEO of Yum! Brands, the company behind KFC, Pizza Hut, and Taco Bell, among others, gave up his 2020 salary to fund bonuses for general managers and an employee medical relief fund. The fund supports franchise restaurant workers and corporate employees who receive a COVID-19 diagnosis or who are caring for someone suffering from COVID-19. Companies such as Hormel, Walmart, and Kroger increased bonuses at a time when many organizations were eliminating employees. The way companies treat employees during the COVID-19 pandemic stands to define those organizations for years to come.

        This is an opportunity for leading brands to make a difference in their communities by providing products that make it possible to live and work from home. Online retailers such as Amazon have strengthened their relationships with customers by supporting the home’s new dual role as living and work space. Daily downloads of the Zoom videoconferencing platform increased more than 30-fold year over year, with total users reaching more than 200 million.

        The pandemic has created many ethical challenges. In the education arena, how do you balance health and safety against educational needs and economic viability?

        Businesses have faced challenges involving whether to open, when to open, and when to go out of business. Big-box stores and online retailers have fared much better during the pandemic than smaller retailers such as restaurants, dry cleaners, and other service businesses. Retailers with both a brick-and-mortar and an online presence had an advantage: while online-only retailers faced inventory stockouts, retailers with stores could tap into store inventory.

        Walmart, Home Depot, Walgreens, and others were deemed essential. As such, they faced the challenges of protecting their workforce from infection and providing a safe shopping environment. To protect customers and associates, Home Depot shortened store hours to allow for more thorough sanitization, limited the number of customers allowed in the store at one time, installed plexiglass shields to separate customers from employees, supplied thermometers so team members could perform health checks before their shifts, and provided face masks and gloves to associates. The company also eliminated major sales promotions, no doubt taking a financial hit, to avoid driving unnecessary traffic to its stores.

        What are the ethical expectations, responsibilities, and consequences for employees who work from home with varying levels of supervision?

        Many employees not involved in face-to-face service exchanges moved to working from home. Two major issues developed that affected both workers and firms. Some employees were unable to carry out their responsibilities because of childcare, the need for in-home education, and other distractions of attempting to work in a non-office environment. Others had a hard time separating work from their personal lives; they worked long hours, and some felt increasing pressure to the point of developing mental health issues.

        Working from home is becoming the new normal. In the five years before the pandemic, remote work increased 44 percent, according to Flexjobs.com. Now, Twitter will allow employees to work from home indefinitely, and Morgan Stanley CEO James Gorman went so far as to say the bank would likely need less commercial real estate post-pandemic.

        Working at home requires boundaries and discipline. Friends and neighbors need to know that you’re working from home and are not as accessible as you are at non-work times. Double-dipping, billing more than one client for the same hours, is an ethical issue for those who bill by the hour. Though some ethical issues, such as harassment, bullying, and personal use of organizational resources, may actually decline, the biggest risk, time theft, is one of the most challenging issues in any organization. Developing shared organizational and ethical leadership skills may be challenging. Any way you slice it, just as we have had to learn how to stay e-connected, we will now need to find other ‘e-ways’ to manage and lead through this change and beyond.

        Artificial Intelligence and the Role of Ethics

        The role of ethics in artificial intelligence (AI) is gaining news coverage, with recent stories such as "Pentagon to Adopt Detailed Principles for Using AI" and "AI Ethics Backed by Pope and Tech Giants in New Plan."

        Auburn University ethics scholar Dr. O.C. Ferrell comments on concerns, policies, societal benefits, and challenges to implementation. He is the James T. Pursell Sr. Eminent Scholar in Ethics and Director of the Center for Ethical Organizational Cultures in Auburn’s Harbert College of Business.

        What should be our key concerns with AI as its use grows in prominence?

        While tech firms such as Google, Facebook, Amazon, IBM, and Microsoft embrace AI in their operations, the key risks of simulating the cognitive functions associated with humans are just being addressed. AI systems that think like humans through machine learning will have to make ethical decisions.

        While ethics relates to principles, values, and norms, the algorithms, or sets of rules simulating human intelligence, are developed by programmers who may have limited ethical knowledge. At its current stage of development, AI cannot internalize human principles and values. The result, in some cases, has been discrimination, bias, and intrusive surveillance.

        Even the most fully enabled AI can result in unanticipated outcomes. For example, in one case two fully enabled AI systems used machine learning to develop their own language and started communicating with each other in a manner not understood by humans. As AI systems learn from experience, develop solutions to problems, make predictions, and take actions, there is a need for human oversight to disengage or deactivate systems that produce unethical outcomes.

        The mass media report traditional misconduct by humans every day. Machines will have to be regulated by organizational ethics programs tailored to their risk areas. At this stage of development, human control and oversight systems must be in place.

        How important is it to develop organizational policies for how AI will be developed and implemented?

        AI is transforming decision making in the private sector, public services, and the military. The Department of Defense (DOD) recognizes the importance of developing principles and policies to address AI ethics. There are no standardized values or core practices for building decision-making systems that involve machine learning.

        The Defense Innovation Board developed a set of principles for the ethical use of AI by the DOD. While various professions, such as engineering and medicine, have developed ethical principles through their associations, AI safety, security, and robustness require principles as a first step in opening a dialogue about how to address risks. As a starting point, the DOD believes the principles should reflect the values and principles of the American people and should uphold all international laws and treaties related to the conduct of armed forces. The private sector could take the same approach, building on existing ethical values and accepted core practices that already apply to behavior not enabled by AI.

        The AI principles developed by the Defense Innovation Board address responsible, equitable, traceable, reliable, and governable actions. Robert Bosch GmbH, a German engineering firm, is taking this approach with an ethics-based AI training program for 16,000 executives and developers. Part of the training includes a new code of ethics emphasizing human control. The principles include: "invented for life" with social responsibility; AI as a tool for people; safe, robust, and explainable AI products; trust as a key value; and legal compliance.

        There are almost 100 private-public initiatives to guide AI ethics, but most are designed for humans rather than machines. While these principles may not be programmable into algorithms, they can be understood by humans, and machines and humans will need to work together.

        Do you see the growing use of AI as more of a benefit to society, a detriment, or perhaps a bit of both?

        AI is a technology system that is not inherently good or bad. It is essentially an enabling technology that allows robots and drones to carry out operations. It can take big data and, through predictive analytics, make decisions and implement operations and actions. Therefore, AI should not be viewed as a threat any more than other technologies such as computers.

        The risk of using this technology relates to appropriate implementation and to its power to make decisions and learn from experience, which allows it to go beyond human decision makers. For example, in medicine, machine learning can find statistical significance across millions of features, examples, or data points. AI can therefore exceed human ability in performing tasks quickly and in learning the nature of complex relationships, making it possible for clinicians to provide reliable information to patients. But AI in some medical fields has been found to create biases based on the data it is trained on; racial biases could become embedded in the algorithms.

        In the military, AI would have to be reviewed so that control of a weapon would not cause unnecessary death and destruction. While AI can ensure the safety and reliability of weapons systems, there will need to be human controls for disengagement, if necessary. At this stage of development, society should not fear AI, because it has the potential to improve quality of life and operational efficiency. On the other hand, until privacy, bias, and other ethical issues are addressed, AI should supplement rather than replace humans.

        What do you believe will be society’s biggest challenge in successfully implementing AI to its fullest and best use?

        AI systems are not capable of mastering some of the strongest human intelligence attributes. An AI system is a set of algorithms, or rules, programmed for a specific task. In other words, AI does not have the creativity and common sense that humans use to take knowledge and apply it to a completely different context. While AI can develop predictive analytics, learn from big data, and make decisions, its capabilities are different from human decision making. AI works through a series of rules or steps to construct a desired outcome, whereas humans are better able to apply principles and values to ambiguous situations. This creates a dilemma for incorporating ethics into AI decisions.

        Principles are pervasive, rule-based boundaries for behavior. Unfortunately, there are no proven ways to translate principles into algorithms with legal and professional accountability. Values, on the other hand, are general beliefs that are used to develop socially enforced norms. There is always the possibility of ethical conflict, even among parties using the same set of values. Highly difficult ethical decisions may need an organizational mechanism for resolving questionable issues.

        At this stage of AI development, one of the biggest challenges will be incorporating values into AI decisions. Some are turning to philosophical theories to resolve ethical decision making, but machines cannot take philosophical theories such as social justice and consequentialism and apply them to outcomes. Many judgments go into making ethical decisions, and they will be very difficult to program as a series of algorithms. Developing AI for the common good of society will require integrating machine learning with the innate ability of humans to use their cognitive ability and values to achieve desired outcomes.

        O.C. Ferrell

        Dr. O.C. Ferrell is the Director of the Center for Ethical Organizational Cultures and the James T. Pursell Sr. Eminent Scholar in Ethics in Auburn’s Harbert College of Business.

