Gain access to valuable instructional and training resources for ethical leadership through the Center for Ethical Organizational Cultures. Through the Center, you can find information on current research projects supported by the Harbert College of Business, as well as information on the student organization (Student Center for Public Trust) supported by the Center.
Dr. O.C. Ferrell and Mr. James T. Pursell
The Center was established in 2008 with support from Mr. James T. Pursell, Sr. of Sylacauga, AL, to address ethical conduct in business. The Center is dedicated to “producing high-demand graduates and generating knowledge that drives business thought and practice.” Preparing ethical leaders and supporting ethical organizational cultures provides a foundation for success in any organization.
Dr. Achilles Armenakis was the founding Director in 2008. Dr. O.C. Ferrell became Director of the Center in the Summer of 2017. Dr. Ferrell continues to establish contacts and identify individuals/organizations and create programs that will contribute to the Center’s mission. Faculty, students, and external constituents are involved in advancing the activities of the Center.
To address ethical conduct in business, the Center for Ethical Organizational Cultures (the Center) is dedicated to “producing high-demand graduates and generating knowledge that drives business thought and practice.” Organizational cultures provide the foundation for ethical decision making. While individual integrity is important, each profession has its own set of ethical risks and standards of behavior required for success. Ethical leaders are necessary to implement and maintain shared values that create an ethical organizational culture.
To meet this need, the Center focuses on providing resources and activities that support ethical conduct in any type of organization. Faculty are provided resources for support in the teaching of ethics in academic programs, as well as research and scholarship. Students are provided educational opportunities and activities to advance their ethical leadership skills. Businesses are provided access to resources that aid leaders in advancing organizational ethical cultures.
The Student Center for Public Trust has 37 chapters in 18 states and consists of students that show dedication to ethical leadership. This organization focuses on the importance of not only ethical leadership but also the necessity for accountability, integrity and trust across all avenues of business. Student CPT chapters have resources, mentors and networking available due to the NASBA (National Association of State Boards of Accountancy) Center for Public Trust.
The Center for Ethical Organizational Cultures provides teaching resources in the form of a wide variety of cases, debate issues and classroom simulations. These resources are meant to spark discussions about different types of ethical dilemmas and decisions.
Many of our simulations offer the opportunity for students to work as a team on a role-playing exercise that imitates a real-world issue in organizational ethics.
The Center for Ethical Organizational Cultures is dedicated to staying informed of the latest trends in organizational cultures. Check out some of the current projects we are working on below.
- AMS: Code of Ethics
- Marketing Professors: Good ethics is good business
- Business ethics outweighs corporate social responsibility
- Wells Fargo's Organizational Culture
The Spanish Flu infected about one-third of the world’s population and killed about 675,000 Americans. People were asked to wear masks, and as in 2020, schools, theaters, and businesses were closed for a time. What has changed since then is medical knowledge, technology, and the advanced communication and global connectedness of the world. Polio vaccine trials began in 1935, and a vaccine developed by Jonas Salk was licensed for public use in 1955. Today, we have the capability to develop vaccines in 1–2 years. In addition, our communication systems make it possible to shift social and work environments to virtual interaction. For example, Zappos transitioned its Las Vegas, Nevada, staff to remote work across all departments and provided instructional videos to teach team members how to set up a home office.
The current pandemic has given firms an opportunity to help both employees and the public through socially responsible actions and by conducting business virtually and online. Apple Inc. donated millions of dollars in personal protective equipment (PPE) and other support to both the U.S. and China. Many other businesses ‘stepped up’ by making masks, hand sanitizer, or ventilators to support the health care community and patients. Budweiser produced and distributed more than 500,000 bottles of hand sanitizer. Others, such as Wendy’s, Dunkin’, and Taco John’s, provided free food to health care workers and first responders, who face higher risk on the front line, as well as to individuals in need.
Business ethics involves and affects every employee in an organization, while social responsibility decisions about how to make a positive impact are often made by top management. The COVID-19 pandemic has created many new ethical issues. Ethics has become more important to navigate the risks associated with health, safety, and privacy. It has become the responsibility of each employee to be accountable and comply with policies to protect one another and customers. Wearing masks and maintaining 6 feet of distance require discipline and respect. Monitoring employees working from home creates privacy issues, just as tracking COVID-19 contacts using smartphones can create privacy issues.
In a distributed workforce, creating and maintaining an ethical organizational culture will become increasingly challenging. In a traditional organization, there is proximal and supervisory oversight and interaction. What will this ‘look like’ in our evolving and transforming tele-workforce?
There will be long-term changes in the importance of responsible and authentic care for both employees and customers. The CEO of Yum! Brands, the company behind KFC, Pizza Hut, and Taco Bell, among others, gave up his 2020 salary to fund bonuses for general managers, as well as an employee medical relief fund to support franchise restaurant workers and corporate employees with a COVID-19 diagnosis or those caring for someone suffering from COVID-19. Companies such as Hormel, Walmart, and Kroger increased bonuses at a time when most organizations were laying off employees. The way companies treat employees during the COVID-19 pandemic stands to define those organizations for years to come.
This is an opportunity for leading brands to make a difference in their communities by providing products that make it possible to live and work from home. Online retailers such as Amazon have strengthened their relationships with customers by supporting the new duality of home life. Daily downloads of the Zoom videoconferencing platform increased more than 30 times year-over-year, with total users reaching more than 200 million.
The pandemic has created many ethical challenges. In the education arena, how do you balance health and safety versus educational needs versus economic viability? Businesses have faced conflicting challenges involving whether to open, when to open, and when to go out of business. Big box stores and online retailers have fared much better during the pandemic than smaller retailers such as restaurants, dry cleaners, and other service businesses. Retailers with both a brick-and-mortar and an online presence had an advantage: online-only retailers faced inventory stockouts, while brick-and-mortar retailers could tap into store inventory.
Retailers such as Walmart, Home Depot, and Walgreens were deemed essential, and as such faced the challenge of protecting their workforces from infection while providing a safe shopping environment. To protect the safety of customers and associates, Home Depot shortened store hours to allow for more thorough sanitization, limited the number of customers allowed in the store at one time, installed plexiglass shields to separate customers from employees, supplied thermometers for team members to perform health checks before their shifts, and provided face masks and gloves to associates. The company also eliminated major promotions, no doubt taking a financial hit, to avoid driving unnecessary traffic to its stores.
Many employees not involved in face-to-face service exchanges moved to working from home. Two major issues developed that impacted both the worker and the firm. Some employees were unable to carry out their responsibilities due to childcare, the need for in-home education, and other distractions of attempting to work in a non-office environment. On the other hand, some employees had a hard time separating work from their personal lives. These individuals worked long hours, and some felt increasing pressure to the point of developing mental health issues.
Working from home is becoming the new normal. In the five years before the pandemic, remote work grew 44 percent, according to Flexjobs.com. Now, Google will let employees work from home until at least July 2021, and Twitter will allow employees to work from home permanently. Morgan Stanley CEO James Gorman went as far as to say the bank would likely need less commercial real estate post-pandemic.
Working at home requires boundaries and discipline. Friends and neighbors need to know that you are working from home and are not as accessible as you would be during non-work time. Double-dipping is an ethical issue when billing clients by the hour. Though some ethical issues, such as harassment, bullying, and personal use of organizational resources, may actually decline, the biggest risk is time theft, one of the most challenging issues in any organization. Shared organizational values and ethical leadership skills may also become harder to develop. Any way you slice it, just as we have had to learn how to stay e-connected, we now need to find other ‘e-ways’ to manage and lead through this change and beyond.
Director of the Center for Ethical Organizational Cultures and James T. Pursell Sr. Eminent Scholar
The subject of artificial intelligence and the role of ethics is gaining news coverage in recent stories such as “Pentagon to adopt detailed principles for using AI” and “AI ethics backed by Pope and tech giants in new plan.”
Auburn University ethics scholar Dr. O.C. Ferrell comments on concerns, policies, societal benefits and challenges to implementation. He is the James T. Pursell Sr. Eminent Scholar in Ethics and director of the Center for Ethical Organizational Cultures in Auburn’s Harbert College of Business.
What should be our key concerns with AI as its use grows in prominence?
While AI tech firms such as Google, Facebook, Amazon, IBM and Microsoft are embracing AI in their operations, the key risks of simulating human cognitive functions are only beginning to be addressed. AI systems that think like humans through machine learning will have to make ethical decisions. While ethics relates to principles, values and norms, the algorithms, or sets of rules simulating human intelligence, are developed by programmers who may have limited ethical knowledge. At its current stage of development, AI cannot internalize human principles and values. The result in some cases has been discrimination, bias and intrusive surveillance. Fully enabled AI can also produce unanticipated outcomes. For example, in one case two fully enabled AI systems used machine learning to develop their own language and began communicating with each other in a manner not understood by humans. As AI systems learn from experience, develop solutions to problems, make predictions and take actions, there is a need for human oversight that allows disengagement or deactivation of systems with unethical outcomes. The mass media report traditional misconduct by humans every day; machines have to be governed by organizational ethics programs related to their risk areas. At this stage of development, human control and oversight systems must be in place.
How important is it to develop organizational policies for how AI will be developed and implemented?
AI is transforming decision making in the private sector, public services and the military. The Department of Defense (DOD) recognizes the importance of developing principles and policies to address AI ethics. There are no standardized values or core practices for building decision-making systems involving machine learning. The Defense Innovation Board developed a set of principles for the ethical use of AI by the DOD. While various professions, such as engineering and medical associations, have developed ethical principles, AI safety, security and robustness require principles as a first step in opening a dialogue about how to address risks. As a starting point, the DOD believes the principles should reflect the values and principles of the American people as well as uphold all international laws and treaties related to the conduct of armed forces. This approach could be used by the private sector, based on existing ethical values and accepted core practices that already apply to behavior not enabled by AI. The AI principles developed by the Defense Innovation Board address responsible, equitable, traceable, reliable and governable actions. Robert Bosch GmbH, a German engineering firm, is taking this approach with an ethics-based AI training program for 16,000 executives and developers. Part of the training includes a new code of ethics emphasizing principles including human control. The principles include: “invented for life” with social responsibility; AI as a tool for people; safe, robust, and explainable AI products; trust as a key value; and legal compliance. There are almost 100 private-public initiatives to guide AI ethics, but most are designed for humans and not machines. While these principles may not be programmable into algorithms, they can be understood by humans. Machines and humans need to work together.
Do you see the growing use of AI as more of a benefit to society, a detriment or perhaps a bit of both?
AI is a technology system that is not inherently good or bad. It is basically enabling technology that can allow robots and drones to carry out operations. It can take big data and, through predictive analytics, make decisions and implement operations and actions. Therefore, AI should not be viewed as a threat any more than other technologies, like computers. The risk of using this technology relates to appropriate implementation and its power to make decisions and learn from experience, enabling it to go beyond human decision makers. For example, in medicine, machine learning can find statistical significance across millions of features, examples or data points. Therefore, AI can exceed human ability in performing tasks quickly and learning about the nature of complex relationships, making it possible for clinicians to provide reliable information to patients. But AI in some medical fields has been found to reflect biases in the data; racial biases could be embedded in the algorithms.
In the military, AI would have to be reviewed so that control of a weapon would not cause unnecessary death and destruction. While AI can ensure the safety and reliability of weapons systems, there will need to be human controls for disengagement, if necessary. At this stage of development, society should not fear AI, because it has the potential to improve quality of life and operational efficiency. On the other hand, until the issues of privacy and bias, as well as other ethical issues, are addressed, it should augment rather than replace humans.
What do you believe will be society’s biggest challenge in successfully implementing AI to its fullest and best use?
AI systems are not capable of mastering some of the strongest human intelligence attributes. An AI system is a set of algorithms, or rules, programmed for a specific task. In other words, AI does not have the creativity and common sense that humans use to take knowledge and apply it to a completely different context. While AI has the ability to develop predictive analytics, learn from big data, and make decisions, its capabilities are different from human decision making. AI works with algorithms, a series of rules or steps, to construct a desired outcome. Humans have a better opportunity to apply principles and values to ambiguous situations. This creates a dilemma for incorporating ethics into AI decisions. Principles are pervasive, rule-based boundaries for behavior. Unfortunately, there are no proven ways to translate principles into algorithms with legal and professional accountability. On the other hand, values are general beliefs and are used to develop norms that are socially enforced. There is always the possibility of ethical conflict, even when using the same set of values. Highly difficult ethical decisions may need an organizational mechanism for resolution of questionable issues. At this stage of AI development, one of the biggest challenges will be to incorporate values into AI decisions. Some are turning to philosophical theories to resolve ethical decision making, but machines cannot take philosophical theories such as social justice and consequentialism and apply them to outcomes. There are many judgments involved in making ethical decisions that will be very difficult to program as a series of algorithms. Developing AI for the common good of society will require integrating machine learning with the innate ability of humans to use their cognitive abilities and values to achieve desired outcomes.
Below are publications created by Raymond J. Harbert College of Business faculty that are related to organizational ethical cultures. The topics include ethical decision making, ethical cultures, social responsibility, sustainability and stakeholder relationships.
Dr. O.C. Ferrell is the James T. Pursell, Sr. Eminent Scholar in Ethics and Director of the Center for Ethical Organizational Cultures at Auburn University. He has served on the faculty at Belmont University, the University of New Mexico, University of Wyoming, Colorado State University, University of Memphis, Texas A&M University, University of Michigan, Illinois State University, and Southern Illinois University.
Dr. Ferrell holds a Ph.D. in Marketing from Louisiana State University, and an M.B.A. in Marketing as well as a B.A. in Sociology from Florida State University. Dr. Ferrell is President of the Academy of Marketing Science. He was formerly Vice-President of Publications for the Academy of Marketing Science and is Past President of the Academic Council of the American Marketing Association. He serves on the advisory board for Savant Learning. Dr. Ferrell also serves on the Academic Advisory Committee for the Direct Selling Education Foundation. He received the AMS Cutco/Vector Distinguished Educator Award for contributions to the marketing discipline. Additional recognition includes being the first recipient of the Marketing Education Innovation Award from the Marketing Management Association, the Lifetime Achievement Award from the Macromarketing Society, and a special award for service to doctoral students from the Southeast Doctoral Consortium. He has chaired 13 dissertations, with his former students currently serving as deans, associate provosts, CIBER directors, and journal editors, among other roles. Dr. Ferrell is co-author of several leading textbooks, including Business Ethics: Ethical Decision Making and Cases (13th edition), Marketing (19th edition), Marketing Strategy (6th edition), Business and Society (5th edition), Management (4th edition) and Introduction to Business (13th edition). He has published in the Journal of Marketing, Journal of Marketing Research, Journal of the Academy of Marketing Science, Journal of Business Ethics, Journal of Public Policy & Marketing, AMS Review, and Journal of Business Research, among others. He writes weekly business ethics summaries and reviews for the Wall Street Journal with a subscriber list of over 6,000. Dr. Ferrell has served as an expert witness in some high-profile ethics, legal and marketing cases.
Dr. Ferrell is Professor and Roth Family Professor of Marketing and Business Ethics in the Harbert College of Business at Auburn University. Dr. Ferrell earned a Ph.D. from the University of Memphis. She has published in the Journal of the Academy of Marketing Science, AMS Review, Journal of Business Ethics, Journal of Public Policy & Marketing, and Journal of Business Research, among others. She has co-authored numerous books, including Business Ethics, 12th edition; Business & Society, 7th edition; Management, 4th edition; Business: A Changing World, 12th edition; and Business, 7th edition. She serves on the Executive Committee and Board of the DSEF and is on the Cutco/Vector College Advisory Board. She serves on the Board of Governors of the Academy of Marketing Science and is Past President of the Academy of Marketing Science and the Marketing Management Association. She is a recipient of the Marketing Education Innovation Award from the Marketing Management Association and received the DSEF “Circle of Honor” recognition in 2019.
Jordan Burkes graduated from Texas A&M University with a degree in Business Administration, where she served as the Public Relations Officer for the local USO (United Service Organization). She is currently working toward a Master of Business Administration and a Master of Science in Finance at Auburn University, with the objective of working in the publishing industry. She is also Vice President of the Student Center for Public Trust and a writer for the campus newspaper, The Auburn Plainsman.
Director of the Center for Ethical Organizational Cultures