#EthicsMatter - AI Governance and the Ethical Guardian Role of Communicators - Part 2

AI and Ethics: Implications for Communicators

A three-piece article series

Part Two:

AI Governance and the Ethical Guardian Role of Communicators

The use of AI-based tools is profoundly shaping the present and future of communications. While much of the current discussion focuses on the application and implications of AI-based tools for communication, less attention is paid to the more fundamental shifts that AI brings to organisations as a whole: organisations are increasingly relying on AI to manage operations and make decisions. The implications this has for stakeholder relationships and for the reach, role and responsibilities of communications need to be a key focus of future consideration in our field. In this three-piece article series, we examine the implications of AI for the role and responsibilities of communication practitioners. The first part assesses where we are in applying AI in communication practice. The second part explores the fundamental shifts that AI brings to organisations as a whole and the role communication can play at this wider level. The final part will discuss key implications of emerging AI-based practices in communications.

***

Recent studies on AI in communications suggest that communicators may be buying into technologically mediated processes that they do not sufficiently understand. In addition to grappling with the ever-evolving technologies used for communication tasks, communicators face increasing concerns about the ethical implications and impacts of the use of AI more generally by the organisations for which they work. AI may be harmful to stakeholders and cause ethical and reputational concerns in three fundamental ways:

1.     AI may raise evidence concerns about how systems convert vast data into ‘insights’ (which then form the basis of decisions). Evidence may be inconclusive, meaning that decisions are based on mere patterns in vast data repositories, or on inferences drawn simply from correlations. On the other hand, evidence may be misguided, meaning that decisions are based on inadequate inputs, such as incomplete or incorrect data.

2.     AI may raise outcome concerns: system decisions causing harm. Harm can come in the form of immediate and direct unfair outcomes, such as bias and discrimination based on race or gender to the detriment of diversity and inclusion. It may also emerge indirectly and over the long term, for example through technological unemployment or fundamental changes to people’s perceptions as algorithms confine them to particular ‘echo chambers’.

3.     Finally, AI may raise epistemic concerns through potentially inscrutable inputs, opaque algorithmic processes and poorly traceable harmful impacts (commonly referred to as ‘AI opacity’). The self-learning capacity and relative autonomy of AI systems can make it difficult to know how data inputs are processed. Any decisions made on this information may, in turn, be used as data for other decisions, and so on. AI systems use vast data sets which often do not have straightforward, explainable relationships with particular decisions. Furthermore, ethical considerations are difficult to include or detect in data sets and are, therefore, not usually factored into AI processes.

The advance of AI poses challenges to organisational legitimation that are important to communication. These challenges emerge in particular from epistemic concerns with AI. While some concerns, for instance those that come with the strategic necessity to obfuscate (for reasons of functionality, competitiveness, or data privacy), can be addressed through standard accountability frameworks, other concerns cause more fundamental issues. One example is the lack of transparency around how AI systems are developed and deployed: systems are often built by reusing and repurposing code from libraries and then evolve further during testing and deployment through their self-learning capacity. Throughout this process, even software engineers refer to parts of their work as ‘black boxes’, meaning they find it difficult to explain how a particular system handles data to make decisions. There are therefore uncertainties about: the use of potentially sensitive variables such as race and gender; latent and long-term impacts; responsibilities for decisions across vast networks of human and non-human agents; and the norms and values embodied within AI systems.

Such fundamental epistemic concerns demand the special attention of communicators, who are charged with managing an organisation’s legitimacy and reputation, as they raise a critical question: ‘How can organisational legitimacy be strategically managed when stakeholders ask reasonable questions about matters that are fundamentally unanswerable?’

In practice, practitioners have three strategic options: first, a manipulative approach, where communicators actively attempt to shape external expectations in favour and support of organisational conduct. Second, an adaptive approach, where organisations monitor external expectations, rules, and regulations in their environment and work towards compliance. Third, a dialogue approach, where organisations and stakeholders engage in conversation to develop a joint understanding of challenges and of desirable solutions and conduct. We would suggest that the third option is the ethical route. To enable such engagement, communicators can usefully follow three principles.

1.     First, communicators can facilitate an inclusive and continuous debate where all those potentially affected by the processes and decisions of AI systems have equal opportunity and access to the discussion. For instance, several news organisations, such as BuzzFeed, maintain repositories in which the data and code used for data-driven articles are at least partially published, and media outlets, such as The New York Times, upload the datasets they use to feed their machine learning algorithms to GitHub. These platforms offer opportunities for stakeholders to engage. Other organisations, such as Vodafone, set up actual conversations with key stakeholders.

2.     Second, communicators can make efforts to go beyond merely providing information about systems to helping stakeholders genuinely understand them. All those involved in discussions need to be able to comprehend the systems and related issues, as well as the options for solutions and their implications. Practically, this may involve providing examples of how algorithms are constructed, producing visualisations of how machine learning operates, or encouraging exercises in reverse engineering, i.e., showing how results have been arrived at. Information can be mediated by expert third parties trusted by both organisations and end users. Algorithm audits use sophisticated methods that simulate or follow actual algorithm developers and users to determine whether there are any biases or misinterpretations.

3.     Third, communicators can use the above principles of inclusiveness and comprehension as the basis for an open debate where participants get the opportunity to see an issue from all points of view and jointly develop broadly acceptable and reasonable solutions for AI challenges. Such a debate should actively include and empower diverse voices, ideally even those who may not be aware that they are negatively impacted by algorithmic systems.

The above principles place a strong emphasis on involving those affected so that developers and owners do not hold a privileged position in assessing the emerging issues with AI.

In the third and final part of this article series, we will discuss the key implications of emerging AI-based tools in communication practice, as well as of the fundamental shifts that AI brings to organisations as a whole.

About the authors

Alexander Buhmann, Ph.D., is associate professor of corporate communication at BI Norwegian Business School and director of the Nordic Alliance for Communication & Management. Alexander is a member of the expert panel on artificial intelligence at the Chartered Institute of Public Relations (CIPR). Follow Alexander on LinkedIn or Twitter.

Anne Gregory, Ph.D., is professor emeritus of corporate communication at the University of Huddersfield, honorary fellow and former president of the Chartered Institute of Public Relations (CIPR) and past chair of the Global Alliance for Public Relations and Communication Management. Anne is a member of the CIPR expert panel on artificial intelligence. Follow Anne on LinkedIn or Twitter.

Any thoughts or opinions expressed are those of the authors and not of Global Alliance.