Three communicative principles for managing algorithmic accountability

In the second part of our look at accountability in the age of algorithms, we turn to the strategic communication principles that can help organizations overcome the reputation challenges posed by these technologies.


Organizations today are being rapidly transformed by the introduction of artificial intelligence and computerized algorithms. This transformation also affects their relationships with stakeholders.

Part one of this two-part series highlighted the challenges that these technologies pose for managing organizational accountability and reputation. This second part will discuss three strategic principles for communication managers to help their organizations overcome these challenges.

While the growing public concern about algorithms and the challenges they pose to organizational accountability and reputation (see part one of this two-part series) may seem novel, it echoes one of the rather “classical” challenges that communicators help solve for their organizations: issues of public concern have to be addressed in conversation with stakeholders, and organizations have to provide reasons for their actions to safeguard their legitimacy. Herein, however, lies the core challenge: for opaque algorithms, there is no straightforward way to ‘deliver accounts’ at any one point in time. Often, even the programmers who developed these systems struggle to fully explain their actions and decisions.

When organizations’ opaque and highly fluid algorithmic practices become the object of reputational concerns, these organizations cannot hope to merely “deliver” accounts. They need to be prepared to participate in a discursive process together with their stakeholders in order to work towards good practices of account-giving and account-holding over time.

For this, we propose a strategic framework of three key communicative principles of stakeholder engagement that serve as a basis for managing accountability when stakeholder relationships become heavily burdened by algorithmic opacity and fluidity. These principles place a strong emphasis on involving those affected: stakeholders need to be an active part of detecting and assessing the potential shortcomings of algorithms, since the developers and users of algorithms do not necessarily hold a privileged position in assessing these issues.

Bluntly put: if simple accounts cannot be given by one party, the emphasis needs to shift to an inclusive communication process through which a continuous, tentative assessment of the development, workings, and consequences of algorithms can be achieved over time. Communication managers need to be in charge of these processes.

Communication Principle 1: Facilitate Access to Continuous Debate

  • All those who potentially suffer negative effects of the processes and decisions of algorithmic systems should have equal access to a forum and a communicative process that aims to spotlight potential issues and facilitate argumentation.

For instance, a number of news organizations, such as BuzzFeed, maintain repositories in which the data and code used for data-driven articles are at least partially published. The limited functionality of published code in the context of machine learning received early criticism, prompting the development of machine learning repositories such as the UCI Machine Learning Repository, which provides benchmark datasets used to audit machine learning algorithms. Today, media outlets such as The New York Times upload the datasets they use to feed their machine learning algorithms to GitHub.

However, because of the dynamic changes in complex algorithmic systems, the fostering of access to debate must be supplemented by platforms that allow for a sufficient continuity (and not just for debate at selected time points). The fluidity of algorithms necessitates fluid observation and discussion. Rigid certification processes, for instance, would not be able to do justice to the speed at which most complex algorithmic systems change.

Recent suggestions for cooperative and procedural audits of algorithms (Mittelstadt 2016; Sandvig et al., 2014a) address this aspect of continuity. The same aspect is also increasingly considered for public code repositories that use benchmark datasets to audit dynamic machine learning algorithms. This discussion indicates that communicative forums for algorithmic accountability are likely to become an important area of contact and interaction between organizations and their environments, thus emerging as a new playing field of corporate communications.

Communication Principle 2: Not Only Provide Information But Facilitate Comprehension

  • All those who take part in the continuous debate need not only full information about the issues at stake; they must also be able to genuinely comprehend them, understand the various suggestions for their solution, and grasp the ramifications of those suggestions.

This principle points directly to the fundamental challenge in accounting for complex algorithms: often the mere provision of information does not allow for straightforward comprehension. From the perspective of reputation management, this poses an inherent challenge and likely reputational threats for modern organizations. However, there are ways to provide comprehensible information on inherently opaque algorithms, e.g., through experiment databases that enable comparisons between algorithms (van Otterlo, 2013, p. 17) or methods that simplify machine learning models by visually translating their actions for humans (Burrell, 2016, p. 9).

Another pathway to comprehensible information is reverse engineering: approaches that produce transparency about a system without disclosing its inner workings. By observing the inputs and outputs of a given system, a model is developed that explains how the system behaves. Methods for reverse engineering algorithms are already used in journalistic practice, and they will become increasingly sophisticated in the future. Further, information can be made accessible by expert third parties trusted by both organizations and the public, if these parties are granted exclusive access to the algorithms in order to scrutinize them without disclosing their details (Pasquale, 2010).
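To make the reverse-engineering idea concrete, here is a minimal, hypothetical sketch: an auditor who cannot see inside a scoring system probes it with many inputs and fits an interpretable surrogate model (a shallow decision tree) to the observed input–output pairs. The scoring rule, variable names, and thresholds below are invented for illustration, not drawn from any real system.

```python
import random
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical opaque scoring system. In a real audit its internals would be
# unknown; it is written out here only so the sketch can run.
def black_box_credit_score(income, age):
    return 1 if income > 40_000 and age >= 25 else 0

# Step 1: probe the system with many inputs and record its outputs.
random.seed(0)
probes = [(random.uniform(10_000, 90_000), random.randint(18, 70))
          for _ in range(2000)]
outputs = [black_box_credit_score(income, age) for income, age in probes]

# Step 2: fit a simple, interpretable surrogate to the observed behavior.
surrogate = DecisionTreeClassifier(max_depth=2).fit(probes, outputs)

# Step 3: the surrogate's rules approximate how the opaque system behaves,
# and its fidelity tells us how well the approximation holds.
print(export_text(surrogate, feature_names=["income", "age"]))
fidelity = surrogate.score(probes, outputs)
print(f"surrogate fidelity: {fidelity:.2%}")
```

The surrogate's extracted rules (here, thresholds on income and age) are exactly the kind of comprehensible account that the raw system cannot provide on its own.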

Finally, as an alternative to reverse engineering a whole system, information can be generated by focusing on actual use scenarios (cf. Sandvig et al., 2014a): algorithm audits offer a sophisticated set of methods that simulate or follow actual users in order to determine how much an algorithm discriminates in realistic use cases.
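As a toy illustration of such an audit, one can simulate comparable “sock puppet” users who differ only in a single attribute and compare the outcomes each group receives. The pricing function, the zip-code attribute, and all numbers below are assumptions made up for this sketch.

```python
import random

# Hypothetical opaque pricing algorithm, observed only from the outside.
# (Its logic is included solely so the sketch runs: it quietly charges
# users whose zip code starts with "90" a 20% premium.)
def black_box_price(user):
    base = 100.0
    return base * 1.2 if user["zip"].startswith("90") else base

# Simulate comparable users who differ only in the attribute under test.
random.seed(1)
group_a = [{"zip": "90" + str(random.randint(100, 999))} for _ in range(500)]
group_b = [{"zip": "10" + str(random.randint(100, 999))} for _ in range(500)]

avg_a = sum(black_box_price(u) for u in group_a) / len(group_a)
avg_b = sum(black_box_price(u) for u in group_b) / len(group_b)

disparity = avg_a / avg_b  # a ratio above 1 indicates group A pays more
print(f"group A avg: {avg_a:.2f}, group B avg: {avg_b:.2f}, "
      f"ratio: {disparity:.2f}")
```

Because the two simulated groups are identical except for the tested attribute, any systematic difference in outcomes can be attributed to how the algorithm treats that attribute.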

These different approaches, of course, are not mutually exclusive but can be combined. Ideally, reputation-aware corporations optimize their performance on these dimensions.

Communication Principle 3: Allow For The Inclusion Of All Arguments

  • Participants need to have the opportunity to see an issue from all relevant points of view. All those possibly affected should have a chance to voice their concerns.

In addition to the inclusion of informed stakeholders (principles 1 and 2), the inclusion of all arguments is a key principle for communication that safeguards reputation and finds accountable solutions for algorithmic systems.

This is especially critical given people’s often limited ability to comprehend the technicalities of algorithms and, thus, their limited means to formulate and present concerns. The problem is twofold: algorithmic harms often arise from the way groups are classified or stigmatized, and these groups are not only laypersons with respect to algorithms; they are also often unaware that they are disadvantaged by them.

To cover all arguments, it is essential to include the voices of people who may not even be aware that they are suffering negative outcomes of algorithmic systems. Even more fundamental than making information about algorithms available is creating awareness of their opacity. This is also important because, if stakeholders learn that the proprietors of algorithms have made no effort to reveal critical ‘unknowns’ about their technologies, the reputational ramifications can be severe.

Communication is at the heart of managing algorithmic accountability

Algorithms are no longer the special-interest subject of internet activists, programmers, or marketers. They are a major public issue, and their development and application raise significant reputational concerns. Yet they can remain, at least in part, opaque and incomprehensible to many stakeholders, sometimes even to the data engineers who create them.

This opacity constitutes an inherent reputational concern for the developers, proprietors, and users of algorithms: when critical stakeholders demand information and transparency, the proprietors and users will inevitably struggle to provide explanations. This is why we argue that, in the context of practical opacity, algorithmic accountability does not call primarily for reporting standards but needs to be managed by communicators through applied principles of inclusive communication.

Alexander Buhmann

Dr. Alexander Buhmann is a researcher working at the intersection of communication, new technology, and management. He is currently assistant professor at the Department of Communication and Culture at BI Norwegian Business School, co-director of the BI Centre for Corporate Communication, and research fellow at the USC Annenberg School of Communication and Journalism’s Center on Public Diplomacy.