A vital issue for professional communicators today is the safeguarding and promotion of consumer privacy. This is demanded by citizens who are alarmed by seemingly endless exposures of immoral, unethical (and often illegal) use of personal data.
Given estimates that by 2020 more than 20 billion connected devices will be transmitting a staggering amount of data about us, brands have a fundamental role to play here.
The implications for our lives are immense. For unless the safeguarding of that data is achieved, we will face nothing less than the disappearance of our personal privacy, which, once lost, may be impossible to entirely regain.
According to Wired magazine, “the most consequential war being waged today is over data protection. One ideology puts control of personal data in the hands of the individual, the other cedes control to the corporation or state. Will this year be remembered as when the right to privacy was enshrined as a fundamental human right?”
Meanwhile, the UN’s Right to Privacy in the Digital Age resolution recognises that more and more personal data is being collected, processed and shared, and expresses concern about the sale or multiple resale of personal data, which often happens without the individual’s free, explicit and informed consent. Since the first resolution on the matter, the UN has evolved its approach from a mainly political response to mass surveillance to an examination of more complex issues around data collection and the role of the private sector.
And when it comes to the private sector, a problem for many of those working in the marketing, media and tech industries is that they are complicit in this key problem of our age.
In my book The Post-Truth Business I describe our digital world, where privacy is under attack as never before and ‘surveillance capitalism’ is an ever-present reality. I spoke at the 2019 European Communication Summit in Berlin, where I described how the home is becoming a central battleground for brands.
The home is an ever more commercial space in which the Internet of Everything is fully present: Google’s Assistant is installed in cookers, fridges and washing machines, Alexa and Cortana collaborate with each other, and Amazon’s camera- and microphone-equipped home droid represents just one example of mass-market robotics.
The Times notes that such tools will be “creepy data-gatherers that could be hacked by third parties to spy on their masters”. In response, Amazon says that it records and archives our Alexa interactions to provide, personalise and improve its services. And as for its concern for privacy, as USA Today noted, “privacy be damned”: a recent Amazon press launch featured 70 different product announcements and not a single word about privacy.
Elsewhere, Facebook’s Segway-style self-balancing robot can follow users around the home, monitor their surroundings using cameras and microphones and give access to Facebook on a screen. While polls have shown that a majority of Americans don’t trust Facebook to protect their personal information, Fast Company recently noted that Facebook’s smart display device “will have a privacy shutter to disable the camera tracking, but amazingly, they only thought to include this in response to their own privacy scandals”.
While these products obviously have their fans, others feel this is the stuff of smart-home nightmares, in which consumers discover that their cool new piece of technology has turned into an Orwellian surveillance machine.
But as illustrated by the questioning of Mark Zuckerberg at Congress last year, the debates over privacy and use of our data go much further than companies simply collecting data to recommend a new product. And no doubt we all recall Zuckerberg being called before the US Senate, where he casually revealed that Facebook had been “quietly constructing a vast database of ‘shadow profiles’ of people who had never consented to use the social media platform.”
This is a point that Jonathan Taplin, author of Move Fast and Break Things, took on when he talked about Facebook being in the “surveillance capitalism” business.
So it’s been interesting to see the actions over privacy from some of the other ‘G-Mafia’ brands (a term devised by futurist Amy Webb to cover Google, Amazon, IBM, Facebook, Apple and Microsoft) at a time when the issue of privacy is arising not just inside the home but outside it too, in smart cities. One version of these interconnected, always-on cities features a wide range of useful technologies that enable inhabitants to live easier, more productive and less stressful lives.
That utopian aim is matched, however, by a much more sinister one, as demonstrated by reactions against facial recognition technology used in American schools and airports. Elsewhere, it is used in Russian cities as a method of social control and policing, and most overtly in China, where technology including 200 million CCTV cameras enables facial recognition across vast swathes of its enormous population. There, the government-controlled monitoring and tracking of entire communities and the individuals within them is delivering an omnipresent ‘digital dictatorship meets Big Brother’ surveillance state. This sees AI-powered big data combining with (the Chinese version of) the internet to distort social behaviour and achieve a culture of self-enforced obedience.
That is the dismal reality for anyone who wishes to avoid the petty-to-extreme punishments awaiting those who transgress the Chinese state’s version of acceptable behaviour under its Social Credit system. The end result of the largest social engineering project in history (it’ll be fully operational next year) may well be the outward appearance of stability, but many believe this is nothing but a fake image of social cohesion, based on the ruling party’s paranoia about instability and subversion. Moreover, this system is powered by a combination of leading-edge companies and brands (such as Baidu, Alibaba and Tencent) that are massively successful and, like all Chinese companies, subject to government control under domestic laws and administrative guidelines.
This AI-enabled system is rapidly seeing everything connected to everything else – a literal result of the ‘internet of everything’ promise spoken about at tech conferences a decade ago. The problem, of course, is that a truly dystopian end result (automated facial recognition posing one of the greatest threats to citizens’ personal freedom) is being created for authoritarian regimes in a manner far removed from the one presented at events like the annual Consumer Electronics Show in Las Vegas. To reference one of the great television series of our era, it seems that we are truly in Black Mirror territory.
Therefore, it was encouraging to see Microsoft president Brad Smith, in an appeal to Congress, note that “facial recognition technology in particular has broad social ramifications and potential for abuse”. More recently, the company announced that it had deleted its MS Celeb database (described as the world’s largest publicly available facial recognition data set), which contained over 10 million images.
Elsewhere, a few months ago, Tim Cook, CEO of Apple, called for a wide-ranging federal privacy law to protect individuals and society alike from the “data-industrial complex” at the European conference Debating Ethics: Dignity and Respect in Data Driven Life. He had previously warned that everyone has a right to the security of their data, that companies should recognise that data belongs to users, and that we should make it easy for people to get a copy of it, as well as being able to correct and delete that information.
Contrasting Apple’s attitudes towards privacy versus the other giants of Silicon Valley, Tim Cook warned that our personal data is being weaponised, with the results used against us with military efficiency. So it was encouraging to see San Francisco, the world capital of tech, becoming the first city in America to outlaw automated facial recognition systems.
It is why US presidential nominee Elizabeth Warren says that “today’s big tech companies have too much power over our economy, our society, and our democracy. That’s why my administration will make big, structural changes to the tech sector to promote more competition, including breaking up Amazon, Facebook, and Google”.
The collateral damage of that weaponised data is the freedom of emotional, irrational individuals. For as The Guardian put it, “privacy isn’t the right to keep secrets: it’s the right to be an individual, not a type; the right to make a choice that’s entirely your own; the right to be private”.
Meanwhile, the issue of GDPR and the growing ‘tech backlash’ was noted in a Financial Times article about the way in which the EU has “led the way in enforcing antitrust law against Google, privacy law against Facebook, and tax law against Apple. Plenty of people are threatening to follow suit, on the left and right. The new technology tools are destroying both trust and truth, creating a hunger for community and authenticity”.
So when it comes to brands safeguarding what I call in my book their ‘reputation capital’ in privacy terms, I believe this is all about building consumer engagement through the absolutely transparent use of personal data, ensuring that ‘informed consent’ has genuinely been given, and acting with empathy, so that the consumer-brand relationship is one of strong mutual benefit.
Essentially, it is a situation where increasing numbers of people will demand an answer from organisations to the simple question “what are your ethical values around data privacy, and why should I trust you to keep my data private?”