With great power comes great responsibility

Every organisation that develops or uses AI, or hosts or processes data, must do so responsibly and transparently. Society will decide which companies it trusts.

It may surprise some that artificial intelligence is older than the internet.

The term “AI” was coined ahead of a 1956 summer research workshop at Dartmouth College. Since then AI has made intermittent but memorable forays into the spotlight, including IBM’s Deep Blue defeating Garry Kasparov in a 1997 chess match.

(Image above: Ginni Rometty promotes a responsible adoption of artificial intelligence at the World Economic Forum in Davos. Photo: World Economic Forum / Mattias Nutt)

Yet it is only recently that AI has come of age, and we see three factors behind this rise:

  • The exponential growth in big data. Imagine how much data your phone is collecting and transmitting on a daily basis. Or how each one of us, on average, leaves a trail of more than one million gigabytes of health-related data in our lifetime. That’s the equivalent of about 300 million books.
  • Developments in the capabilities of software and algorithms. IBM alone has significant research activities constantly improving and innovating on AI.
  • The increase in computing power. This year, for the first time, the collective computing and storage capacity of smartphones alone will surpass that of all servers worldwide, meaning AI can be deployed by almost anyone, anytime.
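As a quick sanity check on the scale claimed above, the figures in the first bullet can be run through a few lines of arithmetic (all numbers are the article’s; 1 GB is taken as 10⁹ bytes):

```python
# Quick sanity check: one million gigabytes of lifetime health data
# versus roughly 300 million books (both figures from the article).
GB = 10**9                        # bytes per gigabyte (decimal)
lifetime_bytes = 1_000_000 * GB   # ~1 million GB of health-related data
books = 300_000_000

bytes_per_book = lifetime_bytes / books
print(f"{bytes_per_book / 10**6:.1f} MB per book")  # ~3.3 MB, roughly book-sized
```

About 3.3 MB per book, which is indeed in the range of a digitised book, so the two figures are consistent with each other.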

AI: reshaping the world  

AI will transform every sector, including healthcare, education, retail, transportation, financial services and more.

First example: lifts. Even though we use them every day, we generally don’t notice them until they are broken, slow or crowded - when it’s too late. Kone, one of the world’s largest lift manufacturers, carries more people in its lifts at any given time than there are in all the airplanes in the sky. Kone works with Watson, IBM’s AI platform, to connect, monitor and optimise millions of lifts and escalators across the world. This means issues with lifts are not only rapidly identified but can be predicted and prevented, reducing downtime and giving users shorter waits and smarter, less-crowded lifts.
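The “predict and prevent” idea can be sketched in a few lines. This is a minimal, illustrative example of threshold-based predictive maintenance on a hypothetical stream of door-closing times; it is not Kone’s or Watson’s actual method:

```python
# Illustrative predictive-maintenance check on hypothetical sensor data:
# lift door-closing times in seconds. A sustained upward drift flags the
# lift for inspection before it actually fails.
def needs_inspection(close_times, baseline=3.0, tolerance=0.5, window=5):
    """Flag if the recent average exceeds baseline + tolerance."""
    recent = close_times[-window:]
    return sum(recent) / len(recent) > baseline + tolerance

healthy = [3.0, 3.1, 2.9, 3.0, 3.1]
drifting = [3.2, 3.4, 3.6, 3.8, 4.1]   # door mechanism slowing down

print(needs_inspection(healthy))    # False
print(needs_inspection(drifting))   # True
```

Real systems replace the fixed threshold with learned models over many signals, but the principle is the same: spot the drift before the breakdown.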

A topical case: AI is helping with privacy regulation compliance. Your company’s legal team will no doubt agree that their GDPR-related workload is anything but light. Complying with regulation is essential but time-intensive. And there is a lot at stake, with noncompliance resulting in high fines and substantial risk to business reputation.

Thomson Reuters and IBM teamed up to create Data Privacy Advisor, an AI tool to support privacy professionals at multinational businesses and law firms, and to offer a deeper understanding of the law and privacy obligations across multiple global jurisdictions.

Backing up Data Privacy Advisor is a next-generation question-answering feature, Ask Watson a Question – the world’s first for global privacy compliance.

These are just two examples of the many, many ways that IBM clients are currently using AI to improve service and productivity. It’s also exciting to look at our research labs and see what’s coming down the road. For example, eye fatigue is a common problem among radiologists, who visually examine many images every day. Medical Sieve is an ambitious long-term grand challenge project by IBM Research to build a next-generation cognitive assistant that can assist radiologists in clinical decision-making. It will exhibit a deep understanding of diseases and how they appear in X-ray, ultrasound, CT, MRI, PET and more. It is easy to see the benefits this can bring to radiologists and to patients.

Addressing concerns

Yet, as with any new wave of technology, there are questions and concerns. There are misguided reports of a robot takeover or massive job loss. While many media reports are exaggerated (and over-ambitious as to what AI can actually achieve right now), it would be facile to dismiss all of them as scaremongering. At IBM we recognise that there is much learning ahead for all of us. Our experience has also taught us that it is necessary for us to guide the responsible adoption and use of the innovations that we develop and bring to the world.

Is innovation without responsibility really innovation?

It is no longer enough to bring a game-changing, disruptive product or service to the market without ensuring its responsible and ethical adoption. The “oops, sorry!” argument from innovators has had its day and governments and customers rightly expect more.

"It is no longer enough to bring a game-changing, disruptive product or service to the market without ensuring its responsible and ethical adoption."

Companies playing fast and loose with people’s trust is having an impact across the technology industry. It is bad for all of us when Gary Cohn, former president of Goldman Sachs, reportedly says that banks were “more responsible citizens” before the financial crash than social media companies are now.

Companies need to step up

IBM has chosen to take a stand on the responsible adoption of artificial intelligence. This commitment comes from the top: our chairman, president and chief executive officer, Ginni Rometty, has attended the world’s leading business and technology fora – including the World Economic Forum in Davos, VivaTech in France and CeBIT in Germany – to spell out IBM’s principles and practices around AI and to call on other companies to follow suit.

Here’s what we say – and practice:

1. The purpose of AI is to augment human intelligence

• Our technology, products, services and policies will be designed to enhance and extend human capability, expertise and potential. Cognitive systems will not realistically attain independent consciousness. AI will remain within human control.

• AI should make ALL of us better at our jobs, and the benefits of the AI era should touch the many, not just the elite few. We face an immediate and profound transformation of the workforce. Enterprises need to emphasise diversity and skills development, governments must prioritise education reform – and individuals need to embrace these changes by committing to lifelong learning. We must also rethink how we prepare the workers of the future. We believe that many jobs in emerging areas such as cloud computing, cybersecurity and even digital design do not necessarily require a bachelor’s degree but instead rely on practical education and applied skills. We call these 'New Collar' jobs (think: programmers, developers, technicians, managers... jobs that prioritise skills over degrees). At IBM, we take responsibility for training individuals with the skills to fill these jobs. We are working to bring our P-TECH education model to the EU. It provides young people – the majority from disadvantaged communities – with the qualifications and professional skills they need for the modern workforce.

2. Our clients’ data is their own

We at IBM have always believed that our clients’ data is their own. Clients are not required to relinquish rights to their data to benefit from IBM's Watson solutions and services. The insights derived from clients’ data are their competitive advantage. IBM employs industry-leading security practices to safeguard data, including encryption, access control methodologies and proprietary consent management modules.

3. AI systems must be transparent and explainable

We make clear when and why AI is being applied, where the data comes from, and which methods were used to train algorithms. These training methods must be explainable. For example, if a patient or a medical professional wants to know how Watson came to a given conclusion, we will be transparent and explain this, and the explanation will be adapted to best suit who we are giving it to.
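To make the idea of an explainable conclusion concrete, here is a toy illustration using a simple linear scoring model, where each factor’s contribution to the result can be read off directly. The feature names and weights are entirely hypothetical and this is not how Watson works internally:

```python
# Toy illustration of an explainable score: in a linear model, each
# feature's contribution is simply weight * value, so the "explanation"
# is a ranked list of contributions. All names and numbers are hypothetical.
weights = {"blood_pressure": 0.8, "age": 0.3, "cholesterol": 0.5}
patient = {"blood_pressure": 1.2, "age": 0.4, "cholesterol": 0.9}

contributions = {f: weights[f] * patient[f] for f in weights}
score = sum(contributions.values())

# Rank the factors that drove this score, largest first.
for feature, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {c:+.2f}")
print(f"total score: {score:.2f}")
```

More complex models need more sophisticated attribution techniques, but the goal is the same: a patient or clinician can see which factors mattered and by how much.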

Detecting and mitigating bias is an ongoing effort that we and all companies advancing AI have an obligation to address proactively. We therefore continually test our systems and find new data sets to better align their output with human values and expectations.

"Detecting and mitigating bias is an ongoing effort that we and all companies advancing AI have an obligation to address proactively."

The data factor

With the skyrocketing growth of data as a driver for the AI era, responsible AI also extends to being responsible when collecting, storing, managing or processing data. Late last year, Financial Times columnist Rana Foroohar wrote that “Technology companies may have to say whether they are data peddlers or data stewards.” We couldn’t agree more. Companies must be more vocal and transparent about their data practices so that people can understand what’s happening to their data – and make their own decisions about it.

For example, IBM publicly commits to employing strong encryption and security strategies – and to constantly challenging and evolving them. We do not put ‘backdoors’ in our products for any government agency, nor do we provide source code or encryption keys to any government agency for the purpose of accessing client data.

Be a part of the overall solution

Companies have a duty to help bring other organisations up to a high standard of responsibility. The European Commission’s new AI expert group will shape EU AI policy, including creating guidelines for AI ethics. IBM’s global head of AI ethics, Francesca Rossi, is putting her 30+ years of AI experience to use as part of that group. Francesca is also actively involved in such platforms as the European Association for AI, the IEEE initiative on AI ethics and AI4People.

• We recently launched Everyday Ethics for Artificial Intelligence, a best practices guide for designers and developers in AI culled from experts in fields such as AI, engineering, ethics, and philosophy. It describes five ethical principles that AI developers must consider throughout the design process.

• This autumn, IBM took a major step in breaking open the “black box” of AI with a new service that brings greater transparency to AI decision-making. For the first time, businesses will be able to monitor AI for bias “live”. Explanations are provided in easy-to-understand terms, showing which factors weighed the decision in one direction versus another, the confidence in the recommendation, and the factors behind that confidence. To encourage a shift to greater industry-wide transparency in AI decision-making, IBM also released trust and transparency capabilities into the open-source community.

• IBM is a founding member of the Charter of Trust, a cross-industry initiative centred around 10 cybersecurity principles to strengthen trust in the security of the digital economy. We host AI-themed events in healthcare, security, energy and more at which our peers and policy makers discuss how to be responsible. This kind of cross-industry and cross-sector collaboration is essential to build trust in the new era of data and AI.

Society’s decision to trust or not to trust AI, and the companies that deliver it, will determine its success. As companies we should not squander the opportunity before us right now to earn that trust. We must make every effort.

Liam Benham

Liam Benham leads government affairs activities for IBM in Europe, including relationships with the European Union institutions in Brussels as well as national governments across the EU and Russia. He leads a team of more than 25 government affairs professionals. Liam joined IBM in February 2012, having spent 15 years in senior government relations positions at Ford Motor Company, based in the UK, Brussels and Asia Pacific. He is a Board Member of the American Chamber of Commerce to the EU, where he chairs the Policy Group. He is also Vice Chair of the BusinessEurope Digital Economy Taskforce.