Competitor Collaboration Before a Crisis

What the AI Industry Can Learn From Previous Industry Examples.

by Henry Chesbrough

Artificial intelligence (AI) technology is both promising and controversial. AI is a technological breakthrough in software development that vastly improves the ability of computer systems to make accurate predictions and optimize decision-making, among other benefits. AI is a general-purpose technology able to generate value in applications across many industries, including self-driving cars, cancer diagnosis, robotics, and audio and video content recommendation. Yet AI is also quite risky. Tesla CEO Elon Musk likened AI research to “summoning the demon.” Physicist Stephen Hawking told the BBC in 2014: “The development of full artificial intelligence could spell the end of the human race.” AI is already known to embody many kinds of biases in its algorithms.

Collaboration between competitors becomes essential when these benefits are threatened by the potential realization of these risks. After the disastrous 1984 gas leak at a Union Carbide plant in Bhopal, India, the entire chemicals industry came together to develop better operating practices. Firms reasoned, rightly, that another catastrophic accident could trigger tremendous regulatory restrictions and might even lead governments to shut down entire production facilities.

Tech giants that employ AI recognized its potential risks and saw the need to collaborate on a collective response. In 2016, Amazon, Facebook, Google, DeepMind, Microsoft, and IBM created the nonprofit organization Partnership on AI (PAI); Apple joined in 2017. Collectively, they committed research resources to enable PAI to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influence on people and society.


Competitive Collaboration for a Safer AI Industry

PAI has created educational material designed to increase public understanding of the potential benefits, costs, and progress of AI. But based on the lessons of the chemicals industry, education alone is not enough. PAI will need to develop the ability to observe and audit its member companies’ code, to detect potential risks before they are widely exposed, and to motivate competing executives to fix these problems early on.

To this end, PAI has created several working groups like the Fairness, Transparency, and Accountability Working Group and the AI and Media Integrity Steering Committee that are tasked with making timely decisions to react to any AI opportunity or threat. Each group engages experts from disciplines such as psychology, philosophy, economics, finance, sociology, public policy, and law to discuss, provide guidance, and support objective third-party studies on emerging issues related to the impact of AI on society. PAI’s working groups develop case studies on AI’s impact on labor and its use in criminal justice sentencing.

While PAI is off to a good start with these working groups, it must do more to sustain collaboration between competitors over time. PAI needs to evolve into a structure that sustains and supports the implementation of responses to new opportunities and threats. What PAI lacks is a process for its working groups to audit members and hold them accountable for meeting shared standards and best practices.

In conclusion, cooperation among competitors is sometimes necessary to sustain an industry. The AI industry has not yet experienced a catastrophe, and it has proactively formed the Partnership on AI to try to avoid one. By tackling risks jointly and in advance, PAI and AI companies can co-develop solutions from the design stage, which might reduce the likelihood of a disaster, mitigate one if it occurs, and reduce the cost of resolving it. PAI’s success depends on growing and sustaining a shared sense of collective responsibility for safety in AI development, and on credibly enforcing safe practices within the industry.


This essay draws from a larger study that was recently published. See Bez, Sea, and Henry Chesbrough. “Competitor Collaboration Before a Crisis: What the AI Industry Can Learn.” Research-Technology Management 63.3 (2020): 42–48.
