AI is advancing at a rapid pace, bringing with it potentially transformative benefits for society. If developed responsibly, AI can be a powerful tool to help us deliver a better, more equitable future.
However, AI also presents challenges. From bias in machine-learning models used in sentencing algorithms to the spread of misinformation, the irresponsible development and deployment of AI systems poses the risk of great harm. How can we navigate these complex issues to ensure AI technology serves our society, and not the other way around?
First, navigating them requires everyone involved in building AI to adopt and adhere to principles that prioritise safety while also pushing the frontiers of innovation. But it also requires that we build new institutions with the expertise and authority to responsibly steward the development of this technology.
The technology sector often favours straightforward solutions, and institution-building may seem like one of the hardest and vaguest paths to take. But if our industry is to avoid superficial ethics-washing, we need concrete solutions that engage with the reality of the problems we face and bring historically excluded communities into the conversation.
To ensure the market seeds responsible innovation, we need the labs building innovative AI systems to establish proper checks and balances to inform their decision-making. When large language models first burst onto the scene, it was Google DeepMind’s institutional review committee that decided to delay the release of our new paper until we could pair it with a taxonomy of the risks against which such models should be assessed, despite industry-wide pressure to be seen to be “on top” of the latest developments.
We are also starting to see convergence across the industry around important practices such as impact assessments and involving diverse communities in development, evaluation and testing. Of course, there is still a long way to go.
Decades ago, software companies started offering “bug bounties”: financial rewards for researchers who could identify a vulnerability, or “bug”, in a product. Once a bug was reported, the company had an agreed period in which to address it and then publicly disclose it, crediting the “bounty hunters”. Over time, this has developed into an industry norm known as “responsible disclosure”. AI labs are now borrowing from this playbook to tackle the issue of bias in datasets and model outputs.
Lastly, advances in AI present a challenge to international governance. Guidance at the local level is one part of the equation, but so too is international policy alignment, given that the opportunities and risks of AI won’t be limited to any one country. The proliferation and misuse of AI have woken everyone up to the fact that global coordination will play a crucial role in preventing harm and ensuring common accountability.