But it will only do good if it is held to the highest ethical standards, says Mustafa Suleyman of DeepMind
As a global community, we’ve made stunning strides in recent decades, tackling some of the world’s cruelest tragedies. Consider one: child mortality. Every day 17,000 new lives get to be lived by children who would have died just a quarter-century ago. Peace and innovation have been the driving forces of this spectacular progress.
Yet some of our toughest challenges, like inequality, haven’t improved — they’ve actually become worse. Malnutrition and preventable disease continue to kill millions, straining health-care systems in both rich and poor countries. And the devastating threat of climate change looms, hitting the poorest the hardest.
If we are to reduce this suffering, humanity will need bold new solutions. But what is feasible today will not be enough to solve these problems. Instead, we must look at what is currently impossible, and do all we can to overcome the limits of what humanity can accomplish. These limits are real, and they cap our aspirations for change.
Our efforts to tackle disease are capped by a desperate shortage of trained nurses and doctors, in the rich West as well as in developing countries. Our efforts to reduce energy consumption are held back by the insatiable demand for new products and services, and what appear to be hard limits on how efficient our energy infrastructure can become.
This is why science and technology have been so critical in the history of civilization. Technological progress expands the possibilities of human achievement, increasing our collective capability to solve problems that were once considered unsolvable.
My message is simple. First, the progress of science and technology is about to go through the greatest acceleration of all time. And second, this is the greatest opportunity we have had for generations to advance the causes of social justice, equality and the reduction of human suffering.
AI and human progress
My confidence is based on what I see every day at places like DeepMind, the company I co-founded in 2010, where some of our brightest technological minds are working at the frontier of artificial-intelligence research and application. AI means different things to different people. What I mean is technology that people can use in complex domains to discover new knowledge, ideas and strategies by dint of algorithms that learn from data.
This AI-enabled future has already begun. Take energy, where AI is making a dramatic impact. At DeepMind, we developed a “safety-first” AI system to autonomously manage the cooling in Google’s big data centres. The system delivers around 30% energy savings, with further improvements expected over time. The achievement stunned experts who thought that this scale of improvement was impossible. Industrial systems make up one-third of the world’s total energy consumption, so there is widespread potential for these AI techniques in the fight against climate change.
Or take health care, where costs keep rising and millions suffer preventable harm every day. Each year around 250m people worldwide are affected by varying degrees of sight loss. Sadly, this number is expected to triple by 2050, even though the majority of cases would be curable if caught early. The eye-scanning technology needed to detect early sight loss is widely available, but there simply aren’t enough qualified doctors to analyse these scans and figure out which patients are most at risk.
We formed a partnership with experts at Moorfields Eye Hospital in London to develop a system to identify and recommend treatment for several sight-threatening conditions. It achieved groundbreaking results: a 94% accuracy level. In fact, the technology performs as well as clinicians with more than 20 years’ experience. Although it is still early days, and any product would of course need to go through robust clinical trials and regulatory approvals, this work may revolutionize how eye diseases are treated. This, in turn, could save millions of pounds for economies and let doctors focus more on patient care.
Similar advances are happening across many medical areas. Deep-learning algorithms have identified malignant melanomas more accurately, and with fewer false positives, than human analysts. Another machine-learning study of breast-cancer diagnosis showed the potential to reduce unnecessary surgeries by 30%.
And this is only the beginning. AI’s potential uses are broad. It could dramatically cut the time needed to test the new molecules that could be the foundation of clean energy, from 10–20 years to just one or two years. It could make farmers more productive by identifying where fertilizers are needed. It could craft more reliable and low-cost materials for housing, infrastructure and transport. It could help allocate scarce resources — like clean water and food — more equitably and identify new sources of growth.
What holds AI back
So what is holding us back? While it is clear how AI can help, we must be honest about the challenges, too. First, not enough of our brightest minds are focused on solving the most serious problems facing humanity. Despite the hype, tech often gravitates toward the safest and most commercially short-term ideas: creating personalized soda drinks when half a billion people don’t have access to clean water, or new ways to order food when more than 800m people are malnourished. We need new incentive structures that encourage technologists to take on society’s gravest challenges, and to do so with ethics at their heart.
Second, if we want technologists to be more ambitious in solving big problems, then we also need societies to be more ambitious about how these technologies are governed, directing them toward intended benefits not unintended harms. From the spread of facial recognition in drones to biased predictive policing, the risk is that individual and collective rights are left by the wayside in the race for technological advantage.
A fairer world won’t emerge by accident. We need our institutions to guarantee ethical outcomes, and to preserve human dignity as societies and technologies evolve.
One of the global community’s greatest achievements is the adoption of the Universal Declaration of Human Rights. Presenting to the United Nations General Assembly in December 1948, its great champion, Eleanor Roosevelt, said: “We stand today at the threshold of a great event both in the life of the United Nations and the life of all mankind.” To this day it remains foundational to our sense of a just society, a good life and what it means to be human.
Now, as technology transforms every aspect of our lives, it is time to go one step further. We should re-imagine the Universal Declaration of Human Rights to explicitly include digital rights. If AI is to serve society, it must be held to the highest ethical standards and must incorporate the intrinsic rights that have proved to be the bedrock of fair and just societies.
As well as getting the principles right, we need to get the practicalities right too. Because the majority of tech investment has flowed into areas that are tangential to social progress, some of our most societally important infrastructure — the areas that would benefit most from the careful and ethical use of AI — is far from ready for change. Much of the world’s health data is still stored on paper, and public information is often kept in inconsistent formats, limiting the ability of AI to help provide solutions. Supporting a new era of open, verifiable data and investing in digitization will lay the necessary groundwork for the breakthroughs we desperately need.
What is at stake is something world-changing. And we need it to be. Together, we have the opportunity to put AI — the next phase of the technological revolution and one of the most important of all time — at the service of societal needs. If we can create the right structures, ethics and incentives, then scientific and social progress could be truly incredible.