
Sony's head of AI research wants to build robots that can win a Nobel Prize

Dr. Kitano hopes to launch the Nobel Turing Challenge and build an 'AI Scientist' by 2050.


AI and machine learning systems have proven a boon to scientific research across a variety of academic fields in recent years. They’ve helped scientists identify genomic markers ripe for cutting-edge treatments, accelerated the discovery of potent new drugs and therapeutics, and even published research of their own. Throughout this period, however, AI/ML systems have often been relegated to processing large data sets and performing brute-force computations rather than leading the research themselves.

But Dr. Hiroaki Kitano, CEO of Sony AI, has plans for a “hybrid form of science that shall bring systems biology and other sciences into the next stage” by creating an AI that’s just as capable as today’s top scientific minds. To do so, Kitano seeks to launch the Nobel Turing Challenge and develop an AI smart enough to win itself a Nobel Prize by 2050.

“The distinct characteristic of this challenge is to field the system into an open-ended domain to explore significant discoveries rather than rediscovering what we already know or trying to mimic speculated human thought processes,” Kitano wrote in June. “The vision is to reformulate scientific discovery itself and to create an alternative form of scientific discovery.”

“The value lies in the development of machines that can make discoveries continuously and autonomously,” he added. “AI Scientist will generate-and-verify as many hypotheses as possible, expecting some of them may lead to major discoveries by themselves or be a basis of major discoveries. A capability to generate hypotheses exhaustively and efficiently verify them is the core of the system.”

Today’s AIs are themselves the result of decades of scientific research and experimentation, stretching back to 1950, when Alan Turing published his seminal treatise, Computing Machinery and Intelligence. Over the years, these systems have grown from laboratory curiosities into vital data processing and analytical tools — but Kitano wants to take them a step further, effectively creating “a constellation of software and hardware modules dynamically interacting to accomplish tasks,” what he calls an “AI Scientist.”

“Initially, it will be a set of useful tools that automate a part of the research process in both experiments and data analysis,” he told Engadget. “For example, laboratory automation at the level of a closed-loop system rather than isolated automation is one of the first steps. A great example of this is [the] Robot Scientist Adam-Eve developed by Prof. Ross King that automatically generates hypotheses on budding yeast genetics, plan[s] experiments to support or refute [them], and execute[s] experiments.”

“Gradually, the level of autonomy may increase to generate a broader range of hypotheses and verification,” he continued. “Nevertheless, it will continue to be a tool or a companion for human scientists at least within the foreseeable future.”
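In rough outline, the closed-loop workflow Kitano describes is a simple cycle: propose hypotheses, plan an experiment that could support or refute each one, execute it, and feed the outcome back into the next round of proposals. The Python sketch below is purely illustrative and assumes simulated results; propose_hypotheses, plan_experiment and run_experiment are invented placeholder names rather than any real API from Adam-Eve or Sony AI, and the point is only to show the generate-and-verify structure at the heart of the proposal.

# Illustrative sketch only: a minimal closed-loop "generate-and-verify" cycle.
# Every function name here is a hypothetical stand-in, not a real API from
# Sony AI, Adam-Eve, or any other system.

import random

def propose_hypotheses(verified):
    # Generate candidate hypotheses, skipping anything already verified.
    candidates = [f"gene_{i} regulates growth rate" for i in range(3)]
    return [h for h in candidates if h not in verified]

def plan_experiment(hypothesis):
    # Pick an experiment expected to support or refute the hypothesis.
    return {"hypothesis": hypothesis, "assay": "knockout_growth_curve"}

def run_experiment(experiment):
    # Stand-in for robotic lab execution; returns a simulated observation.
    return {"hypothesis": experiment["hypothesis"], "supported": random.random() > 0.5}

verified = set()
for cycle in range(5):  # the degree of autonomy is roughly how long this loop runs unattended
    for hypothesis in propose_hypotheses(verified):
        result = run_experiment(plan_experiment(hypothesis))
        if result["supported"]:
            verified.add(hypothesis)  # confirmed hypotheses feed the next round

print(f"Hypotheses surviving verification: {verified}")

In a real system, each of those placeholders would be backed by domain models, robotic lab hardware and statistical analysis rather than a random number generator.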

With an AI Scientist handling the heavy intellectual lifting of generating hypotheses to explore, its human counterparts would have more time to focus on research strategy and on deciding which hypotheses are actually worth pursuing, Kitano explained.

As always, avoiding the black box effect and implicit bias (both in the software’s design and in the data sets it is trained on) will be of paramount importance to establishing and maintaining trust in the system — the residents of Dr. Moreau’s island wouldn’t have been any less miserable had he been a mad AI instead of a mad geneticist.

“For scientific discoveries to be accepted in the scientific community, they must be accompanied with convincing evidence and reasoning behind them,” Kitano said. “AI Scientists will have components that can explain the mechanisms behind their discoveries. AI Scientists that do not have such explanation capabilities will be less preferred than ones [that do].”

Some of history's greatest scientific discoveries — from radioactivity and the microwave oven to Teflon and the pacemaker — have come from experimental screwups. But when AIs have begun devising their own inscrutable languages, researchers have rushed to pull the plug. So what happens if and when an AI Scientist makes a discovery or devises an experiment that humans cannot immediately understand, even with an explanation?

“When AI Scientists get sophisticated enough to handle complex phenomena, there are chances to discover things that are not immediately understood by human scientists,” Kitano admitted. “Theoretically, there is a possibility that someone can run highly autonomous AI Scientists without restrictions and [not caring] if their discovery is understandable. However, this may come with a large price tag and one has to justify it. When such an AI Scientist is recognized to make important scientific discoveries already, I am certain there will be guidelines for operation to ensure safety and to prevent misuse.”

The advent of an AI Scientist able to work alongside human researchers could also raise some sticky questions as to who should be credited with the discoveries made — the AI that generated the hypothesis and ran the experiment, the human who oversaw the effort, or the academic institution or corporate entity that owns the operation? As one example, Kitano points to a recent decision by an Australian court that recognized the DABUS “artificial neural system” as an inventor on patent applications.

Conversely, Kitano notes the case of Satoshi Nakamoto and the invention of bitcoin and its underlying blockchain. “There is a case where a decisive contribution was simply published as a blogpost and taken seriously,” he argues, “yet no one ever met him and his identity (at the time of writing) is a complete mystery.”

“If a developer of an AI Scientist was determined to create a virtual persona of a scientist with an ORCID iD, for demonstration of technological achievement, product promotion, or for another motivation,” he continued, “it would be almost impossible to distinguish between the AI and human scientist.” But if a truly groundbreaking medical advancement comes from this challenge — say, a cure for cancer or nanobot surgeons — does it really matter if it was a human or a machine running the experiment?