
Investing in the future of A.I. is nothing new for Eric Schmidt, who has spent the past few years backing dozens of startups like Stability AI, Inflection AI and Mistral AI. But through a new $10 million venture aiming to bolster research on the technology's safety challenges, the former Google (GOOGL) CEO is taking a different tack.
The funds will launch an A.I. safety science program at Schmidt Sciences, a nonprofit organization established by Schmidt and his wife Wendy last year to accelerate scientific breakthroughs. Instead of simply emphasizing A.I.’s risks, however, the program will prioritize the science underpinning safety research, according to Michael Belinsky, the program’s head. “That’s the kind of work we want to do—academic research to figure out why some things are systemically unsafe,” he told Observer.
More than two dozen researchers have already been selected to receive grants of up to $500,000 from the program, which will additionally offer computational support and A.I. model access to participants. Subsequent funding, meanwhile, will keep pace with the latest developments in the fast-moving industry. “We don’t want to be solving for problems of, say, GPT-2,” said Belinsky, referencing an OpenAI model released in 2019. “We want to be solving for problems that are of the current systems that people use today.”
Initial grantees include renowned researchers like Yoshua Bengio, a machine learning expert lauded as one of the “Godfathers of A.I.” for his contributions to the field. Bengio’s project will center on building risk mitigation technology for A.I. systems. Other recipients, such as Zico Kolter, a member of OpenAI’s board and head of Carnegie Mellon University’s machine learning department, will explore specific phenomena like adversarial transfer, in which attacks developed against one A.I. model can be successfully applied to others.
Another beneficiary of Schmidt Sciences’ new program is Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, who will use the organization’s grant to study whether A.I. agents can conduct cybersecurity attacks. A.I.’s capabilities in this area have implications beyond malicious use by bad actors, according to the researcher. “If A.I. can autonomously perform cyberattacks, you could also imagine this being the first step of A.I. potentially escaping control of a lab and being able to replicate itself on the wider internet,” Kang told Observer.
Is A.I. safety falling behind?
As Silicon Valley’s unabashed frenzy for A.I. continues, some worry that safety concerns have taken a backseat. Earlier this month, a global A.I. summit in Paris dropped “safety” from its title as tech CEOs and global leaders gathered to praise the technology’s economic potential. “I worry that competitive pressures will leave safety behind,” said Belinsky.
Schmidt Sciences’ new program hopes to tackle some of the barriers slowing down the A.I. safety research community, such as a lack of quality safety benchmarks, adequate philanthropic and government funding, and academic access to frontier A.I. models. To bridge the gap between academia and industry, researchers like Kang hope that leading A.I. companies will incorporate safety research breakthroughs as they continue developing the technology’s wide-ranging capabilities.
“I totally understand the need for speed in this kind of fascinating field,” said Kang. But open communication and accurate reporting should be a bare minimum for makers of frontier models, according to the professor. “I really, really hope that the major labs take their responsibility very seriously and use some of the work that has been coming out of my lab and other labs to accurately test their models, and transparently report what the actual tests say.”