Skype co-founder Jaan Tallinn on the three biggest existential risks facing humanity this century

Skype co-founder Jaan Tallinn | Centre for the Study of Existential Risk

LONDON – Skype co-founder Jaan Tallinn has figured out what he believes are the top three threats to human existence this century.

While the climate emergency and coronavirus pandemic are viewed as issues that urgently require global solutions, Tallinn told CNBC that artificial intelligence, synthetic biology and so-called unknown unknowns each pose an existential risk through 2100.

Synthetic biology is the design and construction of new biological parts, devices and systems, while unknown unknowns, according to Tallinn, are “things we may not be able to think about right now.”

The Estonian computer programmer, who helped set up the Kazaa file-sharing platform in the 1990s and the Skype video-calling service in the 2000s, has become increasingly concerned about AI in recent years.

“Climate change will not be an existential risk unless there is an out of control scenario,” he told CNBC over Skype.

The United Nations has, of course, recognized the climate crisis as the “defining issue of our time,” describing its impact as global and unprecedented. The international body has also warned there is alarming evidence that “important tipping points, leading to irreversible changes in major ecosystems and the planetary climate system, may already have been reached or passed.”

Of the three threats, AI is the one Tallinn is most focused on, and he spends millions of dollars trying to ensure the technology is developed safely. That includes making early investments in AI labs like DeepMind (partly so he can keep an eye on what they are doing) and funding AI safety research at universities like Oxford and Cambridge.

Referring to “The Precipice,” a book by Oxford professor Toby Ord, Tallinn said there is a one-in-six chance that humanity will not survive this century. One of the biggest potential threats in the near term is AI, according to the book, while the likelihood of climate change causing human extinction is less than 1%.

Predicting the future of AI

When it comes to AI, no one knows just how smart machines will become, and trying to guess how advanced AI will be in 10, 20 or 100 years is essentially impossible.

Trying to predict the future of AI is made even more difficult by the fact that AI systems are starting to create other AI systems without human input.

“One important parameter when trying to predict AI and the future is how much, and how exactly, AI development will feed back into AI development,” Tallinn said. “We know that AI is currently being used to search for AI architectures.”

If it turns out that AI is not very good at building other AIs, there is no need for undue concern, as there will be time to “dissipate and deploy” the gains in AI capability, Tallinn said. If AI proves able to create other AIs, however, it is “very legitimate to be concerned … about what will happen next,” he said.

Tallinn explained that there are two main scenarios AI safety researchers are looking at.

The first is a lab accident in which a research team leaves an AI system training on some computer servers in the evening and “the world is no longer there in the morning.” The second is one in which the research team produces a pioneering technology that is then adopted and applied in various domains “where it has an unfortunate effect.”

Tallinn said he is more focused on the former, as fewer people are thinking about that scenario.

Asked whether he is more or less worried about the idea of superintelligence (the hypothetical point at which machines reach human-level intelligence and then quickly surpass it) than he was three years ago, Tallinn said his view has become “muddier” and more “nuanced.”

“If you say that it will happen tomorrow, or that it won’t happen in the next 50 years, I would say both are overconfident,” he said.

Open and closed laboratories

The world’s largest tech companies are investing billions of dollars in advancing the state of AI. While some of their research is openly published, much of it is not, and this has sounded alarm bells in some corners.

“The question of transparency is not at all obvious,” Tallinn said, arguing that it is not necessarily a good idea to publish the details of a very powerful technology.

Tallinn said some companies take AI safety more seriously than others. DeepMind, for example, is in regular contact with AI safety researchers at places like the Future of Humanity Institute in Oxford. It also employs dozens of people focused on AI safety.

At the other end of the spectrum, corporate labs like Google Brain and Facebook AI Research are less engaged with the AI safety community, according to Tallinn. Google Brain and Facebook did not immediately respond to CNBC’s request for comment.

If AI ends up becoming an “arms race,” it is better to have fewer participants in the game, according to Tallinn, who recently listened to the audiobook of “The Making of the Atomic Bomb,” in which a major concern was how many research groups were working on the science. “I think it’s a similar situation,” he said.

“If it turns out that AI isn’t going to be very disruptive in the near future, it would certainly be useful for companies to actually try to solve some of the problems in a more distributed manner,” he said.
