Security expert Ari Juels has been thinking about how technology could derail society for about as long as he can remember. That doesn't mean Juels, chief scientist at Chainlink and a professor at Cornell Tech in New York City, thinks the world is going off the rails anytime soon. But over the past decade, with the development of the large language models behind increasingly powerful artificial intelligence systems and of autonomous, self-executing smart contracts, things have begun to trend in a more worrying direction. There is a "growing recognition that the financial system can be a vector of AI escape," Juels said in an interview with CoinDesk. "If you control money, you can have an impact on the real world."

This doomsday scenario is the jumping-off point for Juels' second novel, "The Oracle," a crime thriller published by heavyweight science fiction imprint Talos about a New York-based blockchain researcher enlisted by the U.S. government to thwart a weaponized crypto protocol. Though set in the near future, the book contains much that will look familiar to readers today. Take the protagonist's research into smart contracts that can go rogue, and kill, which echoes Juels' own 2015 academic paper on "criminal smart contracts." Or the references to Chainlink CEO Sergey Nazarov's famous plaid shirt. Other elements, like a powerful AI tool that helps computers interact with and interpret the world, much like OpenAI's ChatGPT, only came online after Juels started writing.

Thankfully, fiction is sometimes stranger than reality, and the prospect of smart contracts programmed to kill remains a distant threat, Juels said. He remains cautiously optimistic that if people start thinking about the risks today and design guardrails like blockchain-based oracles (essentially feeder systems for information), problems can be headed off in the long run.
CoinDesk caught up with Juels last week to discuss the burgeoning intersection of blockchain and AI, the ways things could go off the rails and what people over- and under-rate about technology.

Are smart contracts like the ones in "The Oracle" possible today?

They're not possible with today's infrastructure, but they are possible, or at least plausible, with today's technology.

What's your timeline for when something like the events of the book could play out?

It's a little hard to say. At least a few years. What makes them technologically plausible now, when they weren't at the time I started writing the novel, is the advent of powerful LLMs [large language models], because they're needed essentially to adjudicate what the novel calls a rogue contract. The rogue contract solicits a crime, in this case the death of the hero of the novel, and somehow a determination has to be made as to whether the crime occurred and who was responsible for it and should therefore receive a reward. To do those things, you need something to extract keywords from news articles: basically an LLM plugged into blockchain infrastructure and inheriting the same properties that smart contracts have, namely that they are, at least in principle, unstoppable if they're coded to behave that way.

Do you need a blockchain to build smart contracts?

It depends on the trust model you're after. In a sense, the answer is no; you could run smart contracts in a centralized system. But then you would lose the benefits of decentralization, namely resilience to the failure of individual computing devices, censorship resistance and confidence that the rules aren't going to change out from under you.

This may be a weird question, but I figured you might get it: Are blockchains Apollonian?

That is a weird question, and I do get it. I would not say blockchains in general, but oracles definitely.
As you may know, the novel is about not just modern-day oracles but also the Oracle of Delphi. Both aim to serve as definitive sources of truth, in some sense. One is literally powered by the god Apollo, at least in the belief of the ancient Greeks. The other is powered by authoritative sources of data like websites. So from that perspective, yes, I would say oracle systems are Apollonian in nature, because Apollo was the god of truth.

Is blockchain privacy sufficient today?

All technology is a double-edged sword. There are obviously good and important facets to privacy. You can't have a truly free society without privacy. People's thoughts, at a minimum, need to remain private for people to act freely. But privacy can be abused. Criminal activities can make use of blockchain technology. Still, I would say that we don't yet have privacy-preserving tools powerful enough to give users the benefits of privacy that I think they deserve.

Would you say technology as a whole is a generally positive force?

There are clear benefits to technology. We've come a very long way toward eradicating global poverty; that's one of the good-news stories that people tend to overlook. But there have been costs to the use of new technologies. That becomes visible when you look at the general happiness or contentment of those in rich Western nations, which has stagnated. That can be accounted for, in part, as a side effect of technology. There are other factors at play, including a breakdown in social cohesion and feelings of loneliness, but technology has been somewhat responsible for that. One of the reasons I incorporated the ancient Greek dimension [in "The Oracle"] was that I feel one of the things we're losing as a result of the pervasiveness of technology is a certain sense of awe.
The fact that we have the answers to most of the questions we would naturally pose at our fingertips, via Google or AI agents, means a diminishment of the sense of wonder and mystery that used to surround us. There's less room in our daily lives to explore intellectually. You have to dig deeper, if that makes sense.

What are we overreacting about when it comes to technology?

I tend to be somewhat optimistic when it comes to AI doomsday scenarios. I'm by no means a subject-matter expert here, but I have studied information security for quite a long time. The analogy I like to draw, and I hope it holds up, is to the Y2K bug. The doomsday scenarios that people envisioned didn't happen. There wasn't a need for manual intervention. We have all of these hidden circuit breakers in place, and so I feel a certain degree of confidence that those circuit breakers will kick in if, say, an AI agent goes rogue. That gives me at least a certain degree of comfort and optimism about the future of AI.

You teach in the Ivy League, do research for Chainlink and write in your spare time. Do you have any unusual work techniques?

It depends on the set of projects I'm juggling. What helped when I was trying to squeeze in time for the book was that I was obsessive about writing it. It was a real flow process. The Hungarian psychologist Mihaly Csikszentmihalyi explored the concept of flow, defining it as an activity in which you can maintain a unique focus over an extended period and lose track of time. Writing placed me in a flow state. I squeezed it into the little nooks and crannies of time available to me.

Anything else you wanted to say about the book?

One thing I do want to emphasize, an important message for the community at large, is the growing recognition that the financial system can be a vector of AI escape.
People are worried about AI agents escaping their confines and controlling cyber-physical systems like autonomous vehicles, power plants or weapons systems; that's the scenario they have in mind. I think they forget that the financial system, particularly cryptocurrency, is especially well suited to control by AI agents and can itself be an escape vector. If you control money, you can have an impact on the real world, right? The question is how we deal with AI safety in view of this very particular concern around blockchain systems. The book has actually gotten me and my colleagues at Chainlink thinking about how oracles act as gatekeepers to this new financial system, and the role they could play in AI safety.

Is there anything tangible Chainlink can do to prevent something like that?

This is something I've just started to give thought to, but some of the guardrails already present in systems we build, like CCIP or cross-chain bridges, would actually be helpful in the case of an AI escape by establishing boundaries for what a malicious agent could do. That's a starting point. But the question is, do we need things like anomaly detection in place to detect not just rogue human activity but rogue AI activity? It's an important problem, and one I'm starting to devote a fair amount of attention to.
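Juels doesn't describe how such guardrails would be built, but the two ideas he names, hard boundaries on what an agent can do and anomaly detection on its activity, can be illustrated with a toy sketch. Everything below is hypothetical: the `TransferGuardrail` class, its value cap and its rolling-window rate check are illustrative assumptions for this article, not Chainlink's or CCIP's actual design.

```python
from collections import deque
import time

class TransferGuardrail:
    """Hypothetical guardrail for an agent-controlled wallet: a hard
    per-transfer value cap plus a simple rolling-window rate check."""

    def __init__(self, max_value=1_000, max_tx_per_hour=10):
        self.max_value = max_value            # hard cap on a single transfer
        self.max_tx_per_hour = max_tx_per_hour
        self.recent = deque()                 # timestamps of approved transfers

    def allow(self, value, now=None):
        """Return True if a transfer of `value` should go through."""
        now = time.time() if now is None else now
        # Slide the window: drop timestamps older than one hour.
        while self.recent and now - self.recent[0] > 3600:
            self.recent.popleft()
        # Boundary check: reject transfers above the hard cap.
        if value > self.max_value:
            return False
        # Anomaly check: reject bursts exceeding the hourly rate limit.
        if len(self.recent) >= self.max_tx_per_hour:
            return False
        self.recent.append(now)
        return True

guard = TransferGuardrail(max_value=1_000, max_tx_per_hour=3)
print(guard.allow(500, now=0))    # True: within both limits
print(guard.allow(5_000, now=1))  # False: exceeds the value cap
```

A real system would weigh far more signals (counterparty history, destination-chain risk, learned behavioral models), but even this combination of a cap and a rate limit shows how the damage a rogue agent could do through a financial interface might be bounded.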
– D.K. (@danielgkuhn)