Wouldn’t it be great if AI applications could trust each other in the same way humans do? I trust my local bank to handle my account since I know its history and it has been doing business honestly since my grandparents were alive. I trust my friends even more, because I know more about their history. Trust is often based on a common history, and we now have a technology for recording and sharing history.
What if every instance of AI had access to a public ledger containing a detailed history of all other AIs and of their AI-to-AI interactions? From this immutable ledger, an AI could see which algorithms, versions and heuristics another AI is running; whom it has interacted with before; which components it contains, along with their histories; and so on. It could process this information in milliseconds and decide whom to trust, and to what extent. Once sufficient trust had been established, the two could proceed to make binding agreements with the help of smart contracts.
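As a minimal sketch of the idea, consider a hypothetical ledger whose entries record past AI-to-AI interactions. The record format, the identifiers and the scoring rule below are all illustrative assumptions, not part of any real blockchain protocol; the point is only that a shared, immutable history gives each AI data from which a trust decision can be computed.

```python
from dataclasses import dataclass

# Hypothetical ledger entry: one recorded AI-to-AI interaction.
@dataclass(frozen=True)
class Interaction:
    counterparty: str   # identifier of the other AI (illustrative)
    successful: bool    # whether the interaction completed honestly

def trust_score(ledger: list[Interaction], peer: str) -> float:
    """Fraction of recorded interactions with `peer` that succeeded.
    Returns 0.0 when there is no shared history (no basis for trust)."""
    history = [e for e in ledger if e.counterparty == peer]
    if not history:
        return 0.0
    return sum(e.successful for e in history) / len(history)

# An AI consulting the shared history before contracting with "car-ai-7".
ledger = [
    Interaction("car-ai-7", True),
    Interaction("car-ai-7", True),
    Interaction("car-ai-7", False),
    Interaction("drone-ai-2", True),
]
print(trust_score(ledger, "car-ai-7"))   # two of three interactions succeeded
print(trust_score(ledger, "unknown-ai")) # no history, no trust
```

A real system would of course weigh far richer evidence (software versions, component provenance, recency of interactions), but any such policy reduces to the same shape: a deterministic function over publicly verifiable history.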
This is possible with blockchain technology. Blockchain is a technology that enables complete strangers to engage in the distributed but joint production and maintenance of a database. It could also increase people’s trust in AI. Distributed ledgers could serve as reliable black boxes in the case of accidents. For example, if autonomous cars were to collide, we would want to know what happened in order to prevent a recurrence. If the cars were seriously damaged, the only sources of information would be their manufacturers, but can we really trust them to hand over all the information provided by the cars’ AI? The companies might want to protect their reputations and hold certain information back, or even tamper with it, before handing it over to investigators. If the blockchain included hashes of the data sets, we could check whether the information provided was the same as the last data stream the car’s AI sent to the manufacturer.
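The integrity check described above can be sketched in a few lines. The data format and the idea of an on-chain digest are illustrative assumptions; the mechanism itself is just standard cryptographic hashing, here SHA-256 from Python's standard library: if even one byte of the handed-over data differs from what the car's AI originally transmitted, its digest will not match the one recorded on the ledger.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a data stream as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Digest recorded on the blockchain when the car's AI sent its last
# data stream to the manufacturer (contents are illustrative).
original_stream = b"speed=42;brake=on;timestamp=1700000000"
ledger_digest = sha256_hex(original_stream)

# Later, investigators hash the data the manufacturer hands over
# and compare it against the on-chain digest.
provided = b"speed=42;brake=on;timestamp=1700000000"
print(sha256_hex(provided) == ledger_digest)   # matches: data is unaltered

tampered = b"speed=30;brake=on;timestamp=1700000000"
print(sha256_hex(tampered) == ledger_digest)   # any edit breaks the match
```

Note that only the short digest needs to live on the public ledger; the data stream itself can remain private, which keeps the scheme cheap and avoids publishing sensitive sensor data.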
Blockchain, the technology behind Bitcoin, has given us a new tool for building trust, based on cryptographic techniques and on validation backed by huge computing power. After the success of cryptocurrencies, this new trust model has been considered for many applications in various domains. However, it seldom adds much value, since we already have working systems with their own trust models and a long history behind them. Above all, the average user likes the current, familiar methods of building trust.
We currently have the technology to make digital contracts, or even smart contracts, but the preferred way to make a contract is still handwritten signatures on physical paper and a handshake. To be frank, this is quite a handy approach. But what about AI, which tends to lack hands but still needs a way of building trust? Fortunately, AI is not burdened by old techniques and established practices, leaving us free to create new types of trust-building methods for situations where AI interacts with AI. Of course, we could try to use trusted third parties to grant certificates for trustworthy AI, rather like driving licenses for autonomous cars. However, who would be the trusted party?
Visa Vallivaara & Kimmo Halunen
Based on the topic of Vallivaara’s panel discussion “Opportunities to shape future data research – blockchain and what is it for and what not” at the RDA EU Data Innovation Forum