The Groundbreaking Bridge Between Real World Data and Smart Contracts


I recently worked on an end-to-end project that involved accessing real-world data from within an on-chain Ethereum smart contract. I’d like to explain why this process matters for future applications, and walk through the process itself, because I find it extremely groundbreaking. I’ll be following the Chainlink protocol architecture, since that is what I used while learning the process.

Diagram of Architecture to bring real-world data to on-chain smart contracts.

The path that data takes from a regular (and often centralized) data source looks like this: Data Source API → External Adapter → Chainlink Job (created by a node operator) → Chainlink Oracle smart contract (or aggregator contract) → your smart contract, which uses the data for whatever purpose it needs.

Let’s talk about why this is interesting in the first place; as I explain the process, we’ll run into further use cases that underline why I use the word ‘groundbreaking’.

Why?

Smart contracts that live ‘on-chain’ need gas to execute the operations and functions they were designed for while remaining decentralized. This ‘gas’ is comparable to computing power, or even more basically, to plugging your computer into a wall outlet. Electricity flows through the outlet into your computer, and the computer uses that energy as fuel to run whatever code you write on your local machine.

So gas is what makes the decentralized universe run: it provides the incentive for individuals who stake tokens, or for miners in a blockchain network (depending on whether that blockchain uses a proof-of-stake or proof-of-work consensus algorithm).


In the end, this means operations that require a lot of computing power are extremely expensive. Protocols like Chainlink address this by letting smart contracts pull data in one quick, inexpensive on-chain operation, while the bulk of the expensive computation is done off-chain. For example, think of how expensive running a large AI neural network is and how much computing power it can require. Wouldn’t you rather not take out a mortgage on your home just to pay the gas to run it once?

BONUS FEATURE: As Chainlink has leveled up, the architecture has become more efficient and more decentralized. The nodes data passes through are called ‘aggregator’ nodes because they truly aggregate the data: they check its validity by comparing it against other nodes around the world and reaching consensus on whether the data coming on-chain for smart contracts to use is accurate. I don’t feel the need to explain further why that matters.

Accessing Real World Data and Your External Adapter

Let’s get to this specific example and the technicals.

The data we want to reach lives behind an API that we can access. So our next step is to build what is called an external adapter. The adapter runs as its own API, built on Node.js with Express and TypeScript. We access the JSON response from calling the source API and specify, within our adapter, the endpoints and fields that we will ultimately retrieve from the data source.
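For illustration, here is a minimal sketch of that extraction step. The response shape, field names, and helper are all hypothetical; a real adapter would fetch the response over HTTP and handle errors.

```typescript
// Minimal sketch: walk a parsed JSON response along a path of keys.
// The response shape and field names here are made up for illustration.
function pluck(json: unknown, path: string[]): unknown {
  return path.reduce<any>(
    (node, key) => (node == null ? undefined : node[key]),
    json
  );
}

// Pretend this body came back from the data source API over HTTP.
const body = JSON.parse('{"data": {"ETH": {"USD": 1234.56}}}');
const price = pluck(body, ["data", "ETH", "USD"]);
```

Keeping the extraction as a small pure function like this makes it easy to test the adapter without hitting the real API.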


It’s pretty much that simple to draw from the API; now you have the data in your external adapter to use however you wish. I also want to mention that you can make this external adapter API as large a project as you wish. For instance, this is where you could set up your AI neural network, or some quantum computing algorithm, if you wanted. Ultimately, whatever data you are happy with here is what you will send on its way for the smart contract to use.

Once the data is prepared and ready to send, you must ‘dress up’ the data so that a Chainlink node can read it in its own language. Thankfully, Chainlink makes this part pretty simple, as it is fairly standard for most external adapters, through the templates offered in their documentation (linked below in the references).
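As a sketch of that ‘dressing up’: Chainlink’s documented external-adapter response carries a job run ID, the full data payload, a single result value, and a status code. The helper below just assembles that envelope; the values passed in are illustrative.

```typescript
// The response envelope a Chainlink node expects back from an external
// adapter: jobRunID echoes the incoming request, data holds the full
// payload, result is the single value most jobs read, and statusCode
// mirrors the HTTP status.
interface AdapterResponse {
  jobRunID: string;
  data: Record<string, unknown>;
  result: unknown;
  statusCode: number;
}

function wrapForChainlink(
  jobRunID: string,
  data: Record<string, unknown>,
  result: unknown
): AdapterResponse {
  return { jobRunID, data, result, statusCode: 200 };
}

// Example: wrap the price we pulled out of the source API's JSON.
const payload = wrapForChainlink("1", { price: 1234.56 }, 1234.56);
```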

Zoom! The data is now moving on to your Chainlink node.


The Chainlink Node Operator And Oracle Contract

In this project, I set up my own Chainlink development environment and a local Chainlink node for quick testing. This is also technically a good thing: every additional node on the network furthers the ideal of decentralization.

Once my node was up and running, I logged in as an operator and created what is called a ‘Bridge’. The Bridge simply registers the external adapter I just wrote as an official adapter and makes it readily available for use in any ‘Job’ I want to write. Think of a Job as the translator between smart contracts and TypeScript external adapters.

To create a Job, we must write a Job spec, which is essentially a pipeline that our incoming data flows through, transforming it into a form that Ethereum (or whichever blockchain you are using) can understand.

To do this, we place our Bridge adapter in the Job spec alongside any necessary Chainlink ‘Core’ adapters, which are there to complete other operations on our data (e.g. converting it to a uint256).
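To see why such a conversion step exists: Solidity’s uint256 has no decimal places, so a multiply-style core task scales the float into an integer before it goes on-chain. A minimal sketch of that idea, with illustrative values:

```typescript
// Sketch of what a multiply-style core task does before a value becomes a
// uint256: scale away the decimal places, because the EVM only has integers.
// Consumers on-chain then know to divide by the same factor.
function toScaledInteger(value: number, times: number): number {
  return Math.round(value * times);
}

// Preserve two decimal places of a price feed value.
const scaled = toScaledInteger(1234.56, 100); // 123456
```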

In the past, this spec was written in JSON, but in newer versions of Chainlink, specs are written in TOML, so read up on that if you’re interested in running nodes and creating your own jobs.
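As a rough, abridged sketch (not a complete runnable spec), a TOML job spec wires the Bridge and core tasks into a small pipeline. The bridge name, job name, and parse path below are hypothetical, and a real direct-request spec needs additional tasks for decoding the on-chain request and encoding and submitting the response, so consult the Chainlink docs for the full format:

```toml
# Abridged sketch of a v2 (TOML) job spec. Bridge name, job name, and
# paths are made up; a real spec needs the full task pipeline.
type            = "directrequest"
schemaVersion   = 1
name            = "get-price-via-my-adapter"
contractAddress = "0xYourOracleContractAddress"
observationSource = """
    fetch    [type="bridge" name="my-adapter"]
    parse    [type="jsonparse" path="result" data="$(fetch)"]
    multiply [type="multiply" input="$(parse)" times=100]

    fetch -> parse -> multiply
"""
```

Note how the Bridge appears as just another task in the pipeline, chained to the core jsonparse and multiply tasks with `->`.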

One last thing to note: this is also a good point to deploy your ‘Oracle Contract’, which you’ll need to successfully finish the Job, and which you’ll need again in the next step when referring to your Job in your smart contract. Luckily, Chainlink makes this fairly simple through a nice template in their documentation, and it is a simple contract anyway. You can deploy it right from the Remix IDE and then grab the contract address from Etherscan.io.

The Consumer Smart Contract

Finally, we’ve reached the destination.

Think of the ‘Consumer Contract’ as simply the smart contract that is trying to access the real-world API data. There are great templates already out there for the parts of the contract that ‘grab’ the data from the Job as it runs through the Oracle contract (also in the Chainlink documentation below).

You’ll just need to know enough Solidity to write the functions that pull the endpoints, or specific fields, from the external adapter’s JSON responses. Then do as you please with the data!

Conclusion

The future holds so many use cases for smart contracts and for this idea of Oracles: bridging the scary gap between the computation and data we collect off-chain and the on-chain world where the blockchain does its thing. The architecture behind this process is being improved every day. Another great benefit is that many of these Oracle protocols are ‘blockchain-agnostic’, meaning they don’t rely specifically on Ethereum to succeed. New blockchains are created all the time, and this architecture keeps evolving to stay compatible with them. This is awesome for true decentralization!

References

https://docs.chain.link/ — Chainlink documentation

https://github.com/smartcontractkit/external-adapters-js — Chainlink external adapter TypeScript template repo

https://www.gemini.com/cryptopedia/what-is-chainlink-and-how-does-it-work#section-where-do-link-tokens-fit-in — a helpful explainer of how Chainlink and other oracle services work

https://docs.chain.link/docs/fulfilling-requests/ — fulfilling node requests and Oracle contract deployment

