Interview

Composable Data Assists Dapp Developers in Unlocking Rich Data Applications - Swaroop Hegde


Composable data, a flexible and modular approach in the field of data analytics, benefits decentralized application (dapp) developers constrained by the limitations of current data protocols. Swaroop Hegde, co-founder of Powerloom, explains that composable data maintains a decentralized database of data points verified through a consensus mechanism.


Clean Insights Versus Actionable Intelligence

Hegde, a thought leader in decentralized data issues, asserts that this approach ensures data integrity and reliability. This, in turn, enables the construction of “complex data points” from simpler, existing ones.

To illustrate this, the Powerloom co-founder suggests that a data point summarizing a decentralized exchange’s daily trade volume can be created by aggregating data points that record individual trade events from specific transaction contracts.

In his written responses to Bitcoin.com News, Hegde briefly discussed the differences between what are known as “clean insights” and actionable intelligence. He also offered advice to data networks on the steps needed to ensure that smart contracts are not supplied with inaccurate, unreliable, or tampered data. Below are all of Hegde’s responses to the questions posed.

Bitcoin.com News (BCN): Can you briefly talk about what starts as raw data, how it’s sourced and indexed and how it evolves into clean insights and actionable intelligence?

Swaroop Hegde (SH): Raw data on most programmable blockchains starts as state variables and events on smart contracts, apart from simple value transfers. These point to vital information about the interaction between wallets and the smart contract(s) that constitute decentralized applications (dapps) via transactions on the blockchain. Popular examples of such dapps would be decentralized exchanges and other decentralized finance (defi) protocols that allow you to lend, borrow, stake, earn yield, etc.
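As a minimal sketch of the idea, a raw smart contract event arrives as a log entry (emitting contract address, event signature topics, encoded payload), and indexing maps it into a typed record that later queries consume. All field names and values below are illustrative, not real chain data or any protocol's actual schema:

```python
# Hedged sketch: what "raw data" looks like at the source. A contract
# event is a log entry; indexing turns it into a structured record.
# Values here are hypothetical, not real chain data.
raw_log = {
    "address": "0xPairContract",                  # emitting dapp contract
    "topics": ["Swap(address,uint256,uint256)"],  # event signature (simplified)
    "data": {"sender": "0xWallet", "amount_in": 1.0, "amount_out": 1500.0},
    "block": 19_000_000,
}

def index_log(log: dict) -> dict:
    """Map a raw log entry to the structured row later queries consume."""
    return {
        "contract": log["address"],
        "event": log["topics"][0].split("(")[0],  # "Swap"
        "block": log["block"],
        **log["data"],
    }

print(index_log(raw_log)["event"])  # Swap
```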

Onchain data protocols, like Powerloom and others, facilitate this by offering tools for efficiently managing and analyzing large datasets. They leverage decentralized storage and computing resources to ensure data integrity and availability, allowing for the creation of sophisticated data markets and the seamless exchange of insights. By doing so, they play a crucial role in the data transformation process, enabling the generation of clean insights and actionable intelligence from raw data across various Web3 segments.

BCN: What’s composable data and how does it address the limitations of current data protocols for decentralized application developers?

SH: Most useful information on smart contract applications comes from combining data points captured over time by applying data operations like map-reduce, filter, and aggregation. This is the principle on which we built our composable data network.

Presently, most data protocols and platforms are built on a model that replicates the traditional relational database: basic columnar information is categorized into tables, queried, and then joined together by complex queries supported by intermediate caching strategies.

With blockchains, most information needs to fit into a time-series model, and the above approach complicates the development of useful information products such as dashboards, LP trackers, and aggregation workflow-based bots, among others. Managing libraries of customized queries and ORM modules on backends, and mapping them to data structures consumable by differing frontends, can quickly drive up the technical debt of an organization looking to hit the market fast and gather vital feedback from its end users. Not to mention the need to maintain their own blockchain infrastructure or other data subscriptions, often just to have reliable assurance of the synchronization state of the underlying data they consume.

In addressing these challenges, the concept of composable data simplifies the process by keeping a decentralized database of data points verified through a consensus mechanism. This ensures data integrity and reliability.

This allows for constructing complex data points from more straightforward, existing ones. For instance, a data point that sums up a day’s trade volume can be created by aggregating and filtering through essential data points that record individual trade events from specific transaction contracts, like those on Uniswap v2.
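The composition described above can be sketched as follows. The `TradeEvent` fields and `daily_volume` helper are hypothetical illustrations of the filter-and-aggregate step, not Powerloom's actual schema:

```python
from dataclasses import dataclass

# Hypothetical base data point: one record per individual trade event
# emitted by a pair contract (e.g. Uniswap v2). Illustrative fields only.
@dataclass
class TradeEvent:
    pair: str          # pair identifier, e.g. "WETH/USDC"
    timestamp: int     # Unix seconds
    amount_usd: float

def daily_volume(events, pair, day_start, day_end):
    """Compose a higher-order data point (24h volume) by filtering and
    aggregating the simpler trade-event data points."""
    return sum(
        e.amount_usd
        for e in events
        if e.pair == pair and day_start <= e.timestamp < day_end
    )

events = [
    TradeEvent("WETH/USDC", 1_700_000_100, 1500.0),
    TradeEvent("WETH/USDC", 1_700_050_000, 250.0),
    TradeEvent("WBTC/USDC", 1_700_000_200, 900.0),  # other pair, filtered out
]
print(daily_volume(events, "WETH/USDC", 1_700_000_000, 1_700_086_400))  # 1750.0
```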

BCN: Your platform claims to offer pre-computed data tailored to market needs. Can you explain what this service includes and how it responds to the rapidly changing market demands to provide dapps with accurate and relevant data solutions?

SH: Data markets are defined as a collection of:

  1. Data sources, i.e., smart contracts.
  2. Data model configuration that specifies the state variables and events of interest on such contracts.
  3. Compute modules that build ‘base snapshots’ and higher order snapshots that compose upon the base and other intermediate data points that build on top of each other.

Each component can be flexibly swapped in or out depending on the interest specified by a data market’s consumer(s). Signallers are usually the peers that respond to market demands, such as the ability to track contracts that qualify for a specific categorization of activity. For example, a decentralized exchange’s data market consumers might only be interested in contracts that cross a certain threshold of liquidity or display a particular pattern of trading activity.
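As a hedged sketch, the three components might be modeled as a swappable configuration object. Every name here (`DataMarket`, the source addresses, the compute keys) is hypothetical; the interview does not show Powerloom's actual configuration format:

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative model of a data market's three components.
@dataclass
class DataMarket:
    sources: list[str]                        # 1. smart contract addresses
    model: dict[str, list[str]]               # 2. events/state of interest per contract
    compute: dict[str, Callable] = field(default_factory=dict)  # 3. snapshot builders

market = DataMarket(
    sources=["0xPairA", "0xPairB"],
    model={"0xPairA": ["Swap", "Sync"], "0xPairB": ["Swap"]},
)

# Compute modules compose on each other: a base snapshot filters raw
# events, then a higher-order snapshot aggregates the base ones.
market.compute["base"] = lambda events: [e for e in events if e["name"] == "Swap"]
market.compute["volume_24h"] = lambda snaps: sum(s["amount"] for s in snaps)

# Any component can be swapped in or out on demand, e.g. a signaller
# adds a new source once it crosses a liquidity threshold.
market.sources.append("0xPairC")
```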

BCN: What is the difference between insights and actionable intelligence, particularly from the perspective of Web3 builders who need ready-to-use data points for their decentralized applications?

SH: Insights are not flexible; they are usually broad-spectrum information on data points accepted as typical for smart contract protocols, like price movement over the past hour.

Actionable intelligence, by contrast, is flexible. It is made possible by curators working in tandem with snapshotters: curators dynamically update data points and their composition definitions according to the demands of a data market, while snapshotters quickly generate such composed data points, which are readily consumed by information products that change shape with the activity of underlying data sources.
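A minimal sketch of that division of labor, with entirely hypothetical names and composition specs: a curator (re)defines what a data point is composed of, and a snapshotter then produces it under the new definition.

```python
# Composition definitions a curator can update on demand; the spec
# format is illustrative only.
compositions = {"price_move_1h": {"window_s": 3600, "op": "delta"}}

def curate(name, window_s, op):
    """Curator: dynamically (re)define a data point's composition."""
    compositions[name] = {"window_s": window_s, "op": op}

def take_snapshot(name, samples):
    """Snapshotter: generate the composed data point from raw samples."""
    spec = compositions[name]
    if spec["op"] == "delta":
        return samples[-1] - samples[0]
    if spec["op"] == "sum":
        return sum(samples)

curate("volume_5m", 300, "sum")             # market now demands a 5-minute volume
print(take_snapshot("volume_5m", [10, 20, 5]))  # 35
```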

BCN: Web3 is not without its vulnerabilities, and these could get amplified amid the growing convergence of artificial intelligence (AI) and Web3. In your opinion, what are some of the precautionary measures data networks should take to ensure that smart contracts are not fed with inaccurate, unreliable, or tampered data?

SH: To safeguard smart contracts from unreliable or tampered data, data networks should:

  • Ensure data decentralization through consensus among multiple participants, reducing the risk of manipulation.
  • Utilize tamper-proof storage technologies like IPFS or Filecoin, providing onchain proof of data integrity.
  • Introduce a verification period allowing for the review and challenge of data accuracy before its final use.

These measures help maintain the security and reliability of data used in smart contracts.

BCN: The Web3 ecosystem is evolving rapidly and the need for reliable data is expected to explode in the coming years. Can you talk about the obstacles to ensuring data composability and accessibility in an ecosystem that is supposed to be infinitely scalable?

SH: One of the biggest obstacles comes down to the differences in the execution environments of smart contracts and in how individual accounts and wallets are represented. Web3 applications inevitably get adopted by users across varying blockchain platforms. Ensuring reliable cross-chain information would require establishing certain data access and retrieval standards, to enable a level playing field not just for end users but also for builders and information providers who want to enrich the experience with better tools and products.

What are your thoughts about this interview? Please share your opinions in the comments section below.