Interoperability: The Liquidity Angle
Why General Messaging Is Not Enough and Liquidity-Specific Solutions are Needed
2022 saw the introduction of multiple new L1 blockchains and L2 rollups, which brought new solutions (or attempted solutions) to challenges such as scalability. The flip side is further fragmentation of an already fragmented space. Interoperability would reduce that fragmentation, yet it remains a hard problem to solve - one that hinders adoption, usability, security and stickiness.
In fact, interoperability is more than a single problem to solve, and it has no magical panacea. Moreover, any attempt to solve it in a generic, one-size-fits-all way is destined to be inferior to specialized protocols. Instead, over time we may see layered interoperability point solutions composed together, bringing us closer to an interoperable future.
To better understand the interoperability problem space, I will first zoom in on the attributes of current interoperability solutions and their shortcomings. Later, I'll try to define a framework for the optimal solution.
Messaging protocols - such as Axelar, IBC, Wormhole and LayerZero - enable (as their name implies) messages to be sent across chains.
They may be perfect for triggering and invoking remote contracts and functionality. For simplicity's sake, we can think of them as the web3 version of remote procedure calls (RPC) or SOAP in legacy architectures. Such protocols have an important role in creating a composable interchain architecture (much like the partially fulfilled promise of Web Services back in the pre-cloud internet days).
However, while smart contract interop/communication is an important use case, it doesn't solve every need and challenge. The interoperability reality is more complex.
Take, for instance, data and liquidity. Both need more than just a crossing message: they require additional resources to exist on the other side (accessible data and liquid assets, respectively). And while data interoperability has an important role in the web3 evolution, it is beyond the scope of this post.
Let’s focus on liquidity, then. We need more than messaging in order to solve the liquidity fragmentation problem. Imagine TCP/IP and its ability to relay packets (messages). It allowed the emergence of Wide Area Networks (WAN) - i.e. communication and distributed execution of functionality across networks and locations. It enabled, for instance, the transfer of files - digital collections of data bits (characters, numbers or snippets of machine language).
Digital files can be copied and replicated without changes to their fundamental attributes.
But money is different. Although money can be represented in a digital form, it cannot be replicated without losing its fundamental attribute: scarcity.
Files were transferrable peer-to-peer. Money was not, until the invention of the first distributed ledger technology (aka Bitcoin).
If money is “sent” from Alice to Bob, Alice should have less, and Bob should have more. And someone/something needs to make sure it happens.
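This conservation property can be made concrete with a toy ledger (purely illustrative; the names and structure here are hypothetical):

```python
# Illustrative only: a toy ledger showing why a money transfer must
# conserve total supply - Alice ends with less, Bob with more.
balances = {"alice": 100, "bob": 0}

def transfer(ledger: dict, sender: str, receiver: str, amount: int) -> None:
    """Move `amount` from sender to receiver, refusing overdrafts."""
    if ledger[sender] < amount:
        raise ValueError("insufficient funds")
    ledger[sender] -= amount
    ledger[receiver] += amount

transfer(balances, "alice", "bob", 30)
assert balances == {"alice": 70, "bob": 30}
# Total supply is unchanged - scarcity is preserved.
assert sum(balances.values()) == 100
```

The "someone/something" enforcing this invariant is exactly what a ledger (bank or blockchain) provides.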
The emergence of TCP/IP and WAN could not have been a complete solution for digital money transfers. Banks were still needed as trusted entities to mobilize the transferred scarce resource. In a similar way, cross-chain messaging protocols do not address the need for trustless interoperable money transfer. When the original assets reside on the source chain, even when a message gets carried across chains to invoke a remote procedure or workflow, tokens need to somehow become accessible on the target chain. And since there is no way to transfer native tokens (by definition), there are only two ways to make it happen: either re-create them or move/withdraw them from a dedicated repository.
Let’s unpack that even more.
Liquidity and asset interoperability can be achieved in one of two ways: bridging and smart contract-managed liquidity pools.
Bridging
The native asset is locked in the source chain by a smart contract.
A message that contains the needed information (e.g. target wallet, amount) is relayed to a smart contract on the target chain.
Upon reception and validation, a wrapped/synthetic version of the assets is minted to represent the same value.
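The three lock-and-mint steps above can be sketched as a toy simulation (all names here are hypothetical; real bridges add relayers, validators and on-chain verification):

```python
# Toy sketch of lock-and-mint bridging. Hypothetical names throughout;
# real bridges involve smart contracts, relayers and validation.
class Chain:
    def __init__(self, name: str):
        self.name = name
        self.balances: dict[str, int] = {}   # native token balances
        self.wrapped: dict[str, int] = {}    # bridge-minted synthetics

def bridge_transfer(src: Chain, dst: Chain, wallet: str, amount: int) -> None:
    # 1. Lock the native asset in a vault contract on the source chain.
    if src.balances.get(wallet, 0) < amount:
        raise ValueError("insufficient funds")
    src.balances[wallet] -= amount
    src.balances["bridge_vault"] = src.balances.get("bridge_vault", 0) + amount
    # 2. Relay a message with the needed information to the target chain.
    message = {"wallet": wallet, "amount": amount}
    # 3. Upon reception, mint a wrapped/synthetic version of the asset.
    dst.wrapped[message["wallet"]] = (
        dst.wrapped.get(message["wallet"], 0) + message["amount"]
    )

ethereum, avalanche = Chain("ethereum"), Chain("avalanche")
ethereum.balances["alice"] = 50
bridge_transfer(ethereum, avalanche, "alice", 20)
assert ethereum.balances["bridge_vault"] == 20   # the locked "honeypot"
assert avalanche.wrapped["alice"] == 20          # freshly minted synthetic
```

Note how the vault accumulates locked value (the honeypot risk discussed below) and how the minted tokens are a new, bridge-specific asset rather than the native one.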
The pros:
In theory, no liquidity is needed on the target chain to enable the transfer - which removes any caps and limitations on the amounts that can be transferred. In practice, this advantage is limited by the liquidity available to swap the minted tokens into native tokens (see cons below).
The cons:
Smart contracts that hold the native tokens (at the source chain) are honeypots for hackers.
Smart contracts that mint tokens are money printing machines, and if the smart contract gets hacked, the machine goes brrrr…
Smart contracts that mint tokens are complex - and therefore may become slow and expensive gas guzzlers.
To use the tokens, they must be swapped into native tokens. Someone needs to provide the liquidity for that, which creates capital inefficiencies (and limits the liquidity available to support the transfers).
Synthetic tokens create further fragmentation. Each bridge creates its own version of minted tokens. Instead of reducing fragmentation, this method in fact increases it.
This approach adds attack vectors and trust assumptions, such as relayers and oracles.
Scalability is limited since every bridge supports one source chain and one target chain.
Smart-contract-managed liquidity pool networks
The native asset is locked in the source chain by a smart contract.
A message that contains the needed information (e.g. target wallet, amount) is relayed to a smart contract on the target chain.
Upon reception and validation, the same value is withdrawn from the local liquidity pool in the target network.
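These steps differ from bridging only in the last one: instead of minting, the target side pays out of a pre-funded pool. A toy sketch (hypothetical names, native tokens only):

```python
# Toy sketch of a liquidity-pool transfer. Hypothetical names;
# real protocols add validation, relayers and fee logic.
def pool_transfer(src_pool: dict, dst_pool: dict, wallet: str, amount: int) -> None:
    # 1. Lock native tokens into the pool on the source chain.
    src_pool["liquidity"] += amount
    # 2. Relay a message with the needed information to the target chain.
    message = {"wallet": wallet, "amount": amount}
    # 3. Withdraw the same value from the local pool on the target chain.
    if dst_pool["liquidity"] < message["amount"]:
        raise RuntimeError("pool drained - transfer cannot be served")
    dst_pool["liquidity"] -= message["amount"]
    dst_pool.setdefault("payouts", {})[message["wallet"]] = message["amount"]

eth_pool = {"liquidity": 100}
avax_pool = {"liquidity": 40}
pool_transfer(eth_pool, avax_pool, "bob", 25)
assert eth_pool["liquidity"] == 125
assert avax_pool["liquidity"] == 15   # pools drain as transfers flow one way
```

The recipient gets native tokens with no minting step, but the target pool shrinks with every transfer - the drain risk listed in the cons below.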
The pros:
Native tokens only - no risk of brrrr (so no free money printing) and no fragmentation.
Faster finality - no minting done by smart contracts.
The cons:
Smart contracts that hold the native tokens (on both sides) are honeypots for hackers.
Additional attack vectors and trust assumptions are needed to enable the operation: relayers and oracles.
Liquidity pools may get drained, effectively disabling support for a chain if its related liquidity pool doesn't get replenished in time.
Note: Some protocols use dual-sided liquidity pools that always contain their own native tokens which function as a bridging asset. This is a variation of the above methods. While it may remove some of the trust assumptions, it creates a set of tradeoffs and attack vectors such as exchange rate manipulation, and impermanent loss.
Note 2: Some protocols try to use both methods - one as the main method and the other as a fallback. That, too, involves tradeoffs and cannot escape the aforementioned disadvantages.
So… back to the drawing board.
What if we were to re-design a solution that addresses the needs while avoiding the downsides of the existing solutions?
Here's how it would probably look:
Removal of the known attack vectors:
Avoid smart contracts
Avoid reliance on oracles
No external relayers
No minting/wrapping
Use only native tokens
Removal of capital inefficiencies
Use only native tokens that don’t require swaps to be used
Liquidity pools that hold only the needed amounts - “just enough liquidity” (avoid supply > demand)
High protocol efficiency and a high service level:
No liquidity shortages that hurt users (avoid supply < demand)
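To make the "just enough liquidity" requirement concrete, here is a hypothetical sizing heuristic (not any protocol's actual algorithm): target each pool at recent average demand plus a safety buffer, so supply tracks demand from both sides:

```python
# Hypothetical "just enough liquidity" heuristic - illustrative only,
# not Kima's actual liquidity management algorithm. Each pool is sized
# to recent average demand plus a buffer: no idle capital (supply > demand),
# no shortages (supply < demand).
def target_pool_size(recent_outflows: list[int], buffer_ratio: float = 0.2) -> int:
    """Return the liquidity a pool should hold for the next period."""
    if not recent_outflows:
        return 0
    average_demand = sum(recent_outflows) / len(recent_outflows)
    # Hold enough for average demand plus a safety margin - no more.
    return int(average_demand * (1 + buffer_ratio))

# A pool that served 80-120 tokens per period keeps ~120, not thousands.
assert target_pool_size([80, 100, 120]) == 120
```

Any real algorithm would be far more sophisticated (forecasting, rebalancing across chains, incentives), but the goal is the same: keep supply close to demand.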
Enter Kima.
We designed Kima to be a blockchain, a protocol and an SDK that removes the attack vectors, eliminates capital inefficiencies, maximizes service levels and creates a sustainable model.
It uses a blockchain as a transaction ledger, validators that validate the transactions, one-sided liquidity pools that hold only native tokens, threshold signature schemes (TSS) and secure enclaves (SGX) to secure the keys, and a unique liquidity management algorithm to keep supply and demand balanced. It ticks all the boxes of an optimal solution.
Kima is neither a bridge nor a messaging protocol. The latter is not needed for Kima's operation. The former can be built on top of Kima, like many other use cases and applications. For those, Kima takes care of money interoperability; its complexity and security challenges are completely abstracted away.
Epilogue
Web3 needs a robust infrastructure. In many cases, generic tools and layers solve a wide range of problems. Yet there are problems that require specific solutions. That's why we will always witness unbundling and specialization working in tandem with generic protocols. That's also the case with liquidity transfer - which cannot be fully addressed by general messaging protocols.