Kadena – SpecR Handbook

Kadena's public chain introduces Chainweb, a previously unexplored scaling concept that enables parallel block processing across multiple PoW chains braided together into a web of chains.

Disclaimer: The information below aims to be impartial and is subject to the terms and conditions of the website. It is not investment advice and should not be perceived as such.

Kadena: A viable Proof of Work solution for Scaling Blockchain


Scalability is one of the most central issues in the current decentralised ecosystem. It receives ample air-time and has a significant share of mind; even the most lay investor understands on some level that a mature distributed ledger technology (DLT) will be one that has solved the scaling issue.

Although existing DLT solutions are making headway on this front, an on-chain scaling solution that does not partially or fully compromise the other core tenets of the DLT system is yet to be developed. Kadena introduces Chainweb, a novel architecture that leverages parallelism by braiding multiple PoW chains together, to meaningfully increase network throughput while preserving the core tenets of DLTs. In order to understand the implications of such a technology, it is important to place it in the context of the current landscape and existing solutions.

The lead-up to the Kadena section of the article is intended to highlight the key tenets that frame the scalability debate and to provide a benchmark against which new and existing solutions can be compared. However, if you would prefer to dive straight into the Kadena section, please skip ahead.

Scaling: A Problem with no simple answers

Scalability is the ability of distributed ledger technologies to achieve transactional throughput equivalent to their centralised counterparts and to meet the ongoing requirements of an expanding network.

Scalability forms one of the three fundamental requirements of a decentralised economy, alongside decentralisation and security – the trilemma. The largest hurdle in overcoming the scalability bottleneck is achieving the required level of throughput without compromising the other core tenets. Existing technologies have only been able to find a partial balance between the three.

PoW has presented the most viable case for achieving security and trustlessness in the current landscape, but its technical implementations concede compromises to scalability, as seen even in industry leaders Bitcoin and Ethereum.

The current barriers to scaling PoW blockchains can be summarised as two deceptively simple issues:

  1. Time to add a transaction to a block – this refers to the various limitations of a mineable blockchain that stem from the inherent work element of PoW.
  2. Time taken to reach consensus – time taken for all nodes to verify work done and limitations that arise from the requirement to remain trustless.

A variety of scalability solutions are being developed such as second layer solutions, modification of block size, consensus and data structure. See Table 1.0 below for a comparison of alternative solutions that make up the current landscape.

Merits of PoW

PoW as the original model of a secure decentralised system offers certain merits that act as a benchmark to understand the positioning of new and alternate solutions and what compromises they are making in their endeavour to scale.

Objectivity vs Subjectivity

Objectivity, in a network, allows any new user to independently come to the same conclusion on the current state of the chain as any other user (pre-existing or otherwise). PoW is the best current example of an objective system. The reason a new node on a PoW network is able to independently come to the same conclusion as the rest of the network is the external resource required by PoW: computational power. That is to say, a new node entering a PoW network is able to rely on the fact that the current state of the chain is always the state that requires the most computational work. It can do this without the need for any information except that to which it is privy by virtue of joining the network:

  • protocol definition (the rules by which a protocol operates)
  • the set of all blocks previously published on the chain (Source)
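
To make the heaviest-chain idea concrete, below is a minimal Python sketch (an illustration of the principle, not Kadena's or Bitcoin's implementation) of how a newly joined node could rank competing forks using nothing more than the published blocks and the protocol's work rule. The block fields and difficulty targets are made up for the example.

```python
def block_work(target):
    # Approximate expected number of hashes needed to meet a difficulty target.
    return (1 << 256) // (target + 1)

def total_work(chain):
    return sum(block_work(block["target"]) for block in chain)

def canonical_chain(forks):
    # The objective anchor: the fork embodying the most computational work wins.
    return max(forks, key=total_work)

# Two hypothetical forks with illustrative difficulty targets.
fork_a = [{"target": 2**240}, {"target": 2**240}]
fork_b = [{"target": 2**239}, {"target": 2**239}, {"target": 2**241}]
assert canonical_chain([fork_a, fork_b]) is fork_b  # more cumulative work
```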

On a subjective network, nodes cannot solely depend on knowledge of the protocol rules and the full set of published blocks to securely interact with the network. They require external information, such as recent block hashes from block explorers, forking event details or reputation, to participate. That is to say, a new node entering a subjective network does not have access to objective anchors such as computational power in PoW, but rather relies on subjective anchors such as reputation or identity to know the true state of the chain, requiring trust in a third party. In such cases a malicious party may be able to convince a new node entering the network that its false chain is in fact legitimate. The subjective network does not protect new nodes against such attacks, short of relying on the subjective factors mentioned above.

While subjective networks offer varying degrees of security through the provision of external information, PoW (an objective mechanism) is able to offer an inherent layer of security against involuntary alterations to the chain.

True Trustlessness

The most valued merit of the trustless model is its ability to preserve the network’s independence from any entity acting as a trusted third party (government, bank or otherwise) with the power to tamper with, control or censor any element of the network.

Close proximity transactions between two parties enable the participants of the transaction to easily authenticate the identity of each other and confirm the validity of the transaction itself (i.e. the physical exchange of value). This is the equivalent of two people meeting at a designated location to exchange a sum of cash. However, such close proximity transactions are not practical or scalable in a modern economy. This has led to the need to place trust in centralised third parties such as banks, payment systems etc. to mediate and validate secure transactions. The inherent trust required in a centralised system gives rise to the vulnerabilities associated with a powerful, all-knowing middleman – ceding unchecked authority to an entity not free of human error or fallibility. Distributed ledger technologies are able to imitate the convenience and security of close proximity transactions while solving for a practical implementation through peer-to-peer digital transactions – thus eliminating the middle-man. This is made possible by the combination of public key encryption and incentivised consensus mechanisms which facilitate secure peer-to-peer networks.

While most, if not all, DLT networks leverage public key encryption to authenticate participants of a transaction, it is the robustness of the consensus mechanism that determines the extent of true trustlessness achieved by a network. PoW currently offers the most trustless system of transaction verification, incentivising its miners to come to a consensus on the true state of the blockchain (the network thus coming to an agreement on the validity of transactions) in the form of transaction fees and mining rewards. The computational work element of PoW removes all vulnerabilities associated with human contribution to the verification process.

It is this distinguishing feature of PoW that enables it to offer a level of trustlessness beyond other consensus mechanisms which rely on the use of validators. A validator in a blockchain is a “human element” or third party to whom the network cedes some degree of trust to confirm the verification of transactions. This approach has been adopted by many chains which utilise Proof-of-Stake (PoS), Delegated-Proof-of-Stake (DPoS) or similar consensus models. To understand the spectrum of trustlessness, one can imagine centralised systems which completely relinquish trust to a third party on one end, and complete trustlessness on the other extreme. Even the most robust security technologies currently in existence (encryption and cryptographic protocols) do not provide systems of absolute trustlessness; variations of PoW, however, have made the most headway in working towards trust elimination. The various validator-reliant consensus mechanisms lie in the middle, redistributing rather than eliminating trust.

True Permissionless Network

Permissionless networks allow anyone to participate in and contribute to the network. However, within permissionless networks there still exist some barriers to entry, albeit minimal. For instance, a PoW network requires computational power and a PoS network requires some degree of value staking. It is important to note that this does not refer to the guarantee of nodes being able to meaningfully participate upon entering the network. Nodes with more hashing (PoW) or economic (PoS/DPoS) power will be better able to leverage the network.

On the other hand, permissioned networks require identification and approval of the users in order to interact with the network. While permissioned networks support the privacy and scaling needs of corporations and traditional institutions, permissionless networks endeavour to facilitate true decentralisation.

The existing DNA of network decentralisation within the space is formed by a number of permissioned-permissionless hybrids that sit between the two extremes. At present, scalability is more easily achieved by networks subscribing to the permissioned end of the spectrum. In an attempt to solve for the scalability bottleneck without compromising decentralisation, the space has given rise to a new variation of PoW-based network – Kadena.

Unforgeable Costliness

Gold is one of the best examples to demonstrate the concept of unforgeable costliness.

Gold is a precious metal formed during the collision of neutron stars, which ejects a flood of neutrons that forge into this heavy element (Source). Short of being able to replicate this natural phenomenon, it is impossible to forge gold in a profitable manner. In this example, the difficulty of recreating a star collision is the costliness of creating gold.

As forging gold is not only near impossible but also unprofitable even if achieved, gold becomes an unforgeable commodity, leading to its scarcity. This property gives gold value independent of any trusted third party verification.

“The unforgeable costliness pattern includes the following basic steps:

(1) find or create a class of objects that is highly improbable, takes much effort to make, or both, and such that the measure of their costliness can be verified by other parties.
(2) use the objects to enable a protocol or institution to cross trust boundaries” (Source)

Drawing from this example, PoW is able to capture properties of unforgeable costliness similar to gold. The costliness of PoW is derived from the work required to mine a PoW blockchain, and this work is then able to be verified by any other node on the network. By virtue of this, PoW creates value on the network that is independent of any trusted third party verification, ultimately enabling the network to function trustlessly in an inherently distrustful situation.

Bitcoin is the most notable PoW network to have captured unforgeable costliness alongside its other properties, which led to its innovative breakthrough. However, unforgeable costliness remains unique to PoW networks and is yet to be solved for by the various competing consensus mechanisms such as PoS. The designs of most competing mechanisms more actively focus on other elements such as ‘randomness’ and ‘incentivisation’.


Alternative Solutions

Alternative solutions take various approaches by optimising for different layers of the technology of a decentralised economy. The leading solutions can be categorised as:

  • First layer solutions: these alter the base chain characteristics to cater for increased scalability.
  • Second layer solutions: these support the base chain’s network activity through off-chain operations to improve scalability
  • Optimisation of consensus mechanisms: these provide alternative methods of reaching block consensus in order to achieve higher throughput
  • Alternative data structures: these introduce new decentralised architectures that are distinct from blockchain design to provide new scalability solutions.

In order to understand the above solutions, it is important to consider what decentralisation and security trade-offs they are making. We will attempt to benchmark the solutions against the merits of PoW and draw comparisons between their positioning within the trilemma.

First Layer Solutions

Solution | Block Size | Sharding
Description | Block size refers to the maximum capacity of any one block. | Sharding is the splitting up of the main chain’s state into partitions called shards, each shard comprising its own state and transaction history.
Scalability | Increased block size can accommodate more transactions per block. | Parallelisation of block creation as shards act concurrently; in effect, more transactions are processed by the overall network.
Decentralisation | Block size increase will inflate the computational power, data storage capacity and bandwidth required to effectively mine the blocks, therefore limiting participation. | The ideal scenario of sharding would be able to replicate the same level of decentralisation as its unsharded network protocol.
Security | Increased block size results in the exacerbation of vulnerabilities related to centralisation, increasing exposure to existing attack vectors. | Partitioning the network increases vectors of attack, especially as they pertain to attacks on individual shards and their interaction with the network.

Description


Block Size: A block is a group of transactions that are compiled together, verified and appended to the blockchain upon confirmation. Block size refers to the maximum capacity of any one block. For example, a Bitcoin block at present is 1 MB, meaning that this is the upper limit of transaction data that can be processed at any one time; the block size in turn is one of the determinants of the chain’s throughput in transactions per second (TPS). Block size increase, a proposed scaling solution, entails the expansion of the block limit in order to facilitate a higher TPS.
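
As a rough illustration of how block size bounds throughput, the sketch below uses assumed figures; the average transaction size and block interval are illustrative values, not measurements.

```python
def max_tps(block_size_bytes, avg_tx_bytes, block_interval_s):
    # Throughput ceiling implied by block capacity and block production time.
    txs_per_block = block_size_bytes / avg_tx_bytes
    return txs_per_block / block_interval_s

# Roughly Bitcoin-like numbers: 1 MB blocks, ~250-byte transactions, ~600 s blocks.
print(round(max_tps(1_000_000, 250, 600), 1))  # ~6.7 TPS
# Doubling the block size doubles the ceiling, at the cost of heavier blocks
# for every full node to store and relay.
print(round(max_tps(2_000_000, 250, 600), 1))  # ~13.3 TPS
```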

Sharding: Sharding is the splitting up of the main chain’s state into partitions called shards, each shard comprising its own state and transaction history. State in this context refers to the complete set of information that defines the activities and records of a system at any one point in time. The shards enable the mainchain to achieve a higher throughput by facilitating parallel processing of transactions through the various shards. Sharding is a first layer solution and remains exclusively at the protocol level.

Scalability


Block Size: An increased block size allows more transactions to be mined in a shorter time, as each block can accommodate more transactions. As the number of transactions on the network increases, the number of full blocks (blocks that have reached full capacity) will increase. This can result in a backlog of transactions which may delay or entirely prevent transaction confirmations. An increased block size results in larger capacity and thus fewer full blocks, reducing network congestion.

Sharding: Sharding achieves higher throughput by scaling horizontally through the compartmentalisation of the network state into multiple shards. Instead of all nodes on a base chain processing a sequential blockchain in which only one block is mined at any one time, the multiple shards are able to simultaneously process different transaction groups. Shards maintain their own state and block history, allowing them to process transaction activity independently. Each shard is correlated to a state root recorded on the main blockchain, keeping the global state up to date.
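
A minimal sketch of the relationship just described, under a deliberately simplified model (a hash of the serialised balances stands in for a proper Merkle root): shards process their own transactions independently, and the main chain records only one state root per shard.

```python
import hashlib

def state_root(shard_state):
    # Stand-in for a Merkle root: a hash over the shard's serialised state.
    return hashlib.sha256(repr(sorted(shard_state.items())).encode()).hexdigest()

# Each shard keeps its own state and processes its own transactions in parallel.
shards = {
    0: {"alice": 10, "bob": 5},
    1: {"carol": 7, "dave": 3},
}

# The main chain tracks the global state via one root per shard,
# rather than storing every shard's full transaction history.
main_chain_commitment = {sid: state_root(state) for sid, state in shards.items()}
print(main_chain_commitment)
```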

One can imagine that as the network grows, first-order shards may themselves become congested, giving rise to the need for further scaling. In such a situation, sharding may evolve into what is known as super-quadratic sharding, which simply refers to the process of creating shards of shards. True to sharding’s unique property of horizontal scaling, throughput increases with network growth.

Furthermore, second layer solutions compatible with sharding such as Plasma offer sharded blockchains the potential to achieve exponential scaling through compounded effect.

Decentralisation


Block Size: With block limits even as low as 1 MB, PoW chains already face concerns of a tendency towards centralised mining powers, given the associated bandwidth requirements. Increasing the block size will further inflate the computational power, data storage capacity and bandwidth required to effectively mine the blocks. The associated cost will limit meaningful block discovery to only the few participants for whom this is economically viable, compromising decentralisation to a greater extent.

Sharding: One of the pitfalls of sharding is that it cannot be implemented with PoW, as the network hashpower is diluted across the shards, creating security vulnerabilities. Thus sharding is typically seen as a validator-based solution whereby the aforementioned vulnerabilities are diminished. Emerging manifestations of validator-based solutions, such as that on Ethereum, attempt to minimise any centralisation associated with these validators; one example is the random selection and shuffling of validators from the broader network proposed by PoS networks. The ideal scenario of sharding would replicate the same level of decentralisation as its unsharded network protocol.

Security


Block Size: Increased block size results in the exacerbation of vulnerabilities related to centralisation and as a result increase the exposure of PoW chains to existing attack vectors.

Sharding: Sharding, as a first layer solution that alters the structure of the base chain, makes certain security trade-offs in its endeavour to achieve greater scalability. Some examples of security concerns associated with sharding are outlined below:

  • Single-shard takeover attacks (1% attack): When a main chain is compartmentalised into shards, the network’s hashing power is distributed amongst the shards. For instance, if a chain has 100 shards, each shard would account for 1% of the network’s hashing power. This creates the potential risk that a malicious party with as little as 1% of the main network’s hashing power is able to command 100% of an individual shard – easily attacking the respective portion of the network (a back-of-envelope sketch of this arithmetic follows this list).

    A proposed measure to mitigate this risk is to move from PoW to a validator-based consensus. Random sampling of validators with reshuffling removes the vulnerability of validators being able to choose, or know ahead of time, which shards they will be working on. However, random sampling and reshuffling give rise to their own set of complications which diminish the efficiency of the network. This measure requires validators to potentially sync with completely new chains every time they are randomly allocated as collators to a new shard, introducing latency complications for the network as well as increased overhead and inconvenience for validator nodes.

    Due to the distribution of the network’s hashing power amongst multiple shards, shards operating on PoW are more vulnerable to 51% attacks leveraging collusion or bribery than non-sharded networks. It is important to note that in this instance random sampling is not possible, given that miners on a PoW network cannot be prevented from mining any specific shard.


  • Fraud detection: In the event that an invalid transaction is committed to the network, all nodes on a typical blockchain are able to detect the fraudulent activity, as they all operate on the one true state of the chain. However, on a sharded chain, if an invalid collation or state claim is made, short of a secure messaging protocol between shards there is no reliable or guaranteed way for nodes to detect and prevent fraudulent activity.

    Additionally, due to diminished fraud detection on a sharded blockchain, the data availability problem becomes harder to solve for – in comparison to a non-sharded blockchain. The data availability problem refers to the malicious act of partially withholding block information in order to hinder safety measures such as fraud proofs. In this situation, the ‘honest’ nodes on the network may flag such incomplete blocks. However, if the malicious party publishes the remaining information on the block immediately after the alarm is raised, the nodes not monitoring the activity on that specific block in that exact moment are unable to determine who the malicious actor was – the node that withheld block data or the node that seemingly raised a false alarm. A malicious actor is able to repeat this process until the altruistically motivated ‘honest’ nodes become disincentivised to continue flagging incomplete blocks. At this point, the malicious actor can freely broadcast incomplete blocks without the interference of honest nodes and manipulate the chain.
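
The back-of-envelope sketch below restates the single-shard takeover arithmetic from the first bullet above. The figures are illustrative and assume the network's hashing power is spread evenly across the shards.

```python
def attacker_vs_one_shard(attacker_share_of_network, num_shards):
    # Ratio of the attacker's hashpower to the hashpower nominally securing a
    # single shard (i.e. the network total divided evenly across the shards).
    hashpower_per_shard = 1.0 / num_shards
    return attacker_share_of_network / hashpower_per_shard

# With 100 shards, an attacker holding just 1% of the network's hashpower can
# match the entire hashpower of one shard; 2% comfortably overwhelms it.
print(attacker_vs_one_shard(0.01, 100))  # 1.0
print(attacker_vs_one_shard(0.02, 100))  # 2.0
```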

Second Layer Solutions

Solution | Lightning Network | Plasma | Raiden
Description | Lightning Network is an off-chain second layer payment protocol that facilitates bi-directional transactions, only committing to the main chain upon closure of payment channels. | Plasma is a second layer scalability solution which relies on blockchain nesting to redirect load off the main chain and onto child chains. | The Raiden Network is an off-chain second layer payment system developed for ERC20-compliant token transfers on the Ethereum blockchain. It is a variation of Bitcoin's Lightning Network.
Scalability | The Lightning Network, in the absence of the computational work necessitated by PoW, is not inhibited in its ability to scale, unlike the main chain. | Plasma improves its root chain’s ledger scalability by redistributing network activity to child chains which only commit final transactions to the main chain. | Similar to the Lightning Network, the Raiden Network also operates as a second layer solution for transactions based on payment channel technology.
Decentralisation | Has the risk of centralised transfer hubs forming with the incentive of capturing a greater share of the network fees. | Given the flexibility of plasma chains to determine their own consensus mechanisms and chain rules, concerns of centralised child chains emerging are not completely misplaced. | Has the risk of centralised transfer hubs forming with the incentive of capturing a greater share of the network fees.
Security | As the Lightning Network is a separate network, there are vulnerabilities which may allow for security breaches through the manipulation of both the Lightning Network and the main chain. | Child chains are exposed to attack vectors from block producers. Although the block producers of any one child chain do not necessarily belong to a single entity that could easily collude, the risk of malicious block producers to users of the child chains still exists in various forms. | As the Raiden Network is a separate network, there are vulnerabilities which may allow for security breaches through the manipulation of both the Raiden Network and the main chain.

Description


Lightning Network: Lightning Network is an off-chain second layer payment protocol that facilitates bi-directional transactions without needing to access the main-chain; the main-chain is only used to commit netted transactions which were completed off-chain.

It presently operates on top of the Bitcoin blockchain and transacts in BTC only.

Plasma: Plasma is a second layer scalability solution which relies on blockchain nesting to redirect load off the main chain and onto child chains. Through the creation of a foundational smart contract on the root chain (i.e. the public chain), Plasma enables applications to function off-chain through a series of smart contracts which form the child chain. Child chains are then able to support a further subsidiary branch of child chains, creating a tree structure. Child chains need only commit to parent chains on a periodic basis or in the event of a dispute, significantly reducing the network load of the root chain and allowing it to scale. However, if disputes are not correctly settled by parent chains, the root chain (public chain) will act as the final resort for dispute resolution.

Raiden: The Raiden Network is an off-chain second layer payment system developed for ERC20-compliant token transfers on the Ethereum blockchain. It is a variation of Bitcoin's Lightning Network. Raiden additionally allows its users to pay for peripheral services on the network with its native token, RDN.

Scalability


Lightning Network: The Lightning Network, in the absence of the computational work necessitated by PoW, is not inhibited in its ability to scale in the way the main chain is. By facilitating most payments on a second layer payment protocol and only committing to the main chain upon closure of payment channels, the Lightning Network reduces the network load on the main chain. This provides for scalability of network throughput up to millions of transactions per second (TPS).

Plasma: Plasma improves its root chain’s ledger scalability by redistributing network activity to child chains which only commit final transactions to the main chain. While this is typical of a second layer solution, Plasma’s compatibility with on chain scaling solutions such as Sharding is able to provide compounded scalability for its root chain.
Furthermore, Plasma’s child chains each carry similar functionalities to the root chain, replicating its benefits with the added optionality to operate on various consensus mechanisms. Thus, Plasma not only allows the root chain to scale, but simultaneously equips child chains with the ability to run DAPPs and smart contract functionalities congestion-free.

Raiden: Similar to the Lightning Network, the Raiden Network also operates as a second layer solution for transactions based on payment channel technology. As an off-chain solution, Raiden is able to scale proportionally to its network growth; the potential for its throughput limited only by the size of the network.

Decentralisation


Lightning Network: In the event that a party initiating the transaction is not directly connected to the party receiving the transaction via a payment channel, the Lightning protocol will identify the shortest route to the counterparty node via other channels that the initiating party is connected to. However, each intermediary node must carry funds equivalent to or higher than the transaction amount in order to provide the necessary liquidity. In this scenario, it is natural for more centralised hubs to form that provide higher liquidity and closer proximity (fewer hops) in order to capture a greater share of network fees.
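
The sketch below illustrates the routing behaviour just described under simplifying assumptions (it is not the actual Lightning routing algorithm): find the shortest chain of payment channels in which every intermediary channel can carry the payment amount. The node names and capacities are hypothetical.

```python
from collections import deque

def shortest_route(channels, source, target, amount):
    # channels: {node: [(peer, channel_capacity), ...]}
    # Breadth-first search over channels whose capacity covers the amount.
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for peer, capacity in channels.get(node, []):
            if peer not in visited and capacity >= amount:
                visited.add(peer)
                queue.append(path + [peer])
    return None  # no route with sufficient liquidity

channels = {
    "alice": [("hub", 5), ("carol", 1)],
    "carol": [("bob", 5)],
    "hub":   [("bob", 5)],
}
# The low-capacity alice->carol channel cannot carry 3 units, so the payment
# is routed through the better-funded hub.
print(shortest_route(channels, "alice", "bob", amount=3))  # ['alice', 'hub', 'bob']
```

The example also hints at the centralisation pressure described above: well-funded hubs are naturally attractive route choices.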

Plasma: While Plasma envisions a robust and decentralised topology made up of a multitude of child chains and their subsidiary branches, it is important to note that it brings a certain element of centralisation to the network. Given the flexibility of plasma chains to determine their own consensus mechanisms and chain rules, concerns of centralised child chains emerging are not completely misplaced. However, the decentralisation typical of public blockchains can be replicated to some extent through the dispersal of block production on a particular child chain to multiple entities.

Raiden: Similar to the Lightning Network, Raiden is also faced with the risk of centralised transfer hubs forming with the incentive of capturing a greater share of the network fees.

Security


Lightning Network: The Lightning Network, as a second layer protocol leverages the security measures of its base blockchain such as Bitcoin in the case of Lightning Network. This is made possible through the Lightning Network’s design which requires netted claims to be committed to the main blockchain thus operating within the parameters of Bitcoin’s battle tested security.

However, there are vulnerabilities which may allow for security breaches through the manipulation of both the lightning network and the main chain. Example breaches include:

  • Improper Timelock: the timelock period refers to the time allocated, following the closure of a payment channel, for transacting parties to confirm that the correct and most up to date transaction has been broadcast to the main chain. If interacting with a malicious counterparty, allowing sufficient time to flag and penalise the falsely broadcast transaction will ensure the security of the funds.

  • Forced expiration spam: when a payment channel on the Lightning Network is closed, the final netted transaction is committed to the main blockchain. A malicious party can abuse this mechanism to create multiple payment channels and subsequently force the closure of these channels all at once. As a result, the surge of commits on the main blockchain will overload the blocks causing a delay to any subsequent transactions that need to be committed. The delays may be extensive enough such that invalid, older transactions become valid simply because the latest transaction cannot be broadcasted due to network overload. This could result in fraudulent claims of funds by the malicious party.

  • Failing to broadcast a transaction in time: In order for a transaction to successfully take place, both the sending and receiving parties must be online. Failing to broadcast a transaction within the correct timeframe of the hash time-locked contract (HTLC) can lead to counterparties stealing funds. Since not all nodes can be online at all times, third party services may emerge to mitigate this issue (a simplified sketch of an HTLC’s two spend paths follows this list).
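
As referenced in the last bullet, the sketch below shows the two spend paths of a hash time-locked contract in heavily simplified form; the field names and block heights are hypothetical and this is not the Lightning wire format. The receiver claims with the hash preimage before the timelock expires, after which the sender can reclaim the funds.

```python
import hashlib

def can_claim(htlc, preimage, current_height):
    # Receiver path: the correct preimage is revealed before the timelock expires.
    return (hashlib.sha256(preimage).hexdigest() == htlc["payment_hash"]
            and current_height < htlc["timeout_height"])

def can_refund(htlc, current_height):
    # Sender path: the timelock has expired without a valid claim.
    return current_height >= htlc["timeout_height"]

secret = b"example-preimage"
htlc = {"payment_hash": hashlib.sha256(secret).hexdigest(),
        "timeout_height": 700_000}

print(can_claim(htlc, secret, current_height=699_990))  # True: claimed in time
print(can_refund(htlc, current_height=700_005))         # True: refundable after expiry
```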


Plasma: Plasma’s design, requiring periodic commits of transactions from child chains to the root chain, and its fraud prevention mechanisms (such as Plasma exits and fraud proofs) leverage the security provided by the main chain. This is evident in the solutions available in response to the various scenarios that could potentially breach security on the child chains; such as those that might arise because of malicious block producers.

Block producers are the nodes that receive transactions from their respective child chains, form blocks, and commit them to the root chain for a transaction fee. Although the block producers of any one child chain do not necessarily belong to a single entity (which would allow them to collude easily), the risk posed by malicious block producers to users of the child chains still exists in various forms. Some examples:

  • Fake blocks: Block producers can create new blocks that subvert the original rules of the chain, introducing their own rules which would allow them to manipulate funds on the child chain. Plasma guarantees that participants of child chains always have access to Plasma exits, whereby the user is able to withdraw their assets or funds from the child chain to the main chain in the event of such malicious activity. However, this extreme measure is not the only option for users to protect against fraud. Users are able to publish a fraud proof to the root chain, containing information about the previous block which demonstrates that the falsified block does not follow on from the previous block’s state (a simplified sketch of such a fraud proof follows this list).

  • Withholding of block information: Block producers intentionally withhold information about previous blocks committed to the chain, preventing users from submitting fraud proofs to the root chain. In this scenario, the user’s best course of action is to exit the child chain and withdraw assets to the root chain.

  • Censorship: Block producers censor participants of the child chain, potentially excluding certain transactions from the root chain in order to prohibit users from performing any operations on the child chain. Once again, the most viable option for the user would be to exit the child chain similar to the examples above.
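
As flagged in the fake-block bullet above, the following is a simplified sketch (a toy balance-transfer state model, not an actual Plasma implementation) of what a fraud proof demonstrates: applying a child-chain block's transactions to the previously committed state does not reproduce the state root the block producer claimed.

```python
import hashlib

def state_root(state):
    # Stand-in for a Merkle root: a hash over the serialised balances.
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def apply_txs(state, txs):
    new_state = dict(state)
    for sender, receiver, amount in txs:
        if new_state.get(sender, 0) < amount:
            raise ValueError("invalid transfer")
        new_state[sender] -= amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

def fraud_proof_succeeds(prev_state, block_txs, claimed_root):
    # The proof succeeds if the claimed root cannot be reproduced from the
    # previously committed state, or the block contains an invalid transaction.
    try:
        return state_root(apply_txs(prev_state, block_txs)) != claimed_root
    except ValueError:
        return True

prev_state = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 4)]
honest_root = state_root(apply_txs(prev_state, txs))
print(fraud_proof_succeeds(prev_state, txs, honest_root))    # False: block is valid
print(fraud_proof_succeeds(prev_state, txs, "forged-root"))  # True: producer lied
```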

Despite Plasma’s ability to leverage the root chain’s security infrastructure, it is important to note that the root chain faces limitations of its own allowing for certain vulnerabilities such as mass withdrawal attacks.

A mass withdrawal attack refers to the initiation of many simultaneous fraudulent Plasma exits from a child chain onto the root chain. Faced with such a high number of incoming transactions, the root chain, given its throughput limitations, may not have the capacity to facilitate all transactions within the challenge period. The challenge period refers to the allocated time following a Plasma exit, designated to allow anyone to challenge the user’s claim to the funds or assets. Given that not all of the fraudulent withdrawals can be challenged within the appropriate time frame, malicious parties may succeed in stealing funds.

Raiden: Given the significant similarities Raiden Network shares with Lightning Network, Raiden also faces similar security breaches with respect to fraudulent transactions and network manipulation.

Optimisation of Consensus Mechanisms

Solution | Proof of Work (PoW) | Proof of Stake (PoS) | Delegated Proof of Stake (DPoS)
Chain Example | Bitcoin | Ethereum (to move to PoS), Cardano | EOS
Description | Enables block producers, known as miners, to confirm transactions through the computation of a cryptographic puzzle. This process is known as mining. | Enables network participants to validate transactions based on the amount of coins/tokens staked. The greater the stake, the higher the likelihood of being chosen as the next block validator. | Witnesses (delegates) elected by network participants act as the block validators. The network participants’ possession of coins/tokens determines their voting weight.
Method of Scalability | Throttled by the need to maintain the integrity of the network (fraud and censorship resistant). | The absence of computational work allows decreased block confirmation latency but does not completely solve for throughput limitations. | A consensus mechanism based on a limited number of validators to ensure higher throughput.
Objectivity vs Subjectivity | Objective | Weakly Subjective | Subjective
True Trustlessness | High | Low | Very Low
True Permissionless Network | High | High | High
Unforgeable Costliness | Yes | No | No

Description


Proof of Work: Example chain - Bitcoin

Enables block producers, known as miners, to confirm transactions through the computation of a cryptographic puzzle. This process is known as mining. Miners with varying hashing power capabilities compete to validate blocks that are committed to the chain in return for a reward – newly minted coins that result from mining and fees from transactions.

Proof of Stake: Example Chain - Ethereum to move to PoS / Cardano

Enables network participants to validate transactions based on the amount of coins/tokens staked. The greater the stake, the higher the likelihood of being chosen as the next block validator. In the absence of mining, no new coins are minted and thus validators are rewarded in the form of transaction fees. There are consensus mechanisms that have modified PoS by replacing the staking of tokens with various other measures of value, such as PoWeight, PoA etc.; however, they are not significantly different with regard to scaling.

Delegated Proof of Stake: Example Chain - EOS

Witnesses (delegates) elected by network participants act as the block validators. The network participants’ possession of coins/tokens determine their voting weight. Voting is a recurring process to ensure that the witnesses are incentivised to uphold a high standard in order to retain their role as block validators. Witnesses are rewarded in transaction fees and in certain versions of DPoS, the delegates are required to deposit a stake to a security-locked account which acts as an incentive to prevent malicious activity.

Method of Scalability


Proof of Work: Throttled by the need to maintain the integrity of the network (fraud and censorship resistant).

Proof of Stake: The absence of computational work allows decreased block confirmation latency but does not completely solve for throughput limitations. However, PoS additionally facilitates first layer solutions such as sharding which have capacity to compound scalability.

Delegated Proof of Stake: A consensus mechanism based on a limited number of validators to ensure higher throughput. It does, however, require resources to scale vertically to maintain quality as the size of the network increases.

Objectivity vs Subjectivity


Ability of a new node entering a network to completely know the true state of the chain without reliance on external sources outside of the protocol/chain data.

Proof of Work: Objective

The computational work requirement of PoW makes it the most objective network. A new node entering a PoW network is able to rely on the fact that the current state of the chain is always the state that requires the most computational work. It is by virtue of this objective mechanism PoW offers an added layer of security against involuntary alterations to the chain.

Proof of Stake: Weakly Subjective

On a weakly subjective network, nodes depend on knowledge of the protocol rules, the full set of published blocks, and a state from fewer than N blocks ago that is known to be valid in order to securely interact with the network. However, a node that has been offline for an extended period may need to obtain information from external sources. Censorship can be detected within a short time window of the alteration but proves more difficult to detect beyond that window.

Delegated Proof of Stake: Subjective

On a subjective network, nodes require external information to confirm the correct state of the chain, and are less easily and reliably able to detect censorship of any kind.

True Trustlessness


Network’s ability to facilitate secured peer-to-peer exchange in the absence of a trusted third party.

Proof of Work: High

The computational work element of PoW removes all vulnerabilities associated with human contribution to the verification process; thus offering a level of trustlessness beyond other consensus mechanisms which rely on the use of validators.

Proof of Stake: Low

PoS requires trust to be placed in validator nodes that are elected to commit transactions to the chain. An element of trust is introduced in the absence of a mathematical proof and the mining process.

Delegated Proof of Stake: Very Low

Similar to PoS, places trust in validator nodes to commit transactions to the chain. However, the introduced element of trust is further concentrated in ‘elected’ validators- ceding trust to a limited number of nodes.

True Permissionless Network


Ability for nodes to enter or exit the network at will.

Note:- This does not refer to the guarantee of nodes being able to meaningfully participate upon entering the network. Nodes with more hashing (PoW) or economic (PoS/DPoS) power will better be able to leverage the network.

Proof of Work: High - Computational power is its only requirement for interacting with the network.

Proof of Stake: High - Any level of stake is sufficient to enter or exit the network.

Delegated Proof of Stake: High - Any level of stake is sufficient to enter or exit the network.

Unforgeable Costliness


The difficulty of producing the object is both verifiable and high enough such that the costliness of forging this process outweighs the benefits of doing so.

Proof of Work: Yes - The costliness of PoW is derived from the work required to mine a PoW blockchain, and this work is then able to be verified by any other node on the network. By virtue of this, PoW creates value on the network that is unquestionable, enabling participants to transact with confidence in an inherently distrustful situation.

Proof of Stake: No - In the absence of the difficulty resultant of PoW’s computational work requirement, PoS consensus mechanisms are yet to solve for unforgeable costliness.

Delegated Proof of Stake: No - In the absence of the difficulty resultant of PoW’s computational work requirement, DPoS consensus mechanisms are yet to solve for unforgeable costliness.

Alternative Data Structures 

Solution | Blockchain | DAG (Directed Acyclic Graphs)
Description | Blockchain is a linked list data structure which organises transactions into blocks that are appended to a single linear chain, forming a sequential series of blocks. | DAG is a data structure which consists of vertices (data storage points) and edges (links) that are organised in a single direction. Unlike the blockchain it is not a linked list but rather a linked graph, meaning it is not limited to a single sequential chain and is able to process data concurrently.
Execution on Structure | Sequential | Causally Dependent
Scalability | The network is throttled by the sequential nature of a blockchain’s data structure, which only enables one block to be appended successfully to the chain at any one time. How users use the system does not impact overall performance maximums. | In the absence of blockchain’s sequential chain structure, DAG is able to verify parallel transactions as long as they are causally unrelated.
Decentralisation | The blockchain data structure, being a DLT, distributes the data across multiple points in the network. This results in a scenario where there is no single point of failure, thus maintaining decentralisation. | DAG chains do not compartmentalise network participation into separate roles such as transaction initiator and transaction validator (miner), but instead combine these roles through a peer verification system; transaction initiators must also act as transaction validators in order to successfully append their own transactions. In doing so, the work required to validate transactions is distributed across the entire network, eliminating the tendencies towards centralisation that form around independent mining nodes and pools.
Security | The blockchain data structure, being a DLT, distributes the data across multiple points in the network, requiring full synchronisation and agreement on the global state (the longest chain), which secures the network even as it grows. While DLTs are faced with various network attack vectors, these are often specific to varying consensus mechanisms, governance structures, off-chain solutions etc. | DAG chains are currently unable to solve for scalability and decentralisation bottlenecks without resorting to the introduction of centralised elements such as witnesses or coordinators. This gives rise to a particular set of security vulnerabilities associated with the increased tendency towards centralisation. The alternative is that, in the absence of witnesses or coordinators, vulnerabilities arise from the lack of monitoring.

Description


Blockchain: Blockchain is a linked list data structure which organises transactions into blocks that are appended to a single linear chain forming a sequential series of blocks.

DAG (Directed Acyclic Graphs): DAG is a data structure which consists of vertices (data storage points) and edges (links) that are organised in a single direction. The acyclic property of a DAG chain refers to the inability of any node to loop back to itself. Unlike the blockchain it is not a linked list but rather a linked graph, meaning it is not limited to a single sequential chain and is able to process data concurrently.

Topology


Blockchain: Synchronous

DAG (Directed Acyclic Graphs): Asynchronous

Scalability


Blockchain: The sequential nature of a blockchain’s data structure enables only one block to be appended successfully to the chain at any one time. Increasing block size or reducing block production time are the only existing options to scale within the parameters of a blockchain data structure.

Alternative solutions including Sharding on the protocol level and second layer solutions such as Lightning Network, Raiden and Plasma offer scaling potential while maintaining the blockchain data structure on the main chain. However, these come with certain trade-offs that are not made by the original data structure of blockchain. (Read more in descriptions above).

DAG (Directed Acyclic Graphs): The fundamental design of DAG chains removes the block structure traditionally associated with blockchains, enabling individual transactions to be verified on the network and achieving faster confirmation times than the groups of transactions bundled into blocks on a blockchain. Verification in this instance simply requires peers to validate previous transactions in order to have their own transactions verified. This peer verification system, which operates on a transaction-by-transaction basis, significantly increases the speed at which transactions can be processed. The scaling implication is that transaction throughput is proportionate to the network size: as the network grows, it is expected that more transactions will be verified. Furthermore, in the absence of blockchain’s sequential chain structure, a DAG is able to verify parallel transactions given that they flow in one direction (i.e. they are causally unrelated).
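
A minimal sketch of the peer-verification idea described above, loosely modelled on tangle-style DAGs rather than any specific project: to append its own transaction, a participant first validates and references earlier unreferenced transactions ("tips"). The data layout and validation rule are illustrative assumptions.

```python
def tips(dag):
    # Transactions that no later transaction has referenced yet.
    referenced = {parent for tx in dag.values() for parent in tx["approves"]}
    return [tx_id for tx_id in dag if tx_id not in referenced]

def attach(dag, tx_id, payload, validate):
    # Validate up to two current tips, then append a transaction approving them.
    approved = [t for t in tips(dag) if validate(dag[t])][:2]
    dag[tx_id] = {"payload": payload, "approves": approved}

dag = {"genesis": {"payload": None, "approves": []}}
attach(dag, "tx1", "alice->bob:3", validate=lambda tx: True)
attach(dag, "tx2", "carol->dave:1", validate=lambda tx: True)
print(dag["tx2"]["approves"])  # ['tx1']: tx2 had to verify the earlier tip first
```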

Decentralisation


Blockchain: The blockchain data structure, being a DLT, distributes the data across multiple points in the network. This results in a scenario where there is no single point of failure, thus maintaining decentralisation.

DAG (Directed Acyclic Graphs): DAG chains do not compartmentalise network participation into separate roles such as transaction initiator and transaction validator (miner), but instead combine these roles through a peer verification system; transaction initiators must also act as transaction validators in order to successfully append their own transactions. In doing so, the work required to validate transactions is distributed across the entire network, eliminating the tendencies towards centralisation that form around independent mining nodes and pools.

Given the graph like structure of DAGs, achieving a global state becomes challenging as not all nodes will have the complete history of the chain at all times. In light of this, special nodes such as witnesses or coordinators take on the role of proofing the network and issuing a ‘correct’ history of transactions that act as reliable checkpoints for all other nodes on the network. This however introduces an element of centralisation that may only be solved for once the chain has achieved sufficient network effect. In a robust network, nodes will be able to repeatedly confirm transactions ultimately confirming their validity in the absence of coordinators. However this method harbours its own shortfalls given that such an autonomous system may not be able to incentivise detection of malicious activity in the absence of the enforcement and accountability currently provided by centralised bodies.

Security


Blockchain: The blockchain data structure, being a DLT, distributes the data across multiple points in the network, requiring full synchronisation and agreement on the global state (the longest chain). This requirement secures the network even as it grows. While DLTs are faced with various network attack vectors, these are often specific to varying consensus mechanisms, governance structures, off-chain solutions etc., as described previously.

However, DDoS attacks are fundamentally related to the sequential nature of blockchains. A malicious party attempting to spam the network can do so by creating enough false transactions such that blocks are completely filled, causing congestion on the network. This will result in a backlog of transactions that may be delayed or never processed. The sequential data structure of blockchains that allows only one block to be produced and appended at any one time, elevates the vulnerability of DDoS attacks in comparison to asynchronous data structures.

DAG (Directed Acyclic Graphs): DAG chains are currently unable to solve for scalability and decentralisation bottlenecks without resorting to the introduction of centralised elements such as witnesses or coordinators. This gives rise to a particular set of security vulnerabilities associated with the increased tendency towards centralisation. The alternative is that, in the absence of witnesses or coordinators, vulnerabilities arise from the lack of monitoring.

Given the data structure of DAG chains and the consequent lack of a global state, the only way for nodes on the network to confirm the true state of the chain is to place trust in a select group of validator nodes (coordinators, witnesses etc.), introducing concentrated points of potential failure or vulnerability to malicious activity. If a significant portion of the limited number of validators chosen by a centralised foundation or authority becomes unresponsive or colludes, a 51% attack becomes increasingly viable.

Certain DAG chains may attempt to mitigate this centralised vulnerability through the introduction of a more democratic election of validator nodes (witnesses, coordinators) but this will not fully eliminate the risk of collusion or unresponsiveness.

While the current solutions discussed above are making headway in solving for scalability, given the trade-offs they must make within the parameters of existing technology, the trilemma persists. Given that PoW best caters to the decentralisation and security arms of the trilemma, a scalable PoW solution would be best placed to solve it.

What if there were a viable PoW based Scaling Solution: Kadena

Kadena’s Chainweb solution, as suggested by the name, is a previously unexplored scaling concept that enables parallel block processing across multiple PoW chains braided together into a web of chains. Chainweb’s value-add to the scalability debate is not merely the increase in block processing but rather its ability to do so whilst retaining all aspects of Satoshi Nakamoto’s vision of security and trustlessness, altering only the architecture in so far as “braiding” multiple chains together.

PoW has presented the most viable case for achieving security and trustlessness in the current landscape but its technical implementations concede compromises to scalability. This bottleneck is best defined by the trilemma, the inability of any current solution to address all three elements; scalability, security and decentralisation. Chainweb’s parallelisation and ability to scale a PoW chain, whilst preserving the other two tenets of the trilemma, achieves a previously untapped set of scalable outcomes.

It is important to understand the mechanics of the Chainweb protocol in order to analyse how, and to what extent, Chainweb is able to preserve the merits of PoW consensus, offer a viable scaling solution and strengthen the security of the network in line with network growth, all the while minimising trade-offs on decentralisation. Let’s take a closer look below.

Chainweb & Merits of PoW

Objectivity as opposed to Subjectivity

Chainweb, as a PoW protocol, preserves the objectivity of its network in the same way as classic computational-work-based mechanisms. This is achieved through the merkle cone structure associated with any given block, whereby any node joining the network is only required to leverage the past merkle cone of its chain to trustlessly prove the current state of the chain. Additionally, the Chainweb architecture requires each participating chain to reference the merkle roots of its peers (determined by the edges that chain is connected to in the base graph) within its own block hash. This cross-referencing, as opposed to the typically consecutive block referencing of PoW chains, directly increases the difficulty for an attacker to project a hostile fork as the correct state of the network. The number of peers required to be referenced correlates to the level of resistance to such malicious activity, ensuring that as the network grows Chainweb offers increasingly strong security against involuntary alterations to the chain.
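
A simplified model of the cross-referencing just described (the field names and the three-chain base graph are hypothetical; this is not Kadena's actual header format): each new header commits to its own parent and to the latest headers of the peer chains defined by the base graph's edges, so rewriting one chain also invalidates the references its peers have already built on.

```python
import hashlib

# Hypothetical 3-chain base graph in which every chain peers with the other two.
BASE_GRAPH = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def make_header(chain_id, height, parent_hash, latest_peer_hashes, payload):
    # Reference the most recent header hash of each adjacent chain.
    peer_refs = {peer: latest_peer_hashes[peer] for peer in BASE_GRAPH[chain_id]}
    header = {
        "chain": chain_id,
        "height": height,
        "parent": parent_hash,   # previous block on the same chain
        "peer_refs": peer_refs,  # latest headers of adjacent chains
        "payload": payload,
    }
    header["hash"] = hashlib.sha256(repr(sorted(header.items())).encode()).hexdigest()
    return header

genesis_hashes = {c: f"genesis-{c}" for c in BASE_GRAPH}
header = make_header(0, 1, genesis_hashes[0], genesis_hashes, payload="txs...")
print(sorted(header["peer_refs"]))  # [1, 2]: chain 0 commits to its peers' latest state
```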

True Trustlessness

Chainweb’s PoW consensus by virtue of the mining process eliminates the need for any middle-men or third parties in the validation of transactions and propagation of the network. As such it lies on the end of the spectrum alongside other PoW mechanisms, where the need for trust is eliminated as opposed to validator reliant consensus mechanisms which redistribute rather than eliminate trust.

True Permissionless Network

Chainweb, as a PoW protocol, does not require any permissions for nodes to enter and participate in the network, only the minimal requirement of computational power. Though it concedes that potential degrees of hashing power centralisation may occur, it is important to note that this does not pose a direct deterrent to nodes entering and participating in the network at will. As mentioned previously, the ability of a node to enter the network in a permissionless manner should not be confused with the ability to meaningfully participate upon entry. For instance, even a classical PoW chain (similar to PoS) has the possibility of giving participants with more hashing power (or wealth in PoS) greater access to the network.

However, it should be noted that the Chainweb architecture is designed to allow meaningful, economically incentivised participation by individuals or miners with limited resources, thus improving the permissionless nature of PoW. Chainweb facilitates this by giving individuals or miners with limited resources the flexibility to mine only subsets of chains, or only one chain, within the network. More on this below.

Unforgeable Costliness

Chainweb does not compromise the unforgeable costliness of PoW chains in its endeavour to scale, preserving the costly work element of mining a PoW chain and the ability of any node on the network to verify this work without the need for verification of its value by any third parties.

The merits of PoW form the principles envisioned by Satoshi; Kadena’s ability to introduce a new PoW-based architecture that solves for scalability while preserving and improving upon the original merits of PoW is critical to its unique positioning.

Chainweb & Scalability

Chainweb’s unique parallel chain binding architecture provides a fundamental advantage over classic PoW chains – making scalability a non-issue. The unique design, which is able to braid hundreds of individually mineable chains into its network, not only immediately elevates block production through parallel processing but also lends itself to two major efficiencies:

  1. More efficient usage of global hashrate
  2. Maintaining confirmation latency despite the added complexity of Chainweb architecture

Efficient Usage of Global Hashrate

The Chainweb network leverages graph structures such as the Petersen graph as its base graph, informing its architecture. This base graph defines not only the operations of the chains on the network, as seen in the infographic, but also determines the network hashrate relative to the capacity of the given base graph. The network hashrate, which is the sum of all individual chain hashrates, remains constant as long as the base graph is unchanged. This simply means that as additional chains join the network, throughput will continue to increase while the network hashrate remains constant – effectively achieving parallelisation, ergo higher throughput than the traditional single-chain solutions seen in BTC and ETH.
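
To make the base-graph idea concrete, the snippet below checks two standard properties of the Petersen graph mentioned above (this is plain graph arithmetic, not Kadena code): with ten chains, each chain needs to reference only three peers, yet every chain is within two hops of every other.

```python
from collections import deque

# Petersen graph: an outer 5-cycle (0-4), an inner pentagram (5-9), and spokes.
edges  = [(i, (i + 1) % 5) for i in range(5)]          # outer cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
edges += [(i, i + 5) for i in range(5)]                # spokes
adjacency = {v: set() for v in range(10)}
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

def eccentricity(start):
    # Longest shortest-path distance from `start` to any other chain (BFS).
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

print({len(peers) for peers in adjacency.values()})  # {3}: each chain references 3 peers
print(max(eccentricity(v) for v in adjacency))       # 2: any chain is within 2 hops of any other
```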

An architecture, such as Chainweb, that moves away from a single chain to a multi-chain formation becomes vulnerable to attacks whereby a malicious actor, with only sufficient hashpower to overtake any given single chain, is able to compromise the network security. However, the increase in attack resistance inherently provided by Chainweb via merkle cones and peer reference proofs removes the need to maintain a minimum hashrate per chain as a security measure. This enables the Chainweb protocol to deploy hashrate more effectively across an increasing number of chains.

Furthermore, the parallel-chain architecture distributes mining competition across the network enabling miners to discover and mine blocks across multiple chains as opposed to the traditional single chain PoW protocols. This effectively reduces spurious competition and the resulting waste of hashrate across the network by incentivising miners to deploy their hashpower across multiple chains for greater profitability.

Maintaining Confirmation Latency

Within Chainweb, consensus on a new layer is the equivalent of block consensus in classical PoW chains. It is achieved when every chain is braided into the Merkle cone at a given block height. Chainweb's method of reaching new-layer consensus maintains confirmation latency similar to traditional PoW chains despite processing multiple chains simultaneously. The variables that affect how quickly consensus on a new layer can be achieved include:

  1. The number of confirmations required. In traditional PoW chains, a given number of confirmations must be processed to achieve a desirable level of security; in Bitcoin, for example, this is typically 3-6 confirmations. Chainweb is able to process multiple blocks in parallel during the time required to reach that level of security (the Merkle cone).
  2. The configuration of the base graph's diameter, in other words the number of hops (confirmations) required for any chain to construct a Merkle proof (Merkle cone). Chainweb's graph-based architecture minimises the hops needed to construct a Merkle proof between any two chains; constructing such a proof requires the given chain to reference the previous headers, or Merkle roots, of its peer chains. Any given block becomes fully braided once the braid reaches a layer (the cross-section of the braid, i.e. the set of all chains' blocks at a given block height) as many steps later as the graph's diameter, as sketched after this list. Minimising the inter-chain hops required for such a proof ensures that, as the network expands to include a growing number of peers across multiple chains, confirmation latency and throughput are not throttled.
  3. The configuration of the base graph's degree, in other words how many peers' previous headers a given chain must reference.
  4. The determination of block time, in other words the tuning of the difficulty setting that defines the average time required to mine a block on the network.

The network's base graph will be chosen by optimising these elements, among others, providing the flexibility to tune the variables toward optimal confirmation latency.
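To illustrate point 2 above, the hypothetical snippet below reuses the petersen_graph helper from the earlier sketch to show that a block on one chain is referenced, directly or transitively, by every other chain after a number of layers equal to the base graph's diameter. The propagation rule is a simplification of the braiding described above, not Kadena's implementation.

def layers_until_fully_braided(adj, origin_chain):
    """Count layers until every chain's newest header transitively references
    a block mined on origin_chain (simplified model of the braid)."""
    referenced = {origin_chain}          # chains whose latest header embeds the block
    layers = 0
    while len(referenced) < len(adj):
        # At each new layer, every chain references its peers' previous headers,
        # so the set of referencing chains grows by one hop in the base graph.
        referenced |= {peer for chain in referenced for peer in adj[chain]}
        layers += 1
    return layers

print(layers_until_fully_braided(petersen_graph(), origin_chain=0))   # 2, the graph's diameter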

Chainweb's architecture further optimises confirmation latency through its consensus mechanism, which consists of individual chain block streams and a network header stream. Each chain's block stream comprises the headers and blocks for that chain, i.e. consensus for each individual chain is reached independently. Chainweb miners with the capacity to replicate the entire network (all individual block streams) broadcast a lightweight network header stream, which any node can use to trustlessly operate a single chain by accessing just that chain's block stream alongside the lightweight header stream.
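As a rough illustration of this split, the sketch below models a single-chain node that stores full blocks for its own chain only, plus the lightweight header stream for all chains, and checks that an incoming block correctly references its peers' previous headers. The record types, field names and the check itself are simplified assumptions for illustration, not Kadena's actual data formats or verification logic.

from dataclasses import dataclass

# Hypothetical, simplified record types, not Kadena's actual wire formats.
@dataclass(frozen=True)
class Header:
    chain_id: int
    height: int
    payload_hash: str      # Merkle root of this chain's block payload
    peer_refs: tuple       # (peer_chain_id, peer_header_hash) pairs at height - 1

@dataclass
class Block:
    header: Header
    transactions: tuple

class SingleChainNode:
    """Operates one chain using its own block stream plus the lightweight
    network header stream broadcast by fully replicating miners."""

    def __init__(self, chain_id: int):
        self.chain_id = chain_id
        self.blocks = []       # full blocks, for this chain only
        self.headers = {}      # header_hash -> Header, for every chain

    def accept_header(self, header_hash: str, header: Header) -> None:
        # The lightweight stream carries headers only, never full blocks.
        self.headers[header_hash] = header

    def accept_block(self, block: Block) -> bool:
        # Trustless check: every peer header this block references must already
        # be present in the header stream at the previous height.
        for peer_chain, peer_hash in block.header.peer_refs:
            peer = self.headers.get(peer_hash)
            if (peer is None or peer.chain_id != peer_chain
                    or peer.height != block.header.height - 1):
                return False   # not correctly braided; reject
        self.blocks.append(block)
        return True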

As a result, reaching layer consensus becomes more efficient: the bulk of the work of confirming the full network state is done by such Chainweb miners, enabling individual nodes to propagate blocks more quickly than they otherwise would. Any confirmation delay arising from Chainweb's added complexity is therefore mitigated, allowing parallel processing to take place seamlessly.

————————————

Revisiting the two fundamental issues in scaling PoW chains, the time to add a transaction to a block and the time taken to reach consensus, we can see that Chainweb offers innovative workarounds for both.

Time to add transactions to a block: Instead of trying to pack more transactions into a single block in order to match the throughput of centralised systems, Chainweb's architecture scales throughput through the parallel processing of multiple blocks.

Time to reach consensus: Optimising the base graph configuration of Chainweb provides the flexibility to manipulate the variables that impact the time taken to reach consensus.
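A back-of-the-envelope calculation makes the first point concrete; the transaction count, block time and chain count below are illustrative assumptions, not Kadena's published figures.

# Illustrative figures only (assumptions, not Kadena's specifications):
TX_PER_BLOCK = 1_000       # assumed transactions per block, per chain
BLOCK_TIME_S = 30          # assumed average block time per chain, in seconds
CHAINS = 10                # e.g. a Petersen-graph base of 10 braided chains

single_chain_tps = TX_PER_BLOCK / BLOCK_TIME_S
network_tps = single_chain_tps * CHAINS
print(f"{single_chain_tps:.1f} tps per chain, ~{network_tps:.0f} tps across the braid")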

The most important contribution Chainweb makes to the scaling debate is that it offers these meaningful advances without compromising the merits of traditional PoW chains.

Chainweb & Security

PoW chains offer the best available security in the current landscape, and Chainweb, as a PoW-based protocol, inherits these characteristics. However, introducing a multi-chain graph structure into the network architecture fundamentally alters the network in certain respects. To assess Chainweb's position on security, particularly relative to the single-chain PoW architecture, it is important to understand the perceived vulnerabilities this introduces and the measures designed to combat them.

Attack vector: Full-braid replacement

Description: An adversary attempts to generate an entire alternate fork of the Chainweb, or braid, faster than the honest network.

Chainweb solution: A recipient within the network waits until a certain number of Chainweb layers have been braided before accepting a transaction, so the adversary must replicate the same number of full layers to succeed. The probability that an adversary with a finite amount of hashpower can keep pace with the honest network in braiding the chain is vanishingly small; this is best described by the Gambler's Ruin problem.

Improvement over a single PoW chain: On a single PoW chain, waiting z block heights before accepting a transaction results in the generation of z blocks; in a Chainweb braid, the same practice results in the generation of a full layer of the braid for every increase in block height. Replicating layers across a braid of multiple chains is harder than replicating blocks on a single chain, which inherently increases security against this attack.

Attack vector: Merkle cone replacement for arbitrarily large values of z

Description: Instead of recreating the entire braid in parallel with the network, the adversary creates only the blocks that directly reference the block containing their fraudulent transaction (the Merkle cone of the fraudulent block), for arbitrarily large values of z.

Chainweb solution: At a minimum, for an adversary's duplicate Chainweb to be accepted, they must offer a fully propagated Merkle cone; and since the recipient will not accept the transaction until an arbitrarily large value of z, the adversary must keep generating layers. Because each parallel chain must reference not only its own previous block but also the headers of its peer chains at the previous block height, an adversary generating an arbitrary number of layers converges towards generating the entire braid, leaving this attack almost as disadvantaged as a full-braid replacement.

Improvement over a single PoW chain: The referential nature of Chainweb means that to replace a block in the braid it is not enough for the adversary to generate subsequent blocks on their own chain, as it would be on a single PoW chain; they must also generate blocks on peer chains that reference back to the adversary's bad block. This added complexity makes it comparably more difficult to propagate bad blocks.

Attack vector: Merkle cone replacement for z = ∆

Description: In practice, a recipient is unlikely to wait an arbitrarily large block height before accepting a transaction and may instead accept it after a limited number of layers ∆. This is the most probable attack vector, since the attack is bounded by ∆ layers and the adversary need only generate a Merkle cone up to that point. The only difference from the previous scenario is that the number of layers is finite rather than arbitrarily large.

Chainweb solution: Although this attack vector is more viable for the adversary, the difficulty of propagating a bad block by replicating a Merkle cone remains constant; only the upper bound has changed. An adversary mining the first layer must, to catch up with the honest network, complete the current layer and every subsequent layer until layer z is finished. Even though this is simpler than the other two scenarios, the probability of successful execution remains near impossible; see the white paper for the mathematical proof.

Improvement over a single PoW chain: As in scenario 2.
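The first attack vector above appeals to the Gambler's Ruin argument. The sketch below applies the familiar Nakamoto-style catch-up bound per braided layer, assuming an attacker hashpower share and a recipient waiting depth chosen purely for illustration; the full multi-chain proof is in the white paper cited above.

def catch_up_probability(attacker_share: float, z: int) -> float:
    """Gambler's Ruin bound: probability a minority attacker ever erases a
    deficit of z layers (per-layer race, simplified from the single-chain case)."""
    q = attacker_share
    p = 1.0 - q
    if q >= p:
        return 1.0                 # a majority attacker eventually catches up
    return (q / p) ** z

# e.g. an attacker with 10% of the network hashpower, recipient waits 6 layers:
print(catch_up_probability(0.10, 6))   # ~1.9e-6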

Although the Chainweb architecture introduces attack vectors unique to its design, its cross-braided structure not only equips it to combat these attacks but also adds strength beyond the security offered by single PoW chains.

Chainweb & Decentralisation

Absolute decentralisation is beyond the reach of any existing system or technology in today's DLT landscape. It extends beyond simple network decentralisation to ensure that no consolidation of power or resources is capable of influencing the network in any capacity. At present this theoretical ideal can exist only in a vacuum, void of network and economic incentivisation.

It is important to understand the role incentivisation plays in enabling DLTs to function in place of centralised alternatives. In the absence of a supervisory authority, economic incentives allow a network of disparate strangers to cooperate and self-regulate – the fundamental mechanism of DLTs. However, wherever there is incentive there will always be parties who consolidate resources or collude to capture the value offered by economies of scale. Given this paradox, DLTs can only offer the nearest approximation to absolute decentralisation. Different DLT designs solve for this to varying degrees, forming a spectrum with complete centralisation at one end and absolute decentralisation at the other.

Given that some degree of centralisation manifests itself in any model with economic incentives, the distinguishing factor remains the ability of a network to minimise the impact of such centralising elements without compromising other core tenets – network security and scalability.

Kadena acknowledges that it faces similar challenges in this respect to its PoW counterparts. However, the Chainweb architecture is designed to improve on the permissionless nature of PoW by giving individuals or miners with limited resources the flexibility to mine only a subset of chains, or a single chain, within the network. This flexible participation is viable because the architecture offers uncompromised network security even while subsets and individual chains are mined at lower hashpower than the network as a whole; that security is provided by Chainweb's Merkle cones and peer-reference proofs, which remove the need to maintain a minimum hashrate per chain as an attack-resistance measure. The lower entry threshold afforded by this greater degree of permissionlessness helps maintain decentralisation over time, ensuring participation is not limited to a select few and relieving pressure toward centralising elements in PoW, such as mining pools, to a greater extent than existing chains.

How does Chainweb differ from Sharding?

Sharding is the splitting up of the main chain’s state into partitions called shards, each shard comprising its own state and transaction history. While this may appear to resemble the multi-chain architecture of Chainweb, there are many fundamental differences which distinguish one from the other – security being one of the key factors.

Sharding, when implemented with PoW, has the unfortunate side effect of dividing the network's resources (hashpower), leaving shards with lower hashrates vulnerable to attack. This pushes sharded designs towards PoS, where solutions such as random sampling can be applied (see Sharding under First Layer Solutions for more on this). In doing so, networks implementing sharding on a PoS system give up the merits of PoW in their effort to scale securely. Chainweb, by contrast, preserves all three core tenets of the trilemma (decentralisation, security and scalability) while retaining the merits of a PoW consensus mechanism, thanks to the load balancing ensured by the Merkle cone structure.

Conclusion

As the space has wrestled with scalability, various solutions have emerged, whether through modification of consensus methodology, data structure or otherwise. Thus far none has established itself as the clear front-runner; each iteration makes progress on scalability, but not without compromise. Kadena, however, addresses each arm of the trilemma to a greater degree and with more focus than any existing PoW solution. If the network can retain the desirable, "battle-tested" qualities of PoW while continuing to scale, that is certainly worthy of our attention.

© 2018 Spec-Rationality.com | All Rights Reserved.
