In another step towards driving cross-industry cooperation in the development and use of blockchain technologies, Hyperledger has added two new groups to its growing list of Special Interest Groups (SIGs). These two new groups, the Hyperledger Social Impact SIG and the Hyperledger Trade Finance SIG, join the Healthcare and Public Sector SIGs.
SIGs gather community members from an industry segment to work on domain-specific problems and create an environment for open discussion, document co-creation, and solution proposals. SIGs help specific vertical markets address problems particular to their communities.
The Hyperledger Social Impact Special Interest Group
The Hyperledger Social Impact SIG is primarily focused on serving as a platform for exchanging ideas and exploring ways to use blockchain technology in the context of social good. The Social Impact SIG will work with the other working groups and technical teams, especially in the areas of implementation.
The areas of focus include:
Identifying related use cases, current pilots and proofs of concept, production case studies, and the opportunity blockchain presents for social impact;
Sharing stories of successes, failures, lessons learned, opportunities and challenges;
Exploring cross-cutting concerns like security, privacy, and identity in global South contexts;
Identifying existing or needed common critical software components that would serve the particular needs of global social impact;
Working towards proposing solutions to the problems identified;
Identifying conferences or other opportunities to connect face to face, as well as submitting talks or presenting as a group at an event;
Identifying the business and NGO community and building an inclusive platform for early adopters to contribute their experiences;
Facilitating and raising awareness of the opportunities blockchain can bring to the field.
The group is currently working on developing an opportunity matrix to map out the landscape of activity in philanthropy and remittances, supply chain, governance and democracy, financial empowerment, and identity. We welcome your participation in this effort!
We also want to hear about your use cases for Hyperledger in a social impact context. If you’d like to present at our next meeting, please let us know by sending a message on our mailing list.
The Hyperledger Trade Finance Special Interest Group
The Hyperledger Trade Finance SIG’s areas of focus include:
Identifying related reference architectures (for example Trade Finance business and integration architecture, technical and infrastructure architecture), frameworks and models (OSI), use cases, current pilots and proofs of concept, and production case studies;
Sharing stories of successes, failures, opportunities and challenges;
Exploring and addressing cross-cutting architectural principles, options and decisions like performance and scalability, security, identity management and privacy, and identity in trade finance contexts;
Identifying existing or needed common critical software, middleware, and hardware components that would serve the particular needs of trade finance, including data transmission and processing from access point (device), through networks, and cloud computing deployment model options (private, public, hybrid);
Working towards proposing solutions to the problems identified;
How to get involved in Hyperledger Special Interest Groups
Participation in any SIG is open to everyone in the community. Each group has an open e-mail list, a chat channel, and a wiki page. Live meetings are also held regularly via web teleconference. When needed, a task force can be created within the SIG to hold working sessions on specific work items. If you’re interested in joining a SIG, please subscribe to the SIG’s mailing list and start by saying “hi” to the community and sharing your interests in the topic. If you are looking for more information on active or proposed SIGs, please visit wiki.hyperledger.org. We look forward to your participation and contributions!
As of release 1.1, Hyperledger Sawtooth supports dynamic consensus through its consensus API and SDKs. These tools, which were covered in a previous blog post, are the building blocks that make it easy to implement different consensus algorithms as consensus engines for Sawtooth. We chose to implement the Raft algorithm as our first consensus engine, which we describe in another blog post. While our Raft implementation is an excellent proof of concept, it is not Byzantine-fault-tolerant, which makes it unsuitable for consortium-style networks with adversarial trust characteristics.
To fill this gap, we chose the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm. We started work on the Sawtooth PBFT consensus engine in the summer of 2018 and continue to develop and improve on it as we work towards its first stable release. This blog post summarizes the PBFT algorithm and describes how it works in Sawtooth.
What Is PBFT?
PBFT dates back to a 1999 paper written by Miguel Castro and Barbara Liskov at MIT. Unlike other algorithms at the time, PBFT was the first Byzantine fault tolerant algorithm designed to work in practical, asynchronous environments. PBFT is thoughtfully defined, well established, and widely understood, which makes it an excellent choice for Hyperledger Sawtooth.
PBFT is similar to Raft in some general ways:
It is leader-based and non-forking (unlike lottery-style algorithms)
It does not support open-enrollment, but nodes can be added and removed by an administrator
It requires full peering (all nodes must be connected to all other nodes)
PBFT provides Byzantine fault tolerance, whereas Raft only supports crash fault tolerance. Byzantine fault tolerance means that liveness and safety are guaranteed even when some portion of the network is faulty or malicious. As long as a minimum percentage of nodes in the PBFT network are connected, working properly, and behaving honestly, the network will always make progress and will not allow any of the nodes to manipulate the network.
How Does PBFT Work?
The original PBFT paper has a detailed and rigorous explanation of the consensus algorithm. What follows is a summary of the algorithm’s key points in the context of Hyperledger Sawtooth. The original definition is broadly applicable to any kind of replicated system; by keeping this information blockchain-specific, we can more easily describe the functionality of the Sawtooth PBFT consensus engine.
A PBFT network consists of a series of nodes that are ordered from 0 to n-1, where n is the number of nodes in the network. As mentioned earlier, there is a maximum number of “bad” nodes that the PBFT network can tolerate. As long as this number of bad nodes—referred to as the constant f—is not exceeded, the network will work properly. For PBFT, the constant f is equal to (n − 1) / 3, rounded down: roughly a third of the network can be “out of order” or dishonest at any given time and the algorithm will still work. The values of n and f are very important; you’ll see them later as we discuss how the algorithm operates.
Figure 1 — n and f in the PBFT algorithm
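The relationship between n and f can be sketched in a few lines. This is illustrative Python, not code from the Sawtooth PBFT engine:

```python
def max_faulty(n):
    """Maximum number of faulty nodes an n-node PBFT network can tolerate.

    PBFT requires n >= 3f + 1, so f = (n - 1) // 3.
    """
    return (n - 1) // 3

for n in (4, 7, 10):
    f = max_faulty(n)
    print(f"n={n}: tolerates f={f} faulty node(s), quorum 2f+1={2 * f + 1}")
```

For the four-node example used throughout this post, f = 1, so a quorum is 3 nodes.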
As the network progresses, the nodes move through a series of “views”. A view is a period of time that a given node is the primary (leader) of the network. In simple terms, each node takes turns being the primary in a never-ending cycle, starting with the first node. For a four-node network, node 0 is the primary at view 0, node 1 is the primary at view 1, and so on. When the network gets to view 4, it will “wrap back around” so that node 0 is the primary again.
In more technical terms, the primary (p) for each view is determined based on the view number (v) and the ordering of the nodes. The formula for determining the primary for any view on a given network is p = v mod n. For instance, on a four-node network at view 7, the formula p = 7 mod 4 means that node 3 will be the primary (7 mod 4 = 3).
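The primary-selection formula is a one-liner; this sketch just restates p = v mod n from the paragraph above:

```python
def primary(view, n):
    """Primary node for a given view in an n-node PBFT network: p = v mod n."""
    return view % n

assert primary(0, 4) == 0  # node 0 leads view 0
assert primary(7, 4) == 3  # the example from the text: 7 mod 4 = 3
assert primary(4, 4) == 0  # view 4 "wraps back around" to node 0
```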
In addition to moving through a series of views, the network moves through a series of “sequence numbers.” In the context of a Sawtooth blockchain, a sequence number is equivalent to a block number; thus, saying that a node is on sequence number 10 is the same as saying that the node is performing consensus on block 10 in the chain.
Each node maintains a few key pieces of information as part of its state:
The list of nodes that belong to the network
Its current view number
Its current sequence number (the block it is working on)
The phase of the algorithm it is currently in (see “Normal-Case Operation”)
A log of the blocks it has received
A log of all valid messages it has received from the other nodes
To commit a block and make progress, the nodes in a PBFT network go through three phases: pre-preparing, preparing, and committing.
Figure 2 shows these phases for a simple four-node network. In this example, node 0 is the primary and node 3 is a faulty node (so it does not send any messages). Because there are four nodes in the network (n = 4), the value of f for the network is (4 − 1) / 3 = 1. This means the example network can tolerate only one faulty node.
To kick things off, the primary for the current view will create a block and publish it to the network; each of the nodes will receive this block and perform some preliminary verification to make sure that the block is valid.
After the primary has published a block to the network, it broadcasts a pre-prepare message to all of the nodes. Pre-prepare messages contain four key pieces of information: the ID of the block the primary just published, the block’s number, the primary’s view number, and the primary’s ID. When a node receives a pre-prepare message from the primary, it will validate the message and add the message to its internal log. Message validation includes verifying the digital signature of the message, checking that the message’s view number matches the node’s current view number, and ensuring that the message is from the primary for the current view.
The pre-prepare message serves as a way for the primary node to publicly endorse a given block and for the network to agree about which block to perform consensus on for this sequence number. To ensure that only one block is considered at a time, nodes do not allow more than one pre-prepare message at a given view and sequence number.
Once a node has received a block and a pre-prepare message for the block, and both the block and message have been added to the node’s log, the node will move on to the preparing phase. In the preparing phase, the node will broadcast a prepare message to the rest of the network (including itself). Prepare messages, like pre-prepare messages, contain the ID and number of the block they are for, as well as the node’s view number and ID.
In order to move on to the next phase, the node must wait until it has received 2f + 1 prepare messages that have the same block ID, block number, and view number, and are from different nodes. By waiting for 2f + 1 matching prepare messages, the node can be sure that all properly functioning nodes (those that are non-faulty and non-malicious) are in agreement at this stage. Once the node has accepted the required 2f + 1 matching prepare messages and added them to its log, it is ready to move on to the committing phase.
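The quorum check above can be sketched as follows. The message shape here is a hypothetical simplification (the real engine uses protobuf messages), but the counting logic is the same: matching messages from distinct senders must reach 2f + 1.

```python
from collections import namedtuple

# Hypothetical message shape for illustration only.
Prepare = namedtuple("Prepare", ["block_id", "block_num", "view", "node_id"])

def has_prepare_quorum(message_log, block_id, block_num, view, f):
    """True once 2f + 1 matching prepare messages from distinct nodes are logged."""
    senders = {
        m.node_id
        for m in message_log
        if m.block_id == block_id and m.block_num == block_num and m.view == view
    }
    return len(senders) >= 2 * f + 1

# Four-node network (f = 1): three matching messages from distinct nodes suffice.
log = [Prepare("abc", 10, 0, node) for node in (0, 1, 2)]
assert has_prepare_quorum(log, "abc", 10, 0, f=1)

# A duplicate message from the same node must not count twice.
log2 = [Prepare("abc", 10, 0, 0), Prepare("abc", 10, 0, 0), Prepare("abc", 10, 0, 1)]
assert not has_prepare_quorum(log2, "abc", 10, 0, f=1)
```

The same counting rule applies in the committing phase, just with commit messages.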
When a node enters the committing phase, it broadcasts a commit message to the whole network (including itself). Like the other message types, commit messages contain the ID and number of the block they are for, along with the node’s view number and ID. As with the preparing phase, a node cannot complete the committing phase until it has received 2f + 1 matching commit messages from different nodes. Again, this guarantees that all non-faulty nodes in the network have agreed to commit this block, which means that the node can safely commit the block knowing that it will not need to be reverted. With the required 2f + 1 commit messages accepted and in its log, the node can safely commit the block.
Once the primary node has finished the committing phase and has committed the block, it will start the whole process over again by creating a block, publishing it, and broadcasting a pre-prepare message for it.
In order to be Byzantine fault tolerant, a consensus algorithm must prevent nodes from improperly altering the network (to guarantee safety) or indefinitely halting progress (to ensure liveness). PBFT guarantees safety by requiring all non-faulty nodes to agree in order to move beyond the preparing and committing phases. To guarantee liveness, though, there must be a mechanism to determine if the leader is behaving improperly (such as producing invalid messages or simply not doing anything). PBFT provides the liveness guarantee with view changes.
When a node has determined that the primary of view v is faulty (perhaps because the primary sent an invalid message or did not produce a valid block in time), it will broadcast a view change message for view v + 1 to the network. If the primary is indeed faulty, all non-faulty nodes will broadcast view change messages. When the primary for the new view (v + 1) receives 2f + 1 view change messages from different nodes, it will broadcast a new view message for view v + 1 to all the nodes. When the other nodes receive the new view message, they will switch to the new view, and the new primary will start publishing blocks and sending pre-prepare messages.
View changes guarantee that the network can move on to a new primary if the current one is faulty. This PBFT feature allows the network to continue to make progress and not be stalled by a bad primary node.
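A view change reuses the two ideas already introduced: the p = v mod n rotation and the 2f + 1 quorum. A minimal sketch (illustrative Python, with a simplified tuple message shape):

```python
def new_view_primary(failed_view, n):
    """When view v's primary is faulty, the network moves to view v + 1,
    whose primary is (v + 1) mod n."""
    return (failed_view + 1) % n

def can_announce_new_view(view_change_msgs, new_view, f):
    """The incoming primary broadcasts a new view message once it has
    2f + 1 view change messages for new_view from distinct nodes."""
    senders = {node_id for (node_id, view) in view_change_msgs if view == new_view}
    return len(senders) >= 2 * f + 1

assert new_view_primary(3, 4) == 0  # after view 3, node 0 leads again
msgs = [(0, 1), (1, 1), (2, 1)]    # three nodes vote to move to view 1
assert can_announce_new_view(msgs, new_view=1, f=1)
```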
Want to Learn More?
This blog post only scratches the surface of the PBFT consensus algorithm. Stay tuned to the Hyperledger blog for more information on PBFT, including a future post about our extensions and additional features for Sawtooth PBFT.
Is WebAssembly the future of smart contracts? We think so. In this post, we will talk about Sawtooth Sabre, a WebAssembly smart contract engine for Hyperledger Sawtooth.
A smart contract is software that encapsulates the business logic for modifying a database by processing a transaction. In Hyperledger Sawtooth, this database is called “global state”. A smart contract engine is software that can execute a smart contract. By developing Sawtooth Sabre, we hope to leverage the WebAssembly ecosystem for the benefit of application developers writing the business logic for distributed ledger systems. We expect an ever-growing list of WebAssembly programming languages and development environments.
Unblocking Contract Deployment
The primary mechanism for smart contract development in Hyperledger Sawtooth is a transaction processor, which takes a transaction as input and updates global state. Sound like a smart contract? It is! If you implement business logic in the transaction processor, then you are creating a smart contract. If you instead implement support for smart contracts with a virtual machine (or interpreter) like WebAssembly, then you have created a smart contract engine.
If we can implement smart contracts as transaction processors, why bother with a WebAssembly model like Sabre? Well, it is really about deployment strategy. There are three deployment models for smart contracts:
Off-chain push: Smart contracts are deployed by pushing them to all nodes from a central authority on the network.
Off-chain pull: Smart contracts are deployed by network administrators pulling the code from a centralized location. Network administrators operate independently.
On-chain: Smart contracts are submitted to the network and inserted into state. Later, as transactions are submitted, the smart contracts are read from state and executed (generally in a sandboxed environment).
We won’t discuss off-chain push, other than to note that this strategy—having a centralized authority push code to everyone in the network—isn’t consistent with distributed ledgers and blockchain’s promise of distributing trust.
Off-chain pull is an opt-in strategy for updating software, and is widely used for Linux distribution updates. We use this model to distribute Sawtooth, including the transaction processors. By adding the Sawtooth apt repository on an Ubuntu system, you pull the software and install it via the apt-get command. Each software repository is centrally managed, though it is possible to have multiple software repositories configured and managed independently. This model has a practical problem—it requires administrators across organizations to coordinate software updates—which makes business logic updates more complicated than we would like.
On-chain smart contracts are installed on the blockchain with a transaction that stores the contract into an address in global state. The smart contract can later be executed with another transaction. The execution of the smart contract starts by loading it from global state and continues by executing the smart contract in a virtual machine (or interpreter). On-chain smart contracts have a big advantage over off-chain contracts: because the blockchain is immutable and the smart contract itself is now on the chain, we can guarantee that the same smart contract code was used to create the original block and during replay. Specifically, the transaction will always be executed using the same global state, including the stored smart contract. Because contracts are deployed by submitting transactions onto the network, we can define the process that controls the smart contract creation and deletion with other smart contracts! Yes, this is a little meta, but isn’t it great?
The on-chain approach seems superior, so why did we implement Hyperledger Sawtooth transaction processors with the off-chain model? Because our long-term vision—and a main focus for Sawtooth—has been smart contract engines that run on-chain smart contracts. Smart contract engines are more suitable for off-chain distribution, because they do not contain business logic, and are likely to be upgraded at the same time as the rest of the software.
Our initial transaction processor design reflected our goal for several types of smart contract engines. We later implemented one of them: Sawtooth Seth, a smart contract engine that runs Ethereum Virtual Machine (EVM) smart contracts. For us, Seth was a validation that our transaction processor design was flexible enough to implement radically different approaches for smart contracts. Like Ethereum, Seth uses on-chain smart contracts, so Seth is great if you want Ethereum characteristics and compatibility with tools such as Truffle. However, Seth is limited by Ethereum’s design and ecosystem, and does not expose all the features in our blockchain platform. We knew that we needed an additional approach for smart contracts in Hyperledger Sawtooth.
Crafting a Compatible Path Forward
Sawtooth Sabre, our WebAssembly smart contract engine, is our solution for native, on-chain smart contracts.
The programming model for Sabre smart contracts is the same as that for transaction processors. A transaction processor has full control of data representation, both in global state and in transaction payloads (within certain determinism requirements). Hyperledger Sawtooth uses a global state Merkle-Radix tree, and the transaction processors handle addressing within the tree. A transaction processor can use different approaches for addressing, ranging from calculating an address with a simple field hash to organizing data within the tree in a complex way (to optimize for parallel execution, for example). Multiple transaction processors can access the same global state if they agree on the conventions used in that portion of state.
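As a concrete illustration of "calculating an address with a simple field hash": one common convention used by several Sawtooth example transaction families is a 6-hex-character namespace prefix hashed from the family name, followed by 64 hex characters hashed from the entry key, giving a 70-character global state address. This is a convention, not a platform requirement; each family is free to choose its own scheme.

```python
import hashlib

def make_address(family_name: str, key: str) -> str:
    """Derive a 70-hex-char global state address: 6-char namespace prefix
    from the family name plus 64 chars from the entry key."""
    prefix = hashlib.sha512(family_name.encode()).hexdigest()[:6]
    return prefix + hashlib.sha512(key.encode()).hexdigest()[:64]

addr = make_address("intkey", "counter-1")
assert len(addr) == 70
```

Because the scheme is deterministic, any transaction processor (or Sabre contract) that agrees on the convention can compute the same address for the same key.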
Sawtooth Sabre smart contracts use this same method for data storage, which means they can access global state in the same way that transaction processors do. In fact, smart contracts and transaction processors can comfortably coexist on the same blockchain.
The other major feature is SDK compatibility. The Sawtooth Sabre SDK API is compatible with the Hyperledger Sawtooth transaction processor API, which means that smart contracts written in Rust can switch between the Sawtooth SDK and the Sabre SDK with a simple compile-time flag. (Currently, Rust is the only supported Sabre SDK.) The details of running within a WebAssembly interpreter are hidden from the smart contract author. Because Sabre smart contracts use the same API as transaction processors, porting a transaction processor to Sabre is relatively easy—just change a few import statements to refer to the Sabre SDK instead of the Hyperledger Sawtooth SDK.
Now the choice between off-chain and on-chain smart contracts is a compile-time option. We use this approach regularly, because we can separate our deployment decisions from the decisions for smart contract development. Most of the transaction-processor-based smart contracts included in Hyperledger Sawtooth are now compatible with Sawtooth Sabre.
A Stately Approach to Permissioning
Hyperledger Sawtooth provides several ways to control which transaction processors can participate on a network. As explained above, transaction processors are deployed with the off-chain pull method. This method lets administrators verify the transaction processors before adding them to the network. Note that Hyperledger Sawtooth requires the same set of transaction processors for every node in the network, which prevents a single node from adding a malicious transaction processor. Additional controls can limit the accepted transactions (by setting the allowed transaction types) and specify each transaction processor’s read and write access to global state (by restricting namespaces).
These permissions, however, are not granular enough for Sawtooth Sabre, which is itself a transaction processor. Sabre is therefore subject to the same restrictions, which would then apply to all smart contracts. Using the same permission control has several problems:
Sabre smart contracts are transaction-based, which means that a smart contract is created by submitting a transaction. This removes the chance to review a contract before it is deployed.
Sabre transactions must be accepted by the network to run smart contracts, but we cannot limit which smart contracts these transactions are for, because this information is not available to the validator.
Sabre must be allowed to access the same areas of global state that the smart contracts can access.
An “uncontrolled” version of Sabre would make it too easy to deploy smart contracts that are not restricted to the permissions that the publisher of the smart contract selects.
Our solution in Sawtooth Sabre is to assign owners for both contracts and namespaces (a subset of global state). A contract has a set of owners and a list of namespaces that it expects to read from and write to. Each namespace also has an owner. The namespace owner can choose which contracts have read and write access to that owner’s area of state. If a contract does not have the namespace permissions it needs, a transaction run against the smart contract will fail. So, while the namespace owner and contract owner are not necessarily the same, there is an implied degree of trust and coordination between them.
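The ownership model above can be sketched with a simple permission check. The in-memory registry, the `5b7349` namespace prefix, and the contract name are all hypothetical; Sabre stores its contract and namespace registries on chain as protobuf structures.

```python
# Hypothetical in-memory registry for illustration only.
namespace_registry = {
    "5b7349": {  # namespace prefix, owned by some organization
        "owner": "org-a",
        "permissions": {"xo-contract": {"read", "write"}},
    },
}

def contract_may_access(namespace: str, contract: str, access: str) -> bool:
    """A transaction against a contract fails unless the namespace owner
    granted that contract the requested kind of access."""
    entry = namespace_registry.get(namespace)
    if entry is None:
        return False
    return access in entry["permissions"].get(contract, set())

assert contract_may_access("5b7349", "xo-contract", "write")
assert not contract_may_access("5b7349", "other-contract", "read")
```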
Also, contracts are versioned. Only the owners of a contract are able to submit new versions to Sabre, which removes the chance that a malicious smart contract change could be accepted.
A Final Note About WebAssembly
On-chain WebAssembly isn’t limited to just smart contracts. For example, in Hyperledger Grid, we are using on-chain WebAssembly to execute smart permissions for organization-specific permissioning. Another example is smart consensus, which allows consensus algorithm updates to be submitted as a transaction. There are several more possibilities for on-chain WebAssembly as well.
In short, we think WebAssembly is awesome! Sawtooth Sabre combines WebAssembly with existing Hyperledger Sawtooth transaction processors to provide flexible smart contracts with all the benefits of both a normal transaction processor and on-chain smart-contract execution. Sabre also takes advantage of WebAssembly’s ability to maintain dual-target smart contracts, where the contract can be run as either a native transaction processor or a Sabre contract. And the permission control in Sawtooth Sabre allows fine-grained control over both contract changes and access to global state.
If you would like to learn more about WebAssembly, take a look at webassembly.org.
We are incredibly grateful for Cargill’s sponsorship of Sawtooth Sabre and Hyperledger Grid (a supply chain platform built with Sawtooth Sabre). We would also like to thank the following people who help make our blog posts a success: Anne Chenette, Mark Ford, David Huseby, and Jessica Rampen.
About the Authors
Andi Gunderson is a Software Engineer at Bitwise IO and maintainer on Hyperledger Sawtooth and Sawtooth Sabre.
Shawn Amundson is Chief Technology Officer at Bitwise IO, a Hyperledger technical ambassador, and a maintainer and architect on Hyperledger Sawtooth and Hyperledger Grid.
Back to our Developer Showcase Series to learn what developers in the real world are doing with Hyperledger technologies. Next up is Raj Sadaye.
What advice would you offer other technologists or developers interested in getting started working on blockchain?
One piece of advice I would offer someone interested in getting started with blockchain is to start working on a project or an application using the technology they want to learn. It doesn’t have to be an elaborate or complicated application, but it could be something that has utility in the real world. While working on it, they might face difficulties or technical setbacks. The best way to tackle these is to reach out to the community of developers who are currently maintaining or working on that technology. We can learn a lot by working on a project and reading the documentation thoroughly.
Give a bit of background on what you’re working on, and let us know what was it that made you want to get into blockchain?
My interest in blockchains developed when I was searching for a method to secure IoT device communication as well as make it decentralized to increase speed. Blockchain technology turned out to be the perfect solution for this. Over the past 8 months, I’ve worked on several projects at Arizona State University’s Blockchain Research Lab. In March 2018, I worked on building a PoC for a carbon credits trading ecosystem using blockchain for Lightworks at ASU. The system enables various players in the market to control carbon emissions while maintaining sustainable growth by incentivizing carbon-capturing actors. A brief description of this project can be found here. Currently, we’re working with the Center for Negative Emission of Carbon to design a way to verify capture as well as emission of carbon with minimal human intervention. My current research focus is developing a data sharing protocol that enables edge-to-edge communication in IoT devices. I’ve also been working on building the CSE 598: Engineering Blockchain Applications course on Coursera for Arizona State University.
What project in Hyperledger are you working on? Any new developments to share? Can you sum up your experience with Hyperledger?
Primarily, I’ve been working with Hyperledger Fabric and Hyperledger Composer, and overall, the experience has been really good. Hyperledger Fabric has a good set of tools to build the infrastructure for a distributed ledger solution. The certificate authority is a high-quality tool that helps us with cryptographic validation and dynamically assigning certificates to actors being added to the network. Once a person is familiar with the documentation, it’s really simple to go about building applications. Hyperledger Composer is the tool that excited me the most over the last 8 months because it runs on top of Hyperledger Fabric and can show a blockchain novice how to build a distributed application. Both frameworks have really good tutorial sections that help developers get familiar with the technology.
As Hyperledger’s incubated projects start maturing and hit 1.0s and beyond, what are the most interesting technologies, apps, or use cases coming out as a result from your perspective?
In my opinion, supply chains and blockchain technology were always meant to go hand in hand. The most interesting app or use case that I’ve recently come across is Everledger. Everledger rewires trust and confidence in a previously broken market by building consortiums of actors that participate in and maintain provenance, supplementing blockchain technology with various other verification techniques. In the near future, I see other products also adopting such an architecture to avoid counterfeiting and adulteration.
What’s the one issue or problem you hope blockchain can solve?
One issue I’m hoping to see solved using blockchain technology is verification of identity through digitization of personal documents. Verifying documents with hash-based fingerprinting, and assigning ownership of the digital record to the person rather than a centralized authority, can help a lot in maintaining data privacy as well as avoiding fraud through detection of counterfeit documents.
Where do you hope to see Hyperledger and/or blockchain in 5 years?
In 5 years, I expect blockchain technology to move past the crypto hype and focus on the real applications and use cases it can be integrated with. For Hyperledger, the most interesting upcoming project, in my opinion, is Hyperledger Quilt, which aims to achieve interoperability between blockchains. I’d also like to see a solution within the Hyperledger project that enables seamless integration of blockchain applications with existing infrastructure.
The time has come again for another Hyperledger project to begin its version 1.0 release process. Hyperledger Iroha is getting close to a 1.0 release, and as part of that, Hyperledger hired an outside security auditing firm to review the code and audit it for security vulnerabilities. Nettitude conducted a review of the code this past fall and reported their findings to the Hyperledger security team and the Iroha developers.
The Iroha audit found four security issues, including one that was critical enough to require us to issue our first Common Vulnerabilities and Exposures (CVE) notice. All four issues were tracked using our JIRA and resolved shortly after the audit concluded.
I want to highlight the details of two of the security issues that the audit discovered because they show how easy it is to make bad assumptions about cryptography that result in a critical failure. Crypto code is always difficult to get right, and as you will see, knowing good coding practices isn’t always enough. A developer must also be aware of algorithm and implementation details and the guarantees offered by a cryptographic primitive.
Before digging into the error, let us review the way things are supposed to work in a permissioned blockchain network. Figure 1 shows the normal process of transaction proposal and verification. In the diagram, Node 1 proposes the transaction by signing it and forwarding it to Node 2. Node 2 verifies the validity of the transaction as well as the validity of Node 1’s digital signature endorsement. Node 2 then endorses the transaction and forwards it to Node 3. Node 3 does the same checks as Node 2 except that it is also careful to ensure that the endorsements from Node 1 and Node 2 are both valid and unique. If everything passes the checks, Node 3 endorses the transaction and forwards it to Node 4. Node 4 now repeats the checks of Node 2 and Node 3 and sees that the transaction has enough valid and unique endorsements to be accepted into the next block of the blockchain. Node 4 transmits the fully endorsed and accepted transaction to all other nodes in preparation for the block construction and consensus steps. It is important to point out that not only is the validity of each digital signature important, but that a transaction also has enough unique endorsements before it will be accepted.
Figure 1—How a transaction is endorsed and validated.
Hyperledger Iroha uses the Twisted Edwards Curves based elliptic curve digital signature scheme more commonly known as Ed25519 or EdDSA. Unlike almost every other elliptic curve digital signature scheme, Ed25519 doesn’t take random data as one of its inputs. Most digital signature schemes generate a random number used only once—also known as a nonce (Number used ONCE)1—when calculating a digital signature of a message. The reason is that a digital signature is just a message digest encrypted using a public key encryption algorithm. Public key encryption algorithms are trivial to break if there is no nonce or a nonce gets reused, with the same secret key, to encrypt multiple messages.2 This is called a “chosen plaintext attack”.3 Figure 2 shows how a random nonce is used when encrypting the message digest to create the digital signature. By including a nonce, repeated use of the secret key over different messages does not compromise the encryption. Digital signatures using this method are different even though the same secret key and message are used.
Figure 2—Digital signature calculation with random nonce.
The Ed25519 signature scheme used by Iroha is different in that it generates the nonce by processing the inputs to the signing algorithm and thus repeated signatures of the same data with the same key result in the same encrypted data.4 This doesn’t compromise the key because the nonce is still different for different inputs. Figure 3 illustrates how the nonce for an Ed25519 digital signature is calculated from the input message and is therefore deterministic rather than generated randomly. Digital signatures using this method are the same when the same secret key and message are given.
Figure 3—Digital signature calculated with deterministic nonce.
The flaw in Iroha was that the developers wrote the signature checking code to assume that signing the same data with the same key would always result in the same encrypted data. When determining if a transaction has enough different signatures to be valid, the code was comparing the public key bytes as well as the digital signature bytes when testing to see if two signatures were different. Figure 4 shows how the public key bytes and the digital signature bytes were combined when checking to see if two endorsements were different.
Figure 4—Flawed endorsement check that includes digital signature bytes.
The auditors at Nettitude created a modified version of the Ed25519 signature library so that it instead used random nonces, thus creating different encrypted data for the same secret key and message data. Figure 5 shows how the comparison of endorsements fails when random nonces are used. The resulting endorsements are not the same even though the message and secret key used to sign the message are the same.
Figure 5—Random nonces produce different signatures from the same inputs.
The result is that other nodes in the Iroha network—nodes running unmodified Ed25519 libraries—correctly validate the signatures because the public key correctly decrypts the digital signatures but the code for testing the uniqueness of the signatures is fooled. Each validating node sees different signatures for the same data and the same secret key and assumes they are unique endorsements and that the transaction is properly endorsed. Figure 6 shows how the Nettitude engineers were able to fully bypass this check with their single malicious node. It resulted in a bypass of the Byzantine guarantees of the system.
Figure 6—A malicious node bypassing the Byzantine checks.
The correction for this security bug is to change the transaction and block signature validation code to first check that all signatures are valid and then check only the public keys for uniqueness when determining if there are enough valid and unique signatures on a transaction or block. Figure 7 shows how the scenario in Figure 6 plays out with the fixed code. Again, a malicious node with a modified Ed25519 implementation signs a transaction multiple times with the same key. The signature bytes are unique, but the keys are not. When the other nodes in the network check the transaction, they see three valid signatures but the keys are not different. Each node determines that there is only one unique and valid signature and rejects the transaction.
Figure 7—A malicious node unable to bypass the Byzantine checks.
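In outline, the corrected logic looks like the sketch below. Iroha itself is written in C++; the data shapes and the `verify` callback here are illustrative stand-ins, not Iroha’s actual API.

```python
def count_unique_valid_endorsements(payload, endorsements, verify):
    """Count endorsements that are cryptographically valid, deduplicated
    by public key only -- never by signature bytes, which a malicious
    signer can vary at will."""
    unique_keys = set()
    for public_key, signature in endorsements:
        if verify(public_key, payload, signature):
            unique_keys.add(public_key)
    return len(unique_keys)

def is_fully_endorsed(payload, endorsements, verify, required):
    """A transaction is accepted only when enough DISTINCT keys have
    validly signed it."""
    return count_unique_valid_endorsements(payload, endorsements, verify) >= required
```

The flawed version effectively deduplicated on the (public key, signature) pair, so a signer who could vary the signature bytes was counted multiple times; deduplicating on the public key alone closes that hole.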
Two bugs were filed, one for transaction validation and one for block validation, to address this flaw. The first bug is titled “multi-signature transactions can potentially be authorised by single user”.5 The second bug is titled “vote early, vote often”.6 Both flaws were fixed shortly after Nettitude delivered the report, and the fixes are in the current version of Iroha.
It is very important for developers to understand the subtleties of cryptography and applying it to engineering problems. Careful study and consideration of the guarantees and assumptions is required as well as multiple reviews from other engineers with similar knowledge and attention to detail. The “many eyeballs” theory of open source software development does work. This audit proved it.
The management and technical reports from the audit can be found on the Hyperledger wiki.
Growing community, new project developments and accelerating pace of deployments mark start of 2019
SAN FRANCISCO (January 30, 2019) – Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies, begins 2019 by announcing it has added eight new members to the consortium. In addition, Hyperledger has delivered some key technology updates and now has a total of 12 projects.
Hyperledger is a multi-venture, multi-stakeholder effort that includes various enterprise blockchain and distributed ledger technologies. Recent project updates include the release of Fabric v1.4 LTS, the first long term support version of the framework, as well as the addition of two new projects, Hyperledger Ursa and Hyperledger Grid. Grid uses shared, reusable tools to accelerate the development of ledger-based solutions for cross-industry supply chain applications. Additionally, a detailed case study on Circulor’s Hyperledger Fabric-based production system for tracing tantalum mining in Rwanda adds to the growing list of resources for guiding enterprise blockchain adoption.
“We wrapped up 2018 with a successful and exciting Hyperledger Global Forum,” said Brian Behlendorf, Executive Director, Hyperledger. “This first worldwide meeting of the Hyperledger community underscored the growing pace of development and deployment of blockchain in general and our tools and technologies in particular. We are seeing more signs of this accelerating pace of maturation and adoption here in early 2019. We welcome these newest members and look forward to their help in driving this growth.”
Hyperledger allows organizations to create solid, industry-specific applications, platforms and hardware systems to support their individual business transactions by offering enterprise-grade, open source distributed ledger frameworks and code bases. The latest general members to join the community are BTS Digital LLP, Exactpro Systems Limited, Jitsuin, Lares Blockchain, Myndshft, Omnigate, Poste Italiane and Wrapious Marketing Co Ltd.
“We are an emerging company aiming at creating a national digital ecosystem in Kazakhstan that will facilitate the basic processes of human life and provide equal access to resources,” said Eugene Volkov, Chief Digital Officer, BTS Digital LLP. “As we see accelerated growth of transactions and actors in today’s life, we acknowledge the growing need to build a trustworthy society where all the participants can act with consensus, immutability, equality and transparency. Building such an environment requires trust. Our trust in Hyperledger’s expertise is a primary reason why we choose to become a member. We believe this community will guide us in finding technological solutions in achieving our goals.”
“Being a firm strategically focused on providing the highest level QA services for mission-critical market infrastructures, Exactpro understands the important role of this new technology and strives to enhance our expertise in this area through collaboration with leading blockchain consortia such as Hyperledger,” said Maxim Rudovsky, CTO, Exactpro. “We firmly believe our Hyperledger and The Linux Foundation memberships will provide Exactpro with access to community resources that will help us deliver more profound testing of DLT-based software systems to our clients.”
“One of the founding decisions we made at Jitsuin was to become a Hyperledger member,” said Jon Geater, Chief Technology Officer, Jitsuin. “As part of our mission to unlock the value of data in the Internet of Things, we focus on Industrial IoT device lifecycle assurance where security, price, reliability and shared responsibility are all crucial. Keeping IoT in a known, good state is a team sport and is exactly where distributed ledger technologies work best. I am also delighted to continue serving the Governing Board and Hyperledger community to help ensure it remains the unrivaled home of advanced cross-industry business blockchain technologies.”
“Lares Blockchain Security is delighted to join the Hyperledger community,” said Chris McGarrigle, CEO, Lares Blockchain Security. “Hyperledger’s fundamental strengths of performance, scalability and security resonate with our core values at Lares Blockchain Security. As our blockchain products and technologies continue to gain momentum in the medical, biotech, mining and financial industries, we see our partnership with Hyperledger as critical to further establishing ourselves in the enterprise.”
“Blockchain presents an enormous opportunity for healthcare to simplify and unify claims management, prior authorizations and other administrative functions, helping payers and providers reduce costs and improve timeliness and quality of care,” said Ron Wince, CEO, Myndshft Technologies. “That is why Myndshft is thrilled to join Hyperledger and collaborate with blockchain leaders and innovators across industries to find ways to leverage the technology to increase efficiency of healthcare operations, improve the patient experience and optimize financial performance in the value-based care era.”
“Omnigate Systems is delighted to join Hyperledger and to leverage blockchain technologies to drive interoperability in finance. Omnigate provides enterprise-grade, universal ledger software with extensive integrations. Our mission is to empower businesses of any size to rapidly build production-grade transactional systems for both traditional assets and emerging digital assets,” said Raphael Carrier, CEO, Omnigate. “We consider the integration of the Interledger protocol (via Hyperledger Quilt) into our product to be a key milestone. We believe this is an important initiative which will advance interoperability and accessibility to the ‘Internet of Value.'”
“Blockchain is not just a buzzword or a myth anymore, but is becoming the foundation for establishing a distributed, transparent and cross-industry interoperable ecosystem,” said Mirko Mischiatti, Chief Information Officer, Poste Italiane. “Poste Italiane wants to actively participate in this new and exciting community by becoming a member of Hyperledger in order to continue its path for the innovation and modernization of financial, logistic and insurance industries. We really look forward to working with other members and making our effort to contribute for the enhancement of blockchain technology.”
“It is our honor to become a member of the Hyperledger community,” said Tommy Wong, Chief Operating Officer, Wrapious Marketing Co Ltd. “Joining Hyperledger provides us with more opportunity to explore more within the blockchain space and to contribute to project developments. Our vision is to create a virtual world that provides equal access to everyone regardless of their status or social class in the community. We believe being part of Hyperledger will add to our ability to achieve this vision.”
Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration including leaders in finance, banking, Internet of Things, supply chains, manufacturing and technology. Hyperledger is hosted by The Linux Foundation. To learn more, visit: https://www.hyperledger.org/.
Today Hyperledger, part of The Linux Foundation, announced its latest enterprise blockchain project, Hyperledger Grid. The initiative is a framework that provides a set of tools particularly suited to supply chain blockchains. The main sponsors are agribusiness company Cargill and two of the maintainers of the Hyperledger Sawtooth blockchain protocol, Intel and Bitwise IO. This is the first project focused on a particular industry.
This blog post shows how to set up Grafana to display Sawtooth and system statistics.
Grafana is a useful tool for displaying Sawtooth performance statistics. Hyperledger Sawtooth optionally generates performance metrics from the validator and REST API components for each node. Sawtooth sends the metrics to InfluxDB, a time series database that is optimized for fast access to time series data. Telegraf, a metrics reporting agent, gathers supplemental system information from the Linux kernel and also sends it to InfluxDB. Finally, Grafana reads from InfluxDB and displays an assortment of statistics on several graphical charts in your web browser. Figure 1 illustrates the flow of data.
Figure 1. Metrics gathering data flow.
Grafana can display many validator, REST API, and system statistics. The following lists all supported metrics:
Sawtooth Validator Metrics
Chain head moved to fork
Pending batches—number of batches waiting to be processed
Batches rejected (back-pressure)—number of rejected batches due to back-pressure tests
Transaction execution rate, in batches per second
Transactions in process
Transaction processing duration (99th percentile), in milliseconds
Valid transaction response rate
Invalid transaction response rate
Internal error response rate
Message round trip times, by message type (95th percentile), in seconds
Messages sent, per second, by message type
Messages received, per second, by message type
Sawtooth REST API Metrics
REST API validator response time (75th percentile), in seconds
REST API batch submission rate, in batches per second
User and system host CPU usage
Disk I/O, in kilobytes per second
I/O wait percentage
RAM usage, in megabytes
Read and write I/O ops
Thread pool task run time and task queue times
Executing thread pool workers in use
Dispatcher server thread queue size
The screenshot in Figure 2 gives you an idea of the metrics that Grafana can show.
Figure 2. Example Grafana graph display.
Setting Up InfluxDB and Grafana
By default, Hyperledger Sawtooth does not gather performance metrics. The rest of this post explains the steps for enabling this feature. The overall order of steps is listed below with in-depth explanations of each step following.
1. Having the required prerequisites: Sawtooth blockchain software running on Ubuntu and Docker CE software installed
2. Installing and configuring InfluxDB to store performance metrics
3. Building and installing Grafana
4. Configuring Grafana to display the performance metrics
5. Configuring Sawtooth to generate performance metrics
6. Installing and configuring Telegraf to collect metrics
ProTip: These instructions assume a Sawtooth node is running directly on Ubuntu, not in Docker containers. To use Grafana with Sawtooth on Docker containers, additional steps (not described here) are required to allow the Sawtooth validator and REST API containers to communicate with the InfluxDB daemon at TCP port 8086.
2. Installing and Configuring the InfluxDB Container
InfluxDB stores the Sawtooth metrics used in the analysis and graphing. Listing 1 shows the commands to download the InfluxDB Docker container, create a database directory, start the Docker container, and verify that it is running.
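Listing 1 runs along these lines. The image tag, container name, and data directory below are assumptions; the metrics database name and the lrdata/pwlrdata credentials match the configuration used later in this post.

```shell
# Download the InfluxDB Docker image (the 1.x line)
docker pull influxdb:1.7-alpine

# Create a host directory for the database files
sudo mkdir -p /var/lib/influxdb

# Start the container with the "metrics" database and two users
docker run -d -p 8086:8086 \
    -v /var/lib/influxdb:/var/lib/influxdb \
    -e INFLUXDB_DB=metrics \
    -e INFLUXDB_HTTP_AUTH_ENABLED=true \
    -e INFLUXDB_ADMIN_USER=admin \
    -e INFLUXDB_ADMIN_PASSWORD=pwadmin \
    -e INFLUXDB_USER=lrdata \
    -e INFLUXDB_USER_PASSWORD=pwlrdata \
    --name sawtooth-stats-influxdb influxdb:1.7-alpine

# Verify that the container is running
docker ps --filter name=sawtooth-stats-influxdb
```

Listing 1. Commands to download, start, and verify the InfluxDB container.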
ProTip: You can change the sample passwords here, pwadmin and pwlrdata, to anything you like. If you do, you must use your passwords in all the steps below. Avoid or escape special characters in your password such as “,@!$” or you will not be able to connect to InfluxDB.
3. Building and Installing the Grafana Container
Grafana displays the Sawtooth metrics in a web browser. Listing 2 shows the commands to download the Sawtooth repository, build the Grafana Docker container, start the Grafana container, and verify that it is running.
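A sketch of Listing 2 follows; the repository path and image name are assumptions based on the sawtooth-core repository layout.

```shell
# Download the Sawtooth repository, which contains the Grafana build files
git clone https://github.com/hyperledger/sawtooth-core
cd sawtooth-core/docker

# Build the Grafana Docker container with the Sawtooth dashboards preloaded
docker build . -f grafana/sawtooth-stats-grafana -t sawtooth-stats-grafana

# Start the Grafana container and verify that it is running
docker run -d -p 3000:3000 --name sawtooth-stats-grafana sawtooth-stats-grafana
docker ps --filter name=sawtooth-stats-grafana
```

Listing 2. Commands to build, start, and verify the Grafana container.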
Building the Grafana Docker container takes several steps and downloads several packages into the container. It ends with “successfully built” and “successfully tagged” messages.
4. Configuring Grafana
Configure Grafana from your web browser. Navigate to http://localhost:3000/ (replace “localhost” with the hostname or IP address of the system where you started the Grafana container in the previous step).
Login as user “admin”, password “admin”
(Optional step) If you wish, change the Grafana webpage “admin” password by clicking the orange spiral icon on the top left, selecting “admin” in the pull-down menu, clicking on “Profile” and “Change Password”, entering the old password (admin) and your new password, and, finally, clicking on “Change Password”. This Grafana password is not related to the InfluxDB passwords used in a previous step.
Click the orange spiral icon again on the top left, then click on “Data Sources” in the drop-down menu.
Click on the “metrics” data source.
Under “URL”, change “influxdb” in the URL to the hostname or IP address where you are running InfluxDB. (Use the same hostname that you used for the Grafana web page, since the Grafana and InfluxDB containers run on the same host.) This is the address Grafana uses to access InfluxDB.
Under “Access”, change “proxy” to “direct” (unless you are going through a proxy to access the remote host running InfluxDB)
Under “InfluxDB Details”, set “User” to “lrdata” and “Password” to “pwlrdata”
Click “Save & Test” to save the configuration in the Grafana container
If the test succeeds, the green messages “Data source updated” and “Data source is working” will appear. Figure 3 illustrates the green messages. Otherwise, you get a red error message that you must fix before proceeding. An error at this point is usually a network problem, such as a firewall or proxy configuration or a wrong hostname or IP address.
Figure 3. Test success messages in Grafana.
For the older Sawtooth 1.0 release, follow these additional steps to add the Sawtooth 1.0 dashboard to Grafana (skip these steps for Sawtooth 1.1):
In your terminal, copy the file sawtooth_performance.json from the sawtooth-core repository you cloned earlier to your current directory by issuing the commands in Listing 3.
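Assuming the sawtooth-core repository was cloned into the current directory, the copy can be done as follows (the path within the repository is an assumption):

```shell
# Copy the Sawtooth 1.0 Grafana dashboard definition to the current directory
cp sawtooth-core/docker/grafana/dashboards/sawtooth_performance.json .
```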
Listing 3. Commands for getting the Sawtooth 1.0 dashboard file.
In your web browser, click the orange spiral icon again on the top left, select “Dashboards” in the drop-down menu, then click on “Import” and “Upload .json file”.
Navigate to the directory where you saved sawtooth_performance.json.
Select “metrics” in the drop-down menu and click on “Import”.
5. Configuring Sawtooth
The Sawtooth validator and REST API components each report their own set of metrics, so you must configure the login credentials and destination for InfluxDB. In your terminal window, run the shell commands in Listing 4 to create or update the Sawtooth configuration files validator.toml and rest_api.toml:
for i in /etc/sawtooth/validator.toml /etc/sawtooth/rest_api.toml
do
    [[ -f $i ]] || sudo -u sawtooth cp $i.example $i
    echo 'opentsdb_url = "http://localhost:8086"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_db = "metrics"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_username = "lrdata"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_password = "pwlrdata"' \
        | sudo -u sawtooth tee -a $i
done
Listing 4. Commands to create or update Sawtooth configuration.
After verifying that the files validator.toml and rest_api.toml each have the four new opentsdb_* configuration lines, restart the sawtooth-validator and sawtooth-rest-api services using the commands in Listing 5.
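Assuming the standard systemd service names, Listing 5 amounts to:

```shell
# Restart the Sawtooth components so they pick up the new opentsdb_* settings
sudo systemctl restart sawtooth-validator
sudo systemctl restart sawtooth-rest-api
```

Listing 5. Commands to restart the Sawtooth validator and REST API.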
Protip: The InfluxDB daemon, influxd, listens to TCP port 8086, so this port must be accessible over the local network from the validator and REST API components. By default, influxd only listens to localhost.
6. Installing and Configuring Telegraf
Telegraf, InfluxDB’s metrics reporting agent, gathers metrics information from the Linux kernel to supplement the metrics information sent from Sawtooth. Telegraf needs the login credentials and destination for InfluxDB. Install Telegraf using the commands in Listing 7.
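On Ubuntu, the installation and configuration can be sketched as below. The InfluxData package repository details are assumptions for an Ubuntu 18.04 (bionic) host; the drop-in file name sawtooth.conf matches the file referenced in the troubleshooting steps at the end of this post.

```shell
# Add the InfluxData package repository and install Telegraf
curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/ubuntu bionic stable" \
    | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update
sudo apt-get install -y telegraf
```

Listing 7. Commands to install Telegraf.

```shell
# Point Telegraf at InfluxDB using the same credentials as Sawtooth
sudo tee /etc/telegraf/telegraf.d/sawtooth.conf > /dev/null <<'EOF'
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "metrics"
  username = "lrdata"
  password = "pwlrdata"
EOF
```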
Listing 8. Create the Telegraf configuration file.
Finally, restart Telegraf with the command in Listing 9.
sudo systemctl restart telegraf
Listing 9. Restart Telegraf.
Try it out!
After completing all the previous steps, Sawtooth and system statistics should appear in the Grafana dashboard webpage. To see them, click the orange spiral icon on the top left, then click on “Dashboards” in the drop-down menu, then click on “Home” next to the spiral icon, and then click on “dashboard”. This is the dashboard for Grafana.
Generate some transactions so you can see activity on the Grafana dashboard. For example, run the intkey workload generator by issuing the Listing 10 commands in a terminal window to create test transactions at the rate of 1 batch per second.
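The intkey workload generator ships with Sawtooth; Listing 10 is along these lines (the display-frequency value is an arbitrary choice):

```shell
# Submit intkey test transactions at 1 batch per second and
# print statistics every 30 seconds. Stop with Ctrl-C.
intkey workload --rate 1 -d 30
```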
Listing 10. Start the workload generator to get some statistics.
I recommend changing the time interval in the dashboard from 24 hours to something like 30 minutes so you can see new statistics. Do that by clicking on the clock icon in the upper right of the dashboard. Then click on the refresh icon, ♻, to update the page. Individual graphs can be enlarged or shrunk by moving the dotted triangle tab in the lower right of each graph.
If the Grafana webpage is not accessible, the Grafana container is not running or is not accessible over the network. To verify that it is running and start it:
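Assuming the container was started with the name sawtooth-stats-grafana (substitute whatever name you used), the check and restart look like:

```shell
# List the Grafana container if it is running
docker ps --filter name=sawtooth-stats-grafana

# If it is not listed, start it again
docker start sawtooth-stats-grafana
```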
If the container is running, the docker host may not be accessible on the network
If no system statistics appear at the bottom of the dashboard, either Telegraf is not configured or the InfluxDB container is not running or is not accessible over the network. To verify that InfluxDB is running and start it:
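Assuming the container was started with the name sawtooth-stats-influxdb (substitute whatever name you used), the check and restart look like:

```shell
# List the InfluxDB container if it is running
docker ps --filter name=sawtooth-stats-influxdb

# If it is not listed, start it again
docker start sawtooth-stats-influxdb
```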
Check that the InfluxDB server, influxd, is reachable from the local network. Use the InfluxDB client (package influxdb-client) or curl or both to test. The InfluxDB client command should show a “Connected to” message and the curl command should show a “204 No Content” message.
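Both checks can be run from any machine that should be able to reach influxd, with the hostname and credentials configured earlier:

```shell
# Hit the /ping endpoint; a healthy influxd answers "204 No Content"
curl -i http://localhost:8086/ping

# Or connect with the InfluxDB client, which prints a "Connected to" message
influx -host localhost -username lrdata -password pwlrdata
```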
Check that the interval range (shown next to the clock on the upper right of the dashboard) is low enough (such as 1 hour).
Check that the validator and REST API .toml files and Telegraf sawtooth.conf files have the opentsdb_* configuration lines. Make sure that the passwords and URLs are correct and that they match each other and the passwords set when you started the InfluxDB container.
Click the refresh icon, ♻, on the upper right of the dashboard.