Hyperledger Adds to Growing Lineup of Groups Focused on Cross-Industry Development

By | Blog

In another step towards driving cross-industry cooperation in the development and use of blockchain technologies, Hyperledger has added two new groups to its growing list of Special Interest Groups (SIGs). These two new groups, the Hyperledger Social Impact SIG and the Hyperledger Trade Finance SIG, join the existing Healthcare and Public Sector SIGs.

SIGs gather community members from an industry segment to work on domain-specific problems and create an environment for open discussion, document co-creation, and solution proposals. They help specific vertical markets address the problems particular to their communities.

The Hyperledger Social Impact Special Interest Group

The Hyperledger Social Impact SIG is primarily focused on serving as a platform for exchanging ideas and exploring ways to use blockchain technology in the context of social good. The Social Impact SIG will work with the other working groups and technical teams, especially in the areas of implementation.

The areas of focus include:

  • Identifying related use cases, current pilots and proofs of concept, production case studies, and the opportunity blockchain presents for social impact;
  • Sharing stories of successes, failures, lessons learned, opportunities and challenges;
  • Exploring cross-cutting concerns like security, privacy, and identity in global South contexts;
  • Identifying existing or needed common critical software components that would serve the particular needs of global social impact;
  • Working towards proposing solutions to the problems identified;
  • Identifying conferences and other opportunities to connect face to face, as well as to submit talks or present as a group at events;
  • Identifying the business and NGO community and building an inclusive platform for early adopters to contribute their experiences;
  • Facilitating and raising awareness of the opportunities blockchain can bring to the field.

The group is currently working on developing an opportunity matrix to map out the landscape of activity in philanthropy and remittances, supply chain, governance and democracy, financial empowerment, and identity. We welcome your participation in this effort!

We also want to hear about your use cases for Hyperledger in a social impact context. Please let us know if you’d like to present at our next meeting by sending a message on our mailing list.

The Hyperledger Trade Finance Special Interest Group

The Hyperledger Trade Finance Special Interest Group is primarily focused on serving as a platform for exchanging ideas and exploring ways to use blockchain technology in the context of trade finance.

Areas of focus for the group include:

  • Identifying related reference architectures (for example Trade Finance business and integration architecture, technical and infrastructure architecture), frameworks and models (OSI), use cases, current pilots and proofs of concept, and production case studies;
  • Sharing stories of successes, failures, opportunities and challenges;
  • Exploring and addressing cross-cutting architectural principles, options and decisions like performance and scalability, security, identity management and privacy, and identity in trade finance contexts;
  • Identifying existing or needed common critical software, middleware, and hardware components that would serve the particular needs of trade finance, including data transmission and processing from access point (device), through networks, and cloud computing deployment model options (private, public, hybrid);
  • Working towards proposing solutions to the problems identified;

How to get involved in Hyperledger Special Interest Groups

Participation in any SIG is open to everyone in the community. Each group has an open e-mail list, a chat channel, and a wiki page. Live meetings are also held regularly via web teleconference. When needed, a task force can also be created within a SIG to hold working sessions on specific work items.

If you’re interested in joining a SIG, please subscribe to the SIG’s mailing list and start by saying “hi” to the community, sharing what your interests are on the topic. If you are looking for more information on active or proposed SIGs, please visit the Hyperledger wiki. We look forward to your participation and contributions!

Introduction to Sawtooth PBFT

By | Blog, Hyperledger Sawtooth

As of release 1.1, Hyperledger Sawtooth supports dynamic consensus through its consensus API and SDKs. These tools, which were covered in a previous blog post, are the building blocks that make it easy to implement different consensus algorithms as consensus engines for Sawtooth. We chose to implement the Raft algorithm as our first consensus engine, which we describe in another blog post. While our Raft implementation is an excellent proof of concept, it is not Byzantine-fault-tolerant, which makes it unsuitable for consortium-style networks with adversarial trust characteristics.

To fill this gap, we chose the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm. We started work on the Sawtooth PBFT consensus engine in the summer of 2018 and continue to develop and improve on it as we work towards its first stable release. This blog post summarizes the PBFT algorithm and describes how it works in Sawtooth.

What Is PBFT?

PBFT dates back to a 1999 paper written by Miguel Castro and Barbara Liskov at MIT. Unlike other Byzantine fault tolerant algorithms at the time, PBFT was designed to work in practical, asynchronous environments. It is thoughtfully defined, well established, and widely understood, which makes it an excellent choice for Hyperledger Sawtooth.

PBFT is similar to Raft in some general ways:

  • It is leader-based and non-forking (unlike lottery-style algorithms)
  • It does not support open enrollment, but nodes can be added and removed by an administrator
  • It requires full peering (all nodes must be connected to all other nodes)

PBFT provides Byzantine fault tolerance, whereas Raft only supports crash fault tolerance. Byzantine fault tolerance means that liveness and safety are guaranteed even when some portion of the network is faulty or malicious. As long as a minimum percentage of nodes in the PBFT network are connected, working properly, and behaving honestly, the network will always make progress and will not allow any of the nodes to manipulate the network.

How Does PBFT Work?

The original PBFT paper has a detailed and rigorous explanation of the consensus algorithm. What follows is a summary of the algorithm’s key points in the context of Hyperledger Sawtooth. The original definition is broadly applicable to any kind of replicated system; by keeping this information blockchain-specific, we can more easily describe the functionality of the Sawtooth PBFT consensus engine.

Network Overview

A PBFT network consists of a series of nodes that are ordered from 0 to n-1, where n is the number of nodes in the network. As mentioned earlier, there is a maximum number of “bad” nodes that the PBFT network can tolerate. As long as this number of bad nodes, referred to as the constant f, is not exceeded, the network will work properly. For PBFT, f is one third of the network, rounded down: a network of n nodes can tolerate up to f = (n - 1) / 3 (integer division) faulty or dishonest nodes at any given time. The values of n and f are very important; you’ll see them later as we discuss how the algorithm operates.

Figure 1 — n and f in the PBFT algorithm
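These bounds are easy to compute. Here is a minimal sketch in Python (the function names are illustrative, not from the Sawtooth PBFT codebase):

```python
def max_faulty(n):
    """Maximum number of faulty nodes, f, that a PBFT network of n
    nodes can tolerate: one third of the network, rounded down."""
    return (n - 1) // 3

def quorum(n):
    """Number of matching messages (2f + 1) a node must collect from
    distinct nodes; this quantity appears later in the prepare and
    commit phases."""
    return 2 * max_faulty(n) + 1

print(max_faulty(4), quorum(4))   # a four-node network: f = 1, quorum = 3
```

Note that a four-node network can tolerate only a single bad node; adding a fifth, sixth, or seventh node does not raise f until n reaches 7.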

As the network progresses, the nodes move through a series of “views”. A view is a period of time that a given node is the primary (leader) of the network. In simple terms, each node takes turns being the primary in a never-ending cycle, starting with the first node. For a four-node network, node 0 is the primary at view 0, node 1 is the primary at view 1, and so on. When the network gets to view 4, it will “wrap back around” so that node 0 is the primary again.

In more technical terms, the primary (p) for each view is determined based on the view number (v) and the ordering of the nodes. The formula for determining the primary for any view on a given network is p = v mod n. For instance, on a four-node network at view 7, the formula p = 7 mod 4 means that node 3 will be the primary (7 mod 4 = 3).
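The primary-selection formula is simple enough to sketch directly (a toy Python version, not the actual Sawtooth PBFT code):

```python
def primary(v, n):
    """Primary (leader) for view v on an n-node network: p = v mod n."""
    return v % n

assert primary(7, 4) == 3   # the example above: 7 mod 4 = 3
assert primary(4, 4) == 0   # view 4 wraps back around to node 0
```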

In addition to moving through a series of views, the network moves through a series of “sequence numbers.” In the context of a Sawtooth blockchain, a sequence number is equivalent to a block number; thus, saying that a node is on sequence number 10 is the same as saying that the node is performing consensus on block 10 in the chain.

Each node maintains a few key pieces of information as part of its state:

  • The list of nodes that belong to the network
  • Its current view number
  • Its current sequence number (the block it is working on)
  • The phase of the algorithm it is currently in (see “Normal-Case Operation”)
  • A log of the blocks it has received
  • A log of all valid messages it has received from the other nodes

Normal-Case Operation

Figure 2 — Messages sent during normal operation of PBFT (Node 3 is faulty)

To commit a block and make progress, the nodes in a PBFT network go through three phases:

  1. Pre-preparing
  2. Preparing
  3. Committing

Figure 2 shows these phases for a simple four-node network. In this example, node 0 is the primary and node 3 is a faulty node (so it does not send any messages). Because there are four nodes in the network (n = 4), the value of f for the network is (4 - 1) / 3 = 1. This means the example network can tolerate only one faulty node.


To kick things off, the primary for the current view will create a block and publish it to the network; each of the nodes will receive this block and perform some preliminary verification to make sure that the block is valid.

After the primary has published a block to the network, it broadcasts a pre-prepare message to all of the nodes. Pre-prepare messages contain four key pieces of information: the ID of the block the primary just published, the block’s number, the primary’s view number, and the primary’s ID. When a node receives a pre-prepare message from the primary, it will validate the message and add the message to its internal log. Message validation includes verifying the digital signature of the message, checking that the message’s view number matches the node’s current view number, and ensuring that the message is from the primary for the current view.

The pre-prepare message serves as a way for the primary node to publicly endorse a given block and for the network to agree about which block to perform consensus on for this sequence number. To ensure that only one block is considered at a time, nodes do not allow more than one pre-prepare message at a given view and sequence number.


Once a node has received a block and a pre-prepare message for the block, and both the block and message have been added to the node’s log, the node will move on to the preparing phase. In the preparing phase, the node will broadcast a prepare message to the rest of the network (including itself). Prepare messages, like pre-prepare messages, contain the ID and number of the block they are for, as well as the node’s view number and ID.

In order to move on to the next phase, the node must wait until it has received 2f + 1 prepare messages that have the same block ID, block number, and view number, and are from different nodes. By waiting for 2f + 1 matching prepare messages, the node can be sure that all properly functioning nodes (those that are non-faulty and non-malicious) are in agreement at this stage. Once the node has accepted the required 2f + 1 matching prepare messages and added them to its log, it is ready to move on to the committing phase.
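The 2f + 1 counting rule can be sketched as follows (a toy Python model with illustrative field names; real Sawtooth PBFT messages are signed protobufs):

```python
from collections import namedtuple

# Toy message shape; real prepare messages also carry digital signatures.
Prepare = namedtuple("Prepare", "block_id block_num view node_id")

def have_prepare_quorum(msg_log, block_id, block_num, view, f):
    """True once the log holds 2f + 1 prepare messages that match on
    block ID, block number, and view, and come from distinct nodes."""
    senders = {m.node_id for m in msg_log
               if (m.block_id, m.block_num, m.view) == (block_id, block_num, view)}
    return len(senders) >= 2 * f + 1
```

With f = 1, a node needs three matching prepare messages from distinct senders (its own included) before it may advance; a duplicate message from the same node does not count twice.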


When a node enters the committing phase, it broadcasts a commit message to the whole network (including itself). Like the other message types, commit messages contain the ID and number of the block they are for, along with the node’s view number and ID. As with the preparing phase, a node cannot complete the committing phase until it has received 2f + 1 matching commit messages from different nodes. Again, this guarantees that all non-faulty nodes in the network have agreed to commit this block. With the required 2f + 1 matching commit messages accepted and in its log, the node can safely commit the block, knowing that it will not need to be reverted.

Once the primary node has finished the committing phase and has committed the block, it will start the whole process over again by creating a block, publishing it, and broadcasting a pre-prepare message for it.

View Changing

In order to be Byzantine fault tolerant, a consensus algorithm must prevent nodes from improperly altering the network (to guarantee safety) or indefinitely halting progress (to ensure liveness). PBFT guarantees safety by requiring all non-faulty nodes to agree in order to move beyond the preparing and committing phases. To guarantee liveness, though, there must be a mechanism to determine if the leader is behaving improperly (such as producing invalid messages or simply not doing anything). PBFT provides the liveness guarantee with view changes.

Figure 3 — Messages sent for a view change in PBFT (Node 0 is the faulty primary, Node 1 is the new primary)

When a node has determined that the primary of view v is faulty (perhaps because the primary sent an invalid message or did not produce a valid block in time), it will broadcast a view change message for view v + 1 to the network. If the primary is indeed faulty, all non-faulty nodes will broadcast view change messages. When the primary for the new view (v + 1) receives 2f + 1 view change messages from different nodes, it will broadcast a new view message for view v + 1 to all the nodes. When the other nodes receive the new view message, they will switch to the new view, and the new primary will start publishing blocks and sending pre-prepare messages.
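The same 2f + 1 counting rule drives the view change; a simplified sketch (toy Python model with invented names, and no signature checking):

```python
def try_view_change(view_change_msgs, current_view, n, f):
    """Toy view-change check: the primary for view v + 1 takes over once
    2f + 1 view-change messages for that view arrive from distinct nodes.
    Messages are (view, node_id) pairs; real messages are signed."""
    target = current_view + 1
    senders = {node for view, node in view_change_msgs if view == target}
    if len(senders) >= 2 * f + 1:
        return target % n   # new primary, who then broadcasts the new-view message
    return None             # not enough evidence yet; keep the current view
```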

View changes guarantee that the network can move on to a new primary if the current one is faulty. This PBFT feature allows the network to continue to make progress and not be stalled by a bad primary node.

Want to Learn More?

This blog post only scratches the surface of the PBFT consensus algorithm. Stay tuned to the Hyperledger blog for more information on PBFT, including a future post about our extensions and additional features for Sawtooth PBFT.

In the meantime, learn more about PBFT in the original PBFT paper, read the Sawtooth PBFT RFC, and check out the Sawtooth PBFT source code on GitHub.

About the Author

Logan Seeley is a Software Engineer at Bitwise IO. He has been involved in a variety of Hyperledger Sawtooth projects, including the development of the consensus API, Sawtooth Raft, and Sawtooth PBFT.

Assembling the Future of Smart Contracts with Sawtooth Sabre

By | Blog, Hyperledger Sawtooth

Is WebAssembly the future of smart contracts? We think so. In this post, we will talk about Sawtooth Sabre, a WebAssembly smart contract engine for Hyperledger Sawtooth.

We first learned about WebAssembly a couple of years ago at Midwest JS, a JavaScript conference in Minneapolis. The lecture focused on using WebAssembly inside a web browser, which had nothing to do with blockchain or distributed ledgers. Nonetheless, as we left the conference, we were excitedly discussing the possibilities for the future of smart contracts. WebAssembly is a stack-based virtual machine, newly implemented in major browsers, that provides a sandboxed approach to fast code execution. While that sounds like a perfect way to run smart contracts, what really excited us was the potential for WebAssembly to grow a large ecosystem of libraries and tools because of its association with the browser community.

A smart contract is software that encapsulates the business logic for modifying a database by processing a transaction. In Hyperledger Sawtooth, this database is called “global state”. A smart contract engine is software that can execute a smart contract. By developing Sawtooth Sabre, we hope to leverage the WebAssembly ecosystem for the benefit of application developers writing the business logic for distributed ledger systems. We expect an ever-growing list of WebAssembly programming languages and development environments.

Unblocking Contract Deployment

The primary mechanism for smart contract development in Hyperledger Sawtooth is a transaction processor, which takes a transaction as input and updates global state. Sound like a smart contract? It is! If you implement business logic in the transaction processor, then you are creating a smart contract. If you instead implement support for smart contracts with a virtual machine (or interpreter) like WebAssembly, then you have created a smart contract engine.

If we can implement smart contracts as transaction processors, why bother with a WebAssembly model like Sabre? Well, it is really about deployment strategy. There are three deployment models for smart contracts:

  • Off-chain push: Smart contracts are deployed by pushing them to all nodes from a central authority on the network.
  • Off-chain pull: Smart contracts are deployed by network administrators pulling the code from a centralized location. Network administrators operate independently.
  • On-chain: Smart contracts are submitted to the network and inserted into state. Later, as transactions are submitted, the smart contracts are read from state and executed (generally in a sandboxed environment).

We won’t discuss off-chain push, other than to note that this strategy—having a centralized authority push code to everyone in the network—isn’t consistent with distributed ledgers and blockchain’s promise of distributing trust.

Off-chain pull is an opt-in strategy for updating software, and is widely used for Linux distribution updates. We use this model to distribute Sawtooth, including the transaction processors. By adding the Sawtooth apt repository on an Ubuntu system, you pull the software and install it via the apt-get command. Each software repository is centrally managed, though it is possible to have multiple software repositories configured and managed independently. This model has a practical problem—it requires administrators across organizations to coordinate software updates—which makes business logic updates more complicated than we would like.

On-chain smart contracts are installed on the blockchain with a transaction that stores the contract into an address in global state. The smart contract can later be executed with another transaction. The execution of the smart contract starts by loading it from global state and continues by executing the smart contract in a virtual machine (or interpreter). On-chain smart contracts have a big advantage over off-chain contracts: because the blockchain is immutable and the smart contract itself is now on the chain, we can guarantee that the same smart contract code was used to create the original block and during replay. Specifically, the transaction will always be executed using the same global state, including the stored smart contract. Because contracts are deployed by submitting transactions onto the network, we can define the process that controls the smart contract creation and deletion with other smart contracts! Yes, this is a little meta, but isn’t it great?
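The load-and-execute cycle can be illustrated with a toy model (plain Python standing in for WebAssembly; the names and structure here are invented for illustration and are not Sabre’s actual API):

```python
# Toy model: global state is a key-value store, and a "contract" is just
# bytes stored at an address. Here the contract is Python source defining
# an apply(payload, state) function; in Sabre it would be WebAssembly.
state = {}

def deploy(address, contract_source):
    state[address] = contract_source          # installed via a transaction

def execute(address, payload):
    env = {}
    exec(state[address], env)                 # load the contract from state...
    return env["apply"](payload, state)       # ...and run it against global state

counter_src = """
def apply(payload, state):
    state['count'] = state.get('count', 0) + payload
    return state['count']
"""
deploy("counter", counter_src)
print(execute("counter", 2))   # 2
print(execute("counter", 3))   # 5
```

Because the contract itself lives in state, replaying these transactions later is guaranteed to run the same contract code against the same state.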

The on-chain approach seems superior, so why did we implement Hyperledger Sawtooth transaction processors with the off-chain model? Because our long-term vision—and a main focus for Sawtooth—has been smart contract engines that run on-chain smart contracts. Smart contract engines are more suitable for off-chain distribution, because they do not contain business logic, and are likely to be upgraded at the same time as the rest of the software.

Our initial transaction processor design reflected our goal for several types of smart contract engines. We later implemented one of them: Sawtooth Seth, a smart contract engine that runs Ethereum Virtual Machine (EVM) smart contracts. For us, Seth was a validation that our transaction processor design was flexible enough to implement radically different approaches for smart contracts. Like Ethereum, Seth uses on-chain smart contracts, so Seth is great if you want Ethereum characteristics and compatibility with tools such as Truffle. However, Seth is limited by Ethereum’s design and ecosystem, and does not expose all the features in our blockchain platform. We knew that we needed an additional approach for smart contracts in Hyperledger Sawtooth.

Crafting a Compatible Path Forward

Sawtooth Sabre, our WebAssembly smart contract engine, is our solution for native, on-chain smart contracts.  

The programming model for Sabre smart contracts is the same as that for transaction processors. A transaction processor has full control of data representation, both in global state and in transaction payloads (within certain determinism requirements). Hyperledger Sawtooth uses a global state Merkle-Radix tree, and the transaction processors handle addressing within the tree. A transaction processor can use different approaches for addressing, ranging from calculating an address with a simple field hash to organizing data within the tree in a complex way (to optimize for parallel execution, for example). Multiple transaction processors can access the same global state if they agree on the conventions used in that portion of state.
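As an illustration of the “simple field hash” end of that spectrum, here is a sketch in the style of Sawtooth’s 70-hex-character state addresses (a hypothetical scheme; real transaction families each define their own addressing):

```python
import hashlib

def make_address(family_name, key):
    """Hypothetical field-hash addressing: a 6-character namespace
    prefix derived from the transaction family name, plus 64 characters
    derived from hashing a single field (the key), for a 70-character
    hex address in the Merkle-Radix tree."""
    prefix = hashlib.sha512(family_name.encode()).hexdigest()[:6]
    return prefix + hashlib.sha512(key.encode()).hexdigest()[:64]

addr = make_address("intkey", "my-counter")
assert len(addr) == 70
```

Any two components that agree on this convention, whether transaction processor or Sabre contract, will compute the same address for the same key and can therefore share that portion of state.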

Sawtooth Sabre smart contracts use this same method for data storage, which means they can access global state in the same way that transaction processors do. In fact, smart contracts and transaction processors can comfortably coexist on the same blockchain.

The other major feature is SDK compatibility. The Sawtooth Sabre SDK API is compatible with the Hyperledger Sawtooth transaction processor API, which means that smart contracts written in Rust can switch between the Sawtooth SDK and the Sabre SDK with a simple compile-time flag. (Currently, Rust is the only supported Sabre SDK.) The details of running within a WebAssembly interpreter are hidden from the smart contract author. Because Sabre smart contracts use the same API as transaction processors, porting a transaction processor to Sabre is relatively easy—just change a few import statements to refer to the Sabre SDK instead of the Hyperledger Sawtooth SDK.

Now the choice between off-chain and on-chain smart contracts is a compile-time option. We use this approach regularly, because we can separate our deployment decisions from the decisions for smart contract development. Most of the transaction-processor-based smart contracts included in Hyperledger Sawtooth are now compatible with Sawtooth Sabre.

A Stately Approach to Permissioning

Hyperledger Sawtooth provides several ways to control which transaction processors can participate on a network. As explained above, transaction processors are deployed with the off-chain pull method. This method lets administrators verify the transaction processors before adding them to the network. Note that Hyperledger Sawtooth requires the same set of transaction processors for every node in the network, which prevents a single node from adding a malicious transaction processor. Additional controls can limit the accepted transactions (by setting the allowed transaction types) and specify each transaction processor’s read and write access to global state (by restricting namespaces).

These permissions, however, are not granular enough for Sawtooth Sabre, which is itself a transaction processor. Sabre is therefore subject to the same restrictions, which would then apply to all smart contracts. Using the same permission control has several problems:

  • Sabre smart contracts are transaction-based, which means that a smart contract is created by submitting a transaction. This removes the chance to review a contract before it is deployed.
  • Sabre transactions must be accepted by the network to run smart contracts, but we cannot limit which smart contracts these transactions are for, because this information is not available to the validator.
  • Sabre must be allowed to access the same areas of global state that the smart contracts can access.

An “uncontrolled” version of Sabre would make it too easy to deploy smart contracts that are not restricted to the permissions selected by the smart contract’s publisher.

Our solution in Sawtooth Sabre is to assign owners for both contracts and namespaces (a subset of global state). A contract has a set of owners and a list of namespaces that it expects to read from and write to. Each namespace also has an owner. The namespace owner can choose which contracts have read and write access to that owner’s area of state. If a contract does not have the namespace permissions it needs, a transaction run against the smart contract will fail. So, while the namespace owner and contract owner are not necessarily the same, there is an implied degree of trust and coordination between them.
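A toy model of that ownership check (the names and structures here are invented for illustration; this is not Sabre’s actual data model):

```python
# Namespace owners grant contracts access to their prefix of global state.
namespace_writers = {}   # namespace prefix -> set of contract names

def grant_write(namespace, contract):
    """The namespace owner grants a contract write access to its prefix."""
    namespace_writers.setdefault(namespace, set()).add(contract)

def can_write(contract, address):
    """A transaction fails unless the contract has been granted access
    to the namespace (address prefix) it is trying to write to."""
    return any(address.startswith(ns) and contract in writers
               for ns, writers in namespace_writers.items())

grant_write("abc123", "xo-contract")
assert can_write("xo-contract", "abc123" + "0" * 64)
assert not can_write("other-contract", "abc123" + "0" * 64)
```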

Also, contracts are versioned. Only the owners of a contract are able to submit new versions to Sabre, which removes the chance that a malicious smart contract change could be accepted.

A Final Note About WebAssembly

On-chain WebAssembly isn’t limited to just smart contracts. For example, in Hyperledger Grid, we are using on-chain WebAssembly to execute smart permissions for organization-specific permissioning. Another example is smart consensus, which allows consensus algorithm updates to be submitted as a transaction. There are several more possibilities for on-chain WebAssembly as well.

In short, we think WebAssembly is awesome! Sawtooth Sabre combines WebAssembly with existing Hyperledger Sawtooth transaction processors to provide flexible smart contracts with all the benefits of both a normal transaction processor and on-chain smart-contract execution. Sabre also takes advantage of WebAssembly’s ability to maintain dual-target smart contracts, where the contract can be run as either a native transaction processor or a Sabre contract. And the permission control in Sawtooth Sabre allows fine-grained control over both contract changes and access to global state.

We are incredibly grateful for Cargill’s sponsorship of Sawtooth Sabre and Hyperledger Grid (a supply chain platform built with Sawtooth Sabre). We would also like to thank the following people who help make our blog posts a success: Anne Chenette, Mark Ford, David Huseby, and Jessica Rampen.

About the Authors

Andi Gunderson is a Software Engineer at Bitwise IO and maintainer on Hyperledger Sawtooth and Sawtooth Sabre.

Shawn Amundson is Chief Technology Officer at Bitwise IO, a Hyperledger technical ambassador, and a maintainer and architect on Hyperledger Sawtooth and Hyperledger Grid.

Developer showcase series: Raj Sadaye, Arizona State University

By | Blog, Developer Showcase, Hyperledger Composer, Hyperledger Fabric, Hyperledger Quilt

Back to our Developer Showcase Series to learn what developers in the real world are doing with Hyperledger technologies. Next up is Raj Sadaye.

What advice would you offer other technologists or developers interested in getting started working on blockchain?

One piece of advice I would offer someone interested in getting started with blockchain is to start working on a project or an application using the technology they want to learn. It doesn’t have to be an elaborate or complicated application, but it could be something that has utility in the real world. While working on it, they might face difficulties or technical setbacks. The best way to tackle these is to reach out to the community of developers who are currently maintaining or working on that technology. We can learn a lot by working on a project and reading the documentation thoroughly.

Give a bit of background on what you’re working on, and let us know what was it that made you want to get into blockchain?

My interest in blockchains developed when I was searching for a method to secure IoT device communication and make it decentralized to increase speed. Blockchain technology turned out to be the perfect solution. Over the past 8 months, I’ve worked on several projects at Arizona State University’s Blockchain Research Lab. In March 2018, I worked on building a PoC for a carbon credit trading ecosystem using blockchain for Lightworks at ASU. The system enables various players in the market to control carbon emissions while maintaining sustainable growth by incentivizing carbon-capturing actors. A brief description of this project can be found here. Currently, we’re working with the Center for Negative Emission of Carbon to design a way to verify capture as well as emission of carbon with minimal human intervention. My current research focus is developing a data sharing protocol that enables edge-to-edge communication between IoT devices. I’ve also been working on building the CSE 598: Engineering Blockchain Applications course on Coursera for Arizona State University.

What project in Hyperledger are you working on? Any new developments to share? Can you sum up your experience with Hyperledger?

Primarily, I’ve been working with Hyperledger Fabric and Hyperledger Composer, and overall the experience has been really good. Hyperledger Fabric has a good set of tools to build the infrastructure for a distributed ledger solution. The certificate authority is a high-quality tool that helps us with cryptographic validation and with dynamically assigning certificates to actors being added to the network. Once a person is familiar with the documentation, it’s really simple to go about building applications. Hyperledger Composer is the tool that excited me the most over the last 8 months because it runs on top of Hyperledger Fabric and can show a blockchain novice how to build a distributed application. Both frameworks have really good tutorial sections that help developers get familiar with the technology.

As Hyperledger’s incubated projects start maturing and hit 1.0s and beyond, what are the most interesting technologies, apps, or use cases coming out as a result from your perspective?

In my opinion, supply chains and blockchain technology were always meant to go hand in hand. The most interesting app or use case I’ve recently come across is Everledger. Everledger rewires trust and confidence in a previously broken market by building consortiums of actors that participate in and maintain provenance, supplementing blockchain technology with various other verification techniques. In the near future, I see other products also adopting such an architecture to avoid counterfeiting and adulteration.

What’s the one issue or problem you hope blockchain can solve?

One issue I hope blockchain technology can solve is verification of identity through digitization of personal documents. Verifying documents with hash-based fingerprinting, and assigning ownership of the digital record to the person rather than to a centralized authority, can go a long way toward maintaining data privacy and preventing fraud through the detection of counterfeit documents.

Where do you hope to see Hyperledger and/or blockchain in 5 years?

In 5 years, I expect blockchain technology to move past the crypto hype and focus on the real applications and use cases it can be integrated with. For Hyperledger, the most interesting upcoming project, in my opinion, is Hyperledger Quilt, which aims to achieve interoperability between blockchains. I’d also like to see a solution within the Hyperledger project that enables seamless integration of blockchain applications with existing infrastructure.

Hyperledger Iroha Security Audit Results

By | Blog, Hyperledger Iroha


The time has come again for another Hyperledger project to begin their version 1.0 release process. Hyperledger Iroha is getting close to a 1.0 release and as part of that, Hyperledger hired an outside security auditing firm to review the code and audit it for security vulnerabilities. Nettitude conducted a review of the code this past fall and reported their findings to the Hyperledger security team and the Iroha developers.

The Iroha audit found four security issues, including one that was critical enough to require us to issue our first Common Vulnerabilities and Exposures (CVE) notice. All four issues were tracked using our JIRA and resolved shortly after the audit concluded.

I want to highlight the details of two of the security issues that the audit discovered because they show how easy it is to make bad assumptions about cryptography that result in a critical failure. Crypto code is always difficult to get right and, as you will see, knowing good coding practices isn’t always enough. A developer must also be aware of algorithm and implementation details and the guarantees offered by a cryptographic primitive.

Blockchain Review

Before digging into the error, let us review the way things are supposed to work in a permissioned blockchain network. Figure 1 shows the normal process of transaction proposal and verification. In the diagram, Node 1 proposes the transaction by signing it and forwarding it to Node 2. Node 2 verifies the validity of the transaction as well as the validity of Node 1’s digital signature endorsement. Node 2 then endorses the transaction and forwards it to Node 3. Node 3 does the same checks as Node 2 except that it is also careful to ensure that the endorsements from Node 1 and Node 2 are both valid and unique. If everything passes the checks, Node 3 endorses the transaction and forwards it to Node 4. Node 4 now repeats the checks of Node 2 and Node 3 and sees that the transaction has enough valid and unique endorsements to be accepted into the next block of the blockchain. Node 4 transmits the fully endorsed and accepted transaction to all other nodes in preparation for the block construction and consensus steps. It is important to point out that not only must each digital signature be valid, but a transaction must also have enough unique endorsements before it will be accepted.

Figure 1—How a transaction is endorsed and validated.

Signature Schemes

Hyperledger Iroha uses the Twisted Edwards curve-based elliptic curve digital signature scheme more commonly known as Ed25519 or EdDSA. Unlike almost every other elliptic curve digital signature scheme, Ed25519 doesn’t take random data as one of its inputs. Most digital signature schemes generate a random number used only once—also known as a nonce (Number used ONCE)1—when calculating a digital signature of a message. The reason is that a digital signature is just a message digest encrypted using a public key encryption algorithm. Public key encryption algorithms are trivial to break if there is no nonce or a nonce gets reused, with the same secret key, to encrypt multiple messages.2 This is called a “chosen plaintext attack”.3 Figure 2 shows how a random nonce is used when encrypting the message digest to create the digital signature. By including a nonce, repeated use of the secret key over different messages does not compromise the encryption. Digital signatures created using this method are different even when the same secret key and message are used.

Figure 2—Digital signature calculation with random nonce.

The Ed25519 signature scheme used by Iroha is different in that it generates the nonce by processing the inputs to the signing algorithm, and thus repeated signatures of the same data with the same key result in the same encrypted data.4 This doesn’t compromise the key because the nonce is still different for different inputs. Figure 3 illustrates how the nonce for an Ed25519 digital signature is calculated from the input message and is therefore deterministic rather than randomly generated. Digital signatures created using this method are the same when the same secret key and message are given.

Figure 3—Digital signature calculated with deterministic nonce.
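The difference between the two nonce strategies can be sketched in Python. This is a simplified illustration only: real Ed25519 hashes a secret prefix derived from the key and maps the result onto the curve, not a raw SHA-512 of key and message as shown here.

```python
import hashlib
import os

def random_nonce(secret: bytes, message: bytes) -> bytes:
    # Most signature schemes draw a fresh random nonce for every signature.
    return os.urandom(32)

def deterministic_nonce(secret: bytes, message: bytes) -> bytes:
    # Ed25519 instead derives the nonce by hashing secret material together
    # with the message, so the same (key, message) pair always yields the
    # same nonce, and therefore the same signature bytes.
    return hashlib.sha512(secret + message).digest()

secret, message = b"secret-key-material", b"transfer 10 coins"

# Deterministic: repeating the derivation on the same inputs is identical.
assert deterministic_nonce(secret, message) == deterministic_nonce(secret, message)

# Random: two nonces for the same inputs differ (with overwhelming probability).
assert random_nonce(secret, message) != random_nonce(secret, message)
```

This determinism is exactly the property the Iroha signature-uniqueness check silently relied on.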

The Bug

The flaw in Iroha was that the developers wrote the signature checking code to assume that signing the same data with the same key would always result in the same encrypted data. When determining if a transaction has enough different signatures to be valid, the code was comparing the public key bytes as well as the digital signature bytes when testing to see if two signatures were different. Figure 4 shows how the public key bytes and the digital signature bytes were combined when checking to see if two endorsements were different.

Figure 4—Flawed endorsement check that includes digital signature bytes.
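As a hedged sketch (illustrative names, not Iroha’s actual implementation), the flawed uniqueness test behaves like this: because the identity of an endorsement includes the signature bytes, a single key signing with a randomized-nonce Ed25519 variant produces multiple “unique” endorsements.

```python
def count_unique_endorsements_flawed(endorsements):
    # BUG: treats (public_key + signature_bytes) as the identity of an
    # endorsement, so one key can endorse twice whenever the signature
    # bytes differ, as they do with a randomized-nonce Ed25519 library.
    seen = {pubkey + sig for pubkey, sig in endorsements}
    return len(seen)

# A malicious node signs the same transaction twice with one key; its
# modified library uses random nonces, so the signature bytes differ.
endorsements = [
    (b"malicious-key", b"signature-variant-1"),
    (b"malicious-key", b"signature-variant-2"),
    (b"honest-key", b"signature-a"),
]

# The flawed check reports three "unique" endorsements even though
# only two distinct keys signed.
print(count_unique_endorsements_flawed(endorsements))  # -> 3
```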

The auditors at Nettitude created a modified version of the Ed25519 signature library so that it instead used random nonces, thus creating different encrypted data for the same secret key and message data. Figure 5 shows how the comparison of endorsements fails when random nonces are used. The resulting endorsements are not the same even though the message and secret key used to sign the message are the same.

Figure 5—Random nonces produce different signatures from the same inputs.

The result is that other nodes in the Iroha network—nodes running unmodified Ed25519 libraries—correctly validate the signatures because the public key correctly decrypts the digital signatures but the code for testing the uniqueness of the signatures is fooled. Each validating node sees different signatures for the same data and the same secret key and assumes they are unique endorsements and that the transaction is properly endorsed. Figure 6 shows how the Nettitude engineers were able to fully bypass this check with their single malicious node. It resulted in a bypass of the Byzantine guarantees of the system.

Figure 6—A malicious node bypassing the Byzantine checks.

The Fix

The correction for this security bug is to change the transaction and block signature validation code to first check that all signatures are valid and then check only the public keys for uniqueness when determining whether there are enough valid and unique signatures on a transaction or block. Figure 7 shows how the scenario in Figure 6 plays out with the fixed code. Again, a malicious node with a modified Ed25519 implementation signs a transaction multiple times with the same key. The signature bytes are unique, but the keys are not. When the other nodes in the network check the transaction, they see three valid signatures but the keys are not different. Each node determines that there is only one unique and valid signature and rejects the transaction.

Figure 7—A malicious node unable to bypass the Byzantine checks.
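The corrected logic can be sketched the same way (again with illustrative names, not Iroha’s actual code): validate every signature first, then count uniqueness by public key alone, so one key yields at most one endorsement.

```python
def count_unique_endorsements_fixed(endorsements, verify):
    # Validate each signature, then count distinct public keys only:
    # one key, one endorsement, regardless of how the signature bytes vary.
    valid_keys = {pubkey for pubkey, sig in endorsements if verify(pubkey, sig)}
    return len(valid_keys)

# Stand-in verifier; a real implementation would check each Ed25519
# signature against the transaction payload.
always_valid = lambda pubkey, sig: True

endorsements = [
    (b"malicious-key", b"signature-variant-1"),
    (b"malicious-key", b"signature-variant-2"),
    (b"honest-key", b"signature-a"),
]

# Only two distinct keys signed, so the malicious duplicate is not counted.
print(count_unique_endorsements_fixed(endorsements, always_valid))  # -> 2
```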

Two bugs were filed to address this flaw, one for transaction validation and one for block validation. The first bug is titled “multi-signature transactions can potentially be authorised by single user”.5 The second bug is titled “vote early, vote often”.6 Both flaws were fixed shortly after Nettitude delivered the report, and the fixes are in the current version of Iroha.


It is very important for developers to understand the subtleties of cryptography and applying it to engineering problems. Careful study and consideration of the guarantees and assumptions is required as well as multiple reviews from other engineers with similar knowledge and attention to detail. The “many eyeballs” theory of open source software development does work. This audit proved it.

The management and technical reports from the audit can be found on the Hyperledger wiki.



Hyperledger Sawtooth Blockchain Performance Metrics with Grafana

By | Blog, Hyperledger Sawtooth

This blog post shows how to set up Grafana to display Sawtooth and system statistics.


Grafana is a useful tool for displaying Sawtooth performance statistics. Hyperledger Sawtooth optionally generates performance metrics from the validator and REST API components for each node. Sawtooth sends the metrics to InfluxDB, a time series database that is optimized for fast access to time series data. Telegraf, a metrics reporting agent, gathers supplemental system information from the Linux kernel and also sends it to InfluxDB. Finally, Grafana reads from InfluxDB and displays an assortment of statistics on several graphical charts in your web browser. Figure 1 illustrates the flow of data.

Figure 1. Metrics gathering data flow.


Grafana can display many validator, REST API, and system statistics. The following lists all supported metrics:

Sawtooth Validator Metrics

  • Block number
  • Committed transactions
  • Blocks published
  • Blocks considered
  • Chain head moved to fork
  • Pending batches: number of batches waiting to be processed
  • Batches rejected (back-pressure): number of batches rejected due to back-pressure tests
  • Transaction execution rate, in batches per second
  • Transactions in process
  • Transaction processing duration (99th percentile), in milliseconds
  • Valid transaction response rate
  • Invalid transaction response rate
  • Internal error response rate
  • Message round trip times, by message type (95th percentile), in seconds
  • Messages sent, per second, by message type
  • Messages received, per second, by message type

Sawtooth REST API Metrics

  • REST API validator response time (75th percentile), in seconds
  • REST API batch submission rate, in batches per second

System Metrics

  • User and system host CPU usage
  • Disk I/O, in kilobytes per second
  • I/O wait percentage
  • RAM usage, in megabytes
  • Context switches
  • Read and write I/O ops
  • Thread pool task run time and task queue times
  • Executing thread pool workers in use
  • Dispatcher server thread queue size

The screenshot in Figure 2 gives you an idea of the metrics that Grafana can show.

Figure 2. Example Grafana graph display.

Setting Up InfluxDB and Grafana

By default, Hyperledger Sawtooth does not gather performance metrics. The rest of this post explains the steps for enabling this feature. The overall order of steps is listed below with in-depth explanations of each step following.

  1. Ensure the prerequisites: Sawtooth blockchain software running on Ubuntu and Docker CE installed
  2. Install and configure InfluxDB to store performance metrics
  3. Build and install Grafana
  4. Configure Grafana to display the performance metrics
  5. Configure Sawtooth to generate performance metrics
  6. Install and configure Telegraf to collect metrics

1. Prerequisites: Sawtooth and Docker

Install Hyperledger Sawtooth software and Docker containers. I recommend Sawtooth 1.1 on Ubuntu 16 LTS (Xenial). Sawtooth installation instructions are in the Hyperledger Sawtooth documentation.

The Sawtooth blockchain software must be up and running before you proceed.

Docker CE installation instructions are in the Docker documentation.

ProTip: These instructions assume a Sawtooth node is running directly on Ubuntu, not in Docker containers. To use Grafana with Sawtooth on Docker containers, additional steps (not described here) are required to allow the Sawtooth validator and REST API containers to communicate with the InfluxDB daemon at TCP port 8086.

2. Installing and Configuring the InfluxDB Container

InfluxDB stores the Sawtooth metrics used in the analysis and graphing. Listing 1 shows the commands to download the InfluxDB Docker container, create a database directory, start the Docker container, and verify that it is running.

sudo docker pull influxdb
sudo mkdir -p /var/lib/influx-data
sudo docker run -d -p 8086:8086 \
    -v /var/lib/influx-data:/var/lib/influxdb \
    -e INFLUXDB_DB=metrics \
    -e INFLUXDB_ADMIN_USER="admin" \
    -e INFLUXDB_ADMIN_PASSWORD="pwadmin" \
    -e INFLUXDB_USER="lrdata" \
    -e INFLUXDB_USER_PASSWORD="pwlrdata" \
    --name sawtooth-stats-influxdb influxdb
sudo docker ps --filter name=sawtooth-stats-influxdb

Listing 1. Commands to set up InfluxDB.

ProTip: You can change the sample passwords here, pwadmin and pwlrdata, to anything you like. If you do, you must use your passwords in all the steps below. Avoid or escape special characters in your password such as “,@!$” or you will not be able to connect to InfluxDB.

3. Building and Installing the Grafana Container

Grafana displays the Sawtooth metrics in a web browser. Listing 2 shows the commands to download the Sawtooth repository, build the Grafana Docker container, start the Grafana container, and verify that it is running.

git clone https://github.com/hyperledger/sawtooth-core
cd sawtooth-core/docker
sudo docker build . -f grafana/sawtooth-stats-grafana \
    -t sawtooth-stats-grafana
sudo docker run -d -p 3000:3000 --name sawtooth-stats-grafana \
    sawtooth-stats-grafana
sudo docker ps --filter name=sawtooth-stats-grafana

Listing 2. Commands to set up Grafana.

Building the Grafana Docker container takes several steps and downloads several packages into the container. It ends with “successfully built” and “successfully tagged” messages.

4. Configuring Grafana

Configure Grafana from your web browser. Navigate to http://localhost:3000/ (replace “localhost” with the hostname or IP address of the system where you started the Grafana container in the previous step).

  1. Login as user “admin”, password “admin”
  2. (Optional step) If you wish, change the Grafana webpage “admin” password by clicking the orange spiral icon on the top left, selecting “admin” in the pull-down menu, clicking “Profile” and “Change Password”, entering the old password (admin) and your new password, and finally clicking “Change Password”. This Grafana password is not related to the InfluxDB passwords used in a previous step.
  3. Click the orange spiral icon again on the top left, then click on “Data Sources” in the drop-down menu.
  4. Click on the “metrics” data source.
  5. Under “URL”, change “influxdb” in the URL to the hostname or IP address where you are running InfluxDB. (Use the same hostname that you used for the Grafana web page, since the Grafana and InfluxDB containers run on the same host.) This is where Grafana accesses the InfluxDB database.
  6. Under “Access”, change “proxy” to “direct” (unless you are going through a proxy to access the remote host running InfluxDB).
  7. Under “InfluxDB Details”, set “User” to “lrdata” and “Password” to “pwlrdata”.
  8. Click “Save & Test” to save the configuration in the Grafana container
  9. If the test succeeds, the green messages “Data source updated” and “Data source is working” will appear. Figure 3 illustrates the green messages. Otherwise, you get a red error message that you must fix before proceeding. An error at this point is usually a network problem, such as a firewall or proxy configuration or a wrong hostname or IP address.

Figure 3. Test success messages in Grafana.


For the older Sawtooth 1.0 release, follow these additional steps to add the Sawtooth 1.0 dashboard to Grafana (skip these steps for Sawtooth 1.1):

  1. In your terminal, copy the file sawtooth_performance.json from the sawtooth-core repository you cloned earlier to your current directory by issuing the commands in Listing 3.
$ cp \
sawtooth-core/docker/grafana/dashboards/sawtooth_performance.json .

Or download this file:

$ wget \

Listing 3. Commands for getting the Sawtooth 1.0 dashboard file.

  2. In your web browser, click the orange spiral icon again on the top left, select “Dashboards” in the drop-down menu, then click on “Import” and “Upload .json file”.
  3. Navigate to the directory where you saved sawtooth_performance.json.
  4. Select “metrics” in the drop-down menu and click on “Import”.

5. Configuring Sawtooth

The Sawtooth validator and REST API components each report their own set of metrics, so you must configure the login credentials and destination for InfluxDB. In your terminal window, run the shell commands in Listing 4 to create or update the Sawtooth configuration files validator.toml and rest_api.toml:

for i in /etc/sawtooth/validator.toml /etc/sawtooth/rest_api.toml; do
    [[ -f $i ]] || sudo -u sawtooth cp $i.example $i
    echo 'opentsdb_url = "http://localhost:8086"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_db = "metrics"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_username = "lrdata"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_password = "pwlrdata"' \
        | sudo -u sawtooth tee -a $i
done

Listing 4. Commands to create or update the Sawtooth configuration.
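If the loop succeeds, each of the two files should end with four lines like the following (with whatever credentials you chose when starting the InfluxDB container):

```toml
opentsdb_url = "http://localhost:8086"
opentsdb_db = "metrics"
opentsdb_username = "lrdata"
opentsdb_password = "pwlrdata"
```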

After verifying that the files validator.toml and rest_api.toml each have the four new opentsdb_* configuration lines, restart the sawtooth-validator and sawtooth-rest-api components using the commands in Listing 5.

sudo -v
sudo -u sawtooth pkill sawtooth-rest-api
sudo -u sawtooth pkill sawtooth-validator
sudo -u sawtooth sawtooth-validator -vvv &
sudo -u sawtooth sawtooth-rest-api -vv &

Listing 5. Manual restart commands.

Add any command line parameters you may use to the above example.

If you use systemctl, Listing 6 shows the commands needed to restart:

systemctl restart sawtooth-rest-api
systemctl restart sawtooth-validator

Listing 6. Systemctl restart commands.

ProTip: The InfluxDB daemon, influxd, listens on TCP port 8086, so this port must be accessible over the local network from the validator and REST API components. By default, influxd only listens on localhost.

6. Installing and Configuring Telegraf

Telegraf, InfluxDB’s metrics reporting agent, gathers metrics information from the Linux kernel to supplement the metrics information sent from Sawtooth. Telegraf needs the login credentials and destination for InfluxDB. Install Telegraf using the commands in Listing 7.

curl -sL https://repos.influxdata.com/influxdb.key \
   | sudo apt-key add -
sudo apt-add-repository \
   "deb https://repos.influxdata.com/ubuntu xenial stable"
sudo apt-get update
sudo apt-get install telegraf

Listing 7. Commands for installing Telegraf.

The commands in Listing 8 set up the Telegraf configuration file correctly.

echo '[[outputs.influxdb]]' \
    | sudo tee /etc/telegraf/telegraf.d/sawtooth.conf
echo 'urls = ["http://localhost:8086"]' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf
echo 'database = "metrics"' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf
echo 'username = "lrdata"' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf
echo 'password = "pwlrdata"' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf

Listing 8. Create the Telegraf configuration file.

Finally, restart Telegraf with the command in Listing 9.

sudo systemctl restart telegraf

Listing 9. Restart Telegraf.

Try it out!

After completing all the previous steps, Sawtooth and system statistics should appear in the Grafana dashboard webpage. To see them, click the orange spiral icon on the top left, click “Dashboards” in the drop-down menu, click “Home” next to the spiral icon, and then click “dashboard”. This is the dashboard for Grafana.

Generate some transactions so you can see activity on the Grafana dashboard. For example, run the intkey workload generator by issuing the Listing 10 commands in a terminal window to create test transactions at the rate of 1 batch per second.

intkey-tp-python -v &
intkey workload --rate 1 -d 5

Listing 10. Start the workload generator to get some statistics.

I recommend changing the time interval in the dashboard from 24 hours to something like 30 minutes so you can see new statistics. Do that by clicking on the clock icon in the upper right of the dashboard. Then click on the refresh icon, ♻, to update the page. Individual graphs can be enlarged or shrunk by moving the dotted triangle tab in the lower right of each graph.

Troubleshooting Tips

    • If the Grafana webpage is not accessible, the Grafana container is not running or is not accessible over the network. To verify that it is running and start it:


$ docker ps --filter name=sawtooth-stats-grafana
$ docker start sawtooth-stats-grafana


    • If the container is running, the docker host may not be accessible on the network
    • If no system statistics appear at the bottom of the dashboard, either Telegraf is not configured or the InfluxDB container is not running or is not accessible over the network. To verify that InfluxDB is running and start it:


$ docker ps --filter name=sawtooth-stats-influxdb
$ docker start sawtooth-stats-influxdb


    • Check that the InfluxDB server, influxd, is reachable from the local network. Use the InfluxDB client (package influxdb-client) or curl or both to test. The InfluxDB client command should show a “Connected to” message and the curl command should show a  “204 No Content” message.


$ influx -username lrdata -password pwlrdata -port 8086 \
   -host localhost
$ curl -s -I localhost:8086/ping


  • Check that the interval range (shown next to the clock on the upper right of the dashboard) is low enough (such as 1 hour).
  • Check that the validator and REST API .toml files and Telegraf sawtooth.conf files have the opentsdb_* configuration lines. Make sure that the passwords and URLs are correct and that they match each other and the passwords set when you started the InfluxDB container.
  • Click the refresh icon, ♻, on the upper right of the dashboard.

Further Information

Hyperledger Fabric in Action: Conflict-proofing tantalum mining in Rwanda

By | Blog, Hyperledger Fabric

The real-world impact of blockchain is growing quickly with the upswing of production systems bringing new approaches to complex problems. One such case: a supply chain solution powered by Hyperledger blockchain technology is providing critical traceability to tantalum mining in Rwanda. This mineral, widely used in the manufacturing of electronics and medical and dental devices and implants, is plentiful and a key to economic development in Rwanda. However, mining practices in nearby regions are muddying the supply chain, making some tantalum a potential “conflict mineral.” To protect against conflict concerns and practices and thus ensure investment and stability in Rwanda-sourced tantalum, the Rwandan Mining, Petroleum and Gas Board turned to a blockchain system to increase transparency in the mining supply chain.

Circulor, a U.K.-based startup working with mining company Power Resources Group (PRG), piloted a system built on Hyperledger Fabric that creates an immutable record of custody to trace Rwandan tantalum from the mine to the manufacturer.

We sat down with Circulor to get the details on rolling out this Hyperledger Fabric network and plans for future development. This case study is a great roadmap for others looking to put Hyperledger into action. You can read it here.

If you’re curious about other production use cases with Hyperledger technology, be sure to check out this list of six intriguing initiatives across a wide range of industries, including food supply, fine art, insurance, aviation and accounting. You can also take a look at our other case studies here.

Hyperledger Global Forum: Takeaways from a local blockchain professional

By | Blog, Hyperledger Fabric

I’m very curious and I love to meet people from all over the world. I’m fascinated by new technologies, learning what can be done, seeing what has been done and realizing the potential.

Meeting like-minded, open people is the favorite way my team at 4eyes and I satisfy our curiosity. Learning from each other, sharing experiences and discussing problems and questions is the best way to achieve this.

Tangible results are sometimes difficult to see in software frameworks, and perhaps more so within the DLT/Blockchain space. However, they are very important to understanding the technology and its potential. So looking at projects in various stages from prototype to production is immensely helpful for us and, eventually, for our customers and partners.

With that in mind, we were excited to hear that the global Hyperledger community was getting together in Basel, about 15 minutes from our offices! While I already got an in-person impression of the Hyperledger Community at the Hackfest in Amsterdam and in my activities within the Special Interest Groups for the public sector, it was wonderful to have the whole Hyperledger Community as a guest in my hometown.

I’ve been to dozens of blockchain-related events this year with the Hyperledger Global Forum serving as the grand finale. In my opinion, it was by far the best experience as Hyperledger is the most inclusive, down-to-earth and also self-critical blockchain community. I remember Brian Behlendorf reminding people in a working group call of the importance of honest and transparent communication about Hyperledger as it’s crucial for the credibility and future development of Hyperledger. This spirit permeated the whole conference. While we know that Hyperledger has great frameworks and tools for a broad variety of real-life applications, as open-minded professionals we all realize that there is no one-size-fits-all holy grail kind of solution. This mindset leads to very constructive discussions about the different ways of solving a task.

To satisfy my curiosity, I attended a variety of workshops, especially for those Hyperledger frameworks that were still new to me. So, for example, I experienced Indy hands-on in John Jordan’s workshop and learned to bring natural language legal contracts onto the blockchain using Accord from Dan Selman. As sharing is an essential part of the learning experience, Waleed El Sayed and I talked about our experience developing blockchain-based projects using Fabric and Composer, which led to very interesting discussions with the audience and also during the rest of the conference. Apart from the inspiring keynotes, ranging from consensus and the application of blockchain in various industries to the philosophy of trust, Global Forum was an opportunity to talk to the broad variety of companies showcasing their products and services.

In terms of my developer skills, the time I spent talking to Caroline Church, IBM’s Lead for Blockchain Tooling, was probably the most impactful. At Accenture’s Hack-For-Good hackathon towards the end of the Forum, Caroline showed us the new way of coding chaincode in Fabric 1.4 and, even more importantly, the Visual Studio Code extension she created that allows for easy testing and debugging of chaincode. This is a huge step forward in terms of developing chaincode in Fabric. Caroline’s tools will immediately help my team and me and will increase 4eyes’ efficiency dramatically.

From my point of view as a consultant, I learned a lot from the presented use cases, such as the very interesting talk from Marco Alarcon and Andrés Falcone about the Short Sales Lending solution they’ve created at the Santiago Stock Exchange. I was also very impressed by David Berger’s very pragmatic solution to facilitating proof of existence for the legal industry.

As I love to spend time with people from all over the world, I skipped the odd presentation in favor of in-depth conversations, ranging from the technical to the conceptual during the forum and from fun to philosophy during the delegates party at the fantastic Pantheon. This way I met amazing people from all over the world: Japan, France, USA, South Korea, Chile, Saudi Arabia, Canada, Switzerland, Russia, China and many more.

My experiences at the Hyperledger Global Forum helped me to assess the possibilities and the maturity of the framework and tools and inspired many new ideas, which in turn will help my team and me to provide better guidance and consultancy for clients and partners.

We are looking forward to the next opportunities to learn and share at upcoming Hyperledger events such as the Hackfest.

The whole 4eyes team and I would love to welcome the Hyperledger family back in Basel in 2019!


Announcing Hyperledger Grid, a new project to help build and deliver supply chain solutions!

By | Blog, Hyperledger Grid

Supply chain is commonly cited as one of the most promising distributed ledger use cases. Initiatives focused on building supply chain solutions will benefit from shared, reusable tools. Hyperledger Grid seeks to assemble these shared capabilities in order to accelerate the development of ledger-based solutions for all types of cross-industry supply chain scenarios.

Grid intends to:

  • Provide reference implementations of supply chain-centric data types, data models, and smart contract based business logic – all anchored on existing, open standards and industry best practices.
  • Showcase in authentic and practical ways how to combine components from the Hyperledger stack into a single, effective business solution.

What is Grid?

Hyperledger Grid is a framework.  It’s not a blockchain and it’s not an application.  Grid is an ecosystem of technologies, frameworks, and libraries that work together, letting application developers make the choice as to which components are most appropriate for their industry or market model.

Hyperledger platforms are flexible by design, but, as such, do not speak the language of any business or have opinions on how basic data types should be stored. In reality, enterprise business systems and market models are actually quite mature, as organizations have been transacting electronically based on common standards for decades.  Grid will provide a place for implementations of these standards and norms.

The initial linkage between Grid and other elements in the stack will be via Sabre, a WebAssembly (WASM) Smart Contract engine.  By adopting this approach, Grid asserts the strategic importance of WASM and provides a clear interface for integration with platforms inside and outside of Hyperledger. It is our hope and expectation that WASM and Sabre become a de facto Hyperledger standard.

What’s Next?

Initially, we plan to anchor much of the domain model work on GS1/GTIN standards, but many other implementations could be contributed and published, including models such as those being created by the Open Data Initiative, or more nuanced industry models like Identification of Medicinal Products (IDMP).

Examples of what Grid has on its roadmap:

  • Product
  • Identity
  • Location
  • Certification

Likewise, there are some common types of transactions that occur in supply chain scenarios.  Grid will also provide a reference implementation of scenarios, such as:

  • Asset transformation / refinement
  • Asset exchange
  • Asset tracking

There is also work planned around sample applications that demonstrate how to use these models.  And much, much more!

Who’s Involved?

Cargill, Intel and Bitwise IO have been the primary contributors to this initial initiative, but endorsements and/or contributions are in flight from several other organizations.  We are excited by the enthusiastic response from like-minded members of the community and look forward to collaborating further.

Want to Learn More?

If you’re interested in learning more about Grid, consider visiting or #grid on Hyperledger chat at

We welcome interest from all groups and organizations, including enterprises and standards organizations.  We are looking forward to hearing from you!

Safety, Performance and Innovation: Rust in Hyperledger Sawtooth

By | Blog, Hyperledger Sawtooth

Hello, fellow Rustaceans and those curious about Rust. The Hyperledger Sawtooth team is using Rust for new development, so these are exciting times for both Rust and Hyperledger Sawtooth. Rust is a new language that is quickly growing in popularity. The Hyperledger Sawtooth community is using Rust to build components to give application developers and administrators more control, more flexibility, and greater security for their blockchain networks. This blog post will give an overview of some of the new components being built in Rust.

Hyperledger Sawtooth was originally written in Python, which was a good choice for initial research and design. In 2018, the Sawtooth team chose the Rust language for new development. A key benefit is that Rust supports concurrency while also emphasizing memory safety. Several new core components, transaction processors, and consensus engines have already been written in Rust.

Compared to Python, Rust’s most noticeable features are its expressive type system and its compile-time checks. Rust’s ownership and borrowing rules guarantee at compile time that an object has either a single mutable reference or any number of immutable references, but never both at once. These rules force the developer to account for all possible error and edge cases, making our interfaces more robust as we design them.
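A minimal, self-contained example of the borrowing rules described above:

```rust
// Demonstrates Rust's borrowing rules: any number of immutable
// references, or exactly one mutable reference, but never both at once.
fn main() {
    let mut batches = vec!["batch-1", "batch-2"];

    // Any number of shared (immutable) borrows may coexist.
    let first = &batches[0];
    let second = &batches[1];
    println!("{} {}", first, second);

    // Once the shared borrows are no longer used, a single mutable
    // borrow is allowed.
    let pending = &mut batches;
    pending.push("batch-3");

    // This would NOT compile while `pending` is still live:
    // println!("{}", first); // error[E0502]: cannot borrow `batches`

    assert_eq!(batches.len(), 3);
}
```

The compiler rejects the commented-out line at build time, which is exactly the kind of error a Python program would only surface at runtime, if at all.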

The validator’s block validation and publishing components are a good example of our recent interface changes. Before release 1.1, these components were heavily tied to PoET, the original consensus in Hyperledger Sawtooth. In addition, they were largely synchronous, where committing a block started the process of building a new block to publish. As we implemented the consensus engine interface, we took the opportunity to rewrite these components in Rust, which helped us to separate them more cleanly. Now there are three separate asynchronous tasks—block validation, block commit, and block publishing—that share a small amount of information. For example, the block publishing component is informed when batches are committed so that it can take them out of a pending queue, but none of the tasks starts either of the other tasks. For more information, see the block validation and block publishing components in the sawtooth-core repository.
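The task separation described above can be sketched with message passing between independent tasks. This is a simplified illustration using standard-library threads and channels, not the actual validator code:

```rust
// Simplified illustration of decoupled tasks sharing a small amount of
// information over a channel; this is not the actual validator code.
use std::sync::mpsc;
use std::thread;

/// Message sent from the commit task to the publishing task.
struct BatchCommitted {
    batch_id: String,
}

fn main() {
    let (tx, rx) = mpsc::channel::<BatchCommitted>();

    // The publishing task owns a pending queue and removes batches as
    // it learns they were committed; it never starts the other tasks.
    let publisher = thread::spawn(move || {
        let mut pending = vec!["batch-1".to_string(), "batch-2".to_string()];
        for msg in rx {
            pending.retain(|b| *b != msg.batch_id);
        }
        pending
    });

    // The commit task only notifies; it does not drive publishing.
    tx.send(BatchCommitted { batch_id: "batch-1".into() }).unwrap();
    drop(tx); // closing the channel lets the publisher task finish

    let remaining = publisher.join().unwrap();
    assert_eq!(remaining, vec!["batch-2".to_string()]);
    println!("pending after commit: {:?}", remaining);
}
```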

This clean separation of tasks allows the new consensus interface to function correctly and makes it easier to develop new consensus engines. The Sawtooth team has already written two new engines in Rust: Sawtooth PBFT and Sawtooth Raft (which uses the PingCAP raft library, raft-rs). We are proud of the work done on these consensus engines and the flexibility they provide to Sawtooth community members who are building blockchain applications.
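In broad strokes, a consensus engine built against such an interface reacts to updates from the validator and answers with actions. The types below are hypothetical stand-ins, not the real Sawtooth consensus API:

```rust
// Hypothetical sketch of a consensus engine's event handling; the real
// update and action types are defined by Sawtooth's consensus interface.
#[derive(Debug, PartialEq)]
enum Update {
    BlockNew(u64),
    BlockValid(u64),
}

#[derive(Debug, PartialEq)]
enum Action {
    CheckBlock(u64),
    CommitBlock(u64),
}

/// A trivially simple "engine": ask the validator to check new blocks
/// and commit blocks once they are reported valid.
fn handle(update: Update) -> Action {
    match update {
        Update::BlockNew(id) => Action::CheckBlock(id),
        Update::BlockValid(id) => Action::CommitBlock(id),
    }
}

fn main() {
    assert_eq!(handle(Update::BlockNew(7)), Action::CheckBlock(7));
    assert_eq!(handle(Update::BlockValid(7)), Action::CommitBlock(7));
    println!("engine reacted to validator updates");
}
```

A real engine such as Sawtooth PBFT layers its voting protocol on top of this kind of update/action loop, but the decoupling from block validation and publishing is what makes that layering possible.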

Rust also excels in its support for compiling to WASM, which can be used for smart contracts. Hyperledger Sawtooth already had Seth, which supports running Ethereum Solidity smart contracts using a transaction processor, and now has Sawtooth Sabre, a transaction processor that runs WASM smart contracts compiled from Rust to the WASM target. Sawtooth Sabre includes an innovative feature: registries for namespaces and contracts. The namespace registry lets administrators control what information a contract can access. The contract registry lists versions of the contract, along with a SHA-512 hash of each version, giving application developers confidence that the correct contract is registered. Sabre is API-compatible with the Sawtooth Rust SDK, so developers can write a smart contract that runs either within Sabre or natively as a transaction processor, depending on the deployment methodology.
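The write-once, run-either-way idea boils down to a contract implementing a single handler interface while the surrounding runtime varies. Below is a hypothetical trait that mirrors this pattern; the actual trait and types are defined by the Sawtooth Rust SDK:

```rust
// Hypothetical handler trait mirroring the pattern described above; the
// actual trait and types are defined by the Sawtooth Rust SDK.
trait SmartContract {
    fn family_name(&self) -> &str;
    /// Apply a payload against state. The same implementation could be
    /// compiled natively (transaction processor) or to WASM (Sabre).
    fn apply(&self, payload: &[u8], state: &mut Vec<u8>) -> Result<(), String>;
}

struct CounterContract;

impl SmartContract for CounterContract {
    fn family_name(&self) -> &str {
        "counter"
    }

    fn apply(&self, payload: &[u8], state: &mut Vec<u8>) -> Result<(), String> {
        // Payload is a single byte: the amount to add to the counter.
        let delta = *payload.first().ok_or("empty payload")?;
        let current = state.first().copied().unwrap_or(0);
        *state = vec![current + delta];
        Ok(())
    }
}

fn main() {
    let contract = CounterContract;
    let mut state = vec![0u8];
    contract.apply(&[5], &mut state).unwrap();
    contract.apply(&[3], &mut state).unwrap();
    assert_eq!(state, vec![8]);
    println!("{} state: {:?}", contract.family_name(), state);
}
```

Because the contract body never cares which runtime is hosting it, the choice between Sabre and a native transaction processor becomes a deployment decision rather than a code change.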

Rust has also influenced how changes to Hyperledger Sawtooth are handled. Our new RFC process is modeled after Rust’s RFC process, which provides a community-oriented forum for proposing and designing large changes; Sawtooth’s RFCs live in the sawtooth-rfcs repository. The consensus API RFC is a good example: the guide-level explanation clearly lays out the purpose and reasoning behind the new component, and the reference-level explanation covers the technical details needed to guide implementation. The RFC process has been a good way to involve the larger community in driving the design and implementation of Sawtooth.

What’s next for Rust in Sawtooth? In 2019, the Sawtooth team is rewriting the remaining Sawtooth validator components in Rust, which means the networking and transaction processing components will be getting an overhaul. The networking components will be redesigned, while the transaction processing components will see minor internal changes behind a stable API. In both cases, expect an increase in performance and stability thanks to Rust.

Come join the Hyperledger Sawtooth community in 2019 by writing your own transaction processor, or even a consensus engine, in Rust. Get in touch on the #sawtooth channel on RocketChat.

To learn more about Rust in Hyperledger Sawtooth, check out our recent changes:


About the Author:

Boyd Johnson is a Software Engineer at Bitwise IO who has worked on many core components of Hyperledger Sawtooth, including transaction processing components in Python and block validation and block publishing components in Rust. While originally a committed Pythonista, he has oxidized into a Rustacean.