Hyperledger Sawtooth

Introduction to Sawtooth PBFT


As of release 1.1, Hyperledger Sawtooth supports dynamic consensus through its consensus API and SDKs. These tools, which were covered in a previous blog post, are the building blocks that make it easy to implement different consensus algorithms as consensus engines for Sawtooth. We chose to implement the Raft algorithm as our first consensus engine, which we describe in another blog post. While our Raft implementation is an excellent proof of concept, it is not Byzantine-fault-tolerant, which makes it unsuitable for consortium-style networks with adversarial trust characteristics.

To fill this gap, we chose the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm. We started work on the Sawtooth PBFT consensus engine in the summer of 2018 and continue to develop and improve on it as we work towards its first stable release. This blog post summarizes the PBFT algorithm and describes how it works in Sawtooth.

What Is PBFT?

PBFT dates back to a 1999 paper written by Miguel Castro and Barbara Liskov at MIT. It was the first Byzantine fault tolerant algorithm designed to work in practical, asynchronous environments. PBFT is thoughtfully defined, well established, and widely understood, which makes it an excellent choice for Hyperledger Sawtooth.

PBFT is similar to Raft in some general ways:

  • It is leader-based and non-forking (unlike lottery-style algorithms)
  • It does not support open-enrollment, but nodes can be added and removed by an administrator
  • It requires full peering (all nodes must be connected to all other nodes)

PBFT provides Byzantine fault tolerance, whereas Raft only supports crash fault tolerance. Byzantine fault tolerance means that liveness and safety are guaranteed even when some portion of the network is faulty or malicious. As long as a minimum percentage of nodes in the PBFT network are connected, working properly, and behaving honestly, the network will always make progress and will not allow any of the nodes to manipulate the network.

How Does PBFT Work?

The original PBFT paper has a detailed and rigorous explanation of the consensus algorithm. What follows is a summary of the algorithm’s key points in the context of Hyperledger Sawtooth. The original definition is broadly applicable to any kind of replicated system; by keeping this information blockchain-specific, we can more easily describe the functionality of the Sawtooth PBFT consensus engine.

Network Overview

A PBFT network consists of a series of nodes that are ordered from 0 to n-1, where n is the number of nodes in the network. As mentioned earlier, there is a maximum number of “bad” nodes that the PBFT network can tolerate. As long as this number of bad nodes—referred to as the constant f—is not exceeded, the network will work properly. For PBFT, the constant f is equal to (n - 1) / 3, rounded down: no more than roughly a third of the network can be “out of order” or dishonest at any given time for the algorithm to work. The values of n and f are very important; you’ll see them later as we discuss how the algorithm operates.

Figure 1 — n and f in the PBFT algorithm

As the network progresses, the nodes move through a series of “views”. A view is a period of time that a given node is the primary (leader) of the network. In simple terms, each node takes turns being the primary in a never-ending cycle, starting with the first node. For a four-node network, node 0 is the primary at view 0, node 1 is the primary at view 1, and so on. When the network gets to view 4, it will “wrap back around” so that node 0 is the primary again.

In more technical terms, the primary (p) for each view is determined based on the view number (v) and the ordering of the nodes. The formula for determining the primary for any view on a given network is p = v mod n. For instance, on a four-node network at view 7, the formula p = 7 mod 4 means that node 3 will be the primary (7 mod 4 = 3).
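To make the arithmetic concrete, here is a minimal Rust sketch of both formulas (Rust being the language Sawtooth PBFT is implemented in); the function names are ours, not the engine's:

```rust
/// Maximum number of faulty nodes a PBFT network of `n` nodes tolerates:
/// f = (n - 1) / 3, rounded down, since the network requires n >= 3f + 1.
fn max_faulty(n: u64) -> u64 {
    (n - 1) / 3 // integer division rounds down
}

/// Primary (leader) for view `v` in a network of `n` ordered nodes:
/// p = v mod n.
fn primary(v: u64, n: u64) -> u64 {
    v % n
}
```

For a four-node network, `max_faulty(4)` is 1, `primary(7, 4)` is 3 (matching the example above), and `primary(4, 4)` wraps back around to node 0.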

In addition to moving through a series of views, the network moves through a series of “sequence numbers.” In the context of a Sawtooth blockchain, a sequence number is equivalent to a block number; thus, saying that a node is on sequence number 10 is the same as saying that the node is performing consensus on block 10 in the chain.

Each node maintains a few key pieces of information as part of its state:

  • The list of nodes that belong to the network
  • Its current view number
  • Its current sequence number (the block it is working on)
  • The phase of the algorithm it is currently in (see “Normal-Case Operation”)
  • A log of the blocks it has received
  • A log of all valid messages it has received from the other nodes
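As an illustration, the state listed above could be modeled with a Rust struct along the following lines. This is a hypothetical sketch, not the actual Sawtooth PBFT state type:

```rust
use std::collections::HashMap;

/// The phases of normal-case operation (see "Normal-Case Operation").
#[derive(Debug, Clone, Copy, PartialEq)]
enum Phase {
    PrePreparing,
    Preparing,
    Committing,
}

/// Hypothetical sketch of the per-node state described above.
struct PbftState {
    /// IDs of the nodes that belong to the network.
    members: Vec<u64>,
    /// The node's current view number.
    view: u64,
    /// The node's current sequence number (the block it is working on).
    seq_num: u64,
    /// The phase of the algorithm the node is currently in.
    phase: Phase,
    /// Log of the blocks received, keyed by block number.
    block_log: HashMap<u64, Vec<u8>>,
    /// Log of all valid messages received from the other nodes.
    message_log: Vec<String>,
}
```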

Normal-Case Operation

Figure 2 — Messages sent during normal operation of PBFT (Node 3 is faulty)

To commit a block and make progress, the nodes in a PBFT network go through three phases:

  1. Pre-preparing
  2. Preparing
  3. Committing

Figure 2 shows these phases for a simple four-node network. In this example, node 0 is the primary and node 3 is a faulty node (so it does not send any messages). Because there are four nodes in the network (n = 4), the value of f for the network is (4 - 1) / 3 = 1 (rounded down). This means the example network can tolerate only one faulty node.


To kick things off, the primary for the current view will create a block and publish it to the network; each of the nodes will receive this block and perform some preliminary verification to make sure that the block is valid.

After the primary has published a block to the network, it broadcasts a pre-prepare message to all of the nodes. Pre-prepare messages contain four key pieces of information: the ID of the block the primary just published, the block’s number, the primary’s view number, and the primary’s ID. When a node receives a pre-prepare message from the primary, it will validate the message and add the message to its internal log. Message validation includes verifying the digital signature of the message, checking that the message’s view number matches the node’s current view number, and ensuring that the message is from the primary for the current view.

The pre-prepare message serves as a way for the primary node to publicly endorse a given block and for the network to agree about which block to perform consensus on for this sequence number. To ensure that only one block is considered at a time, nodes do not allow more than one pre-prepare message at a given view and sequence number.
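The validation rules above can be sketched as follows. The struct and function names are illustrative, and signature verification is assumed to have happened upstream:

```rust
/// The fields carried by a pre-prepare message (prepare and commit
/// messages carry the same fields). Hypothetical sketch.
#[derive(Clone, PartialEq)]
struct PbftMessage {
    block_id: Vec<u8>,
    block_num: u64,
    view: u64,
    signer_id: u64,
}

/// The validation steps described above for an incoming pre-prepare:
/// the message's view must match the node's current view, and the
/// sender must be the primary for that view (p = v mod n).
/// Digital-signature verification is elided here.
fn validate_pre_prepare(msg: &PbftMessage, our_view: u64, n: u64) -> bool {
    msg.view == our_view && msg.signer_id == msg.view % n
}
```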


Once a node has received a block and a pre-prepare message for the block, and both the block and message have been added to the node’s log, the node will move on to the preparing phase. In the preparing phase, the node will broadcast a prepare message to the rest of the network (including itself). Prepare messages, like pre-prepare messages, contain the ID and number of the block they are for, as well as the node’s view number and ID.

In order to move onto the next phase, the node must wait until it has received 2f + 1 prepare messages that have the same block ID, block number, and view number, and are from different nodes. By waiting for 2f + 1 matching prepare messages, the node can be sure that all properly functioning nodes (those that are non-faulty and non-malicious) are in agreement at this stage. Once the node has accepted the required 2f + 1 matching prepare messages and added them to its log, it is ready to move onto the committing phase.
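The 2f + 1 counting rule can be sketched like this; note that matching messages are counted per distinct sender, so duplicate messages from a single faulty node cannot inflate the count:

```rust
use std::collections::HashSet;

/// Check whether 2f + 1 matching prepare (or commit) messages from
/// distinct nodes have been collected. Each message is represented
/// here as a (signer_id, block_num, view) tuple; illustrative only.
fn have_quorum(msgs: &[(u64, u64, u64)], block_num: u64, view: u64, f: u64) -> bool {
    let distinct: HashSet<u64> = msgs
        .iter()
        // Only messages for the same block and view count...
        .filter(|&&(_, block, v)| block == block_num && v == view)
        // ...and each sender counts at most once.
        .map(|&(signer, _, _)| signer)
        .collect();
    distinct.len() as u64 >= 2 * f + 1
}
```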


When a node enters the committing phase, it broadcasts a commit message to the whole network (including itself). Like the other message types, commit messages contain the ID and number of the block they are for, along with the node’s view number and ID. As with the preparing phase, a node cannot complete the committing phase until it has received 2f + 1 matching commit messages from different nodes. Again, this guarantees that all non-faulty nodes in the network have agreed to commit this block, which means that the node can safely commit the block knowing that it will not need to be reverted. With the required 2f + 1 commit messages accepted and in its log, the node can safely commit the block.

Once the primary node has finished the committing phase and has committed the block, it will start the whole process over again by creating a block, publishing it, and broadcasting a pre-prepare message for it.

View Changing

In order to be Byzantine fault tolerant, a consensus algorithm must prevent nodes from improperly altering the network (to guarantee safety) or indefinitely halting progress (to ensure liveness). PBFT guarantees safety by requiring all non-faulty nodes to agree in order to move beyond the preparing and committing phases. To guarantee liveness, though, there must be a mechanism to determine if the leader is behaving improperly (such as producing invalid messages or simply not doing anything). PBFT provides the liveness guarantee with view changes.

Figure 3 — Messages sent for a view change in PBFT (Node 0 is the faulty primary, Node 1 is the new primary)

When a node has determined that the primary of view v is faulty (perhaps because the primary sent an invalid message or did not produce a valid block in time), it will broadcast a view change message for view v + 1 to the network. If the primary is indeed faulty, all non-faulty nodes will broadcast view change messages. When the primary for the new view (v + 1) receives 2f + 1 view change messages from different nodes, it will broadcast a new view message for view v + 1 to all the nodes. When the other nodes receive the new view message, they will switch to the new view, and the new primary will start publishing blocks and sending pre-prepare messages.

View changes guarantee that the network can move on to a new primary if the current one is faulty. This PBFT feature allows the network to continue to make progress and not be stalled by a bad primary node.
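A minimal sketch of the view-change bookkeeping, reusing the p = v mod n rule from earlier (the function names are illustrative):

```rust
/// After the primary of view `v` is deemed faulty, nodes request view
/// v + 1; the new primary follows from the same p = v mod n rule.
fn new_primary(current_view: u64, n: u64) -> u64 {
    (current_view + 1) % n
}

/// The new primary broadcasts a new view message once it has received
/// 2f + 1 view change messages from different nodes.
fn view_change_ready(view_change_votes: usize, f: usize) -> bool {
    view_change_votes >= 2 * f + 1
}
```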

Want to Learn More?

This blog post only scratches the surface of the PBFT consensus algorithm. Stay tuned to the Hyperledger blog for more information on PBFT, including a future post about our extensions and additional features for Sawtooth PBFT.

In the meantime, learn more about PBFT in the original PBFT paper, read the Sawtooth PBFT RFC, and check out the Sawtooth PBFT source code on GitHub.

About the Author

Logan Seeley is a Software Engineer at Bitwise IO. He has been involved in a variety of Hyperledger Sawtooth projects, including the development of the consensus API, Sawtooth Raft, and Sawtooth PBFT.

Assembling the Future of Smart Contracts with Sawtooth Sabre


Is WebAssembly the future of smart contracts? We think so. In this post, we will talk about Sawtooth Sabre, a WebAssembly smart contract engine for Hyperledger Sawtooth.

We first learned about WebAssembly a couple of years ago at Midwest JS, a JavaScript conference in Minneapolis. The lecture focused on using WebAssembly inside a web browser, which had nothing to do with blockchain or distributed ledgers. Nonetheless, as we left the conference, we were excitedly discussing the possibilities for the future of smart contracts. WebAssembly is a stack-based virtual machine, newly implemented in major browsers, that provides a sandboxed approach to fast code execution. While that sounds like a perfect way to run smart contracts, what really excited us was the potential for WebAssembly to grow a large ecosystem of libraries and tools because of its association with the browser community.

A smart contract is software that encapsulates the business logic for modifying a database by processing a transaction. In Hyperledger Sawtooth, this database is called “global state”. A smart contract engine is software that can execute a smart contract. By developing Sawtooth Sabre, we hope to leverage the WebAssembly ecosystem for the benefit of application developers writing the business logic for distributed ledger systems. We expect an ever-growing list of WebAssembly programming languages and development environments.

Unblocking Contract Deployment

The primary mechanism for smart contract development in Hyperledger Sawtooth is a transaction processor, which takes a transaction as input and updates global state. Sound like a smart contract? It is! If you implement business logic in the transaction processor, then you are creating a smart contract. If you instead implement support for smart contracts with a virtual machine (or interpreter) like WebAssembly, then you have created a smart contract engine.

If we can implement smart contracts as transaction processors, why bother with a WebAssembly model like Sabre? Well, it is really about deployment strategy. There are three deployment models for smart contracts:

  • Off-chain push: Smart contracts are deployed by pushing them to all nodes from a central authority on the network.
  • Off-chain pull: Smart contracts are deployed by network administrators pulling the code from a centralized location. Network administrators operate independently.
  • On-chain: Smart contracts are submitted to the network and inserted into state. Later, as transactions are submitted, the smart contracts are read from state and executed (generally in a sandboxed environment).

We won’t discuss off-chain push, other than to note that this strategy—having a centralized authority push code to everyone in the network—isn’t consistent with distributed ledgers and blockchain’s promise of distributing trust.

Off-chain pull is an opt-in strategy for updating software, and is widely used for Linux distribution updates. We use this model to distribute Sawtooth, including the transaction processors. By adding the Sawtooth apt repository on an Ubuntu system, you pull the software and install it via the apt-get command. Each software repository is centrally managed, though it is possible to have multiple software repositories configured and managed independently. This model has a practical problem—it requires administrators across organizations to coordinate software updates—which makes business logic updates more complicated than we would like.

On-chain smart contracts are installed on the blockchain with a transaction that stores the contract into an address in global state. The smart contract can later be executed with another transaction. The execution of the smart contract starts by loading it from global state and continues by executing the smart contract in a virtual machine (or interpreter). On-chain smart contracts have a big advantage over off-chain contracts: because the blockchain is immutable and the smart contract itself is now on the chain, we can guarantee that the same smart contract code was used to create the original block and during replay. Specifically, the transaction will always be executed using the same global state, including the stored smart contract. Because contracts are deployed by submitting transactions onto the network, we can define the process that controls the smart contract creation and deletion with other smart contracts! Yes, this is a little meta, but isn’t it great?

The on-chain approach seems superior, so why did we implement Hyperledger Sawtooth transaction processors with the off-chain model? Because our long-term vision—and a main focus for Sawtooth—has been smart contract engines that run on-chain smart contracts. Smart contract engines are more suitable for off-chain distribution, because they do not contain business logic, and are likely to be upgraded at the same time as the rest of the software.

Our initial transaction processor design reflected our goal for several types of smart contract engines. We later implemented one of them: Sawtooth Seth, a smart contract engine that runs Ethereum Virtual Machine (EVM) smart contracts. For us, Seth was a validation that our transaction processor design was flexible enough to implement radically different approaches for smart contracts. Like Ethereum, Seth uses on-chain smart contracts, so Seth is great if you want Ethereum characteristics and compatibility with tools such as Truffle. However, Seth is limited by Ethereum’s design and ecosystem, and does not expose all the features in our blockchain platform. We knew that we needed an additional approach for smart contracts in Hyperledger Sawtooth.

Crafting a Compatible Path Forward

Sawtooth Sabre, our WebAssembly smart contract engine, is our solution for native, on-chain smart contracts.  

The programming model for Sabre smart contracts is the same as that for transaction processors. A transaction processor has full control of data representation, both in global state and in transaction payloads (within certain determinism requirements). Hyperledger Sawtooth uses a global state Merkle-Radix tree, and the transaction processors handle addressing within the tree. A transaction processor can use different approaches for addressing, ranging from calculating an address with a simple field hash to organizing data within the tree in a complex way (to optimize for parallel execution, for example). Multiple transaction processors can access the same global state if they agree on the conventions used in that portion of state.
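As an illustration of the "simple field hash" convention mentioned above, the sketch below builds an address from a fixed namespace prefix plus a hash of a key field. Real Sawtooth transaction families derive both parts from SHA-512 digests; the standard-library DefaultHasher and the example prefix used here are stand-ins:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Sketch of simple field-hash addressing: a namespace prefix followed
/// by a hex-encoded hash of a key field. Real families use SHA-512;
/// DefaultHasher is a stand-in to keep this example dependency-free.
fn make_address(namespace_prefix: &str, key: &str) -> String {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    // Append the hex-encoded hash to the namespace prefix.
    format!("{}{:016x}", namespace_prefix, hasher.finish())
}
```

Because the address is a pure function of the key, any transaction processor (or Sabre contract) that agrees on this convention can locate the same entry in global state.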

Sawtooth Sabre smart contracts use this same method for data storage, which means they can access global state in the same way that transaction processors do. In fact, smart contracts and transaction processors can comfortably coexist on the same blockchain.

The other major feature is SDK compatibility. The Sawtooth Sabre SDK API is compatible with the Hyperledger Sawtooth transaction processor API, which means that smart contracts written in Rust can switch between the Sawtooth SDK and the Sabre SDK with a simple compile-time flag. (Currently, Rust is the only supported Sabre SDK.) The details of running within a WebAssembly interpreter are hidden from the smart contract author. Because Sabre smart contracts use the same API as transaction processors, porting a transaction processor to Sabre is relatively easy—just change a few import statements to refer to the Sabre SDK instead of the Hyperledger Sawtooth SDK.
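The compile-time switch can be illustrated as below. The stub modules stand in for the real sawtooth_sdk and sabre_sdk crates, and the "sabre" Cargo feature name is an assumption for this sketch:

```rust
// Stub modules standing in for the two SDK crates. In a real project
// these would be the external sawtooth_sdk and sabre_sdk crates.
mod sawtooth_sdk {
    pub const NAME: &str = "sawtooth";
}
mod sabre_sdk {
    pub const NAME: &str = "sabre";
}

// The compile-time switch: a Cargo feature flag (hypothetically named
// "sabre") selects which SDK the handler imports. The handler code
// itself is identical for both targets.
#[cfg(feature = "sabre")]
use sabre_sdk as sdk;
#[cfg(not(feature = "sabre"))]
use sawtooth_sdk as sdk;

fn active_sdk() -> &'static str {
    sdk::NAME
}
```

With this pattern, a default build produces a native transaction processor, while `cargo build --features sabre` (under the assumed feature name) would produce the WebAssembly-targeted contract from the same source.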

Now the choice between off-chain and on-chain smart contracts is a compile-time option. We use this approach regularly, because we can separate our deployment decisions from the decisions for smart contract development. Most of the transaction-processor-based smart contracts included in Hyperledger Sawtooth are now compatible with Sawtooth Sabre.

A Stately Approach to Permissioning

Hyperledger Sawtooth provides several ways to control which transaction processors can participate on a network. As explained above, transaction processors are deployed with the off-chain pull method. This method lets administrators verify the transaction processors before adding them to the network. Note that Hyperledger Sawtooth requires the same set of transaction processors for every node in the network, which prevents a single node from adding a malicious transaction processor. Additional controls can limit the accepted transactions (by setting the allowed transaction types) and specify each transaction processor’s read and write access to global state (by restricting namespaces).

These permissions, however, are not granular enough for Sawtooth Sabre, which is itself a transaction processor. Sabre is therefore subject to the same restrictions, which would then apply to all smart contracts. Using the same permission control has several problems:

  • Sabre smart contracts are transaction-based, which means that a smart contract is created by submitting a transaction. This removes the chance to review a contract before it is deployed.
  • Sabre transactions must be accepted by the network to run smart contracts, but we cannot limit which smart contracts these transactions are for, because this information is not available to the validator.
  • Sabre must be allowed to access the same areas of global state that the smart contracts can access.

An “uncontrolled” version of Sabre would make it too easy to deploy smart contracts, because a contract would not be restricted to the permissions that the publisher of the smart contract selects.

Our solution in Sawtooth Sabre is to assign owners for both contracts and namespaces (a subset of global state). A contract has a set of owners and a list of namespaces that it expects to read from and write to. Each namespace also has an owner. The namespace owner can choose which contracts have read and write access to that owner’s area of state. If a contract does not have the namespace permissions it needs, a transaction run against the smart contract will fail. So, while the namespace owner and contract owner are not necessarily the same, there is an implied degree of trust and coordination between them.
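The ownership model can be sketched as a small permission registry; the types and the example names below are illustrative, not Sabre's actual data structures:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical sketch of a namespace's permissions: the namespace
/// owner grants specific contracts read and/or write access.
struct NamespacePermissions {
    readers: HashSet<String>,
    writers: HashSet<String>,
}

/// Registry mapping namespace prefixes to their permissions.
struct Registry {
    namespaces: HashMap<String, NamespacePermissions>,
}

impl Registry {
    /// A transaction run against `contract` fails unless the contract
    /// has read permission on the namespace it touches.
    fn can_read(&self, namespace: &str, contract: &str) -> bool {
        self.namespaces
            .get(namespace)
            .map_or(false, |p| p.readers.contains(contract))
    }

    /// Likewise for write access.
    fn can_write(&self, namespace: &str, contract: &str) -> bool {
        self.namespaces
            .get(namespace)
            .map_or(false, |p| p.writers.contains(contract))
    }
}
```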

Also, contracts are versioned. Only the owners of a contract are able to submit new versions to Sabre, which removes the chance that a malicious smart contract change could be accepted.

A Final Note About WebAssembly

On-chain WebAssembly isn’t limited to just smart contracts. For example, in Hyperledger Grid, we are using on-chain WebAssembly to execute smart permissions for organization-specific permissioning. Another example is smart consensus, which allows consensus algorithm updates to be submitted as a transaction. There are several more possibilities for on-chain WebAssembly as well.

In short, we think WebAssembly is awesome! Sawtooth Sabre combines WebAssembly with existing Hyperledger Sawtooth transaction processors to provide flexible smart contracts with all the benefits of both a normal transaction processor and on-chain smart-contract execution. Sabre also takes advantage of WebAssembly’s ability to maintain dual-target smart contracts, where the contract can be run as either a native transaction processor or a Sabre contract. And the permission control in Sawtooth Sabre allows fine-grained control over both contract changes and access to global state.

We are incredibly grateful for Cargill’s sponsorship of Sawtooth Sabre and Hyperledger Grid (a supply chain platform built with Sawtooth Sabre). We would also like to thank the following people who help make our blog posts a success: Anne Chenette, Mark Ford, David Huseby, and Jessica Rampen.

About the Authors

Andi Gunderson is a Software Engineer at Bitwise IO and maintainer on Hyperledger Sawtooth and Sawtooth Sabre.

Shawn Amundson is Chief Technology Officer at Bitwise IO, a Hyperledger technical ambassador, and a maintainer and architect on Hyperledger Sawtooth and Hyperledger Grid.

Hyperledger Sawtooth Blockchain Performance Metrics with Grafana


This blog post shows how to set up Grafana to display Sawtooth and system statistics.


Grafana is a useful tool for displaying Sawtooth performance statistics. Hyperledger Sawtooth optionally generates performance metrics from the validator and REST API components on each node. Sawtooth sends the metrics to InfluxDB, a time series database that is optimized for fast access to time series data. Telegraf, a metrics reporting agent, gathers supplemental system information from the Linux kernel and also sends it to InfluxDB. Finally, Grafana reads from InfluxDB and displays an assortment of statistics on several graphical charts in your web browser. Figure 1 illustrates the flow of data.

Figure 1. Metrics gathering data flow.


Grafana can display many validator, REST API, and system statistics. The following lists all supported metrics:

Sawtooth Validator Metrics

  • Block number
  • Committed transactions
  • Blocks published
  • Blocks considered
  • Chain head moved to fork
  • Pending batches: number of batches waiting to be processed
  • Batches rejected (back-pressure): number of batches rejected due to back-pressure tests
  • Transaction execution rate, in batches per second
  • Transactions in process
  • Transaction processing duration (99th percentile), in milliseconds
  • Valid transaction response rate
  • Invalid transaction response rate
  • Internal error response rate
  • Message round trip times, by message type (95th percentile), in seconds
  • Messages sent, per second, by message type
  • Messages received, per second, by message type

Sawtooth REST API Metrics

  • REST API validator response time (75th percentile), in seconds
  • REST API batch submission rate, in batches per second

System Metrics

  • User and system host CPU usage
  • Disk I/O, in kilobytes per second
  • I/O wait percentage
  • RAM usage, in megabytes
  • Context switches
  • Read and write I/O ops
  • Thread pool task run time and task queue times
  • Executing thread pool workers in use
  • Dispatcher server thread queue size

The screenshot in Figure 2 gives you an idea of the metrics that Grafana can show.

Figure 2. Example Grafana graph display.

Setting Up InfluxDB and Grafana

By default, Hyperledger Sawtooth does not gather performance metrics. The rest of this post explains the steps for enabling this feature. The overall order of steps is listed below with in-depth explanations of each step following.

  1. Ensure the prerequisites: Sawtooth blockchain software running on Ubuntu, with Docker CE installed
  2. Install and configure InfluxDB to store performance metrics
  3. Build and install Grafana
  4. Configure Grafana to display the performance metrics
  5. Configure Sawtooth to generate performance metrics
  6. Install and configure Telegraf to collect metrics

1. Prerequisites: Sawtooth and Docker

Install Hyperledger Sawtooth software and Docker containers. I recommend Sawtooth 1.1 on Ubuntu 16.04 LTS (Xenial). Sawtooth installation instructions are here:

The Sawtooth blockchain software must be up and running before you proceed.

Docker CE installation instructions are here:

ProTip: These instructions assume a Sawtooth node is running directly on Ubuntu, not in Docker containers. To use Grafana with Sawtooth on Docker containers, additional steps (not described here) are required to allow the Sawtooth validator and REST API containers to communicate with the InfluxDB daemon at TCP port 8086.

2. Installing and Configuring the InfluxDB Container

InfluxDB stores the Sawtooth metrics used in the analysis and graphing. Listing 1 shows the commands to download the InfluxDB Docker container, create a database directory, start the Docker container, and verify that it is running.

sudo docker pull influxdb
sudo mkdir -p /var/lib/influx-data
sudo docker run -d -p 8086:8086 \
    -v /var/lib/influx-data:/var/lib/influxdb \
    -e INFLUXDB_DB=metrics \
    -e INFLUXDB_ADMIN_USER="admin" \
    -e INFLUXDB_ADMIN_PASSWORD="pwadmin" \
    -e INFLUXDB_USER="lrdata" \
    -e INFLUXDB_USER_PASSWORD="pwlrdata" \
    --name sawtooth-stats-influxdb influxdb
sudo docker ps --filter name=sawtooth-stats-influxdb

Listing 1. Commands to set up InfluxDB.

ProTip: You can change the sample passwords here, pwadmin and pwlrdata, to anything you like. If you do, you must use your passwords in all the steps below. Avoid or escape special characters in your password such as “,@!$” or you will not be able to connect to InfluxDB.

3. Building and Installing the Grafana Container

Grafana displays the Sawtooth metrics in a web browser. Listing 2 shows the commands to download the Sawtooth repository, build the Grafana Docker container, start the Grafana container, and verify that it is running.

git clone
cd sawtooth-core/docker
sudo docker build . -f grafana/sawtooth-stats-grafana \
    -t sawtooth-stats-grafana
sudo docker run -d -p 3000:3000 --name sawtooth-stats-grafana \
    sawtooth-stats-grafana
sudo docker ps --filter name=sawtooth-stats-grafana

Listing 2. Commands to set up Grafana.

Building the Grafana Docker container takes several steps and downloads several packages into the container. It ends with “successfully built” and “successfully tagged” messages.

4. Configuring Grafana

Configure Grafana from your web browser. Navigate to http://localhost:3000/ (replace “localhost” with the hostname or IP address of the system where you started the Grafana container in the previous step).

  1. Login as user “admin”, password “admin”
  2. (Optional) To change the Grafana webpage “admin” password, click the orange spiral icon on the top left, select “admin” in the pull-down menu, click “Profile”, then “Change Password”; enter the old password (admin) and your new password, then click “Change Password”. This Grafana password is not related to the InfluxDB passwords used in a previous step.
  3. Click the orange spiral icon again on the top left, then click on “Data Sources” in the drop-down menu.
  4. Click on the “metrics” data source.
  5. Under “URL”, change “influxdb” in the URL to the hostname or IP address where you are running InfluxDB. (Use the same hostname that you used for the Grafana web page, since the Grafana and InfluxDB containers run on the same host.) This is where Grafana accesses InfluxDB.
  6. Under “Access”, change “proxy” to “direct” (unless you are going through a proxy to access the remote host running InfluxDB)
  7. Under “InfluxDB Details”, set “User” to “lrdata” and “Password” to “pwlrdata”
  8. Click “Save & Test” to save the configuration in the Grafana container
  9. If the test succeeds, the green messages “Data source updated” and “Data source is working” will appear. Figure 3 illustrates the green messages. Otherwise, you get a red error message that you must fix before proceeding. An error at this point is usually a network problem, such as a firewall or proxy configuration or a wrong hostname or IP address.

Figure 3. Test success messages in Grafana.


For the older Sawtooth 1.0 release, follow these additional steps to add the Sawtooth 1.0 dashboard to Grafana (skip these steps for Sawtooth 1.1):

  1. In your terminal, copy the file sawtooth_performance.json from the sawtooth-core repository you cloned earlier to your current directory by issuing the commands in Listing 3.
$ cp \
sawtooth-core/docker/grafana/dashboards/sawtooth_performance.json .

Or download this file:

$ wget \

Listing 3. Commands for getting the Sawtooth 1.0 dashboard file.

  2. In your web browser, click the orange spiral icon again on the top left, select “Dashboards” in the drop-down menu, then click on “Import” and “Upload .json file”.
  3. Navigate to the directory where you saved sawtooth_performance.json.
  4. Select “metrics” in the drop-down menu and click on “Import”.

5. Configuring Sawtooth

The Sawtooth validator and REST API components each report their own set of metrics, so you must configure the login credentials and destination for InfluxDB. In your terminal window, run the shell commands in Listing 4 to create or update the Sawtooth configuration files validator.toml and rest_api.toml:

for i in /etc/sawtooth/validator.toml /etc/sawtooth/rest_api.toml; do
    [[ -f $i ]] || sudo -u sawtooth cp $i.example $i
    echo 'opentsdb_url = "http://localhost:8086"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_db = "metrics"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_username = "lrdata"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_password = "pwlrdata"' \
        | sudo -u sawtooth tee -a $i
done

Listing 4. Commands to create or update Sawtooth configuration.

After verifying that the files validator.toml and rest_api.toml each have the four new opentsdb_* configuration lines, restart sawtooth-validator and sawtooth-rest-api using the commands in Listing 5.

sudo -v
sudo -u sawtooth pkill sawtooth-rest-api
sudo -u sawtooth pkill sawtooth-validator
sudo -u sawtooth sawtooth-validator -vvv &
sudo -u sawtooth sawtooth-rest-api -vv &

Listing 5. Manual restart commands.

Add any command line parameters you may use to the above example.

If you use systemctl, Listing 6 shows the commands needed to restart:

systemctl restart sawtooth-rest-api
systemctl restart sawtooth-validator

Listing 6. Systemctl restart commands.

Protip: The InfluxDB daemon, influxd, listens on TCP port 8086, so this port must be accessible over the local network from the validator and REST API components. By default, influxd listens only on localhost.

6. Installing and Configuring Telegraf

Telegraf, InfluxDB’s metrics reporting agent, gathers metrics information from the Linux kernel to supplement the metrics information sent from Sawtooth. Telegraf needs the login credentials and destination for InfluxDB. Install Telegraf using the commands in Listing 7.

curl -sL \
   | sudo apt-key add -
sudo apt-add-repository \
   "deb xenial stable"
sudo apt-get update
sudo apt-get install telegraf

Listing 7. Commands for installing Telegraf.

The commands in Listing 8 set up the Telegraf configuration file correctly.

conf=/etc/telegraf/telegraf.d/sawtooth.conf   # assumed drop-in location for sawtooth.conf
echo '[[outputs.influxdb]]' | sudo tee $conf
echo 'urls = ["http://localhost:8086"]' | sudo tee -a $conf
echo 'database = "metrics"' | sudo tee -a $conf
echo 'username = "lrdata"' | sudo tee -a $conf
echo 'password = "pwlrdata"' | sudo tee -a $conf

Listing 8. Create the Telegraf configuration file.

Finally, restart Telegraf with the command in Listing 9.

sudo systemctl restart telegraf

Listing 9. Restart Telegraf.

Try it out!

After completing all the previous steps, Sawtooth and system statistics should appear in the Grafana dashboard webpage. To see them, click the orange spiral icon on the top left, select “Dashboards” in the drop-down menu, then click on “Home” next to the spiral icon and click on the dashboard name.

Generate some transactions so you can see activity on the Grafana dashboard. For example, run the intkey workload generator by issuing the Listing 10 commands in a terminal window to create test transactions at the rate of 1 batch per second.

intkey-tp-python -v &
intkey workload --rate 1 -d 5

Listing 10. Start the workload generator to get some statistics.

I recommend changing the time interval in the dashboard from 24 hours to something like 30 minutes so you can see new statistics. Do that by clicking on the clock icon in the upper right of the dashboard. Then click on the refresh icon, ♻, to update the page. Individual graphs can be enlarged or shrunk by moving the dotted triangle tab in the lower right of each graph.

Troubleshooting Tips

    • If the Grafana webpage is not accessible, the Grafana container is not running or is not accessible over the network. To verify that it is running and start it:


$ docker ps --filter name=sawtooth-stats-grafana
$ docker start sawtooth-stats-grafana


    • If the container is running, the docker host may not be accessible on the network
    • If no system statistics appear at the bottom of the dashboard, either Telegraf is not configured or the InfluxDB container is not running or is not accessible over the network. To verify that InfluxDB is running and start it:


$ docker ps --filter name=sawtooth-stats-influxdb
$ docker start sawtooth-stats-influxdb


    • Check that the InfluxDB server, influxd, is reachable from the local network. Use the InfluxDB client (package influxdb-client), curl, or both to test. The InfluxDB client command should show a “Connected to” message and the curl command should show a “204 No Content” message.


$ influx -username lrdata -password pwlrdata -port 8086 \
   -host localhost
$ curl -s -I localhost:8086/ping


  • Check that the interval range (shown next to the clock on the upper right of the dashboard) is low enough (such as 1 hour).
  • Check that the validator and REST API .toml files and Telegraf sawtooth.conf files have the opentsdb_* configuration lines. Make sure that the passwords and URLs are correct and that they match each other and the passwords set when you started the InfluxDB container.
  • Click the refresh icon, ♻, on the upper right of the dashboard.

Further Information

Safety, Performance and Innovation: Rust in Hyperledger Sawtooth

By | Blog, Hyperledger Sawtooth

Hello, fellow Rustaceans and those curious about Rust. The Hyperledger Sawtooth team is using Rust for new development, so these are exciting times for both Rust and Hyperledger Sawtooth. Rust is a new language that is quickly growing in popularity. The Hyperledger Sawtooth community is using Rust to build components to give application developers and administrators more control, more flexibility, and greater security for their blockchain networks. This blog post will give an overview of some of the new components being built in Rust.

Hyperledger Sawtooth was originally written in Python, which was a good choice for initial research and design. In 2018, the Sawtooth team chose the Rust language for new development. A key benefit is that Rust supports concurrency while also emphasizing memory safety. Several new core components, transaction processors, and consensus engines have already been written in Rust.

Compared to Python, Rust’s most noticeable feature is its expressive type system and its compile-time checks. Rust’s ownership and borrowing rules guarantee at compile time that an object has either a single mutable reference or any number of immutable references, but never both at once. These features force the developer to account for all possible error and edge cases, making our interfaces more robust as we design them.
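
A small, self-contained example of both rules in action (the function names here are illustrative, not Sawtooth code):

```rust
// Immutable borrow: read-only access, any number may coexist.
fn sum(values: &Vec<i32>) -> i32 {
    values.iter().sum()
}

// Mutable borrow: exclusive access while it lasts.
fn add_value(values: &mut Vec<i32>, v: i32) {
    values.push(v);
}

// Result forces the caller to handle the error case explicitly;
// the edge case cannot be silently forgotten.
fn parse_block_num(s: &str) -> Result<u64, std::num::ParseIntError> {
    s.parse::<u64>()
}

fn main() {
    let mut values = vec![1, 2, 3];
    add_value(&mut values, 4);      // one mutable borrow, which ends here
    let a = &values;                // any number of immutable borrows...
    let b = &values;                // ...are fine once the mutable one is done
    assert_eq!(sum(a) + sum(b), 20);

    // The bad-input edge case must be handled, not ignored.
    assert!(parse_block_num("42").is_ok());
    assert!(parse_block_num("not-a-number").is_err());
}
```

Holding `a` while calling `add_value(&mut values, 4)` would be rejected at compile time, which is exactly the kind of mistake the borrow checker rules out.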

The validator’s block validation and publishing components are a good example of our recent interface changes. Before release 1.1, these components were heavily tied to PoET, the original consensus in Hyperledger Sawtooth. In addition, they were largely synchronous, where committing a block started the process of building a new block to publish. As we implemented the consensus engine interface, we took the opportunity to rewrite these components in Rust, which helped us to separate them more cleanly. Now there are three separate asynchronous tasks—block validation, block commit, and block publishing—that share a small amount of information. For example, the block publishing component is informed when batches are committed so that it can take them out of a pending queue, but none of the tasks starts either of the other tasks. For more information, see the block validation and block publishing components in the sawtooth-core repository.
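
The batch-notification handoff described above can be sketched with standard channels. The `Publisher` struct and batch names below are hypothetical stand-ins, not Sawtooth’s actual types; the point is that the commit task only sends a message, and never starts or drives the publishing task:

```rust
use std::collections::VecDeque;
use std::sync::mpsc;

// Hypothetical publisher: keeps a pending queue of batches and drops
// the ones the commit task reports as committed.
struct Publisher {
    pending: VecDeque<String>,
}

impl Publisher {
    fn on_batches_committed(&mut self, committed: &[String]) {
        self.pending.retain(|b| !committed.contains(b));
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<String>>();
    let mut publisher = Publisher {
        pending: vec!["batch-1", "batch-2", "batch-3"]
            .into_iter().map(String::from).collect(),
    };

    // The commit task (same thread here, for brevity) reports a commit.
    tx.send(vec!["batch-2".to_string()]).unwrap();

    // The publishing task drains notifications when it next runs.
    while let Ok(committed) = rx.try_recv() {
        publisher.on_batches_committed(&committed);
    }
    assert_eq!(publisher.pending.len(), 2);
}
```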

This clean separation of tasks allows the new consensus interface to function correctly and makes it easier to develop new consensus engines. The Sawtooth team has already written two new engines in Rust:  Sawtooth PBFT and Sawtooth Raft (which uses the PingCap raft library, raft-rs). The Sawtooth team is proud of the work we have done on these consensus engines, and the flexibility it provides Sawtooth community members who are building a blockchain application.

Rust also excels in its support for compiling to WASM, which can be used as a smart contract. Hyperledger Sawtooth already had Seth, which supports running Ethereum Solidity smart contracts using a transaction processor, but now has Sawtooth Sabre, a transaction processor that runs a WASM smart contract that is compiled from Rust to the WASM target. Sawtooth Sabre includes an innovative feature: using registries for namespaces and contracts. The namespace registry lets administrators control what information a contract can access. The contract registry lists versions of the contract, along with a SHA-512 hash of the contract, giving application developers confidence that the correct contract is registered. Sabre supports API compatibility with the Sawtooth Rust SDK, so developers can write a smart contract that can run either within Sabre or natively as a transaction processor, depending on the deployment methodology.

Rust has also influenced how changes to Hyperledger Sawtooth are handled. Our new RFC process is modeled after Rust’s RFC process, which provides a community-oriented forum for proposing and designing large changes. The Hyperledger Sawtooth team has put effort into a community-oriented design process at sawtooth-rfcs. The consensus API RFC is a good example: The guide-level explanation clearly lays out the purpose and reasoning behind the new component, then has a reference-level explanation of the technical details needed to guide implementation. The Sawtooth RFC process has been a good way to involve the larger Sawtooth community in driving the design and implementation of Sawtooth.

What’s next for Rust in Sawtooth? In 2019, the Sawtooth team is rewriting the remaining Sawtooth validator components in Rust. That means the networking and transaction processing components will be getting an overhaul. Expect that the networking components will be redesigned. The transaction processing components will have minor changes internally, while keeping a stable API. In both cases, there will be an increase in performance and stability thanks to Rust.

Come join the Hyperledger Sawtooth community in 2019 by writing your own transaction processor in Rust or even a consensus engine. Get in touch on the #sawtooth channel on RocketChat.

To learn more about Rust in Hyperledger Sawtooth, check out our recent changes:


About the Author:

Boyd Johnson is a Software Engineer at Bitwise IO who has worked on many core components of Hyperledger Sawtooth, including transaction processing components in Python and block validation and block publishing components in Rust. While originally a committed Pythonista, he has oxidized into a Rustacean.

Floating the Sawtooth Raft: Implementing a Consensus Algorithm in Rust

By | Blog, Hyperledger Sawtooth

The 1.1 release of Hyperledger Sawtooth includes official support for a new consensus API and SDKs. These tools, covered in an earlier blog post, open up new possibilities for Sawtooth developers, giving them the power to choose a consensus algorithm that best suits their needs. With support for Proof of Elapsed Time (PoET) and Dev mode consensus engines already available, we decided to expand the platform’s repertoire to include a wider variety of engines and support a broader array of features and use cases. The first of these new engines implements the Raft consensus algorithm. This blog post gives a brief overview of the Raft algorithm, explains our decision to implement it, and takes a quick look at the development of the Raft consensus engine.

What Is Raft?

Originally developed by Diego Ongaro and John Ousterhout at Stanford University in 2013, Raft is designed to be an easy-to-understand, crash fault tolerant consensus algorithm for managing a replicated log. Its primary goal is understandability, since most deterministic consensus algorithms previously developed were convoluted and difficult to grasp. Raft provides crash fault tolerance, allowing a network to continue to make progress as long as at least half of the nodes are available.
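
The crash-tolerance arithmetic is simple enough to state in code: progress requires a strict majority (a quorum), so a network of n nodes tolerates floor((n - 1) / 2) crashed nodes:

```rust
// Quorum: the smallest strict majority of n nodes.
fn quorum(n: usize) -> usize {
    n / 2 + 1
}

// Number of crashed nodes a Raft network of size n can tolerate.
fn tolerated_crashes(n: usize) -> usize {
    (n - 1) / 2
}

fn main() {
    assert_eq!(quorum(5), 3);            // 3 of 5 nodes must agree
    assert_eq!(tolerated_crashes(5), 2); // up to 2 of 5 may crash
    assert_eq!(tolerated_crashes(4), 1); // a 4th node does not raise the
                                         // tolerance of a 3-node network
}
```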

Raft has the following key characteristics that set it apart from many other consensus algorithms:

  • Strong leadership: Networks elect a leader that is responsible for making progress
  • Non-forking: Unlike lottery-based algorithms, Raft does not produce forks
  • Closed membership: Raft does not support open-enrollment, but nodes can be added and removed by an administrator
  • Fully peered: All nodes must be peered with all other nodes
  • Crash fault tolerant: Raft does not provide Byzantine fault tolerance, only crash fault tolerance

Raft’s leader-follower model is a direct result of the emphasis placed on simplicity and understandability. With a single node controlling the progress of the log, no forks arise so no extra logic is needed to choose between forks. The leadership model has important implications for other aspects of the algorithm. Because a majority of nodes must agree on the elected leader and on all network progress, membership must be semi-fixed to prevent disjoint majorities. This means that Raft networks do not support open enrollment; membership in the network is restricted and can only be modified by a privileged user.

Raft consensus networks must also be fully peered—with each node connected to all other nodes—because messages need to be passed between all nodes. Furthermore, because a large volume of messages is required for the algorithm to work, larger Raft networks perform slower than smaller networks. If high performance is important, Raft would be best used for smaller networks—usually 10 nodes or fewer.
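
The full-peering requirement can be quantified: the number of peer connections grows quadratically with network size, which is one reason larger Raft networks perform slower:

```rust
// In a fully peered network, every node connects to every other node,
// giving n * (n - 1) / 2 distinct connections.
fn peer_connections(n: u32) -> u32 {
    n * (n - 1) / 2
}

fn main() {
    assert_eq!(peer_connections(4), 6);
    assert_eq!(peer_connections(10), 45);    // the suggested practical limit
    assert_eq!(peer_connections(100), 4950); // quadratic growth
}
```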

Lastly, Raft is limited to just guaranteeing crash fault tolerance, not Byzantine fault tolerance. This makes the Raft algorithm ill-suited for networks that are subject to Byzantine faults such as groups of malicious nodes. For more information about the Raft algorithm, please see the original Raft paper and the Raft website.

Why Raft?

Raft was our choice for the first algorithm with the new consensus API for several reasons. First, it is very different from PoET. Where PoET is a forking, lottery-style algorithm, Raft is leader-based and non-forking. This allowed us to not only demonstrate the flexibility of the Sawtooth consensus API, but also to make an algorithm available that is well-suited for situations that an algorithm like PoET is not a good fit for.

Also, Raft is an inherently simple and easy-to-understand algorithm. This made it straightforward to adapt to Sawtooth and also made it an excellent example for developing other engines. Furthermore, we took advantage of an existing high-quality implementation of Raft in the Rust programming language called raft-rs.

However, Raft lacks Byzantine fault tolerance. Therefore, we are also working on a PBFT consensus engine that is suitable for consortium-style networks with adversarial trust characteristics.

The Implementation

The raft-rs library, developed by PingCAP, provides almost everything we needed to implement a consensus engine based on the Raft algorithm: a struct representing a Raft “node” with a handful of straightforward functions for “driving” the algorithm. The folks at PingCAP wrote an excellent blog post explaining how they implemented this library, so we will not duplicate their efforts here.

Our only major extension to the raft-rs library is a stable storage mechanism, since the library only provided in-memory storage. This extension is required to ensure that Sawtooth nodes can restart in the event of a crash or arbitrary shutdown. If you would like to see the end results, all of the code that follows can be found in the Sawtooth Raft GitHub repository and the Rust SDK.

Defining the Engine

The first step in creating a consensus engine with the Rust SDK is to implement the Engine trait:

pub trait Engine {
    /// Called after the engine is initialized, when a connection
    /// to the validator has been established. Notifications from
    /// the validator are sent along `updates`. `service` is used
    /// to send requests to the validator.
    fn start(
        &mut self,
        updates: Receiver<Update>,
        service: Box<Service>,
        startup_state: StartupState,
    ) -> Result<(), Error>;

    /// Get the version of this engine
    fn version(&self) -> String;

    /// Get the name of the engine, typically the algorithm being
    /// implemented
    fn name(&self) -> String;
}

In Raft’s Engine implementation, the start method is the main entry point. In Raft—as well as most consensus engines—three main tasks need to be performed here: loading configuration, creating the struct(s) that contain the core logic, and entering a main loop.

Loading Configuration

For Raft, loading configuration consists primarily of reading a few settings that are stored on-chain. We do this by calling the load_raft_config function:

// Create the configuration for the Raft node.
let cfg = config::load_raft_config(
    // ...
    &mut service,
);

info!("Raft Engine Config Loaded: {:?}", cfg);

let RaftEngineConfig {
    raft: raft_config,
    storage: raft_storage,
    ..
} = cfg;

The settings are loaded by calling the get_settings method in the consensus service, with the chain head provided in the startup_state:

let settings_keys = vec![
    // ... names of the on-chain settings to read ...
];

let settings: HashMap<String, String> = service
    .get_settings(chain_head.block_id, settings_keys)
    .expect("Failed to get settings keys");


Some of these settings are optional, so defaults are used if they’re unset.

Creating the Raft Node

Once the configuration is loaded, we create the Raft node that contains the main logic of the algorithm:

// Create the Raft node.
let raft_peers: Vec<RaftPeer> = raft_config.peers
    .iter()
    .map(|id| RaftPeer { id: *id, context: None })
    .collect();

let raw_node = RawNode::new(
    &raft_config,
    raft_storage,
    raft_peers,
).expect("Failed to create new RawNode");

let mut node = SawtoothRaftNode::new(
    // ...
);

The RawNode struct is provided by the raft-rs library; it contains the logic for the Raft algorithm itself and provides methods for SawtoothRaftNode to direct it. The SawtoothRaftNode defines six methods that are called by the consensus engine:

  • on_block_new is called when the validator notifies the engine that it has received a new block
  • on_block_valid is called when the validator notifies the engine that it has validated a block
  • on_block_commit is called when the validator notifies the engine that it has committed a block
  • on_peer_message is called when one node’s consensus engine sends a message to another
  • tick is used to move the Raft algorithm forward by one “tick”
  • process_ready contains much of the logic that changes the state of Raft

The first four methods (on_block_new, on_block_valid, on_block_commit, and on_peer_message) will be defined for the majority of consensus engines since they handle important messages that are delivered by the validator. The last two methods (tick and process_ready) are specific to Raft; other consensus engines will likely have different methods to handle the logic of the engine.

Entering the Main Loop

With a Raft node created and ready to handle updates, we enter the main loop of our consensus engine:

let mut raft_ticker = ticker::Ticker::new(RAFT_TIMEOUT);
let mut timeout = RAFT_TIMEOUT;

// Loop forever to drive the Raft.
loop {
    match updates.recv_timeout(timeout) {
        Err(RecvTimeoutError::Timeout) => (),
        Err(RecvTimeoutError::Disconnected) => break,
        Ok(update) => {
            debug!("Update: {:?}", update);
            if !handle_update(&mut node, update) {
                break;
            }
        }
    }

    timeout = raft_ticker.tick(|| {
        node.tick();
    });

    if let ReadyStatus::Shutdown = node.process_ready() {
        break;
    }
}

Raft’s main loop performs three main tasks. First, check if there are any updates that have been sent to the engine by the validator. If there is an update, handle it by calling the appropriate method of the SawtoothRaftNode:

fn handle_update<S: StorageExt>(
    node: &mut SawtoothRaftNode<S>,
    update: Update,
) -> bool {
    match update {
        Update::BlockNew(block) => node.on_block_new(block),
        Update::BlockValid(block_id) => node.on_block_valid(block_id),
        Update::BlockCommit(block_id) => node.on_block_commit(block_id),
        Update::PeerMessage(message, _id) => node.on_peer_message(message),
        Update::Shutdown => {
            warn!("Shutting down");
            return false;
        }
        update => warn!("Unhandled update: {:?}", update),
    }
    true
}


Second, move the Raft algorithm forward by one “tick” at a regular interval, using the Ticker object and a call to the node’s tick method. This “tick” roughly corresponds to progress in the Raft algorithm itself.
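
The Ticker itself is not shown in this post. A minimal version consistent with the behavior described—run the callback when the interval elapses, and return how long the caller may block before the next tick is due—might look like this. This is a hypothetical sketch, not the code from sawtooth-raft:

```rust
use std::time::{Duration, Instant};

// Hypothetical Ticker: tick() fires the callback if the interval has
// elapsed, then returns the time remaining until the next tick, which
// the main loop can use as its recv_timeout.
struct Ticker {
    interval: Duration,
    last_tick: Instant,
}

impl Ticker {
    fn new(interval: Duration) -> Self {
        Ticker { interval, last_tick: Instant::now() }
    }

    fn tick<F: FnMut()>(&mut self, mut callback: F) -> Duration {
        let elapsed = self.last_tick.elapsed();
        if elapsed >= self.interval {
            callback();
            self.last_tick = Instant::now();
            self.interval
        } else {
            self.interval - elapsed
        }
    }
}

fn main() {
    let mut ticks = 0;
    // A zero interval makes the callback fire on every call.
    let mut ticker = Ticker::new(Duration::from_millis(0));
    ticker.tick(|| ticks += 1);
    ticker.tick(|| ticks += 1);
    assert_eq!(ticks, 2);
}
```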

Finally, call the node’s process_ready method, which checks the state of the Raft algorithm to determine if it needs to take any actions as a result of the last “tick”.

Starting the Engine

Once the consensus engine itself has been defined, starting it up and connecting it to the validator is easy. In the main function, all we need to do is determine the validator’s endpoint (using a command-line argument in Raft), instantiate the engine, and start it using the SDK’s ZmqDriver:

let raft_engine = engine::RaftEngine::new();

let (driver, _stop) = ZmqDriver::new();

info!("Raft Node connecting to '{}'", &args.endpoint);
driver.start(&args.endpoint, raft_engine).unwrap_or_else(|err| {
    error!("{}", err);
    process::exit(1);
});

See for Yourself!

Want to try running a Sawtooth network with Raft consensus? Check out the Raft source code on GitHub as well as the Sawtooth Raft documentation for all you need to get started.

For more on the consensus API and developing your own consensus engine for Hyperledger Sawtooth, take a look at our previous blog post.


About the Author


Logan Seeley is a Software Engineer at Bitwise IO. He has been involved in a variety of Hyperledger Sawtooth projects, including the development of the consensus API, Sawtooth Raft, and Sawtooth PBFT.

Making Dynamic Consensus More Dynamic

By | Blog, Hyperledger Sawtooth

In October 2017, the Hyperledger Sawtooth team started to implement a new consensus algorithm for Hyperledger Sawtooth. We wanted a voting-based algorithm with finality, which is very different from the Proof of Elapsed Time (PoET) consensus algorithm that has been closely associated with Hyperledger Sawtooth since its start. This project presented a number of challenges and opportunities.

The greatest challenge in implementing this new consensus algorithm with Sawtooth was in breaking apart an architecture that has been heavily influenced by a lottery-based consensus algorithm with forking. A lot of refactoring and architectural work went into making both voting-based and lottery-based algorithms work well with Sawtooth.

However, the opportunities that we discovered from this effort made overcoming these challenges more than worth it. We designed a new consensus API that simplifies the process of adding new consensus algorithms while continuing to support the existing PoET and Dev mode consensus algorithms. We completed the first prototype validator with consensus API support in July 2018. Since then, we have been able to implement two new voting-based consensus algorithms for the Hyperledger Sawtooth platform: Raft and PBFT.

We are pleased to announce that the Sawtooth 1.1 release supports the new consensus API. This release also includes consensus SDKs to make it easier to implement new consensus algorithms.

Consensus as a Process

The new consensus architecture moves consensus functionality to a separate process, called a consensus engine, and provides an API for each consensus engine to interact with the validator.

Moving the consensus functionality to a separate process allows consensus engines to be implemented in a variety of languages. Currently, SDKs are available for Python and Rust and have been used to create the consensus engines for PoET, PBFT, and Raft.

Multi-language support is important beyond providing a choice for implementing a new consensus engine. This support makes it much easier to reuse existing implementations of consensus algorithms. For example, the Sawtooth Raft consensus engine is built on the pingcap/raft-rs library. We were able to easily integrate this well-regarded Raft library, which is itself a port from the widely-used etcd Raft library.

As SDKs for additional languages are built on top of the consensus API, it will be possible to add more and more consensus algorithms into Hyperledger Sawtooth. For example, a consensus SDK for Go would bring existing implementations such as Hyperledger Labs’ MinBFT one step closer to being compatible with Sawtooth.

Driving the Blockchain with a Consensus Engine

The consensus API is centered around a new consensus engine abstraction that handles consensus-specific functionality. A consensus engine is a separate process that interacts with the validator through the consensus API using protobuf messages and ZMQ.

The role of a consensus engine is to advance the blockchain by creating new blocks and deciding which blocks should be committed. Specifically, a consensus engine must accomplish the following tasks:

  • Determine consensus-related messages to send to peers
  • Send commands to progress the blockchain
  • React to updates from the validator

The validator continues to handle the mechanics of validation, communication, and storage for blocks, batches, and transactions. The validator must perform these tasks:

  • Validate the integrity of blocks, batches, and transactions
  • Validate the signatures for blocks, batches, transactions, and messages
  • Gossip blocks, batches, and transactions
  • Handle the mechanics of block creation and storage
  • Manage the chain head directly

New Consensus API and SDKs

The validator exposes the API for consensus engines as a set of protobuf messages sent over a network interface. This API is split into two types of interactions:

  • Service: A pair of (request, response) messages that allow a consensus engine to send commands to the validator and receive information back. For example, a consensus engine can instruct the validator to commit a block or request an on-chain setting from a specific block. Services are synchronous and on-demand.
  • Updates: Information that the validator sends to a consensus engine, such as the arrival of a new block or receipt of a new consensus message from a peer. Updates are sent asynchronously as they occur.
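
As a rough sketch of the service pattern, the method names commit_block and get_settings below mirror methods on the consensus SDK’s Service trait, but the BlockId alias, error type, and MockService are simplified stand-ins, not the SDK’s actual definitions:

```rust
use std::collections::HashMap;

type BlockId = Vec<u8>;

// Simplified stand-in for the SDK's Service trait: synchronous,
// on-demand request/response calls into the validator.
trait Service {
    fn commit_block(&mut self, block_id: BlockId) -> Result<(), String>;
    fn get_settings(
        &mut self,
        block_id: BlockId,
        keys: Vec<String>,
    ) -> Result<HashMap<String, String>, String>;
}

// Hypothetical in-memory service, standing in for the validator.
struct MockService;

impl Service for MockService {
    fn commit_block(&mut self, _block_id: BlockId) -> Result<(), String> {
        Ok(())
    }
    fn get_settings(
        &mut self,
        _block_id: BlockId,
        keys: Vec<String>,
    ) -> Result<HashMap<String, String>, String> {
        Ok(keys.into_iter().map(|k| (k, "default".to_string())).collect())
    }
}

fn main() {
    let mut service = MockService;
    // A consensus engine instructs the validator to commit a block...
    assert!(service.commit_block(b"block-1".to_vec()).is_ok());
    // ...or requests an on-chain setting from a specific block.
    let settings = service
        .get_settings(b"block-1".to_vec(), vec!["sawtooth.consensus".into()])
        .unwrap();
    assert_eq!(settings.len(), 1);
}
```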

Although you could use the API directly to implement a new consensus engine, the recommended interface is a consensus SDK. The SDK provides several useful classes that make it easier to implement a consensus engine. Sawtooth currently provides consensus SDKs for Python and Rust. We have used these SDKs to create the consensus engines for the PoET engine (Python), PBFT engine (Rust), and Raft engine (Rust).

These SDKs have a consistent design with an abstract Engine class, an engine Driver, and a validator Service. The abstract Engine class provides a clear starting point for new consensus engine implementations. If you plan to write your own consensus SDK, we recommend conforming to this design.
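
To make the Engine abstraction concrete, here is a minimal, self-contained sketch of the pattern using simplified stand-ins for the SDK’s Update, StartupState, and Engine types (not the real SDK API, which also passes a Service handle and uses ZMQ underneath):

```rust
use std::sync::mpsc::{channel, Receiver};

// Simplified stand-ins for the SDK types.
enum Update {
    BlockNew(Vec<u8>),
    Shutdown,
}

struct StartupState;

trait Engine {
    fn start(&mut self, updates: Receiver<Update>, startup_state: StartupState);
    fn version(&self) -> String;
    fn name(&self) -> String;
}

// A trivial engine that just counts new blocks until shutdown.
struct NoopEngine {
    blocks_seen: usize,
}

impl Engine for NoopEngine {
    fn start(&mut self, updates: Receiver<Update>, _startup_state: StartupState) {
        for update in updates {
            match update {
                Update::BlockNew(_) => self.blocks_seen += 1,
                Update::Shutdown => break,
            }
        }
    }
    fn version(&self) -> String { "0.1.0".into() }
    fn name(&self) -> String { "noop".into() }
}

fn main() {
    // The driver's role: deliver validator updates to the engine.
    let (tx, rx) = channel();
    tx.send(Update::BlockNew(b"b1".to_vec())).unwrap();
    tx.send(Update::Shutdown).unwrap();

    let mut engine = NoopEngine { blocks_seen: 0 };
    engine.start(rx, StartupState);
    assert_eq!(engine.blocks_seen, 1);
    assert_eq!(engine.name(), "noop");
}
```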

Try it Today!

One of the most important decisions for a distributed ledger application is the choice of consensus. By opening up this interface, we hope that each application built on Hyperledger Sawtooth can select the consensus algorithm that suits it best.

Hyperledger Sawtooth Blockchain Security (Part Three)

By | Blog, Hyperledger Sawtooth

This is the conclusion of my three-part series on Hyperledger Sawtooth Security. I started with Sawtooth consensus algorithms in part one, then continued with Sawtooth node and transaction processor security in part two. Here I will conclude by discussing Sawtooth application security and Sawtooth network security.

Client Application Security

The client part of a Sawtooth application is written by the application developer. The Sawtooth client communicates with a Sawtooth node through REST API requests, including signed transactions and batches. The signing is performed with a private key, so key management and security are important. With Bitcoin, for example, poor key management has resulted in stolen bitcoins and a “graveyard of bitcoins” that are inaccessible forever. Key management is the responsibility of the client application, as keys are not managed by Sawtooth software.

A keystore is where you securely store your keys. The public key for a keypair, used for signature verification, can be and should be distributed to anyone. The private key portion, used for signing, must be safeguarded from access by others. Here are some keystore methods, ordered from low to high security:

  • The minimum security measure is to restrict access to the private key: either restrict access to the machine holding the key, restrict read access to the private key file to the signer, or (better yet) both
  • Better protection comes from a software-encrypted keystore, that is, a private keystore accessible by a PIN
  • The best protection comes from a Hardware Security Module (HSM) keystore or a network-accessible key manager, accessed using the Key Management Interoperability Protocol (KMIP)
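
For the lowest tier above, restricting read access to the key file is straightforward on Unix systems. This sketch uses a hypothetical key path and key value; the point is the 0600 permission bits (owner read/write only):

```rust
use std::fs::{self, File, Permissions};
use std::io::Write;
use std::os::unix::fs::PermissionsExt;

// Write a (hypothetical) private key file readable only by its owner.
fn write_private_key(path: &str, key_hex: &str) -> std::io::Result<()> {
    let mut f = File::create(path)?;
    f.write_all(key_hex.as_bytes())?;
    // Owner read/write only (0600); group and others get nothing.
    fs::set_permissions(path, Permissions::from_mode(0o600))
}

fn main() -> std::io::Result<()> {
    let path = "/tmp/example.priv"; // hypothetical key path
    write_private_key(path, "deadbeef")?;
    let mode = fs::metadata(path)?.permissions().mode();
    assert_eq!(mode & 0o777, 0o600);
    fs::remove_file(path)
}
```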

Client Authentication

A Sawtooth client may take external user input, in which case it is important to authenticate that the user is who they say they are. Authentication methods are usually categorized, from low to high security, into:

  • Single-factor Authentication (SFA). SFA is something you know, such as a PIN, password, passphrase, or one-time password (OTP). The main disadvantage of SFA is that the secret can be weak or hard to remember
  • Two-factor Authentication (2FA). 2FA is SFA plus something you have, such as a security key like a U2F token (e.g., YubiKey). The main disadvantage of 2FA is that the token can be lost or stolen
  • Three-factor Authentication (3FA). 3FA is 2FA plus something you are (biometrics), such as a fingerprint, face recognition, or a retina scan. The main disadvantages of 3FA are that biometrics can be forged and cannot be easily changed

With 2FA and 3FA, the idea is defense-in-depth (i.e., multiple hurdles to authenticate).

Network Security

Blockchains are subject to Distributed Denial of Service (DDoS) attacks. That is, an attack that attempts to overload blockchain nodes by flooding the targeted nodes with bogus messages. Classical public, unpermissioned blockchain networks avoid DDoS attacks because transactions require spending digital currency (such as Bitcoin), making attacks costly. Also, public blockchain networks are highly distributed—with thousands of nodes—making a DDoS attack on the entire network impractical.

Private or permissioned blockchains, such as Sawtooth, are not designed to run on a public network. As such, they do not require digital currency and “mining.”

A Sawtooth network can and should be protected against DDoS attacks as follows:

  • Back pressure, a flow-control technique to reject unusually frequent client submissions. If the validator is overwhelmed, it will stop accepting new batches until it can handle more work. The number of batches the validator can accept is based on a multiplier (currently two) of a rolling average of the number of published batches.
  • Sawtooth communication uses the Zero Message Queue (ZMQ or 0MQ) message library. Sawtooth optionally enables encryption with ZMQ when the network_public_key and network_private_key settings are defined in validator.toml. For production, generate your own key pair instead of using a predefined key that may be present.
  • REST API input is validated to avoid buffer corruption or overflow attacks.
  • TCP port 4004, used for communication between internal validator node components, should be closed to outside access in any firewall configuration.
  • TCP port 5050, used to communicate between the validator node and the consensus engine, should be closed to outside access in any firewall configuration.
  • TCP port 8008, used for the REST API, should be closed to outside access in a firewall configuration, provided that all application clients accessing the REST API come from the local host.
  • If you use the Seth TP (for Ethereum smart contracts), TCP port 3030, used for Seth RPC, should be closed to outside access in a firewall configuration, provided that all RPC requests come from the local host.
  • TCP port 8800, used to communicate between validator nodes, must be open to outside access in any firewall configuration.
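
The back-pressure rule from the first bullet can be sketched as follows. The BackPressure struct and its fields are illustrative, not the validator’s actual implementation; the multiplier of two and the rolling average are from the description above:

```rust
// Illustrative back-pressure check: accept new batches only while the
// queue is below a multiplier (currently two) of a rolling average of
// recently published batch counts.
struct BackPressure {
    multiplier: usize,
    avg_published: f64, // rolling average of batches per published block
}

impl BackPressure {
    fn limit(&self) -> usize {
        (self.avg_published * self.multiplier as f64).ceil() as usize
    }

    fn accepting(&self, queued: usize) -> bool {
        queued < self.limit()
    }
}

fn main() {
    let bp = BackPressure { multiplier: 2, avg_published: 10.0 };
    assert_eq!(bp.limit(), 20);
    assert!(bp.accepting(19));   // below the limit: accept new batches
    assert!(!bp.accepting(20));  // at the limit: reject until work drains
}
```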

Sawtooth validator nodes should be deployed on a VPN or other private network to prevent any outside access to Sawtooth TCP ports.

Basically, best practices dictate closing as many network ports as possible, encrypting network communications, and deploying in a protected network environment (such as a VPN).

Further Information

Announcing Hyperledger Sawtooth 1.1

By | Blog, Hyperledger Sawtooth

It is with great excitement that we announce the release of Sawtooth version 1.1. Earlier this year we released Sawtooth 1.0, marking the production-ready status of the platform. Since then, the community has been hard at work adding new features, improving the privacy and performance of the platform, and growing the ecosystem.

The Sawtooth development team has been focused on two major new features for the Sawtooth 1.1 release: an improved consensus interface and support for WebAssembly smart contracts. For a full list of new features and improvements, see the Sawtooth 1.1 Release Notes.

Improved consensus interface and new consensus options

While Sawtooth has always enabled ‘pluggable’ consensus and multiple consensus algorithms, recent experiences indicated that the existing consensus interface could be improved. Sawtooth has always aspired to be a modular platform that would enable lean experimentation and rapid adoption of new technologies, in particular, with regards to consensus. After analyzing a number of consensus algorithms that are available today, both Nakamoto (PoW/PoET) and classical (Raft/PBFT), the team decided to re-architect the consensus interface to improve the ease of integration. As a result of this new interface, the team has been able to port the existing Sawtooth consensus options, as well as add two new classical consensus options. Below is the state of these consensus options today:

    • Developer Mode (stable)
    • PoET-Simulator (Crash Fault Tolerant) (stable)
    • PoET-SGX (under development)
    • Raft (alpha)
    • PBFT (under development)

If you are interested in learning more about the new consensus interface, or writing your own, please see the detailed documentation.

Support for WebAssembly smart contracts (Sawtooth Sabre)

Sawtooth Sabre is a new smart contract engine for Sawtooth that enables the execution of WebAssembly-based smart contracts. WebAssembly (WASM) is a new web standard developed at the W3C with participation from major corporations like Apple, Google, and Microsoft. The Sawtooth Sabre project leverages an existing open source WASM interpreter from the broader blockchain community. This on-chain interpreter enables developers to write their code in a variety of languages, compile it down to WebAssembly, and then deploy it directly to the Sawtooth blockchain.

In addition to new feature development, the Sawtooth developer team has continued research and development on improving the privacy and performance of the Sawtooth platform.


On the privacy front, a new Hyperledger Lab called ‘Private Data Objects (PDO)’ has been created. PDO enables smart contracts to execute off-chain with confidentiality and integrity through the use of trusted execution environments. For more information, take a look at this video or read the paper. Private data objects are just one way of addressing blockchain confidentiality, but expect to see more techniques available to Sawtooth over the coming months.


On the performance front, much of the effort has been spent porting core Sawtooth components from Python to Rust. While Python was a great language to start with, and enabled the team to rapidly iterate and define the appropriate modularity in the architecture, it is not the most performant language. The 1.0 release stabilized many of the Sawtooth APIs, and as we began tuning the system, we identified bottlenecks arising from the design of the Python programming language. The speed and type safety of the Rust programming language made it a natural fit for the evolution of Sawtooth. As of today, roughly 40% of the Sawtooth validator components have been ported to Rust, a number that we expect will continue to increase over time.

Finally, in addition to adding new features and improving the robustness of the Sawtooth platform, we have also seen an explosion of activity in the community, with dozens of new developers and a variety of tools and applications being openly built on top of the Sawtooth infrastructure. Notable new projects in the Sawtooth ecosystem include:


  • Sawtooth Supply Chain – A platform focused on supply chain traceability with contributors from Bitwise IO and Cargill.
  • Sawtooth Next-Directory – An application focused on role-based access control with contributors from T-Mobile.


  • Truffle integration with Sawtooth-Seth – A new integration that allows you to deploy Ethereum smart contracts to Sawtooth using the leading Ethereum development tool, Truffle. Built in collaboration with the Truffle team.
  • Caliper support for Sawtooth – Benchmark Sawtooth in a variety of configurations with Hyperledger Caliper.
  • Sawtooth Explorer – A blockchain explorer built for Sawtooth by the team at PokitDok.
  • Grafana monitoring – A set of tools for data collection and visualization for live Sawtooth deployments.

Part of a Grafana dashboard for a Sawtooth Testnet running Raft

The Sawtooth ecosystem and functionality is rapidly expanding, which wouldn’t be possible without the community behind it. I’d like to thank all of the developers who have put in time building tools and applications, or providing support, for their effort, including, but not limited to:

Adam Gering, Adam Ludvik, Adam Parker, Al Hulaton, Amol Kulkarni, Andrea Gunderson, Andrew Backer, Andrew Donald Kennedy, Anne Chenette, Arthur Greef, Ashish Kumar Mishra, Benoit Razet, Boyd Johnson, Bridger Herman, Chris Spanton, Dan Anderson, Dan Middleton, Darian Plumb, Eloá Franca Verona, Gini Harrison, Griffin Howlett, James Mitchell, Joel Dudley, Jonathan Langlois, Kelly Olson, Keith Bloomfield, Kenneth Koski, Kevin O’Donnell, Kevin Solorio, Logan Seeley, Manoj Gopalakrishnan, Michael Nguyen, Mike Zaccardo, Nick Drozd, Pankaj Goyal, PGobz, Patrick BUI, Peter Schwarz, Rajeev Ranjan, Richard Berg, Ry Jones, Ryan Banks, Ryan Beck-Buysse, Serge Koba, Shawn T. Amundson, Sutrannu, Tom Barnes, Tomislav Markovski, Yunhang Chen, Zac Delventhal, devsatishm, feihujiang, joewright, kidrecursive, mithunshashidhara, and ruffsl.

If you’d like to join the community or learn more, you can find more information here:

Chat: #Sawtooth in Hyperledger RocketChat

Docs: Sawtooth 1.1 Documentation

Code: Sawtooth-core Github

Website: Hyperledger Sawtooth Homepage

Thanks for reading, and look out for more posts detailing new Sawtooth 1.1 features and improvements. We encourage developers to try these new features out and give us feedback!


All Are Welcome Here

By | Blog, Hyperledger Burrow, Hyperledger Fabric, Hyperledger Indy, Hyperledger Iroha, Hyperledger Sawtooth

A Minneapolis coffee shop that has fueled or at least caffeinated a lot of Hyperledger commits.

One of the first things people learn when coming to Hyperledger is that Hyperledger isn’t, as its name may imply, a ledger. It is a collection of blockchain technology projects. When we started out, it was clear almost immediately that a single project could not satisfy the broad range of uses nor explore enough creative and useful approaches to fit those needs. Having a portfolio of projects, though, enables us to have the variety of ideas and contributors to become a strong open source community. Back in January of 2016, Sawtooth and Fabric were both on the horizon, followed shortly by Iroha, but we wouldn’t have predicted that we would have Hyperledger Burrow and Hyperledger Indy – two projects that bear no resemblance to each other. Burrow is a permissioned Ethereum-based platform and Indy is a distributed identity ledger. Burrow is written in Go, and Indy was created in Python and is being ported to Rust.

Both of these platforms are interesting in their own rights, but Hyperledger is even more interesting for the combination of these projects with the others. Both Sawtooth and Fabric have already integrated with Burrow’s EVM. Now Hyperledger has a set of offerings that can simultaneously satisfy diverse requirements for smart contract language, permissioning, and consensus. Likewise Sawtooth and Indy have been working together at our last several hackfests. The results of that may unlock new use cases and deployment architectures for distributed identity. So it’s not that our multiplicity of projects has given us strength through numbers, but rather strength through diversity.

Hyperledger Hackfest – December 2017 at The Underground Lisboa

The hackfests that we mentioned are one of the rare times that we get together face to face. Most of our collaboration happens over mailing lists, chat, and pull requests. When we do get together, though, it’s always in a new city with new faces. One of our most recent projects was hatched inside one of those buses. It wasn’t the most ergonomic meeting I’ve ever had, but there was room for everyone on that bus.

Hyperledger Hackfest in Chicago

Our hackfest in Chicago was in a lot more conventional surroundings (still a very cool shared creative space .. lots of lab equipment and benches out of view on the other side of the wall to the right). Looking back at this photo is fun for me. I can see a lot of separate conversations happening at each table… people sharing different ideas, helping ramp new contributors, working on advancing new concepts with existing contributors. I can see a lot of similarity but also a little variety. It’s a busy room but there’s still open chairs and room for more variety.

Our next hackfest won’t be until March 2019 (Hyperledger is hosting Hyperledger Global Forum in December in Basel though). The March hackfest will be somewhere in Asia – location to be settled soon. The dates and locations of the other 2019 hackfests aren’t set yet. I don’t know where they will be specifically, but I do know that there will be a seat available and you will be welcome there.

These face to face meetings really are more the exception than the rule at Hyperledger. There are now more than 780 contributors spread all across the globe. 165 of those arrived in just the last few months. That means that every day we have a new person contributing to Hyperledger. Most of our engagement is through the development process. People contribute bug fixes, write new documentation, develop new features, file bugs, etc. If you’ve never contributed open source code before, getting started might be intimidating. We don’t want it to be, though. There are a number of resources to help you get started. You can watch this quick video from Community Architect, Tracy Kuhrt. There’s documentation for each project, mailing lists, a chat server, working groups, and some of the projects even host weekly phone calls to help new developers get engaged. Everyone in Hyperledger abides by a Code of Conduct, so you can feel comfortable knowing that when you join any of those forums you will be treated respectfully. Anyone who wants to get involved can, regardless of “physical appearance, race, ethnic origin, genetic differences, national or social origin, name, religion, gender, sexual orientation, family or health situation, pregnancy, disability, age, education, wealth, domicile, political view, morals, employment, or union activity.” We know that to get the best ideas, best code, and best user experience, we need your involvement. Please come join our community.


As always, you can keep up with what’s new with Hyperledger on Twitter, or email us with any questions.

Hyperledger Sawtooth Blockchain Security (Part Two)

By | Blog, Hyperledger Sawtooth

Guest Post by Dan Anderson, Intel


This is a continuation of my three-part series on Hyperledger Sawtooth Security. I began with Sawtooth consensus algorithms in part one. Here I will continue this series discussing Sawtooth node and transaction processor security.

Sawtooth Node and Transaction Processor Security

Sawtooth has several mechanisms to restrict and secure access to validator peer nodes. These include the following topics, which I’ll discuss below:

  • Sawtooth Permissioning, Policies and Roles
  • Network Roles
  • Challenge-Response Authorization
  • Sawtooth Encryption
  • Transaction Input/Output lists
  • Observability
  • Internal Security Mechanisms

Sawtooth Permissioning

Prelude: Configuration

Permissioning restricts who may access a Sawtooth validator node. Permissioning is set with Sawtooth configuration, so before we can discuss permissioning, we need to review configuration. After that, we will discuss various types of Sawtooth Permissioning.

Sawtooth configuration is set with on-chain configuration or off-chain configuration. On-chain configuration consists of settings recorded in the blockchain, with changes or additions made as new blocks are added to the blockchain. On-chain configuration applies to the entire Sawtooth network for that blockchain. Off-chain configuration consists of settings recorded in the local validator.toml file, located by default at /etc/sawtooth/validator.toml, and applies only to the local validator node. This allows further local restrictions for a site, if desired.

The initial permission values are configured in the genesis node (node 0), or, if not set, assume default values. On-chain settings can be modified any time by adding a transaction to the blockchain using the Settings Transaction Processor (which is the only mandatory TP). The change does not take effect until the next block (never the current block that contains the new setting).

On-chain settings are changed through a voting mechanism. Authorized voters (individual peer nodes) are listed in the sawtooth.settings.vote.authorized_keys setting. The votes are signed by each peer node as a transaction and recorded on-chain. If only one voter is authorized, the change is immediate. If multiple voters are authorized, the change takes effect when the minimum percentage of votes is reached. The Settings TP manages the election results.
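The voting rule above can be sketched as a simple tally. This is illustrative logic only, not the Settings TP's actual implementation; the function name, the vote representation, and the use of a vote count as the threshold are assumptions.

```python
def proposal_accepted(votes, authorized_voters, threshold):
    """Illustrative sketch of the Settings TP vote tally: a proposed
    setting change passes once enough 'accept' votes from authorized
    voters are recorded. With a single authorized voter, the change
    is immediate."""
    if len(authorized_voters) == 1:
        return True  # sole authorized voter: change takes effect at once
    accepts = sum(
        1 for voter, vote in votes.items()
        if vote == "accept" and voter in authorized_voters
    )
    # votes from unauthorized keys are ignored by the tally above
    return accepts >= threshold
```

For example, with three authorized voters and a threshold of two, a proposal with two "accept" votes takes effect; with only one "accept" vote it remains pending.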

Transaction Family Permissioning

Transaction family permissioning controls which TFs are supported by the current Sawtooth network. All nodes in a Sawtooth network must support the same set of TFs and versions. The applicable setting is sawtooth.validator.transaction_families. For example,
[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"xo", "version":"1.0"}]
By default, any Transaction Family is supported by a Sawtooth network.

One can also restrict transaction processors to their own namespaces (the 6-hex-character TF prefix). When set, the validator prohibits reads and writes outside a TF namespace. For example, the first setting below places no namespace restriction on intkey, while the second confines it to the 1cf126 prefix:

[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"intkey", "version":"1.0"}]
[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"intkey", "version":"1.0", "namespaces":["1cf126"]}]

Here is an example of setting the transaction family permissions on the command line on-chain:

$ sawset proposal create --url http://localhost:8008 --key /etc/sawtooth/keys/validator.priv sawtooth.validator.transaction_families='[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"intkey", "version":"1.0"}]'

The above setting can also be set off-chain in a configuration file, in which case it applies only to the local node. For example, in validator.toml:

"sawtooth.validator.transaction_families" = "[{\"family\":\"sawtooth_settings\", \"version\":\"1.0\"}, {\"family\":\"intkey\", \"version\":\"1.0\"}]"
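A validator's enforcement of sawtooth.validator.transaction_families could be modeled roughly as below. This is a sketch of the rule described above, not Sawtooth's actual code; the function name and helper logic are assumptions.

```python
import json

# The same JSON value shown in the examples above, with a namespace
# restriction on intkey.
ALLOWED = json.loads("""[
  {"family": "sawtooth_settings", "version": "1.0"},
  {"family": "intkey", "version": "1.0", "namespaces": ["1cf126"]}
]""")

def family_permitted(family, version, address=None, allowed=ALLOWED):
    """Illustrative check: the family/version pair must appear in the
    setting, and if the entry lists namespaces, any state address read
    or written must fall under one of those 6-hex-character prefixes."""
    for entry in allowed:
        if entry["family"] == family and entry["version"] == version:
            if address is None or "namespaces" not in entry:
                return True
            return any(address.startswith(ns) for ns in entry["namespaces"])
    return False
```

An intkey write to an address under 1cf126 would pass, while a write outside that prefix, or any xo transaction, would be rejected.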

Policies and Roles

Transaction key permissioning uses policies and roles, which are implemented using the Identity Transaction Family. A policy is just a set of PERMIT_KEY and DENY_KEY rules that are evaluated in the order listed. A role is an authorization that grants permission to perform operations and access data. Roles and policies may be stored on-chain, as blockchain transactions, or off-chain, in configuration files, in which case they apply to the local node only. An example of a role is transactor.transaction_signer.intkey, which authorizes who can sign intkey transaction family transactions. An example of a policy is
"PERMIT_KEY 03eb5418588737e1b3982f4d863e01e13fd0da03ee2ac51b090860db3bdbbf39b2" "DENY_KEY *"
which denies access to all but the signer identified by their public key beginning with 03eb.
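The in-order rule evaluation described above can be sketched as follows. This is illustrative logic, not the Identity TF's actual code; the function name and the default-deny fallback for an empty rule list are assumptions.

```python
def evaluate_policy(rules, public_key):
    """Illustrative sketch of policy evaluation: PERMIT_KEY/DENY_KEY
    rules are checked in the order listed, and the first rule whose
    key matches (or is the '*' wildcard) decides the outcome."""
    for rule in rules:
        action, key = rule.split(maxsplit=1)
        if key == "*" or key == public_key:
            return action == "PERMIT_KEY"
    return False  # no rule matched: deny (assumed default)

# The example policy from the text: permit one signer, deny all others.
policy = [
    "PERMIT_KEY 03eb5418588737e1b3982f4d863e01e13fd0da03ee2ac51b090860db3bdbbf39b2",
    "DENY_KEY *",
]
```

The signer whose public key begins with 03eb is permitted; any other key falls through to the DENY_KEY wildcard and is refused.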

Before roles and policies can be set, sawtooth.identity.allowed_keys must be set to the key(s) of the authorized signers of Identity Transaction Family transactions. For example, the following allows Alice to make Identity TF transactions:

$ sudo sawset proposal create --key /etc/sawtooth/keys/validator.priv sawtooth.identity.allowed_keys=$(cat ~/.sawtooth/keys/

Before roles can be set to policies, policies must be created. A policy is a sequence of PERMIT_KEY and DENY_KEY keywords, each followed by an identity, which is the public key of a signer. The public key is 66 hex digits (a 33-byte compressed secp256k1 key), such as 03305c4911bfdbe36c3be526ba665b0638e4376a920844a351708ec94c89ae70fa. A policy can be set on-chain or off-chain. Here’s an example of an on-chain setting:


$ sawtooth identity policy create dans_policy1 \
   "PERMIT_KEY 02a1035d8a6277adf5b92e8f831f647235224fe4dc8660f8bcddf85707156307b5" \
   "PERMIT_KEY 039e4b768b2c8280501fb7b5c56992088b704fb3ef8fd0efced6204ec975d1382f" \
   "DENY_KEY *"

$ sawtooth identity policy list

In the above example, two public keys are permitted and everyone else is denied. For the public key, use the 66-hex-character public key from a .pub file.

Off-chain settings, which apply only to a single Sawtooth node, are kept by default in the directory /etc/sawtooth/policy/. For example, the file /etc/sawtooth/policy/dans_policy1 may contain


PERMIT_KEY 02a1035d8a6277adf5b92e8f831f647235224fe4dc8660f8bcddf85707156307b5

PERMIT_KEY 039e4b768b2c8280501fb7b5c56992088b704fb3ef8fd0efced6204ec975d1382f


Once we establish policies, we can now set roles to specific policies. For example, if we want to use dans_policy1 above to guide who can submit intkey transactions, set the following on-chain role:

$ sawtooth identity role create transactor.transaction_signer.intkey dans_policy1

Or, if we prefer an off-chain role setting, which applies only to the local node, we can add something like the following to file validator.toml :



"transactor.transaction_signer.intkey" = "dans_policy1"


Note that the key is in quotes, as required by TOML format for dotted keys.

On-chain permissioning is checked with batch submissions from a client and when publishing or validating a block. Off-chain permissioning applies only to batch submissions from a client—not transactions from peer nodes. The latter prevents unnecessary blockchain forks from different permissioning among nodes.

Transaction Key Permissions

Transaction key permissioning controls which clients can submit transactions, based on the signing public key. The relevant permissioning roles are:

  • transactor.transaction_signer.<name of TF> controls which clients can sign transactions for a particular Transaction Family (TF). For example, transactor.transaction_signer.intkey controls which clients can sign intkey TF transactions
  • transactor.transaction_signer controls which clients can sign transactions for any Transaction Family (TF)
  • transactor.batch_signer controls which clients can sign batches (groups of transactions that must be processed atomically—all or none)
  • transactor controls which clients can sign transactions or batches

The most specific role takes precedence over a more general role (for example, for batches, transactor.batch_signer is checked first and transactor is checked only if no rule was found in transactor.batch_signer). By default, anyone can sign a transaction or batch.
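The most-specific-first lookup can be sketched as walking the dotted role name up to its parents. This is illustrative logic, not Sawtooth's actual code; the function name and the "any" default are assumptions.

```python
def find_policy(roles, role_name, default="any"):
    """Illustrative sketch of role precedence: try the full dotted
    role name first, then each shorter parent, returning the first
    configured policy; with nothing configured, anyone may sign."""
    parts = role_name.split(".")
    while parts:
        name = ".".join(parts)
        if name in roles:
            return roles[name]
        parts.pop()  # fall back to the next more general role
    return default
```

So with policies set for both transactor and transactor.batch_signer, a batch is checked against the batch_signer policy, while an intkey transaction with no transaction_signer policy falls back to the general transactor policy.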

Challenge-Response Authorization

When a Sawtooth validator node receives a connection request, it has two authorization modes for the other node—Trust Authorization and Challenge Authorization.

For Trust Authorization, a node trusts connections from other nodes. It checks the public key for role authorizations. This is intended mainly for development and is the default value.

For Challenge Authorization, a connecting node must prove who it is. On a connection request, the node sends a challenge containing a random nonce. The other node signs the nonce and sends it back to prove it controls the private key for its public key. The node verifies the signed nonce matches the one it sent, to guard against replay attacks.
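The round trip above can be sketched as follows. Real Sawtooth nodes sign the nonce with their secp256k1 key; here an HMAC over a shared secret stands in for the signature so the sketch stays standard-library-only, and all function names are illustrative.

```python
import hashlib
import hmac
import secrets

def issue_challenge():
    # The challenger generates a fresh random nonce for each connection
    # attempt; nonces are never reused.
    return secrets.token_bytes(32)

def respond(nonce, secret):
    # The connecting node "signs" the nonce to prove its identity.
    # (Stand-in: HMAC-SHA256 instead of an ECDSA secp256k1 signature.)
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def verify(nonce, response, secret):
    # The challenger checks the response against the nonce it stored,
    # which defeats replays of responses to old challenges.
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A response computed for one nonce fails verification against any other nonce, which is what makes replaying captured responses useless.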

To set the authorization type, use
$ sawtooth-validator --network-auth {trust|challenge}
on the command line, or set network = "trust" or network = "challenge" in the configuration file validator.toml.

Sawtooth Encryption

Cryptography in Sawtooth is used for

  • Digests for transactions, batches, and blocks
  • Signing transaction and batch headers by the client and blocks by the validator
  • Encrypting data in transit—either between peer nodes or between components within a node

Sawtooth Transaction and Batch Signing

A Sawtooth node receives transactions from a client in the form of a batch list. A batch list contains one or more batches. A batch contains one or more transactions that must be processed, in order, as one atomic unit. For example, here’s a batch list with two batches containing two transactions and one transaction, respectively:


The client creating a transaction calculates the SHA-512 digest of the payload and sets it in the transaction header. The digest ensures the payload data in transactions cannot be altered without detection. Each transaction header contains a client-generated nonce value. The nonce makes every transaction unique and prevents anyone from replaying the transaction. The client signs the transaction header and includes the signing public key in the transaction header. The client then signs each batch and includes the batch signer public key in the batch header. The batch signer and transaction signer are usually, but do not have to be, the same. The public key of the batch signer is also in the transaction header to prevent repackaging of the transaction in another batch. The transaction and batch signer public keys, in the transaction and batch header respectively, allow anyone to identify the signers and to verify the signatures.
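The header fields described above can be sketched as a simplified dict (not Sawtooth's actual protobuf TransactionHeader; field and function names here are illustrative): the SHA-512 digest pins the payload, the random nonce makes the transaction unique, and the batcher's public key ties the transaction to its batch.

```python
import hashlib
import secrets

def build_transaction_header(payload, signer_public_key, batcher_public_key):
    """Illustrative sketch of a transaction header (a plain dict, not
    Sawtooth's protobuf): digest, nonce, and the two public keys."""
    return {
        # SHA-512 digest of the payload: any later change is detectable
        "payload_sha512": hashlib.sha512(payload).hexdigest(),
        # random nonce: makes the transaction unique, defeating replays
        "nonce": secrets.token_hex(16),
        "signer_public_key": signer_public_key,
        # binding the batcher's key here prevents repackaging the
        # transaction into a different batch
        "batcher_public_key": batcher_public_key,
    }

def payload_tampered(header, payload):
    # Anyone can recompute the digest to detect altered payload bytes.
    return hashlib.sha512(payload).hexdigest() != header["payload_sha512"]
```

Two headers built for the same payload still differ in their nonces, so each submission remains a distinct, non-replayable transaction.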

All Sawtooth signatures, including client-signed transactions and batches, use ECDSA curve secp256k1. This is the same algorithm and curve used by Bitcoin and Ethereum and allows for signature compatibility with these platforms. The 64-byte signature is the concatenation of the “raw” (unencoded) R and S values of a standard ECDSA signature.

Sawtooth Block Signing and Validation

The Sawtooth validator node creates proposed blocks from transactions it receives. These proposed blocks are signed by the validator and transmitted to the peer nodes on the Sawtooth network. The validator node signs blocks with ECDSA curve secp256k1, the same algorithm used for transaction and batch signatures. Each peer node’s validator validates candidate blocks proposed by other nodes, including verifying the block, batch, and transaction signatures. The digests and signatures not only prevent altering the payload data, but also prevent deleting, reordering, or duplicating transactions within a block or blocks within a blockchain.

Sawtooth Communication Encryption

Sawtooth encrypts data in motion—that is, communications between Sawtooth nodes and between components within a Sawtooth node (such as the Validator, REST API, and Transaction Processor node processes). Sawtooth uses ZeroMQ (ZMQ or 0MQ) for communications. ZMQ encryption and authentication are implemented with CurveZMQ, which uses a 256-bit ECC key with the elliptic curve Curve25519.

Transaction Input/Output lists

All Sawtooth transactions (ledger entries) have a list of input addresses and output addresses in the transaction header. These are optional but highly recommended for two reasons:

  • It allows transactions that do not conflict to be processed in parallel
  • It provides a measure of security by restricting the transaction processor from modifying addresses in state that are not listed in the transaction header.
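The parallel-processing benefit of input/output lists can be sketched as a conflict test: two transactions conflict when one's writes overlap the other's reads or writes, and non-conflicting transactions can run concurrently. This is illustrative scheduler logic, not Sawtooth's actual parallel executor; the function name and dict layout are assumptions.

```python
def conflicts(tx_a, tx_b):
    """Illustrative conflict test from header address lists:
    'inputs' are the state addresses a transaction may read and
    'outputs' the addresses it may write. A conflict exists when
    either transaction writes an address the other reads or writes."""
    a_rw = tx_a["inputs"] | tx_a["outputs"]
    b_rw = tx_b["inputs"] | tx_b["outputs"]
    return bool(tx_a["outputs"] & b_rw) or bool(tx_b["outputs"] & a_rw)
```

A transaction writing address 1cf126aa conflicts with one reading that address, but two transactions touching disjoint addresses do not and could be executed in parallel.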


Observability

Observability is the ability to see what the software is doing. This is important not only for debugging code, but for security analysis. One can see during a breach, or with post-mortem forensics, exactly what went wrong. Sawtooth is observable in that its components log time-stamped entries at various verbosity levels. The -v flag means log warning messages, -vv means log info and warning messages, and -vvv means log debug, info, and warning messages.

Additionally, Sawtooth has event subscriptions. The Sawtooth Events API allows an application to subscribe to “block-commit” events (triggered when a block is committed) and “state-delta” events (triggered when data in the blockchain state changes). Events are extensible in that application-defined events may be created and subscribed to by an application. An event handler could look for anomalies (such as too-frequent or over-limit transactions) and take further action to block or warn on these events.
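An anomaly-watching event handler like the one suggested above could be modeled as a sliding-window rate check. This is an application-side sketch, not the Sawtooth Events API itself; the class name, thresholds, and timestamp handling are assumptions.

```python
from collections import deque

class RateAnomalyDetector:
    """Illustrative event handler: flag a signer whose events arrive
    more often than a per-window limit (a 'too-frequent transactions'
    anomaly), so the application can warn or block."""

    def __init__(self, max_events_per_window=3, window=10.0):
        self.max_events = max_events_per_window
        self.window = window  # seconds
        self.seen = {}  # signer -> deque of recent event timestamps

    def on_event(self, signer, timestamp):
        """Record one event; return True if the signer is anomalous."""
        times = self.seen.setdefault(signer, deque())
        times.append(timestamp)
        # drop timestamps that have aged out of the window
        while times and timestamp - times[0] > self.window:
            times.popleft()
        return len(times) > self.max_events
```

Four events from one signer inside a 10-second window trips the detector, while the same events spread over minutes would not.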

Internal Security Mechanisms

Some security mechanisms are “under the hood” and are not always visible, but they are still important to mention:


This concludes part two of my blog on Hyperledger Sawtooth Security, where I discussed Sawtooth node and transaction processor security. This provides a toolbox to tighten down Sawtooth nodes as your needs require—tightening allowed transaction signers, transaction families, and peer nodes. I also discussed other security mechanisms including node authorization, encryption, observability, and internal security processes. Part three will conclude this series with a discussion on Sawtooth client application security and network security.