How to deploy Hyperledger Fabric on Kubernetes Part II

By | Blog, Hyperledger Fabric

We recently hosted a webinar about deploying Hyperledger Fabric on Kubernetes. It was taught by Alejandro (Sasha) Vicente Grabovetsky and Nicola Paoli from AID:Tech.

The webinar contained detailed, step-by-step instructions showing exactly how to deploy Hyperledger Fabric on Kubernetes. For those who prefer reading to watching, we have prepared a condensed transcript with screenshots that takes you through the process, adapted to recent updates in the Helm charts for the Orderers and Peers.

Are you ready? Let’s dive in!

What we will build

  • Fabric CA

First, we will deploy a Fabric Certificate Authority (CA) serviced by a PostgreSQL database for managing identities.

  • Fabric Orderer

Then, we will deploy an ordering service of several Fabric ordering nodes communicating and establishing consensus over an Apache Kafka cluster. The Fabric Ordering service provides consensus for development (solo) and production (Kafka) networks.

  • Fabric Peer

Finally, we will deploy several Peers and connect them with a channel. We will bind them to a CouchDB database.

What you’ll need

You will need 3 things to get started:

  1. A running Kubernetes cluster.

You can get a free managed cluster from most cloud providers; many give generous free credits.

  2. A domain name for your cluster.

Just get a free or cheap one.

  3. The repository prepared for this webinar.

You can find it in the GitHub repository set up for the webinar.

Once you have a cluster, install an Ingress controller. In this case, we use NGINX.

You’ll also install a certificate manager, which will issue and renew TLS certificates for your sub-domains using a service called Let’s Encrypt.


Let’s get the party started!

The deployment has three key parts:

  • Fabric CA

The Fabric Certificate Authority registers and enrolls identities.

  • Fabric Orderer

Fabric Ordering service provides consensus for development (solo) and production (Kafka) networks.

  • Fabric Peer

Fabric Peer maintains the blockchain by communicating with the Ordering service.

Let’s go through these one by one.

  • Fabric CA

To install Fabric CA, you’ll need to follow these steps:

  • Install Fabric CA Helm Chart
  • Generate Fabric CA Identity
  • Obtain Crypto Material
  • Save Crypto Material to K8S
  • Generate Genesis and Channel

Let us go through them one by one.

  • Install Fabric CA Helm Chart

Let’s install the actual Fabric CA chart. We can run kubectl get pods to get the actual Pod running the CA, and then run kubectl logs to check whether the CA server has started.

helm install stable/hlf-ca -n ca --namespace blockchain -f ./helm_values/ca_values.yaml

CA_POD=$(kubectl get pods -n blockchain -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")

kubectl logs -n blockchain $CA_POD | grep 'Listening on'

The values file for the CA is rather more involved.

First, we define the image tag where we use Fabric 1.2. We also implement an Ingress. This is basically a way for you to access the Certificate Authority from outside the Kubernetes cluster. In the webinar, they used a test domain that they owned to enable access to the CA.

We again enable persistence to store the certificates permanently, so we don’t lose them if the Pod restarts. We also specify a name for our Certificate Authority, which in this case is ca, and the PostgreSQL dependency, so that our CA chart knows to deploy the database chart.

Then, we define some configuration, such as the Hyperledger tools version we use, which customizes one of the config maps in the chart. A config map is a Kubernetes abstraction that stores values which are made available to the Pod.

Importantly, we also specify the Certificate Signing request and an affiliation list, which in this case, is empty.

And finally, we again add a Pod affinity. In this case, we make sure that our Fabric CA server gets deployed to the same Kubernetes machine as our PostgreSQL server.
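Putting the pieces above together, a ca_values.yaml along these lines captures the idea. The field names below are illustrative and follow the general shape of the stable/hlf-ca chart; check the chart’s own values file for the exact keys:

```yaml
# Illustrative sketch of helm_values/ca_values.yaml for the stable/hlf-ca chart.
# Keys mirror the options discussed in the text; verify against the chart itself.
image:
  tag: 1.2.0                  # Fabric CA version

ingress:
  enabled: true
  hosts:
    - ca.example.com          # hypothetical domain; use one you own

persistence:
  enabled: true               # keep certificates across Pod restarts

caName: ca                    # name of our Certificate Authority

postgresql:
  enabled: true               # have the chart deploy its PostgreSQL dependency

config:
  hlfToolsVersion: 1.2.0      # Hyperledger tools version (fills a config map)
  csr: {}                     # Certificate Signing Request details go here
  affiliations: {}            # empty affiliation list, as in the webinar

affinity: {}                  # Pod affinity co-locating the CA with PostgreSQL
```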


  • Generate Fabric CA Identity

Once our CA is running, we will need to enroll the identity of the Fabric CA itself. For the first command, we use kubectl exec to check if a certificate exists in the Fabric CA membership service provider folder.

If it does not, we will run the fabric-ca-client enroll command inside the CA, pointing to the Ingress of the CA. Then, we can run kubectl get ingress on our own machine to obtain the Ingress connected to the Fabric CA service. This will essentially be the domain name we spoke about.

Once you have that set up, you should be able to use the curl command to get the CA info. Bear in mind that this depends on the Certificate Manager setting up the TLS correctly, so you might need to wait for a little while before this works.

kubectl exec -n blockchain $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcerts/cert.pem

kubectl exec -n blockchain $CA_POD -- bash -c 'fabric-ca-client enroll -d -u http://$CA_ADMIN:$CA_PASSWORD@$SERVICE_DNS:7054'

CA_INGRESS=$(kubectl get ingress -n blockchain -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].spec.rules[0].host}")

curl https://$CA_INGRESS/cainfo

  • Obtain Crypto Material

Now that your CA is installed and set up, you can fetch the certificate of the CA server to your own machine by using the fabric-ca-client binary. If you’re using a Mac, you can, for example, use Homebrew to install it.

Then, you can use kubectl exec to execute the register command inside the Fabric CA, to register the admin identity for the organization we will use to host the Orderers and the Peers.

Once we have registered the identity, we can use the Fabric CA client again and actually enroll the identity and get our private key and our certificate. We’ll copy the certificate from signcerts also into the admincerts folder, since this identity is the admin.

FABRIC_CA_CLIENT_HOME=./config fabric-ca-client getcacert -u https://$CA_INGRESS -M ./AidTechMSP

kubectl exec -n blockchain $CA_POD -- fabric-ca-client register --id.name org-admin --id.secret OrgAdm1nPW --id.attrs 'admin=true:ecert'

FABRIC_CA_CLIENT_HOME=./config fabric-ca-client enroll -u https://org-admin:OrgAdm1nPW@$CA_INGRESS -M ./AidTechMSP

mkdir -p ./config/AidTechMSP/admincerts

cp ./config/AidTechMSP/signcerts/* ./config/AidTechMSP/admincerts
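After the enroll and the copy above, the local MSP folder should look roughly like this. The directory names are the ones fabric-ca-client normally creates; admincerts is the one we added by hand:

```
config/AidTechMSP/
├── cacerts/      # certificate of the CA, from getcacert
├── keystore/     # the admin's private key (*_sk)
├── signcerts/    # the admin's certificate (cert.pem)
└── admincerts/   # copy of signcerts, marking this identity as the admin
```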

  • Save Crypto Material to K8S

Notice that when configuring the CA client, we need to specify the Let’s Encrypt authority’s certificate for the TLS connection, because the Fabric CA client will check that the certificate corresponds to the one it sees when connecting to the CA server. Since our applications will use the certificate and the key, we need to use kubectl create secret generic to add these files to Kubernetes as secrets:

ORG_CERT=$(ls ./config/AidTechMSP/admincerts/cert.pem)

kubectl create secret generic -n blockchain hlf--org-admincert --from-file=cert.pem=$ORG_CERT

ORG_KEY=$(ls ./config/AidTechMSP/keystore/*_sk)

kubectl create secret generic -n blockchain hlf--org-adminkey --from-file=key.pem=$ORG_KEY

CA_CERT=$(ls ./config/AidTechMSP/cacerts/*.pem)

kubectl create secret generic -n blockchain hlf--ca-cert --from-file=cert.pem=$CA_CERT

  • Generate Genesis and Channel

Now we can run configtxgen. We use it twice, with the two profiles we defined, to produce a genesis block for the Orderers and a channel creation transaction for the Peers. We then create two Kubernetes secrets to store them on our cluster. These secrets are similar to the config maps we defined before: they are abstractions that hold values, but they store them encrypted and in a secure manner.

cd ./config

configtxgen -profile OrdererGenesis -outputBlock ./genesis.block

configtxgen -profile MyChannel -channelID mychannel -outputCreateChannelTx ./mychannel.tx

kubectl create secret generic -n blockchain hlf--genesis --from-file=genesis.block

kubectl create secret generic -n blockchain hlf--channel --from-file=mychannel.tx
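The two profiles referenced above live in the configtx.yaml inside ./config. A heavily abridged sketch of its shape is below; organization details, capabilities and policies are omitted, and the real file in the webinar repository is the reference:

```yaml
# Abridged sketch of configtx.yaml; profile names match the configtxgen commands above.
Profiles:
  OrdererGenesis:             # used with -outputBlock to create genesis.block
    Orderer:
      OrdererType: kafka
    Consortiums:
      MyConsortium:
        Organizations: []     # the AidTech organization is defined elsewhere in the file
  MyChannel:                  # used with -outputCreateChannelTx to create mychannel.tx
    Consortium: MyConsortium
    Application:
      Organizations: []       # again, references the organization defined above
```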

  • Fabric Orderer

Now that we have a functioning Fabric CA, we can get to the Orderer. We will need to follow four steps to get this running.

  • Install Kafka Helm Chart
  • Setup Orderer Identity
  • Save Orderer crypto-material to Kubernetes
  • Install Fabric Orderer Helm Chart

Let’s go through them one by one:

  • Install Kafka Helm Chart

First, we need to install the Kafka cluster itself, which is made easy with an incubator chart available:

helm install incubator/kafka -n kafka-hlf --namespace blockchain -f ./helm_values/kafkahlf_values.yaml

Note that you need some very specific configuration to make this work correctly.

First, we need to ensure we have at least four replicas of Kafka, in order to make it properly crash fault tolerant. This means that your Kubernetes cluster must be composed of at least four machines. We also specify, again, the image and the tag that we want to use, and we add role-based access control to our Kubernetes cluster.

Again, we use anti-affinities to make sure that the Kafka pods end up on different Kubernetes machines, because otherwise several of your Kafka pods could land on one machine, and then a single machine crashing could take your network down.

And here is the most important part. The configurationOverrides section contains several options that are necessary to work with Fabric. All these options are necessary, but the most important one is the last, log.retention.ms, which specifies that we never want to forget a record, because by default Kafka forgets records after a retention period of typically around two weeks. This is a common mistake.
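For reference, the Kafka settings recommended in the Fabric documentation give a configurationOverrides section roughly like this (treat the values as a sketch and double-check them against the kafkahlf_values.yaml in the webinar repository):

```yaml
replicas: 4                                 # at least 4 brokers for crash tolerance

configurationOverrides:
  "unclean.leader.election.enable": false   # never elect an out-of-sync replica as leader
  "min.insync.replicas": 2
  "default.replication.factor": 3           # must exceed min.insync.replicas
  "message.max.bytes": "103809024"          # ~100 MiB, to fit large Fabric batches
  "replica.fetch.max.bytes": "103809024"
  "log.retention.ms": -1                    # never delete records -- the crucial one
```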


  • Set up Orderer Identity

Now, we can set up the Orderer identity. We first use kubectl exec to connect to the fabric-ca and register the identity of the Orderer with the CA. We then enroll the Orderer and obtain the cryptographic material which will identify it. In this case, we specify a dummy password "ord${NUM}_pw", but in a real deployment you should use a complex string instead (e.g. alphanumeric of length at least 12).

Here we work with the first Orderer; if you want to install the second one, just change the initial export to set NUM to 2.

export NUM=1

kubectl exec $CA_POD -n blockchain -- fabric-ca-client register --id.name ord${NUM} --id.secret ord${NUM}_pw --id.type orderer

FABRIC_CA_CLIENT_HOME=./config fabric-ca-client enroll -d -u https://ord${NUM}:ord${NUM}_pw@$CA_INGRESS -M ord${NUM}_MSP

  • Save Orderer crypto-material to Kubernetes

Having created the crypto-material for the Orderer, we can save it to Kubernetes, so that we can utilise it when we launch the appropriate Helm chart. Below we show how to save the certificate and key of the Orderer.

NODE_CERT=$(ls ./config/ord${NUM}_MSP/signcerts/*.pem)

kubectl create secret generic -n blockchain hlf--ord${NUM}-idcert --from-file=cert.pem=$NODE_CERT

NODE_KEY=$(ls ./config/ord${NUM}_MSP/keystore/*_sk)

kubectl create secret generic -n blockchain hlf--ord${NUM}-idkey --from-file=key.pem=$NODE_KEY

  • Install Fabric Orderer Helm Chart

Now we can install the actual Orderers and check that they have been correctly initialised.

helm install stable/hlf-ord -n ord${NUM} --namespace blockchain -f ./helm_values/ord${NUM}_values.yaml

ORD_POD=$(kubectl get pods -n blockchain -l "app=hlf-ord,release=ord${NUM}" -o jsonpath="{.items[0].metadata.name}")

kubectl logs -n blockchain $ORD_POD | grep 'completeInitialization'

We also specified the consensus type, kafka, and the ID of the organization’s membership service provider (in our case, AidTechMSP). Finally, we specify a set of secrets that we need to correctly connect to the Certificate Authority, such as the caServerTls secret, the genesis block secret (in this case called hlf--genesis), and the admin certificate for the organization.
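A sketch of what an ord1_values.yaml might contain, based on the options just described. The key names are illustrative; the stable/hlf-ord chart’s values file is the authority:

```yaml
# Illustrative sketch of helm_values/ord1_values.yaml for the stable/hlf-ord chart.
image:
  tag: 1.2.0

ord:
  type: kafka                 # consensus type
  mspID: AidTechMSP           # the organization's MSP ID

secrets:
  ord:
    cert: hlf--ord1-idcert    # Orderer certificate saved earlier
    key: hlf--ord1-idkey      # Orderer private key saved earlier
  caServerTls: hlf--ca-cert   # CA certificate for the TLS connection
  genesis: hlf--genesis       # genesis block secret
  adminCert: hlf--org-admincert
```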

We can then get the Orderer Pod using kubectl get pods and check that the Orderer has started by running kubectl logs and filtering for the string completeInitialization. And that’s it: you have a basic ordering service using Kafka.


  • Fabric Peer

It is finally time to install Fabric Peer, which maintains the blockchain ledger. It involves several steps, most of which should look familiar by now:

  • Install CouchDB Helm Chart
  • Set up Peer Identity
  • Save Peer crypto-material to Kubernetes
  • Install Fabric Peer Helm Chart
  • Create Channel
  • Fetch and Join Channel

Let us go through them one by one.

  • Install CouchDB Helm Chart

The first step is to install the CouchDB database. It is similar to installing the PostgreSQL chart. Once you’ve deployed it, use the Kubernetes logs to check if it is running.

The values file in this case is really simple. It specifies the version of the Hyperledger Fabric CouchDB image, which in this case is 0.4.10. We also again specify persistence, so that the CouchDB database retains its data, and an anti-affinity so that CouchDB pods are deployed on different Kubernetes machines.
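That short values file might look roughly like this (a sketch with illustrative keys; consult the chart’s own values file):

```yaml
# Illustrative sketch of a CouchDB values file for the CouchDB chart.
image:
  tag: 0.4.10                 # hyperledger/fabric-couchdb version

persistence:
  enabled: true               # keep ledger state across Pod restarts

affinity:
  podAntiAffinity: {}         # spread CouchDB pods across Kubernetes machines
```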

  • Set up Peer Identity

To set up the peer identity, we get the CA_PASSWORD, which again is a one-time password generated automatically by the chart, and we register the peer with the Certificate Authority just like we did with the Orderer; we just specify a different ID type, in this case peer. The peer will periodically try to enroll with the CA until it succeeds; once it does, you will see the corresponding messages appear in the logs.

kubectl exec $CA_POD -n blockchain -- fabric-ca-client register --id.name peer${NUM} --id.secret peer${NUM}_pw --id.type peer

FABRIC_CA_CLIENT_HOME=./config fabric-ca-client enroll -d -u https://peer${NUM}:peer${NUM}_pw@$CA_INGRESS -M peer${NUM}_MSP

  • Save Peer crypto-material to Kubernetes

Just as with the Orderer, we save the crypto-material for the Peer to Kubernetes. Again, we save the certificate and key of the Peer.

NODE_CERT=$(ls ./config/peer${NUM}_MSP/signcerts/*.pem)

kubectl create secret generic -n blockchain hlf--peer${NUM}-idcert --from-file=cert.pem=$NODE_CERT

NODE_KEY=$(ls ./config/peer${NUM}_MSP/keystore/*_sk)

kubectl create secret generic -n blockchain hlf--peer${NUM}-idkey --from-file=key.pem=$NODE_KEY

  • Install Fabric Peer Helm Chart

Now let’s install the Fabric Peer Helm Chart.

helm install stable/hlf-peer -n peer${NUM} --namespace blockchain -f ./helm_values/peer${NUM}_values.yaml

PEER_POD=$(kubectl get pods -n blockchain -l "app=hlf-peer,release=peer${NUM}" -o jsonpath="{.items[0].metadata.name}")

kubectl logs -n blockchain $PEER_POD | grep 'Starting peer'

The values file looks very similar to the Orderer values, since it mentions the fabric-ca address and the peer’s username on the Certificate Authority. It also specifies that we are using CouchDB and the name of the CouchDB Helm deployment. Again, we note the secrets that we need, such as the CA TLS secret, which we need to communicate securely; the channel secret with the channel transaction, which will allow the peer to create and join channels; and the organization admin certificate and key, which are also needed for the peer to join the channel and request access to the network.
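A sketch of a peer1_values.yaml along the lines just described. As before, the exact keys are illustrative (the CouchDB deployment name cdb-peer1 is made up) and the stable/hlf-peer chart’s values file is the reference:

```yaml
# Illustrative sketch of helm_values/peer1_values.yaml for the stable/hlf-peer chart.
image:
  tag: 1.2.0

peer:
  databaseType: CouchDB
  couchdbInstance: cdb-peer1  # name of the CouchDB Helm deployment (hypothetical)
  mspID: AidTechMSP

secrets:
  peer:
    cert: hlf--peer1-idcert
    key: hlf--peer1-idkey
  caServerTls: hlf--ca-cert   # needed to talk to the CA securely
  channel: hlf--channel       # channel transaction secret
  adminCert: hlf--org-admincert
  adminKey: hlf--org-adminkey
```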


  • Create Channel

Once the first Peer has been created, you can create the channel. For this, you will need to specify the address of the Orderer, the name of the channel, and the location of the channel transaction.

kubectl exec -n blockchain $PEER_POD -- peer channel create -o ord1-hlf-ord.blockchain.svc.cluster.local:7050 -c mychannel -f /hl_config/channel/mychannel.tx

  • Fetch and Join Channel

Once the channel is created, you will need to fetch it and join it on every peer that you create. You can do this by running the peer channel fetch config command inside the Peer. And then run the peer channel join inside each peer container as well. Once all of that is done, you can run peer channel list to check that the peer has indeed joined the channel.

kubectl exec -n blockchain $PEER_POD -- peer channel fetch config /var/hyperledger/mychannel.block -c mychannel -o ord1-hlf-ord.blockchain.svc.cluster.local:7050

kubectl exec -n blockchain $PEER_POD -- bash -c 'peer channel join -b /var/hyperledger/mychannel.block'

kubectl exec -n blockchain $PEER_POD -- peer channel list

And that’s it! If you followed along with all the steps, you should have successfully deployed Hyperledger Fabric on Kubernetes!

Watch the replay of the webinar here, or download the slides and other resources on the same page. You can also head straight to the GitHub repository set up for the webinar.


Hyperledger Sawtooth Blockchain Security (Part One)

By | Blog

By Dan Anderson, Intel 



Hyperledger Sawtooth is a permissioned enterprise blockchain platform that is built in a modular fashion. The Sawtooth blockchain (essentially an electronic ledger) is immutable (transactions cannot be deleted or modified), highly available, transparent (all transactions are visible), distributed and has several security mechanisms, which I describe below.

By the way, for the record, there is no Sawtoothium, Sawtooth Coin, Sawbucks, STC, or anything like that previously built into the Sawtooth platform. Sawtooth has no digital currency and has no miners. Of course, a Sawtooth application can create and manage digital currency or other assets.

Security for an enterprise blockchain is important because it’s shared among frenemies—mutually distrusting organizations that need to work together. This is the first of three articles on Sawtooth Security:

  • Part 1: Sawtooth Consensus Algorithms
  • Part 2: Sawtooth Node and Transaction Processor Security
  • Part 3: Client Application Security and Network Security

Consensus Algorithms

In Sawtooth, a major task of a consensus algorithm is leader selection, deciding which competing peer node can publish a new block on the blockchain. What are the security implications of a consensus algorithm? A secure consensus algorithm is important to prevent a “rogue” node (bad actor) from “gaming” the system. A bad actor can publish transactions favorable to it or a third-party before or in lieu of transactions from other nodes. In several applications, just having better timing is a big advantage—consider the stock market or real estate or any market with scarce goods at a fluctuating price.

Prelude: Byzantine Generals Problem


Before discussing consensus algorithm details, it’s good to review the Byzantine Generals Problem. The problem was first proposed in 1982 by Leslie Lamport et al. Basically, you have n generals deciding whether to attack or retreat. But some generals may be malicious traitors (bad actors) and send the wrong advice. The goal is to agree on a common plan (attack or retreat) that is not influenced by the malicious generals. The classical solution to the Byzantine Generals Problem requires (3m + 1) total generals to safeguard against the votes of m malicious generals. For example, tolerating a single traitor (m = 1) requires at least four generals.

Byzantine Fault Tolerance (BFT) v. Crash Fault Tolerance (CFT)

In general (not a pun), Byzantine Fault Tolerant (BFT) solutions can withstand malicious nodes (bad actors) among the nodes. Crash Fault Tolerant (CFT) solutions assume no node is malicious—that is, there are no bad actors. A node can crash or disappear from the network, but it does not misbehave. CFT solutions are usually less expensive than BFT ones.

Types of Consensus Algorithms

A consensus algorithm decides which of several competing block candidates from multiple nodes will be added to the blockchain. Consensus algorithms come in two classes: Nakamoto, or lottery-like, consensus and Classical, or voting-like, consensus.

Nakamoto Consensus

Nakamoto-style consensus algorithms all use some kind of lottery-like system to pick a winner. This class includes Proof of Work (PoW), the consensus algorithm used in Bitcoin. PoW selects a winner by solving a cryptographic puzzle: finding a SHA-256 result with a required number of leading zeros. Unfortunately, PoW is wasteful: Bitcoin mining is estimated to draw roughly 2.5 gigawatts of power; as a comparison, Ireland draws about 3.1 gigawatts.

Sawtooth’s Proof of Elapsed Time (PoET) is also a Nakamoto-style consensus algorithm. PoET selects as the winner the node with the first expired randomly generated wait time. More on PoET later.

Another Nakamoto-style consensus algorithm, still in the experimental stage, is Proof of Stake (PoS), where the winner is the node with the most of something (wealth, age, etc.).

All Nakamoto-style consensus algorithms can fork—that is, end up with two competing blockchains with different blocks appended to the end. Forking consensus algorithms have various resolution methods to resolve forks.

Classical Consensus

Classical Consensus uses an agreement or voting mechanism to select a leader. Examples include Practical Byzantine Fault Tolerance (PBFT) or Raft.

PBFT uses a state machine and selects a leader by block election. PBFT does not fork. PBFT uses a three-phase, network-intensive algorithm (n^2 messages), so it is not scalable to large networks.

Raft consensus elects a leader for an arbitrary term of time. If the leader times out, the leader is replaced. Raft is both fast and CFT, but it is not BFT. Also Raft does not fork.

Sawtooth’s Pluggable Consensus

Sawtooth supports “pluggable” consensus algorithms—that is, it is configurable. Not only is the consensus configurable at blockchain genesis (block 0), but the consensus algorithm can be changed at any point after blockchain creation with an on-chain setting, sawtooth.consensus.algorithm.

Consensus algorithms supported by Sawtooth are:

  • DevMode – a simple consensus algorithm for development use only
  • PoET CFT – PoET runs without SGX hardware, which is CFT
  • PoET SGX – PoET runs with SGX hardware, which is BFT
  • Raft – uses an elected leader and is faster than PoET. It is only CFT, not BFT

Other consensus algorithms are in the works, such as PBFT. The Sawtooth community encourages contributions of other consensus algorithms by third parties. There are many consensus algorithms, each with advantages and disadvantages, and consensus in general is an active area of research!

Proof of Elapsed Time (PoET) Consensus

Proof of Elapsed Time (PoET) consensus is one of the consensus algorithms available with Sawtooth. PoET comes in two flavors, both for production use:

  • PoET SGX runs with SGX hardware. It is BFT
  • PoET CFT (also called PoET Simulator Mode) runs without SGX hardware. It is only CFT

Prelude: Intel® Software Guard Extensions (SGX) Overview

But before we can discuss PoET, we need to review SGX. A TEE (Trusted Execution Environment) runs code in a protected region of memory, called an enclave. Intel’s implementation of a TEE is Software Guard Extensions (SGX), first released in 2015. An SGX enclave can be thought of as a “reverse sandbox” or “fort.” The enclave prevents the rest of the system from accessing protected code or data residing inside the enclave. This is implemented with encryption.

Contrast an SGX enclave with a more traditional “sandbox,” where the code in the sandbox is prevented from accessing code or data outside the sandbox. An SGX enclave is the reverse: it prevents malicious actors (including a rogue operating system or application code) from accessing anything inside the enclave. This allows the enclave to execute safely in a hostile outside environment. SGX does not have to trust the OS or other software on a host.

Another important SGX function is its ability to “attest” to the code that is in the enclave: it provides high confidence that the code is authentic and has not been tampered with before being loaded.

SGX allows multiple enclaves on the same host. Separate enclaves have separate functions and run independently of each other. The code and data inside an enclave should be as small as possible: limiting enclave size reduces exposure to possible vulnerabilities from outside the enclave. So, for example, hosting a programming language interpreter would be inappropriate for an enclave.

Proof of Elapsed Time – Software Guard Extensions (PoET SGX)

Proof of Elapsed Time (PoET) consensus, as mentioned above, is a Nakamoto-style consensus algorithm. That is, it uses a lottery-based mechanism. Each node generates a random wait time value in a secure SGX enclave, then waits that many seconds. The node with the timer that expires first wins and can add its proposed block to the blockchain. The random wait time value is signed in the enclave, and all other nodes verify the timer signature.

The PoET timer is implemented to run within an SGX enclave. The SGX enclave securely generates a tamper-proof random wait time value and then signs a certificate with that value. Outside the enclave, each node waits the required amount of time. After the timer expires, the SGX attestation is sent to the other network nodes. The peer nodes verify the wait time signature generated by the winning node, and the winning node gets to publish its proposed block onto the blockchain.
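The lottery at the heart of PoET can be illustrated with a toy script. This sketch omits the enclave, the signatures and the attestation entirely; it only shows the "shortest random wait wins" selection, and the node names and wait range are made up:

```shell
#!/usr/bin/env bash
# Toy PoET-style lottery: each "node" draws a random wait time;
# the node whose timer would expire first wins the right to publish.
RANDOM=42                      # seed bash's RNG so the run is reproducible

winner=""
best=1000000
for node in node1 node2 node3 node4; do
  wait_ms=$(( RANDOM % 1000 )) # stand-in for the enclave-generated wait time
  echo "$node drew a wait of ${wait_ms} ms"
  if [ "$wait_ms" -lt "$best" ]; then
    best=$wait_ms
    winner=$node
  fi
done

echo "$winner publishes the next block (waited ${best} ms)"
```

In the real protocol, of course, the draw happens inside the enclave and every peer verifies the winner’s signed wait certificate before accepting the block.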

PoET has multiple defense-in-depth tests to prevent cheating by a rogue node. They are:

  • Z Test confirms that a block-claiming validator is not winning too frequently
  • C Test requires that a new node must wait C blocks after admission before its blocks are accepted. This is to prevent gaming identities and some obscure corner scenario
  • K Test restricts a node to publishing at most K blocks before its peers require the node to recertify itself

Proof of Elapsed Time – Crash Fault Tolerant (PoET CFT)

PoET is also available without SGX. It uses the same algorithm and shares the same code as PoET SGX, except that the enclave module is simulated. Because the SGX protections are simulated, PoET CFT is not BFT. Keep in mind that other consensus algorithms, such as Raft or Kafka, are also CFT and not BFT. No special hardware is required for PoET CFT, and it is still a stable algorithm.



This concludes part one of my blog on Hyperledger Sawtooth Security, where I discussed consensus algorithms. As you can see, the choice of consensus algorithm is a trade-off between performance, security (resistance to bad actors), and scalability (number of network nodes). Part two will continue with a discussion on Sawtooth node and transaction processor security.

Deploying Hyperledger Fabric on Kubernetes

By | Blog

Have you always wanted to know how to deploy Hyperledger Fabric on Kubernetes? Then we have something just for you!

We recently hosted a webinar about deploying Hyperledger Fabric on Kubernetes.

Deploying a multi-component system like Hyperledger Fabric to production is challenging, and the event was aimed at developers and DevOps specialists with limited knowledge in the area.

The webinar contained four sections. First, the presenters introduced Hyperledger Fabric. They then covered Kubernetes, a platform for deploying micro-services. In the third and main section, the presenters showed step-by-step how to deploy the Fabric Certificate Authority, the Fabric Orderer and Fabric Peers. Each basic concept (like Certificate Authority or Helm chart) was explained. Finally, the presenters shared resources for further learning and involvement.

Alejandro (Sasha) Vicente Grabovetsky, CTO and Chief Data Scientist at AID:Tech, and Nicola Paoli, Lead Blockchain Engineer, led the webinar. AID:Tech is an award-winning company that focuses on the delivery of digital entitlements, including welfare, aid, remittance and donations using Blockchain & Digital Identity.

This webinar provides a helpful, hands-on guide to deploying Hyperledger Fabric on Kubernetes. Watch the replay here, or download the webinar slides and other resources on the same page. You can also head straight to the Github repository set up for the webinar.

Be sure to stay tuned for Part II of this post, where we will actually walk through how to deploy Hyperledger Fabric on Kubernetes. Also, make sure you do not miss any upcoming webinars from Hyperledger by signing up here!


6 Blockchain Best Practices Enterprises Need to Know

By | Blog

The use of blockchains in business IT is still emerging as companies continue to explore new ways to use the technology. Its strength as a platform to build new generations of transactional applications that will allow users to establish trust and maintain high security for their data and processes is one of its greatest promises and attractions.

To help make blockchains more approachable, here are six best practices from Hyperledger that can be expanded and incorporated by businesses as they dive into blockchains to help their companies deal with their data, security and business processes in the future.

1. Secure today does not mean secure tomorrow

When people hear about blockchain, one of the things they learn is that it is secure and cryptographically protected. With that information, they often conclude that they need not worry about putting personally identifiable information (PII) on a blockchain. The problem is that such an assumption fails to consider the future: as soon as hackers and other bad actors eventually break the cryptographic algorithms that protect blockchain data today, all that data will become a treasure trove for criminals. Moreover, emerging technologies such as quantum computing could one day make successful attacks on the security of cryptographic keys possible.

So, when it comes to security, even with secure systems like blockchain, things are only as secure as they are today. Tomorrow there may be mechanisms to crack those cryptographic keys, allowing attackers to see all the information that’s put onto a blockchain. With that in mind, a critical best practice is that users should never put PII on their blockchains. In today’s use of blockchain, this best practice is table stakes.

2. Never store large files on a blockchain

Blockchains work by using large numbers of computers that are basically replicating data. So, when data is stored on the blockchain, it gets sent to every other node or peer on the blockchain network. When that happens, storage and compute costs can go up exponentially. To avoid those kinds of added costs, other means of storing and replicating that data should be used, including options such as Amazon Simple Storage Service (Amazon S3), Google Cloud Platform’s Filestore or other cloud services. That way, when users store big files, they are not paying extra to store, transfer and replicate multiple copies of the same data.

Instead, when using a blockchain, users can store a pointer or a link to a file but keep the actual data on whatever cloud platform they are using. They can also include a hash of the file’s contents, computed when the file is stored; when the file is retrieved, it can be hashed again with the same algorithm and the two digests compared. If they match, the user knows the file is unchanged and that nobody has gone in and altered its contents.
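The store-a-hash-not-the-file idea can be sketched in a few shell commands (the file name and contents here are made up):

```shell
#!/usr/bin/env bash
# Keep the large file off-chain; put only its SHA-256 digest on the blockchain.
echo "invoice #123: 100 EUR" > invoice.txt              # the off-chain file

HASH_ON_CHAIN=$(sha256sum invoice.txt | cut -d' ' -f1)  # this 64-char digest goes on-chain

# Later, after fetching the file from cloud storage, recompute and compare:
HASH_ON_READ=$(sha256sum invoice.txt | cut -d' ' -f1)

if [ "$HASH_ON_CHAIN" = "$HASH_ON_READ" ]; then
  echo "file unchanged"
else
  echo "file was tampered with"
fi
```

Because the digest is fixed-size, the on-chain record stays tiny no matter how large the off-chain file grows.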

3. If you don’t want your data to be public, use a permissioned blockchain

Not all blockchains are public, where anybody has access to the information and can add transactions and read the data that’s in it. When enterprises want to keep things private, that’s where permissioned blockchains come in – data can be stored, accessed and used only between the partners who need to have access. That’s the main reason such permissioned blockchains exist. While things like Bitcoin and Ethereum are public blockchains, the Hyperledger projects are mostly permissioned blockchains. And that’s exactly why they are suitable for business. If your data must remain private, then use a permissioned blockchain. Some people call them private blockchains or consortium blockchains, but those normally fall into the permissioned blockchain space.

4. Create a governance structure for the blockchain

With blockchains, the challenges aren’t technical. Instead, the challenges involve the governance model that is chosen. To keep things working smoothly, it’s best to define the governance structure upfront and even before you dive into blockchain. For example, be sure to decide things like how new users or organizations will be added to a blockchain network, as well as how to determine if a user or organization should be cut out of the network. To protect the blockchain, the data and the rest, be sure to include a mechanism to deal with and remove bad actors who were previously allowed into the network. The governance structure can also address procedures for many other possible situations, as well as how to cope with the politics of the user group. Just remember, these things are still evolving, so those governance procedures will likely change over time. You can learn more about governance and how to manage it in blockchain networks via this webinar we did recently with MonetaGo.

5. Decide on performance and scalability requirements

Different tasks may require different blockchains. As a best practice, architects must understand the requirements for their specific use cases and ensure that their blockchains meet those requirements, just like they would evaluate any other technology. Certain technologies fit better with specific requirements, so architects must decide on their trade-offs. Are they okay giving up scalability for performance, or are they okay giving up performance to get needed scalability? Those are the kinds of decisions that need to be made early on with each deployment and use case. With the different Hyperledger frameworks, enterprises can set up their own blockchain networks as needed. Enterprises might have multiple blockchains, one based on performance, one based on scalability, allowing them to hone in on what they need.

6. Analyze blockchain business cases early

To ensure success for the project, says Jesse Chenard, the CEO of finance start-up MonetaGo, IT leaders should ask themselves lots of detailed questions early in the process about their goals for a blockchain initiative.

“You really need to analyze the business case and go through a checklist,” says Chenard. “Do multiple people need access to the data? Do you need an audit trail? Do you actually even need a blockchain? Does it make sense for us?” For some projects, the use of a database can be the right choice, according to Chenard.

Enterprises should approach the project by designing and building a strategy that will help the project reach its goals, and not just aimlessly look to create a blockchain just to dive into the latest technology, he says. At the same time, enterprise IT leaders shouldn’t try to plan ahead for every feature and capability for their blockchains because some will become more apparent later and can be added as the project proceeds, says Chenard.

Blockchain can be a great choice for projects that rely on security, controlled access, accountability, transparency and efficiency in a wide range of fields, from finance to banking, supply chains, manufacturing and more. Having well laid plans, goals and best practices can all help enterprise IT leaders explore the growing blockchain ecosystem as they work to capture its strengths for their businesses.

New Keynote Speakers Announced for Hyperledger Global Forum

By | Blog, Events

With over 75 sessions, keynotes, hands-on technical workshops, social activities, evening events, and more, Hyperledger Global Forum gives you a unique opportunity to collaborate with the Hyperledger community, make new connections, learn about the latest production deployments, and further advance your blockchain skills. In addition to previously announced keynote speakers, new keynote speakers include:

  • Frank Yiannas, Vice President of Food Safety, Walmart
  • David Treat, Managing Director, Accenture

Session Highlights Include:

Technical Track:

  • Approaches to Consortia Governance and Access Control in Hyperledger Fabric Applications – Mark Rakhmilevich, Oracle
  • Chaincode Best Practices – Sheehan Anderson, State Street
  • Lessons Learned Creating a Usable, Real-world Web Application using Fabric/Composer – Waleed El Sayed & Markus Stauffiger, 4eyes GmbH

Innovation Theater Track:

  • MyCuID: Blockchains, Credentials and Credit Unions – Julie Esser, CULedger
  • Live Demo of Omnitude ID Utilizing Hyperledger Indy, Fabric, and Sovrin – James Worthington, Omnitude
  • Giving Money Identity and Purpose – Raj Cherla, Spoole Systems Pvt Ltd

Business Track:

  • Panel Discussion: Hyperledger in Supply Chains – Kari Korpela, Lappeenranta University of Technology; Petr Novotny, IBM Research; Yu Zhang, Huawei and moderated by Allison Clift-Jennings, Filament
  • Panel Discussion: Where Are We Now with Identity? – Daniel Haudenschild, Swisscom Blockchain AG; James Worthington, Omnitude and moderated by Heather Dahl, The Sovrin Foundation
  • Financial Inclusion: How DLT Provides Hope For 1.7 Billion Unbanked People – Matthew Davie, Kiva

Take a look at the full schedule!

Secure your spot now and save up to $150 with the current registration rate, available through November 25. Register now >>

Hyperledger Fabric Now Supports Ethereum

By | Blog, Hyperledger Fabric

Guest post: Swetha Repakula of IBM 

At this point, there are few that haven’t heard the word blockchain. Despite all the buzz and hype around the technology, the learning curve is steep. When we meet people new to the technology, we like to break down blockchain into four main parts:

  1. The ledger technology: How is the data stored? Data could be many things such as the transaction log, or just the current state of the world. Each node could use its own database, and the database need not be the same across all the nodes in the network.
  2. The consensus mechanism: How are all the participants coming to agreement about the block ordering and current state?
  3. Membership Services: How is identity managed and who is allowed into the network?
  4. Smart Contract Runtime or Application: What smart contracts can I deploy or what kind of application is this?

We believe most blockchain technologies can be separated into four parts and that developers/consumers should have the ability to choose at each of those levels. Take Hyperledger Fabric for example:

  1. The Ledger Technology: The actual blockchain, or transaction log, is stored in the file system of the peer using merkle hashes while the current state of the world is stored separately in a database for quick lookup.
  2. The Consensus Mechanism: The combined effect of the endorsement model and the ordering service achieve consensus in the network.
  3. Membership Services: Fabric has the concept of Membership Service Providers (MSPs), which manage identity by issuing certificates, validating them, and authenticating users. The MSP is a core part of permissioning in Fabric.
  4. Smart Contract Runtime: Fabric mainly supports smart contracts that are written in Go or Node.js.

In the spirit of expanding choices, Hyperledger Fabric now supports Ethereum Virtual Machine (EVM) bytecode smart contracts. Contracts can now be written in languages such as Solidity or Vyper. Along with introducing a new smart contract runtime, Fabric also has a corresponding web3 provider which can be used to develop decentralized applications (DApps) using web3.js. This new feature comes as part of the 1.3 release and is motivated by the goal of enabling developers to migrate or create DApps for a permissioned platform.

Smart Contract Runtimes

Before diving into the details of the EVM integration, let’s expand on the concept of a smart contract runtime. In general, the runtime refers to the languages supported by a specific platform, but there are many other considerations that should be weighed. Due to the nature of blockchain, these runtimes have to be evaluated in a distributed context. Since many nodes, if not all of them, have to run and store these contracts, the network has to be mindful of the runtimes it supports. Languages affect how computationally intensive an arbitrary contract can be, as well as how deterministic contracts are. Though neither is necessarily a limitation, they can place an unfair burden on the contract developer. Another important factor is which languages the contract developers themselves are experienced in. With the emergence of blockchain, there has been an increase in developers who do not have technical backgrounds, so picking up new languages is not always practical. The implications of smart contract runtimes make choosing a blockchain network even more difficult. Through the introduction of an EVM, we hope to ensure that Solidity smart contracts and permissioned networks are not mutually exclusive.


As part of integrating an EVM, we wanted to recreate some of the developer experience of Ethereum. The integration can therefore be broken into two key pieces: an EVM user chaincode and a web3 provider, Fab3.

User Chaincode

The EVM user chaincode is a wrapper around the Hyperledger Burrow EVM. We have also added functionality to enable queries about accounts and contract code. Below are a couple of the key design decisions made as part of the integration.


Ethereum has two types of accounts: Externally Owned Accounts (EOAs) and contract accounts. An EOA is essentially an address generated from a user’s public key, together with a balance of ether. As part of this work, Fabric does not introduce ether or any other token, so EOAs are not explicitly stored. Instead, a user account address is generated on the fly from the user’s public key.

Contract accounts contain the runtime EVM bytecode for a contract. Following Ethereum, the EVM chaincode will be storing these types of accounts on the chain. Smart contract deployment through the EVM will not need a manual step of installing a smart contract like in the Fabric workflow.


Every instruction in the EVM requires a certain amount of gas. For every transaction that is run through the EVM, enough gas must be provided to ensure completion. This protects “miners” from denial-of-service attacks caused by infinite loops and from wasted computational resources: if enough gas is not provided for a transaction, it will exit before finishing. In its current iteration, the EVM chaincode provides a large hardcoded amount of gas per transaction.
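The gas mechanism can be illustrated with a toy interpreter. This is purely illustrative (the real EVM has a published per-opcode gas schedule; the op names and costs below are made up):

```python
import itertools

def execute(ops, gas_limit):
    # Each op is a (name, gas_cost) pair. Execution halts as soon as the
    # next op costs more gas than remains, mirroring how the EVM aborts
    # a transaction that exhausts its gas.
    gas = gas_limit
    executed = []
    for name, cost in ops:
        if cost > gas:
            return executed, "out of gas"
        gas -= cost
        executed.append(name)
    return executed, "completed"

# A finite program with enough gas completes...
done, status = execute([("PUSH", 3), ("ADD", 3), ("SSTORE", 100)], gas_limit=200)

# ...but an unbounded loop is forcibly halted once its gas is spent,
# so a malicious or buggy contract cannot run forever.
steps, loop_status = execute(itertools.repeat(("JUMP", 8)), gas_limit=80)
```

With 80 gas and 8 gas per iteration, the loop is cut off after ten steps rather than running forever.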


Another key piece we adopted from the Ethereum ecosystem is the Ethereum JSON RPC API. The API defines a systematic way for clients to interact with an Ethereum network. However, due to the differences between Ethereum and Fabric, Fab3 does not completely implement the API. It does support enough of it to allow DApps written using the web3.js library.
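For illustration, this is the shape of a JSON-RPC request a web3 provider handles. The method name comes from the Ethereum JSON RPC API; the contract address and call data here are made up:

```python
import json

# eth_call executes a read-only contract call. A provider like Fab3
# implements a subset of such methods, enough for web3.js-based DApps.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
        {
            "to": "0x" + "ab" * 20,   # hypothetical 20-byte contract address
            "data": "0x6d4ce63c",     # hypothetical ABI-encoded call data
        },
        "latest",
    ],
}
body = json.dumps(request)  # sent as the HTTP POST body to the provider
```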

Try out the new feature by following this tutorial.

Future Plans

Our next goals include enabling smart contract events, and expanding the Fab3 support so that clients such as Remix and Truffle can be used to interact with Fabric. We are also looking for other aspects of the Ethereum ecosystem that can be adopted. This new feature is also a topic of one of the workshops at Hyperledger Global Forum in Basel, Switzerland this December. Come join us and bring your EVM smart contracts and DApps to play with.

We encourage developers to try the feature out and give us feedback! To get started with using Hyperledger Fabric and EVM, you can download the 1.3 code today:

Developer Showcase Series: Tian Chen, ArcBlock

By | Blog, Developer Showcase

Back to our Developer Showcase Series to learn what developers in the real world are doing with Hyperledger technologies. Next up is Tian Chen of ArcBlock.

What advice would you offer other technologists or developers interested in getting started working on blockchain? 

First, get the concept and basic ideas right – read the white paper twice and think about it. Do not rely on superficial blogs from the internet, which may lead you in the wrong direction. Then get your hands dirty with the node and RPC: look at the data, think about how things are working, and then try to use the RPC to create small apps. After that, you will find which direction you’d like to go in – build the apps or build the infrastructure.

Give a bit of background on what you’re working on, and let us know what was it that made you want to get into blockchain?

At ArcBlock, we’re building tools that try to make developers’ lives easier. Our goal is to create an ecosystem around building DApps that is easy and joyful, not hard and tedious. We believe that blockchain technology could lead to the next revolution and even transform human society. However, the technology is still pretty immature – like the early, pre-Web 2.0 internet – and building complicated apps is pretty hard. We would like to improve that. That’s why we got into blockchain.

What do you think is most important for Hyperledger to focus on in the next year?

The usability of the Hyperledger projects is a major issue to me.

What’s the one issue or problem you hope blockchain can solve?

We hope the usability issue can be solved in the near future. It is not an outside-world issue but one rooted in blockchain technology itself. We hope there will be an easier way to use token assets without having to be super technical.

What technology could you not live without?

The Internet, of course.



Private Data Collections: A High-Level Overview

By | Blog

Guest post: Nathan Aw, Blockchain Engineer

Hyperledger Fabric v1.2 (HLF v1.2) is truly, truly an exciting release. I have been long awaiting this release, as it addresses some of the challenges I personally encountered when using Hyperledger Fabric.


Before I share the full list of exciting new features in HLF v1.2, some context is first needed.



Privacy and scalability are the foremost reasons why organisations choose permissioned/consortium blockchains/DLTs over the public ones.


With Hyperledger Fabric, privacy is enabled through the use of channels. The channel design is elegant, providing a number of sub-networks across a larger Hyperledger Fabric network.


However, there are some downsides introduced with the initial channel design. After much industry experimentation with Hyperledger Fabric, the v1.0 channel design became a common frustration. To take one example, SWIFT completed a POC, and one of the findings was “…to productise the solution, more than 100,000 channels would need to be established and maintained, covering all existing Nostro relationships, presenting significant operational challenges.”


Instead of setting up many separate channels, private data collections were introduced; this feature is the most important update in the 1.2 release. One of the features of Fabric is the concept of channels, but the ordering service sees everything within a channel, so it has access to the payload of a transaction even when it does not need it. From a privacy perspective that access is still there, so private data collections give you more control by allowing parts of a transaction to be kept private.


High-Level Features in HLF v1.2

From the What’s New in v1.2, some of the major highlights rolled out in HLF v1.2 are:

  1. Private Data Collections: A way to keep certain data/transactions confidential among a subset of channel members.
  2. Service Discovery: Discover network services dynamically, including orderers, peers, chaincode, and endorsement policies, to simplify client applications.
  3. Access control: How to configure which client identities can interact with peer functions on a per channel basis.
  4. Pluggable endorsement and validation: Utilize pluggable endorsement and validation logic per chaincode.


Improvements to privacy design and setup simplification (service discovery) are a key focus of this release. Not to worry if some of the terms above are new to you – a primer is provided below.


Fundamentals First – Definitions (see the Hyperledger Fabric Glossary for more details)


Channel:

A channel is a private blockchain overlay which allows for data isolation and confidentiality. Channels are defined by a Configuration-Block, which contains configuration data that specifies members and policies.


Channel-specific Ledger:

A channel-specific ledger is shared across the peers in the channel, and transacting parties must be properly authenticated to a channel in order to interact with it.


Private Data Collections:

Used to manage confidential data that two or more organizations on a single channel want to keep private from other organizations on that channel. The collection definition describes a subset of organizations on a channel entitled to store a set of private data, which by extension implies that only these organizations can transact with the private data.


Private Data:

Confidential data that is stored in a private database on each authorized peer, logically separate from the channel ledger data. Access to this data is restricted to one or more organizations on a channel via a private data collection definition. Unauthorized organizations will have a hash of the private data on the channel ledger as evidence of the transaction data. Also, for further privacy, hashes of the private data go through the Ordering-Service, not the private data itself, so this keeps private data confidential from the Orderer.
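A minimal sketch of this hash-as-evidence design in Python. The client-side salt is an extra precaution (useful against dictionary attacks on low-entropy values) and is an assumption of this sketch, not something the definition above mandates:

```python
import hashlib
import os

private_db = {}       # side database, present only on authorized peers
channel_ledger = {}   # visible to every organization on the channel

def put_private(key: str, value: bytes) -> None:
    # Assumption: a client-added random salt protects guessable values.
    salt = os.urandom(16)
    private_db[key] = (salt, value)
    # Only the hash travels through ordering and onto the channel ledger,
    # so unauthorized organizations (and the orderer) never see the value.
    channel_ledger[key] = hashlib.sha256(salt + value).hexdigest()

def verify_evidence(key: str) -> bool:
    # An authorized peer can prove its private copy matches the on-chain hash.
    salt, value = private_db[key]
    return hashlib.sha256(salt + value).hexdigest() == channel_ledger[key]
```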


Now that we have some of the background, context and terminology covered, let’s get into the nitty gritty of private data, which is the key feature in the HLF v1.2 release.


Private Data – Background, Context and Problem Statement for Private Data Collections


Before HLF v1.2, when a group of organizations on a channel needed to keep data private from other organizations on that channel, they had the option to create a new channel comprising just the organizations who needed access to the data.

However, creating separate channels in each of these cases creates additional administrative overhead (maintaining chaincode versions, policies, MSPs, etc), and doesn’t allow for use cases in which you want all channel participants to see a transaction while keeping a portion of the data private.

Therefore, we need private data within a channel – a private data collection. Some use cases for private data collections include building a GDPR-compliant blockchain and privacy-focused scenarios such as selectively sharing health care data with only certain partners.


A high-level diagram is shown below:


Let’s take a look at a sample collection definition JSON configuration file.


Collection Definition JSON



[
  {
    "name": "collectionMarbles",
    "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 3
  },
  {
    "name": "collectionMarblePrivateDetails",
    "policy": "OR('Org1MSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 3
  }
]





Pay careful attention to the key “policy” here. There are two collections, collectionMarbles and collectionMarblePrivateDetails, for which different policies are enforced.


The collectionMarblePrivateDetails collection allows only members of Org1 to have the private data in their private database.


The full tutorial to private data collection, which I have already gone through, was really instructive. You could run through the tutorial at


As always, feel free to reach out to me at if you have challenges with the setup.



As a practitioner of both Hyperledger and Enterprise Ethereum DLTs in the capital markets, I see the introduction of private data collections in the Hyperledger Fabric v1.2 release as a real game changer. Instead of creating multiple separate channels among participants to meet privacy needs, you need only create one channel and keep chaincode data confidential among a subset of its members.


Speaking from personal experience of evaluating, building and operating DLT solutions in a production setting that can scale, the private data collections in the v1.2 release really make scaling up and operating a lot easier.


Enjoy building your distributed application on top of Hyperledger Fabric v1.2 and, as always, feel free to ask me any questions relating to Hyperledger at and/or connect with me on LinkedIn –



Open source eKYC blockchain built on Hyperledger Sawtooth

By | Blog

Guest post: Rohas Nagpal, Primechain Technologies

1. Introduction

Financial and capital markets use the KYC (Know Your Customer) system to identify “bad” customers and minimize money laundering, tax evasion, and terrorism financing. Efforts to prevent money laundering and the financing of terrorism are costing the financial sector billions of dollars. Banks are also exposed to huge penalties for failure to follow KYC guidelines. Costs aside, KYC can delay transactions and lead to duplication of effort between banks.

Blockchain-eKYC is a permissioned Hyperledger Sawtooth blockchain for sharing corporate KYC records amongst banks and other financial institutions.

The records are stored in the blockchain in an encrypted form and can only be viewed by entities that have been “whitelisted” by the issuer entity. This ensures data privacy and confidentiality while at the same time ensuring that records are shared only between entities that trust each other.

Blockchain-eKYC is maintained by Rahul Tiwari, Blockchain Developer, Primechain Technologies Pvt. Ltd.

The source code of Blockchain-eKYC is available on GitHub at:

Primary benefits

  1. Removes duplication of effort, automates processes and reduces compliance errors.
  2. Enables the distribution of encrypted updates to client information in real time.
  3. Provides the historical record of all compliance activities undertaken for each customer.
  4. Provides the historical record of all documents pertaining to each customer.
  5. Establishes records that can be used as evidence to prove to regulators that the bank has complied with all relevant regulations.
  6. Enables identification of entities attempting to create fraudulent histories.
  7. Enables data and records to be analyzed to spot criminal activities.

2. Uploading records

Records can be uploaded in any format (doc, pdf, jpg etc.) up to a maximum of 10 MB per record. These records are automatically encrypted using the AES symmetric encryption algorithm, and the decryption keys are automatically stored in the exclusive web application of the uploading entity.

When a new record is uploaded to the blockchain, the following information must be provided:

  1. Corporate Identity Number (CIN) of the entity to which this document relates – this information is stored in the blockchain in plain text / un-encrypted form and cannot be changed.
  2. Document category – this information is stored in the blockchain in plain text / un-encrypted form and cannot be changed.
  3. Document type – this information is stored in the blockchain in plain text / un-encrypted form and cannot be changed.
  4. A brief description of the document – this information is stored in the blockchain in plain text / un-encrypted form and cannot be changed.
  5. The document – this can be in pdf, word, excel, image or other format and is stored in the blockchain in AES-encrypted form and cannot be changed. The decryption key is stored in the relevant bank’s dedicated database and does NOT go into the blockchain.

When the above information is provided, this is what happens:

  1. Hash of the uploaded file is calculated.
  2. The file is digitally signed using the private key of the uploader bank.
  3. The file is encrypted using AES symmetric encryption.
  4. The encrypted data is converted into hexadecimal.
  5. The non-encrypted data is converted into hexadecimal.
  6. Hexadecimal content is uploaded to the blockchain.

Sample output:

  {file_hash: 84a9ceb1ee3a8b0dc509dded516483d1c4d976c13260ffcedf508cfc32b52fbe
     file_txid: 2e770002051216052b3fdb94bf78d43a8420878063f9c3411b223b38a60da81d
     data_txid: 85fc7ff1320dd43d28d459520fe5b06ebe7ad89346a819b31a5a61b01e7aac74
     signature: IBJNCjmclS2d3jd/jfepfJHFeevLdfYiN22V0T2VuetiBDMH05vziUWhUUH/tgn5HXdpSXjMFISOqFl7JPU8Tt8=
     secrect_key: ZOwWyWHiOvLGgEr4sTssiir6qUX0g3u0
     initialisation_vector: FAaZB6MuHIuX}
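The six steps above can be sketched as a pipeline. The `signer` and `encryptor` below are injected stand-ins (a real deployment would use the bank’s private key and an AES cipher from a crypto library; HMAC and byte reversal are NOT those primitives, they only mark where they plug in):

```python
import binascii
import hashlib
import hmac

def prepare_record(document: bytes, signer, encryptor) -> dict:
    file_hash = hashlib.sha256(document).hexdigest()     # step 1: hash the file
    signature = signer(document)                         # step 2: digital signature
    ciphertext = encryptor(document)                     # step 3: AES encryption
    payload_hex = binascii.hexlify(ciphertext).decode()  # steps 4-6: hex, then upload
    return {"file_hash": file_hash, "signature": signature, "payload": payload_hex}

# Runnable stand-ins only -- replace with real ECDSA signing and AES.
record = prepare_record(
    b"kyc document bytes",
    signer=lambda d: hmac.new(b"demo-key", d, hashlib.sha256).hexdigest(),
    encryptor=lambda d: d[::-1],  # placeholder for AES, NOT encryption
)
```

The resulting dictionary mirrors the sample output fields: a file hash, a signature, and a hex payload destined for the chain, while the decryption key stays off-chain.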


3. Transaction Processor and State

This section uses the following terminology:

  • Transaction Processor – this is the business logic / smart contracts layer.
  • Validator Process – this is the Global State Store layer.
  • Client Application (User) – this implies a user of the solution; the user’s public key executes the transactions.

The Transaction Processor of the eKYC application is written in Java. It contains all the business logic of the application. Hyperledger Sawtooth stores data within a Merkle Tree. Data is stored in leaf nodes and each node is accessed using an addressing scheme that is composed of 35 bytes, represented as 70 hex characters.

Using the Corporate Identity Number (CIN) provided by the user while uploading, a 70-character (35-byte) address is created for uploading a record to the blockchain. To understand the address creation and namespace design process, see the documentation regarding Address and Namespace Design.

Below is the address creation logic in the application:


  • uniqueValue is the type of data (can be any value)
  • kycAddress is the CIN of the uploaded document.
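Sawtooth’s conventional scheme builds such an address from a 6-hex-character namespace prefix plus a 64-hex-character key hash. Exactly how the repository combines uniqueValue and kycAddress is an assumption in this sketch, as is the sample CIN:

```python
import hashlib

def make_address(unique_value: str, cin: str) -> str:
    # 6 hex chars of namespace prefix + 64 hex chars of key hash
    prefix = hashlib.sha512(unique_value.encode()).hexdigest()[:6]
    suffix = hashlib.sha512(cin.encode()).hexdigest()[:64]
    return prefix + suffix  # 70 hex characters = 35 bytes

address = make_address("ekyc", "U12345MH2016PTC123456")  # hypothetical CIN
```

Because the address is derived deterministically from the CIN, every upload for the same company lands at the same leaf node of the Merkle tree.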

The user can upload multiple files using the same CIN. However, state will return only the latest uploaded document. To get all the uploaded documents at the same address, business logic is written in the Transaction Processor.

The else branch handles uploading multiple documents to the same address and fetching every uploaded document from state.

4. Client Application

The client application uses REST API endpoints to upload (POST) and get (GET) documents on the Sawtooth blockchain platform. It is written in Node.js. For uploading, a few steps need to be considered:

  • Creating and encoding transactions with a header, header signature, and payload. (Transaction payloads are composed of binary-encoded data that is opaque to the validator.)

  • Creating BatchHeader, Batch, and encoding Batches.

  • Submitting batches to the validator.

When getting uploaded data from the blockchain, the following steps need to be considered:

  1. Creating the same address from the CIN given by the user, then using the GET method to fetch the data stored at that address. As shown in the following code snippet, updatedAddress is created from user input, either from the user directly (searching the network by CIN) or from the user’s private database (records uploaded by the user). Similarly, splitStringArray splits the data returned from a particular address, because the transaction logic in the Transaction Processor uploads multiple documents to the same address and updates state with the list of all uploaded data (not only the current payload).

2. The client-side logic then converts the splitStringArray by decoding it to the required format and gives the user an option to download it as a file.
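A sketch of how such state might be appended to and split back apart. The delimiter and the hex layout are assumptions that mirror the behavior described above, not the repository’s exact encoding:

```python
DELIMITER = "|"  # assumption: any separator that cannot appear in hex works

def append_to_state(state: dict, address: str, document: bytes) -> None:
    # The Transaction Processor keeps every document, not just the latest,
    # by concatenating hex payloads stored at the same address.
    payload_hex = document.hex()
    existing = state.get(address)
    state[address] = payload_hex if existing is None else existing + DELIMITER + payload_hex

def fetch_documents(state: dict, address: str) -> list:
    # Client side: split the returned string and decode each hex chunk.
    raw = state.get(address, "")
    return [bytes.fromhex(part) for part in raw.split(DELIMITER) if part]

state = {}
append_to_state(state, "addr", b"first document")
append_to_state(state, "addr", b"second document")
```

Fetching from the same address then yields both documents in upload order, rather than only the most recent one.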

5. Installation and setup

Please refer to the guide here:

6. Third party software and components

Third party software and components: bcryptjs, body-parser, connect-flash, cookie-parser, express, express-fileupload, express-handlebars, express-session, express-validator, mongodb, mongoose, multichain, passport, passport-local, sendgrid/mail.

7. License

Blockchain-eKYC is available under Apache License 2.0. This license does not extend to third party software and components.

Debunking Myths Surrounding Hyperledger

By | Blog

Since its inception at the end of 2015, Hyperledger has grown from two projects to ten, and the adoption of the Hyperledger platforms and tools has spread across a wide range of industries. Even as Hyperledger has become a trusted name when it comes to using blockchain for the enterprise, there are still some misperceptions that we’d like to debunk once and for all.

Myth #1: Hyperledger is a vendor.

Reality: Hyperledger is not a vendor. It is a non-profit industry consortium with a membership-based model. Anyone can use the code, contribute, and even become a core maintainer on any of the projects, whether or not they work at member companies. We do have a growing subset of our member community that offer business blockchain products and services based on Hyperledger projects; you can check out the 80+ organizations and offerings in the vendor directory.

Myth #2: Hyperledger is an IBM- and Intel-run shop.

Reality: Though a number of Hyperledger projects were originally contributed by IBM (Hyperledger Fabric, Hyperledger Composer, Hyperledger Cello, Hyperledger Explorer) and Intel (Hyperledger Sawtooth, Hyperledger Explorer), the diversity of the developers working on the projects grows every day. Over the lifetime of all 10 Hyperledger projects, there have been 729 unique code contributors, representing more than 150 organizations. The recent 1.2 release of Hyperledger Fabric featured contributions from 15 companies, including Accenture, BBVA, Oracle, and Blocledger; 22% of the commits came from non-IBM sources. In the Hyperledger Sawtooth project, Bitwise has eclipsed Intel in the maintainer count. We are grateful for the continued resources and support that IBM and Intel provide, and encourage other companies to follow their lead, so that Hyperledger remains a healthy, multi-stakeholder community.

Myth #3: Hyperledger doesn’t support interoperability.

Reality: Hyperledger Quilt, one of the five tools in the Hyperledger greenhouse, offers a way of guaranteeing transactional coherence across two ledgers. And that’s just the start of what Hyperledger can do for interoperability.

An early example of this was the integration between the Hyperledger Sawtooth and Hyperledger Burrow projects last year. As a result of that integration, simple EVM smart contracts can be deployed to Hyperledger Sawtooth using the “Seth” (Sawtooth Ethereum) Transaction Family.

More recently, the  Hyperledger Fabric community began working to create a bridge to the Ethereum community so that developers can write EVM smart contracts on Fabric. The hope is that our community will continue to tighten integration and interoperability across Hyperledger projects and beyond, creating a greater number of options for developers. We hope that even more developers can start to think out of the box, connecting blockchains, and doing it securely. The problem of working with more than one technology stack is no longer a technical one.

Our philosophy has always been that you can write one blockchain that talks to multiple other blockchains at the same time. They’re not hermetically sealed.

Myth #4: Hyperledger isn’t focused on scalability and privacy.

Reality: Hyperledger is working on multiple fronts to improve scalability and privacy. The Performance and Scale Working Group is collaborating on defining key metrics for scalability in blockchain technology. Hyperledger Fabric already has a scalability feature called ordering nodes, which lets you focus a subset of your network on the performance-critical part of it in order to improve performance.

When it comes to personal data, Hyperledger Indy upholds the standards mandated by GDPR. Hyperledger Fabric has support for private channels, which is one of the techniques for providing confidentiality between parties with their transactions on the blockchain.

At the same time, our Hyperledger Architecture Working Group has a working draft of an evaluation of the different Hyperledger projects’ approaches to privacy and confidentiality.

Myth #5: Hyperledger blockchains are one network per application.

Reality: Our vision is that there can be multiple, different applications for each network. The food trust network that is being developed with Walmart, for example, could be applied to trace fish, packaged greens, and consumer products, all at the same time. Plus, we are eager to see the interesting applications that can be built on top of that traceability.

The Hyperledger community keeps growing: We’re up to 277 member organizations, including new members FedEx and Honeywell. While that should mean greater awareness of who we are and what we do, we also want to continue to answer your questions. Are there other myths you have heard or seen? Not sure if something is true or not about Hyperledger and blockchain in general? Feel free to share with us on Twitter @Hyperledger.

We hope you join us in the effort by contributing to Hyperledger projects. You can plug into the Hyperledger community on GitHub, Rocket.Chat, the wiki, or our mailing list. As always, you can email us with any questions: