Does Hyperledger Fabric perform at scale?

I’m glad you asked! The short answer is: yes, it does indeed!

I get questions

I get a lot of questions about the performance of Hyperledger Fabric (Fabric) at scale. Oftentimes, people have done some performance testing themselves (or read or heard about someone else's), say with early versions of Caliper on a laptop, or with earlier versions of Fabric, and have come away with the sense that the performance was not all that great.

The Fabric maintainers readily acknowledge that prior to the release of Hyperledger Fabric v1.1.0, performance was not great. We recognized with Fabric v0.6 that we needed to adopt a new architecture to enable the platform to achieve the performance characteristics that many potential enterprise blockchain use cases demand. Our objective for v1.0.0 was to get a functioning version of the new architecture into users' hands; we did not want to get caught up in premature optimization. Since then, we have invested considerably in performance improvements, starting with the v1.1.0 release and continuing to this day.

On my IBM-hosted blog, I’ve started a series of posts aimed at providing information on performance and scale of Hyperledger Fabric. My initial blog post on the subject started to outline some best practices to improve the performance of Hyperledger Fabric that I have gleaned from experimentation with Fabric endorsement policies, load-balancing and orderer configuration. My most recent post addresses a common misconception about Fabric’s ability to scale its channel architecture.
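To make the endorsement-policy point concrete, here is a sketch of how an endorsement policy is specified when instantiating chaincode with the v1.4 peer CLI. The orderer address, channel, chaincode name, and MSP IDs (mychannel, mycc, Org1MSP, Org2MSP) are illustrative placeholders rather than values from the posts, and the command assumes a running network with the peer environment already configured.

```shell
# Instantiate chaincode with an explicit endorsement policy (-P).
# AND(...) requires a signature from an endorsing peer of each listed
# org; OR(...) would need only one, trading assurance for throughput.
peer chaincode instantiate \
  -o orderer.example.com:7050 \
  -C mychannel -n mycc -v 1.0 \
  -c '{"Args":["init"]}' \
  -P "AND('Org1MSP.peer','Org2MSP.peer')"
```

The choice of policy matters for performance because every required endorsement adds a round trip to a peer before the transaction can be submitted for ordering.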

Does the Fabric channel architecture perform at scale?

So, does Hyperledger Fabric performance suffer with a proliferation of channels? The short answer is: not that I have observed with the latest versions of Fabric, v1.4.0 and v1.4.1. I'd encourage you to hop over to the two posts above for the details.

Another interesting development, and one that came as a pleasant surprise, is that the introduction of Raft consensus for the Fabric ordering service has yielded a nice improvement in latency, which in turn allows one to push overall throughput to new heights while keeping latency at acceptable levels. Raft also significantly reduces operational complexity, since it removes the need to run Kafka and ZooKeeper alongside the orderers.
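For readers who want to try the Raft ordering service, this is a minimal sketch of the relevant Orderer section of configtx.yaml in v1.4.1. The hostname and certificate paths are illustrative and would need to match your own crypto material; a production cluster would list several consenters rather than one.

```yaml
Orderer:
  # Switch the consensus type from solo/kafka to Raft
  OrdererType: etcdraft
  Addresses:
    - orderer.example.com:7050
  EtcdRaft:
    Consenters:
      # One entry per ordering node in the Raft cluster
      - Host: orderer.example.com
        Port: 7050
        ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
        ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
```

Because Raft consenters are ordinary ordering nodes, there is no separate Kafka or ZooKeeper cluster to deploy, secure, and monitor.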

While it is too early to make sweeping statements, initial testing has yielded impressive improvements in throughput while keeping latencies under a second.

Moving forward

The Fabric community continues to work on various aspects of performance. Our next release (v1.4.1-rc1 is available for testing now) will focus on the addition of the Raft consensus mentioned above. The release after that, v2.0, will include a state database cache that should yield an overall performance improvement in accessing the state database.

Following on from that, we will work on releasing the lock on the state database as soon as the cache has been updated, reducing lock contention and enabling even greater throughput. We are receiving great insights and recommendations from members of the community who are focused on the performance of Fabric, and we hope to gradually fold that learning into subsequent releases this year.

Of course, the Fabric maintainers are always interested in having new (and old) members contribute to improving Fabric. Performance is just one area in which to engage; there are myriad other ways to contribute as well. Feel free to reach out in Chat (#fabric) or via email (fabric@lists.hyperledger.org).

