Hyperledger Sawtooth: Improving the Devops CI Workflow with Kubernetes

By | Blog, Hyperledger Sawtooth

Every devops engineer knows the importance of continuous integration (CI) testing. It’s vital to prevent regressions, as well as maintain performance, security, and supportability. At Bitwise IO, we are experimenting with Kubernetes as an automated CI deployment tool. We like the simplicity of extending tests with deployments on Kubernetes. We think Kubernetes has compelling potential for use in the CI workflow.

Figure 1: The main tools in our CI workflow

This blog post explains how Kubernetes fits into our CI workflow for Hyperledger Sawtooth. We don’t go into detail, but we provide links so you can learn more about Kubernetes and the other tools we use.

Building Sawtooth

Hyperledger Sawtooth uses Jenkins to automate builds of around 20 GitHub repositories. Each new or merged pull request in the master branch initiates a build that contains project-specific tests. The next logical step is to deploy into a test environment.

We have two deployment methods: Debian packages and Docker images. We install the Debian packages inside the Docker deployment image to ensure both that the binaries are tested and that the packages are installable.

Using Docker’s multi-stage build capability, an intermediate build container makes the deployment image smaller and narrows its exposed attack surface (possible vulnerabilities).
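A minimal multi-stage Dockerfile along these lines (the file names and build script are illustrative, not our actual build files) might look like:

```dockerfile
# Stage 1: build environment with compilers and dev packages (illustrative)
FROM ubuntu:bionic AS builder
COPY . /project
WORKDIR /project
RUN ./build_debs.sh          # hypothetical script that produces .deb packages

# Stage 2: slim deployment image; only the built packages are carried over
FROM ubuntu:bionic
COPY --from=builder /project/build/*.deb /tmp/
RUN apt-get update \
 && apt-get install -y /tmp/*.deb \
 && rm -rf /var/lib/apt/lists/* /tmp/*.deb
```

Installing the Debian packages in the second stage exercises the same artifacts we ship, while the build toolchain never reaches the final image.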

Handing off Docker Images with Docker Registry

Jenkins is a fine place to store build artifacts, but Docker has its own mechanism for distributing images. Docker Registry lets you push newly created images and easily retrieve them with Docker, Kubernetes, or anything else that speaks the Docker Registry protocol.

This example shows how to tag an image with the URL of an internal registry, then upload the image to that registry.

$ docker build -f Dockerfile -t registry.url/repo/image_name:${tag} .
$ docker push registry.url/repo/image_name:${tag}

We also use Portus, because Docker Registry does not provide user and access management on its own. The Portus project makes it simple to place an authentication layer over Docker Registry. Now, any authenticated user can pull and deploy the same images that are being deployed into the test environment.

Kubernetes: Simulating Scaled Deployments

Kubernetes excels at creating deployments within and across abstracted infrastructures. We have done our experiments on local (“on-prem”) hardware with a cluster of small Kubernetes nodes dedicated to Sawtooth deployments. Each deployment consists of several pods partitioned in a namespace, which allows us to run multiple networks based on the same deployment file. (Without namespaces, Kubernetes would think we are updating the deployment.) A pod represents a Sawtooth node and contains several containers, each running a specific Sawtooth component: validator, transaction processor, REST API, and so on. Each namespace can have independent quotas for resources such as CPU time, memory, and storage, which prevents a misbehaving network from impacting another network.

Figure 2: Containerized services grouped together in pods.
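As a sketch of the namespace-per-network idea (all names and limits are illustrative, not our production values), a test network's namespace and its resource quota can be declared together:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sawtooth-net-1            # one namespace per test network
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sawtooth-net-1-quota
  namespace: sawtooth-net-1
spec:
  hard:
    limits.cpu: "8"               # cap total CPU for this network
    limits.memory: 16Gi           # cap total memory
    requests.storage: 100Gi       # cap total storage claims
```

With a quota like this in place, a runaway network exhausts only its own namespace's budget rather than starving the deployments sharing the cluster.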

Because we use Kubernetes, these deployments are portable. We can use them on any cloud infrastructure that supports Kubernetes.

Kubernetes also allows us to scale the number of CI test deployments. With an elastic cloud infrastructure, Kubernetes provides effortless testing on a large number of virtual systems (limited only by the cost of cloud hosting). This solves the issue of limited hardware resources, where each additional deployment will stress existing deployments when they share a node’s resources.

Workload Generation: Deploying Changes Under Load

Deploying Sawtooth is the first step, but you need to give it something to do—better yet, lots to do. Sawtooth includes several workload generators and corresponding transaction processors. In our Kubernetes environment, we deploy intkey_workload and smallbank_workload at rates slightly above what we think the hardware can handle for shorter runs.

Modifying workload rates is as simple as editing the deployment file, changing the rate settings, and reapplying with kubectl. When Kubernetes detects that the pod’s configuration has changed, it terminates the existing workload pod and creates a new one with the changed settings.

This example shows a container definition for an intkey workload pod.

containers:
  - name: sawtooth-intkey-workload
    image: registry.url/repo/sawtooth-intkey-workload:latest
    resources:
      limits:
        memory: "1Gi"
    command:
      - bash
    args:
      - -c
      - |
         intkey-workload \
           --rate 10 \
           --urls ...
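The reapply step itself is a couple of commands (the file and namespace names here are illustrative):

```shell
# Edit the rate in the deployment file, then reapply; Kubernetes replaces
# the workload pod in place with the new settings.
kubectl apply -f intkey-workload.yaml --namespace sawtooth-net-1

# Watch the old pod terminate and the new one start.
kubectl get pods --namespace sawtooth-net-1 --watch
```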

Retaining Data with Kubernetes: Logging and Graphing

All this testing isn’t much use if you can’t troubleshoot issues when they arise. Kubernetes can streamline deployments, but it can also frustrate your attempts to gather logs and data embedded inside Docker containers after a pod has failed or stopped. Luckily, Sawtooth provides real-time metrics (which we view with Grafana) and remote logging through syslog. We actively collect logs and metrics from the Sawtooth networks, even down to syslog running on the hardware, then carefully match the logging and metrics artifacts to the testing instance. In the end, we can provide a comprehensive set of log data and system metrics for each code change.
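While a pod is still running, kubectl itself is the usual first stop for logs (the pod, container, and namespace names below are illustrative):

```shell
# Pull logs from a specific container in a Sawtooth pod
kubectl logs sawtooth-0 -c sawtooth-validator --namespace sawtooth-net-1

# Logs from the previous container instance, useful after a crash or restart
kubectl logs sawtooth-0 -c sawtooth-validator --previous --namespace sawtooth-net-1
```

Because this output disappears with the pod, we rely on syslog forwarding and Grafana dashboards for anything we need to keep after a test run ends.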

Try It Out!

The Sawtooth documentation can help you get started: See Using Kubernetes for Your Development Environment and Kubernetes: Start a Multiple-node Sawtooth Network.

To configure Grafana, see Using Grafana to Display Sawtooth Metrics.

See these links for more information about each tool in our CI workflow:

About the Authors

Richard Berg is a Senior Systems Engineer at Bitwise IO. He has several years’ experience in sysadmin and devops roles. When not behind a terminal, Richard can be found in the woods with his snow-loving adventure cat.

Ben Betts is a Senior Systems Engineer at Bitwise IO. Ben has lots of experience with deploying, monitoring, coordinating, and supporting large systems and with writing long lists of experiences. He only uses Oxford commas because he has to.

Forbes Blockchain 50: Half of the biggest companies deploying blockchain use Hyperledger

By | Blog

Hyperledger, “the gold standard for corporate blockchain projects,” powers half of the “Forbes Blockchain 50,” according to a new set of articles deep diving into the state of enterprise blockchain.

This week, Forbes took on the challenge of chronicling “the rise of so called ‘enterprise’ blockchain” with the creation of its inaugural Blockchain 50, a list of 50 of the biggest companies deploying distributed ledger technology (DLT) within their operations. Forbes also captured the blockchain platforms at work in each of the 50 companies on its list. Half of the companies on the list, including Amazon, Cargill, CVS, IBM, Seagate, Visa and more, are using Hyperledger technologies.

The list was accompanied by the in-depth, case study-filled feature “Blockchain Goes to Work,” which welcomes readers to “the brave new world of enterprise blockchain, where corporations are embracing the technology underlying cryptocurrencies like bitcoin and using it to speed up business processes, increase transparency and potentially save billions of dollars.”

Hyperledger has focused exclusively on enterprise blockchain and DLT solutions, applying rigorous standards to how open source blockchain is developed. The recognition that some of the world’s largest organizations are building on Hyperledger is validation for the course set three years ago by Executive Director Brian Behlendorf: to be the premier community advancing the commercial adoption of blockchain tech.

The Forbes articles solidified Hyperledger technologies’ position as the de facto infrastructure for enterprise blockchain. Or, as Forbes says, “the gold standard for corporate blockchain projects.”

The transition from possibility to reality is well underway, which makes it more important than ever that we push ahead with a transparent, rigorous community-driven development across all our current and future projects. Our community is conceiving, building and deploying foundational technologies that are forever changing the way global companies, customers and communities interact.

DAML smart contracts coming to Hyperledger Sawtooth

By | Blog, Hyperledger Sawtooth

It has been a busy two weeks at Digital Asset… first, we announced that we have open-sourced DAML under the Apache 2.0 license and that the DAML SDK is available to all. Five days later, ISDA (the standards body for the derivatives market) announced that DAML is the exclusive smart contract language for their Common Domain Model, and we open-sourced a reference library and application. Next up, we announced that we’ve been working with the team at VMware to integrate DAML with their enterprise blockchain platform, VMware Blockchain.

Today, we’re delighted to share that we have been working with fellow Hyperledger members, Blockchain Technology Partners (BTP), to integrate the DAML runtime with Hyperledger Sawtooth! In this blog post, I’ll describe why we believe it’s important to architect a DLT application independently of the platform, why a new language is needed for smart contracts, and why we are working with BTP to integrate it with Hyperledger Sawtooth.

“Following the recent announcement that DAML has been open-sourced, we are delighted that work is already underway to integrate the DAML runtime with Hyperledger Sawtooth. This demonstrates the power of the open source community to enable collaboration and give developers the freedom required to truly move the industry forward.”

Brian Behlendorf, Executive Director of Hyperledger

One language for multiple platforms

As you all know, the enterprise blockchain space is fairly nascent and highly competitive. There are multiple platforms and protocols battling it out to be the “one true blockchain,” each with their own version of maximalists. Hyperledger alone has six distinct frameworks, each tailored to different needs, making necessary trade-offs to solve different problems. The field is rapidly evolving and we are all learning from the contributions of others to better the industry as a whole. One thing all these platforms have in common: Their purpose is to execute multi-party business processes. The differences arise in how a given platform deals with data representation and privacy, transaction authorization, progressing the state of an agreement, and so on.

And so each platform has its own patterns for writing distributed ledger applications, typically in a general-purpose language such as Java, JavaScript, Kotlin, Go, Python, and C++. The result of this is that developers must pick which framework they want to use and then develop their application specifically for that platform. Their application is now tightly coupled to the underlying architecture of that ledger and if a better alternative arises for their needs, that likely results in a wholesale rewrite.

One of the primary goals of DAML was to decouple smart contracts, the business logic itself, from the ledger by defining an abstraction over implementation details such as data distribution, cryptography, notifications, and the underlying shared store. This provides a clean ledger model accessible via a well specified API. With a mapping between this abstraction layer and the specifics of a given platform, as BTP is developing for Hyperledger Sawtooth, DAML applications can be ported from platform to platform without complex rewrites.

Why do smart contracts need a new language?

DAML’s deep abstraction doesn’t just enable the portability of applications—it greatly improves the productivity of the developer by delivering language-level constructs that deal with boilerplate concerns like signatures, data schemas, and privacy. Blockchain applications are notoriously difficult to get right. Libraries and packages can help improve productivity in some cases, but the application will remain bound to a given platform. Even Solidity, the language of choice for writing to an Ethereum Virtual Machine (EVM), exposes elements of the Ethereum platform directly to the developer. And we’ve seen several examples of how damaging a bug in a smart contract, or even the language itself, can be.

Abstracting away the underlying complexities of blockchains allows you to focus only on the business logic of your project and leave lower-level issues to the platform.

For example, when a contract involves many parties and data types it can be extremely difficult to define fine-grained data permissions in a general-purpose language. DAML allows you to define explicitly in code who is able to see which parts of your model, and who is allowed to perform which updates to it.

As a very simple illustration, consider the model for a cash transfer. DAML’s powerful type system makes it easy to model data schemas—even far more complex schemas than this—directly in the application.

DAML data model

Built-in language elements simplify the specification of which party or parties need to sign a given contract, who can see it, and who is allowed to perform actions on it. These permissions can be specified on a very fine-grained, sub-transaction basis. For example, the issuer of cash does not need to know who owns that currency or what they do with it.

DAML permissions

DAML provides a very clean syntax for describing the actions available on a contract, together with their parameters, assertions, and precise consequences.

DAML business logic

What you won’t find in DAML are low-level, platform-specific concerns like hashing, cryptography, and consensus protocols. You define the rules in DAML, and the runtime enforces the rules that you set out.

If you refer to the examples in the DAML SDK documentation, or the open source code for the set of complete sample applications we’ve provided, you’ll really come to appreciate the full richness of DAML and the simplifying effect it can have on complicated workflows.

Why Hyperledger Sawtooth?

Digital Asset has a long history with Hyperledger: we are a founding premier member and have chaired both the Governing Board and the Marketing Committee. In fact, we donated code to the initial implementation of Hyperledger Fabric and the trademark “Hyperledger” itself! I personally worked with Jim Zemlin and the team at the Linux Foundation to establish the project, and I co-founded the original company called Hyperledger with my colleague Daniel Feichtinger back in early 2014.

We have clearly always believed in the need for an organization such as Hyperledger to exist, to create an open source foundation of common components that can serve as the underlying plumbing for the future of global commerce.

Hyperledger Sawtooth has quickly been emerging as an enterprise-grade platform that exemplifies the umbrella strategy that Brian laid out in his first blog post after joining as executive director. It has an extremely modular architecture that lends itself well to the plug-and-play composability that Hyperledger set out to achieve.

An example of this is that Hyperledger Sawtooth originally only offered support for the Proof of Elapsed Time, or PoET, consensus algorithm; consensus is now a pluggable feature. This modularity is accompanied by a very clean separation of business logic from platform logic, offering developers a degree of ‘future-proofing’ by limiting the amount of code that needs to be changed should a core component such as consensus be replaced.

Modularity also makes Hyperledger Sawtooth very amenable to plugging in new language runtimes. We’ve already seen this in action with Hyperledger Burrow, which integrates an Ethereum Virtual Machine into Hyperledger Sawtooth to support contracts written in Solidity. Incorporating the DAML runtime into Hyperledger Sawtooth similarly enables support for contracts written in DAML as an enterprise-grade alternative to Solidity.

Finally, from a ledger model point of view, many of the Hyperledger Sawtooth characteristics already map well to what DAML expects. Hyperledger Sawtooth’s Transaction Processor has a very flexible approach towards roles and permissions, for example, and is based on a very natural DLT network topology of fully distributed peers. DAML is based on a permissioned architecture and Hyperledger Sawtooth can be configured to be permissioned without requiring special nodes.

What comes next?

Digital Asset and BTP will soon be submitting the DAML Integration to the upstream Hyperledger Sawtooth framework, fully open sourcing our work.

The integration will also be commercially supported by BTP’s blockchain management platform, Sextant, which provides application developers with a cloud-ready instance of Hyperledger Sawtooth. Sextant is already available on the AWS Marketplace for Containers, and DAML support for Sextant will be added in July. BTP expects to support Sextant on other cloud providers soon thereafter.

BTP is one of Digital Asset’s first partners to use the DAML Integration Toolkit, a new tool designed to let developers and partners easily integrate our open source DAML runtime with their own products, immediately offering the benefits of a best-in-class smart contract language to their end customers. We look forward to any collaboration that brings DAML to even more platforms, including the other frameworks in the Hyperledger family!

To learn more, download the DAML SDK today and start building your applications for Hyperledger Sawtooth!

Hyperledger Indy Graduates To Active Status; Joins Fabric And Sawtooth As “Production Ready” Hyperledger Projects

By | Blog, Hyperledger Fabric, Hyperledger Indy, Hyperledger Sawtooth

By Steven Gubler, Hyperledger Indy contributor and Sovrin infrastructure and pipeline engineer

The Hyperledger Technical Steering Committee (TSC) just approved Indy to be the third of Hyperledger’s twelve projects to graduate from incubation to active status.

This is a major milestone as it shows that Hyperledger’s technical leadership recognizes the maturity of the Indy project. The TSC applies rigorous standards to active projects including code quality, security best practices, open source governance, and a diverse pool of contributors. Becoming an active Hyperledger project is a sign that Indy is ready for prime time and is a big step forward for the project and the digital identity community.

Hyperledger Indy is a distributed ledger purpose-built for decentralized identity. This ledger leverages blockchain technology to enable privacy-preserving digital identity. It provides a decentralized platform for issuing, storing, and verifying credentials that are transferable, private, and secure.

Hyperledger Indy grew out of the need for an identity solution that could address the issues that plague our digital lives, such as identity theft, lack of privacy, and the centralization of user data. Pioneers in self-sovereign identity realized we could fix many of these issues by creating verifiable credentials that are anchored to a blockchain with strong cryptography and privacy-preserving protocols. To this end, the private company Evernym and the nonprofit Sovrin Foundation teamed up with Hyperledger to contribute the source code that became Hyperledger Indy. The project has advanced significantly due to the efforts of these two organizations and many teams and individuals from around the world.

A diverse ecosystem of people and organizations are already building real-world solutions using Indy. The Sovrin Foundation has organized the largest production network powered by Indy. The Province of British Columbia was the first to deploy a production use case to the Sovrin Network with its pioneering work on Verifiable Organizations Network, a promising platform for managing trust at an institutional level. Evernym, IBM, and others are bringing to market robust commercial solutions for managing credentials. Many other institutions, researchers, and enthusiasts are also actively engaged in improving the protocols, building tools, contributing applications, and bringing solutions to production.

The team behind the project is excited about current efforts that will lead to increased scalability, better performance, easier development tools, and greater security. User agents for managing Indy credentials are under active development, making it easy to adopt Indy as an identity solution for diverse use cases.

If you’d like to support Indy, join our community and contribute! Your contributions will help to fix digital identity for everyone. You can participate in the discussions or help write the code powering Indy. Together, we will build a better platform for digital identity.

Does Hyperledger Fabric perform at scale?

By | Blog, Hyperledger Fabric

I’m glad you asked! The short answer is: yes, it does indeed!

I get questions

I get a lot of questions about the performance of Hyperledger Fabric (Fabric) at scale. Oftentimes, people have done some performance testing (or read or heard about someone else’s, say using early versions of Caliper on a laptop, or with earlier versions of Fabric) and came away with the sense that the performance was not all that great.

The Fabric maintainers readily acknowledge that prior to the release of Hyperledger Fabric v1.1.0, performance was not great. The Fabric maintainers had recognized with Fabric v0.6 that we needed to adopt a new architecture to enable the platform to achieve the performance characteristics that many potential enterprise blockchain use cases demanded. Our objective for v1.0.0 was to get a functioning version of our new architecture available to users. We did not want to get caught up in premature optimization. Since that time, we have invested considerably in performance improvements, starting with the v1.1.0 release and continuing to this day.

On my IBM-hosted blog, I’ve started a series of posts aimed at providing information on performance and scale of Hyperledger Fabric. My initial blog post on the subject started to outline some best practices to improve the performance of Hyperledger Fabric that I have gleaned from experimentation with Fabric endorsement policies, load-balancing and orderer configuration. My most recent post addresses a common misconception about Fabric’s ability to scale its channel architecture.

Does the Fabric channel architecture perform at scale?

So, does Hyperledger Fabric performance suffer with a proliferation of channels? The short answer is: not that I have observed with the latest versions of Fabric, v1.4.0 and v1.4.1. I’d encourage you to hop over to my two posts above for the details.

Another interesting development, which came as a pleasant surprise, is that the introduction of Raft consensus for the Fabric ordering service has yielded a nice improvement in latency, which in turn allows one to push overall throughput to new heights while keeping latency at acceptable levels. It also significantly reduces operational complexity by removing the need to run Kafka and ZooKeeper.

While it is too early to make sweeping statements, initial testing has yielded impressive improvements in throughput while keeping latencies under a second.

Moving forward

The Fabric community continues to work on various aspects of performance. Our next release (v1.4.1-rc1 is available for testing now) focuses on the addition of Raft consensus mentioned above. The following release, v2.0, will include a state database cache that should yield an overall performance improvement in accessing the state database.

Following on that, we will be working on releasing the lock on the state database once the cache has been updated to reduce lock contention and enable even greater throughput. We are receiving great insights and recommendations from members of the community who are focused on the performance of Fabric, and gradually we hope to leverage that learning in subsequent releases this year.

Of course, the Fabric maintainers are always interested in having new (and old) members contribute to improving Fabric. Performance is just one area to engage; there are myriad other ways to contribute as well. Feel free to reach out in Chat (#fabric) or by email (fabric@lists.hyperledger.org).

Breaking Down Barriers in Diversity and Technology

By | Blog

Diversity is a powerful term. When on its own, the term may symbolize differences; when paired with inclusion – as it is often presented today – it symbolizes equality, togetherness and strength. Women’s History Month, held in March each year, is a time to celebrate diversity and the accomplishments of strong women across all demographics and industries.

As in many other fields, diversity is not highly visible in the technology sector. Today, women represent 10 to 20 percent of professionals in the technology field. While that number is low, it is encouraging to see the women who are leading the charge.

Throughout my career, I’ve had the privilege to work with some of the smartest women in business and technology – women who are not afraid to voice their opinions and who have the courage to stand among their peers. While more women are needed in technology, it has been inspiring to be in the company of some of the best, bringing innovative projects to life.

I chose the field of industrial engineering as it offered the best combination of engineering fundamentals and business strategy. This helped me adapt and deliver innovative solutions in established organizations like Nortel, Motorola and TELUS, bringing first-of-their-kind products to market, including Blackberry’s first set of mobile devices, mobile email, virtual networks and international text messaging. These service rollouts required a wide range of skills to deliver, including product and go-to-market strategy, program management and technical proficiency.

All of these solutions brought forward new and innovative developments that addressed gaps in the digital marketplace. Today, in the height of the digital age, technology has never been more important to the way organizations do business and how consumers interact with services online. Identity as it currently stands was not made for the digital age – it is a broken system, but one that can be fixed through the use of new technology and approaches.

SecureKey is dedicated to addressing this gap, working tirelessly over the last three years on its soon-to-launch platform, Verified.Me. The forthcoming blockchain-based digital identity network, using IBM Blockchain’s service and built on top of the Linux Foundation’s Hyperledger Fabric, brings together the brightest minds in technology – including financial institutions, telecommunications companies, government and others – to put control and privacy of personal identity information back in the hands of consumers. Through the platform, users are able to consent to the secure sharing of digital identity attributes with network participants to gain access to desired online services.

Bringing the network to life has been a complex undertaking.

As the operating committee lead, I had to understand how to implement intricate processes in order to get Verified.Me up and running. The role has been two-pronged:

  • On the program management side, it’s been critical to manage the governance process, while navigating the different interests and goals of all working groups, including seven of Canada’s major financial institutions, IBM and additional network participants like Sun Life Financial and Equifax.
  • On the technical front, not only is digital identity a new concept in the marketplace, but the technology it’s based upon is also new. This fact, combined with each participant’s own set of technical challenges and restrictions, made for a very challenging initiative. Consider the technical components at play: IBM’s Blockchain platform, on-premise network components, mobile and web applications, back-end integrations and new operational tools. Not only do they all have to work together, but they must also comply with strict security requirements and privacy policies. While I did not design or implement these, I had to keep all the parts moving to reach the end goal.

Seeing this come to fruition has provided all network participants with a common sense of fulfillment and a rewarding reminder of how far Verified.Me has come. Three years, countless meetings and over 160 active participants later, what first started as a concept has evolved into a real service that will change digital identity for the better.

Verified.Me would not be a reality were it not for the strong, remarkably intelligent women and allies working tirelessly at SecureKey and with our network participants. Our solution, built in collaboration to better empower consumers and their digital identities, is one that perfectly encapsulates the ideals of equality, togetherness and strength. While female representation may only be 10 to 20 percent, working alongside my colleagues and partners has shown me that the women in technology are powerhouses of skill and innovation. I couldn’t be prouder to be a part of this team, this solution and this industry.

About June Macabitas

June Macabitas earned a Bachelor’s degree in Industrial Engineering from the University of the Philippines and holds a PMP certification from the Project Management Institute.

As SecureKey’s Senior Director of Program Management, June is responsible for the delivery of programs and initiatives that are critical to SecureKey’s success. She has extensive experience managing large, cross-functional project teams to deliver complex business, operational and technical requirements. Prior to joining SecureKey, June was responsible for delivering innovative solutions in established organizations like Nortel, Motorola and TELUS, bringing first-of-their-kind products to market.

Developer showcase series: Zilya Yagafarova, Soramitsu

By | Blog, Developer Showcase, Hyperledger Iroha

Give a bit of background on what you’re working on, and let us know what was it that made you want to get into technology? How did you get involved in blockchain? In Hyperledger?

I am a project manager for Soramitsu and I work with a team of highly skilled developers writing code for different platforms and also QA and DevOps specialists.

I have been interested in IT since I was a child and, by the age of 14, had already decided to commit myself to studying computer technologies. After graduating from university, I worked as a technical support engineer, an information systems implementation engineer, and a business and system analyst. Now, I am a project manager.

The thing about IT is that you have to constantly learn new skills and work on self-development. Technology is advancing constantly, so you should become a better version of yourself every day.

A few years ago, blockchain technology appeared on the market–it was new and seemed promising. My friends had already worked on Hyperledger projects and inspired me to join them.

What project in Hyperledger are you working on? Any new developments to share? Can you sum up your experience with Hyperledger?

I am the project manager of Project Bakong, a payment system developed in collaboration with the National Bank of Cambodia (NBC) using the Hyperledger Iroha blockchain. We have finished implementing the core system and will soon launch a pilot with dozens of Cambodian banks, which is very exciting. Some of our technology is also being used in a decentralized autonomous economic system called Sora and in a decentralized digital asset custodian and settlement service called D3 Ledger. Collaborating with other projects is intellectually stimulating and enjoyable.

We decided to use Hyperledger Iroha because it was created for financial institutions to build highly performant systems that can scale to large numbers of concurrent users (in our case, the population of a whole country!), and in my experience it has proved itself capable of the task.

What’s the one issue or problem you hope blockchain can solve?

The main target of the project I am working on is to help expand access to financial services for Cambodian people by providing instant payments through a mobile application and robust, modernized infrastructure.

Blockchain is a new and very promising technology, especially when it comes to finance – transactions in Hyperledger Iroha have settlement finality and the data are impossible to corrupt.

What is the best piece of developer advice you’ve ever received?

Design first–analyse the task from every angle and only then write the code that you fully comprehend; do not rely on random chance because it will not work.

What advice would you give for other women who want to build their careers in development? In blockchain?

Believe in yourself and in your capabilities. Then just work hard.

What technology could you not live without?

That must be maps and translation software. I travel a lot because our company is as decentralized as its products, and it would be impossible to discover the world as I do now without a way to communicate and navigate in it.

Hyperledger Unveils 17 Summer Internship Opportunities

By | Blog, Hyperledger Caliper, Hyperledger Fabric

The Hyperledger Summer Internship program is back and bigger than ever for 2019! This year, Hyperledger is offering 17 paid internship opportunities for students who want real world experience advancing open source blockchain technologies.

The internship projects are each led by active developers in the Hyperledger community and offer a fast path to becoming an active contributor to key blockchain frameworks and tools. This is your chance to:

  • Develop a close working relationship with open source professionals and industry leaders to expand your professional network.
  • Learn open source development infrastructure and tooling first hand by working closely with active developers in the community.
  • Build your resume through hands-on work that advances your academic and professional interests.

Each intern will be paired with a mentor or mentors who designed the project to address a specific Hyperledger development or research challenge. The mentors will provide regular evaluations and feedback. Interns can work from anywhere, will receive stipends, and will be invited to travel (with expenses covered) to a Hyperledger event where they will present their work to the community.

This is the third year for the Hyperledger Internship program. It has grown quickly from six projects in the first year to 17 this summer. Many of last year’s interns shared their experiences in a blog post; see what they had to say here.

The application process is now open and the deadline to apply is April 22. You may submit your application here. Read on for descriptions of some of the projects planned for this year.

Hyperledger Caliper Visualization Project

Hyperledger Caliper is a platform for facilitating the execution of user-provided workloads/benchmarks on multiple blockchain platforms in a transparent way. Caliper achieves its flexibility by relying on two configuration files during its execution.

One configuration file describes the test rounds that Caliper must execute, including: the intensity/rate and content of the workload; the deployment of processes that generate the workload; and additional monitoring settings. The other configuration file describes the target blockchain network in detail, at least including the topology of the blockchain network (among other, platform-specific attributes).

The aim of the project is the following:

  • Create a GUI component for Caliper that makes the management of configuration files easier, specifically:
    • Assembling/generating configuration files through the GUI
    • Saving, loading and editing configuration files
    • Providing built-in documentation and tips for the users
  • Visualize in real-time the key performance indicators observed during the execution of a benchmark

IoT and DLT in a Telecom Multi-carriers Architecture Project

The three major characteristics of IoT are mobility, scalability, and interoperability, which play out at three different levels/layers: identity, connectivity, and application.

Blockchain and trusted identities enable the true potential of IoT.

Current needs:

  • Trusted IoT identities: enabling connection or communication among entities
  • Scalable connectivity
  • Interoperability of apps

The solutions:

  • Cryptographically secured identity
  • Autonomous provisioning
  • Decentralization

Task 1: 1st PoC

Use Hyperledger Indy to build an IdM system, taking into account the needs of an IoT architecture (mobility, scalability, and interoperability) and its challenges (access control, privacy, trust, and performance).

a – proving basic feasibility and viability

b – proving feasibility with a real system and providing viability

Task 2:

Identify the metrics to measure the performance & scalability of a decentralized IdM for IoT

Task 3: 2nd PoC

a – proving scalability to two parties (2 carriers) and a large amount of data

c – proving privacy and confidentiality in a two-party (2 carriers) environment

d – exploring integration with different types of data & contract types

X.509 Certificate Transparency Using Hyperledger Fabric Blockchain Project

The security of web communication via the SSL/TLS protocols relies on safe distribution of public keys associated with web domains in the form of X.509 certificates. Certificate authorities (CAs) are trusted third parties that issue these X.509 certificates. However, the CA ecosystem is fragile and prone to compromises.

Leveraging recent advances in blockchain development, we recently proposed a novel system, called CTB (Certificate Transparency using Blockchain), that makes it impossible for a CA to issue a certificate for a domain without obtaining consent from the domain owner (See https://eprint.iacr.org/2018/1232 for a copy of the paper).

CTB (Certificate Transparency using Blockchain) establishes a Hyperledger Fabric (HF) network among the member certificate authorities, with each CA acting as an endorsing peer belonging to a different organisation (an "org" in HF vocabulary). The aim of this project is to scale up the existing proof-of-concept implementation through several stages:

  1. Development of client application for Certificate Authority organisation and Browser organisation facilitating access to the underlying fabric blockchain network.
  2. Setting up the CTB over cloud.
  3. Chrome extension for browser client application.
  4. Benchmarking CTB-assisted SSL/TLS handshake duration

Read more details on the above projects and many more here. Then check out the eligibility requirements and application steps.

Remember, applications are due by April 22; submit your application here.

If you have any questions, please contact internship@hyperledger.org. Remember, you can always plug into the Hyperledger community via github, Hyperledger Chat, the wiki or our mailing lists.

Connecting on the local level: Tips for getting the most out of Hyperledger meetups

By | Blog

Hyperledger meetups provide a way for people to learn more about the project, meet other people in the community who live in an area, and share about the work they are doing.

With over 160 meetup groups in more than 60 countries, there is probably a group near you (and if there isn’t, we’re happy to work with you to get one set up). When people go to a meetup we want to make sure it is a positive experience, so this blog offers some important tips for  making those events as valuable and enjoyable as possible.

Make everyone feel welcome

As a global open source community with meetup groups all over the world, it is important to us that everyone everywhere feels welcome.


All skill levels are welcome at meetups, so don’t feel shy about attending a meetup even if you don’t think you know everything there is to know about Hyperledger. One tip is to look at the agenda and see if it seems interesting to you — some meetups provide an introductory high-level overview, others provide non-technical real world use cases, and some go in depth on technical topics (and some have a combination of these). For organizers, please clearly label the types of content you are providing to help people choose. For instance, the Seattle meetup uses a rating system that goes from 0 for non-technical to 4 for extremely technical.

Also be aware that the Hyperledger community has a Code of Conduct that clearly documents acceptable behaviors and is there to promote high standards of professional practice both online and offline. For attendees, we offer this as a way to let you know you are welcome at our meetups. For organizers, please read through the document and make sure you’re following the guidance provided.

Run an inclusive event

Part of a well-run meetup is including the people who are there and even the people who couldn't be.

Give people time at your meetup to introduce themselves and share what they do. This can be a great way to learn what sort of content people want in the future and can help you connect with people who can speak at future events. For bigger meetups this may not work, but it is strongly encouraged for new groups. Larger groups can still provide time in the agenda for people to introduce themselves or make announcements if they want.

Many people are interested in your events or the topics you cover but can’t attend. Try recording talks for people who couldn’t make it. If you have video recording equipment that’s great, but you may also be able to use your phone and get a decent recording. If you do record your talks, we’re happy to host those videos on Hyperledger’s YouTube channel.  Several meetup groups, including Los Angeles, Mumbai, Hong Kong, Montreal and Bangalore have recorded talks, so check out their presentations.

And don't forget to share on social media. Organizers and attendees alike are encouraged to post details of the meetup before, during and after the event. Getting everyone to use a hashtag like #HyperledgerMeetup or #HyperledgerMeetup{location} is a great way to build a following.

Use feedback to keep improving

Everyone is encouraged to provide feedback about a meetup. Is there content that attendees are interested in? Is there something that could make the events more welcoming? Feedback can be provided a number of ways.

You can post on the discussion forum of the meetup group's site or reach out to the organizer directly. And, if you want to contact Hyperledger staff, you can email meetups@hyperledger.org. For organizers, we encourage you to ask for this sort of feedback regularly by sending surveys or questions to your group members. For example, the San Francisco meetup organizer did a nice job of this by sending out an end-of-year recap and inviting people to write back with feedback.

Meetup.com has also recently introduced a 5-star rating system for meetups. For attendees, you will have an opportunity to provide feedback after an event. For meetup organizers, learn more about how you can view this feedback from attendees. After an event, we encourage you to review the details of the feedback you receive and be open to addressing suggestions for ways to improve the experience.

Other thoughts?

There are many other things for people to consider in order to run effective meetups, such as how to find venues, speakers and sponsors. Our Meetup Organizer’s Guide has some more information about that, but it is likely that there are many useful tips and suggestions we haven’t included there.  

To help us improve that guide, we’d welcome your feedback. What other thoughts or suggestions do you have about how to make a meetup a positive experience for attendees?  Please feel free to share your thoughts and ideas with the Hyperledger meetup organizers on our meetup organizer’s mailing list.

Convector: Writing an Open Source Development Framework

By | Blog, Hyperledger Fabric

Convector (a.k.a. Convector Smart Contracts) is a JavaScript development framework built for enterprise blockchain frameworks. It enhances the development experience while helping developers create more robust and secure smart contract systems. It spans the chaincode and backend all the way to the front end, allowing developers to reuse the same code base in the form of libraries. It's based on a model/controller pattern, supports Hyperledger Fabric, and runs natively alongside Fabric's well-crafted patterns.

This blog post walks through the history of the project and highlights the challenges and solutions developed along the way.

Everything began when we started to work on Tellus, a code-less transaction designer written to run on a Hyperledger Fabric blockchain. Back then we had a bunch of Golang smart contracts.

Our first impression of the developer experience (DX) was not great. A chaincode exposes only two methods, init and invoke; the only way to add new methods is to put an if condition inside invoke and use one of the parameters to indicate which method is being invoked. All parameters are positionally passed strings, so complex parameters have to be parsed manually, and there was no way to test the code locally.

At the beginning of the project, Fabric 1.1 landed, adding support for JavaScript chaincodes. We decided to try it out, hoping for an improved developer experience. Unfortunately, it follows the same pattern found in the Golang chaincodes, and you still have to do some dirty work in your everyday logic. We kept looking for a better solution and found a post about a library from TheLedger on writing Fabric chaincodes in TypeScript that really improves on what you get with raw JavaScript.
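To illustrate the pattern we were trying to escape, here is a minimal TypeScript sketch of the single-entry-point dispatch style. The `InvocationStub` interface is a simplified stand-in, not the real fabric-shim API:

```typescript
// Simplified stand-in for Fabric's invocation payload: every call names a
// function and passes all parameters as positional strings.
interface InvocationStub {
  fn: string;
  args: string[];
}

// A raw chaincode has a single entry point; each "method" is just another
// branch of an if ladder, and every parameter is parsed by hand.
function invoke(stub: InvocationStub): string {
  if (stub.fn === "transfer") {
    const to = stub.args[0];
    const amount = parseInt(stub.args[1], 10); // manual string parsing
    if (Number.isNaN(amount) || amount <= 0) {
      throw new Error("amount must be a positive integer");
    }
    return `transferred ${amount} to ${to}`;
  }
  if (stub.fn === "balanceOf") {
    return `balance of ${stub.args[0]}`;
  }
  throw new Error(`unknown function: ${stub.fn}`);
}
```

Every new method means another branch and more hand-rolled parsing, which is exactly the boilerplate Convector set out to remove.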

During the migration of our smart contracts from Golang to JavaScript, a pattern emerged. Most of the time, functions do things in the following order:

  1. Parse the arguments.
  2. Make some assertions.
  3. Perform the changes.
  4. Save the changes.

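In TypeScript, that recurring four-step shape looks roughly like this (a sketch; the in-memory store and the `deposit` function are hypothetical placeholders standing in for the ledger and a real chaincode method):

```typescript
// Hypothetical in-memory store standing in for the ledger's key-value state.
const store = new Map<string, string>();

interface Account { id: string; balance: number; }

function deposit(idArg: string, amountArg: string): Account {
  // 1. Parse the arguments (everything arrives as a string).
  const amount = Number(amountArg);

  // 2. Make some assertions.
  if (!Number.isFinite(amount) || amount <= 0) {
    throw new Error("amount must be a positive number");
  }
  const raw = store.get(idArg);
  const account: Account = raw ? JSON.parse(raw) : { id: idArg, balance: 0 };

  // 3. Perform the changes.
  account.balance += amount;

  // 4. Save the changes.
  store.set(account.id, JSON.stringify(account));
  return account;
}
```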
This led to a fundamental question about the project plan: should the smart contracts be migrated quickly, or should more time be spent figuring out a pattern and making them flexible enough for multiple business cases? It all started in the ./src/utils/ directory of the project.

/** @module @worldsibu/convector-examples-token */

import * as yup from 'yup';
import {
  ConvectorModel,
  ReadOnly,
  Required,
  Validate
} from '@worldsibu/convector-core-model';

export class Token extends ConvectorModel {
  @ReadOnly()
  public readonly type = 'io.worldsibu.examples.token';

  @ReadOnly()
  @Required()
  @Validate(yup.object())
  public balances: { [key: string]: number };

  @ReadOnly()
  @Required()
  @Validate(yup.number().moreThan(0))
  public totalSupply: number;

  @ReadOnly()
  @Required()
  @Validate(yup.string())
  public name: string;

  @ReadOnly()
  @Required()
  @Validate(yup.string())
  public symbol: string;
}

Figure 1 — Convector Model

Fabric does not restrict the shape of the data stored in the blockchain. You basically get a key-value map where both keys and values are strings, which means you can serialize and store any complex object. We split the models out so we could reuse them in code, passing in all of the necessary parameters.
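Because the ledger is just a string-to-string map, storing a model boils down to serializing it on the way in and parsing it on the way out. A minimal sketch (`putModel`/`getModel` are illustrative stand-ins for the stub's state methods, not Convector's actual API):

```typescript
// The ledger is a string -> string map; models are serialized in and out.
const ledger = new Map<string, string>();

// Same shape as the Token model in Figure 1.
interface TokenModel {
  type: string;
  name: string;
  symbol: string;
  totalSupply: number;
  balances: { [key: string]: number };
}

// Hypothetical stand-in for writing a model to the ledger.
function putModel(key: string, model: TokenModel): void {
  ledger.set(key, JSON.stringify(model));
}

// Hypothetical stand-in for reading a model back out.
function getModel(key: string): TokenModel {
  const raw = ledger.get(key);
  if (!raw) throw new Error(`no model stored under ${key}`);
  return JSON.parse(raw) as TokenModel;
}
```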

@Invokable()
public async transfer(
  @Param(yup.string())
  tokenId: string,
  @Param(yup.string())
  to: string,
  @Param(yup.number().moreThan(0))
  amount: number
) {
  const token = await Token.getOne(tokenId);

  if (token.balances[this.sender] < amount) {
    throw new Error('The sender does not have enough funds');
  }

  token.balances[to] = token.balances[to] || 0;

  token.balances[to] += amount;
  token.balances[this.sender] -= amount;

  await token.save();
}

Figure 2 — Convector Controller

With these decorators, functions get a typed list of parameters. We didn't want to parse models by hand in every function, so we added decorators that validate that all parameter type invariants are met. A parameter can be a primitive, a complex object, or even a model.
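The idea behind parameter validation can be approximated without decorator syntax. A hedged sketch (Convector itself uses yup schemas and @Param decorators; this version uses plain predicate checks and a wrapper function, all names illustrative):

```typescript
type Check = (value: unknown) => boolean;

// Wrap a function so each argument is validated before the body runs,
// similar in spirit to Convector's @Param decorators.
function withParams<R>(
  checks: Check[],
  fn: (...args: any[]) => R
): (...args: any[]) => R {
  return (...args: any[]): R => {
    checks.forEach((check, i) => {
      if (!check(args[i])) {
        throw new Error(`invalid parameter at position ${i}`);
      }
    });
    return fn(...args);
  };
}

const isString: Check = v => typeof v === "string";
const isPositiveNumber: Check = v => typeof v === "number" && v > 0;

// The business logic stays free of parsing and assertion noise.
const transfer = withParams(
  [isString, isString, isPositiveNumber],
  (tokenId: string, to: string, amount: number) => `${to} receives ${amount}`
);
```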

Functions now started to look more like controllers: they handled the business logic while the models described the data.

Then came the time to integrate all of the chaincodes into our NodeJS REST API. In the process, we realized we were creating a wrapper library on the server just to call our chaincodes with the fabric-client lib. This is a very common situation, so we looked for a way to automate it.

We wanted to use the same controller and model files on the server as well as in the chaincode. Doing so meant decoupling the models from the storage layer (Fabric) and the controllers from how they are invoked.

This is where we realized that Hyperledger Fabric was just one of multiple blockchains Convector could support.

This is where adapters and storage layers come into play.

The adapter is the underlying layer for the controller. Controllers define the methods, params, and business logic, while adapters deal with routing the invocation to the right place. For example, our API uses an adapter to invoke the fabric-client library and send a transaction.

The storage layer provides the functionality to interact with the models. Whether you want to save, delete, or query something, you interact with the model itself and, behind the scenes, it talks to the configured service. In chaincode, this is the Fabric stub object; in the NodeJS API, it might send a query transaction or read from CouchDB.

Pro Tip: Convector can be used with something other than a blockchain. For example, you can configure an adapter or a model to call an API or another database.
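The adapter/storage split described above can be sketched as two small interfaces with mock implementations (names here are illustrative, not Convector's actual API):

```typescript
// The adapter decides *where* a controller invocation is routed.
interface Adapter {
  invoke(controller: string, method: string, ...args: unknown[]): Promise<unknown>;
}

// The storage decides *where* model reads and writes go.
interface Storage {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// A mock adapter that dispatches to local functions -- handy for unit tests,
// whereas a production adapter might call the fabric-client library instead.
class MockAdapter implements Adapter {
  private handlers: { [key: string]: (...args: unknown[]) => unknown };
  constructor(handlers: { [key: string]: (...args: unknown[]) => unknown }) {
    this.handlers = handlers;
  }
  async invoke(controller: string, method: string, ...args: unknown[]): Promise<unknown> {
    return this.handlers[`${controller}.${method}`](...args);
  }
}

// A mock storage backed by an in-memory Map; a production storage might talk
// to the Fabric stub or CouchDB instead.
class MockStorage implements Storage {
  private data = new Map<string, string>();
  async get(key: string): Promise<string | undefined> { return this.data.get(key); }
  async set(key: string, value: string): Promise<void> { this.data.set(key, value); }
}
```

Because controllers and models only see these interfaces, the same code can run in chaincode, on a server, or in a test, simply by swapping the implementations.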

That weekend of work turned into a month of creating tools and perfecting the pattern. Here are some of the tools created along the way that you can leverage today:

# Install the CLI
npm i -g @worldsibu/convector-cli
# Create a new chaincode project
conv new mychain -c token
cd mychain
npm i
# Install a dev environment
npm run env:restart
# Install the chaincode
npm run cc:start -- token 1

Figure 3 — Convector CLI

Also, Convector already comes with a Fabric adapter, a Fabric storage, a CouchDB storage, and a mock adapter (for unit tests), so you can seamlessly write code for your chaincode as well as for your NodeJS backend, and create tests that can be included in your CI/CD pipelines. This is critical in any real-life development.

Extra adapter and storage layers can be easily created and we’re excited to see what the community builds around these tools. At the same time we were building this, we continued working on our internal product’s migration, which helped to test the framework in real life scenarios before launching it.

I'm glad we didn't take the easy path on this migration. We're pretty happy with the result, and the journey of publishing an open source tool has been amazing. It's also rewarding to see hundreds of people using it every day.

Hyperledger Fabric is an excellent blockchain framework. The infrastructure it provides covers most of the use cases in a secure and reliable way. That’s why we think it deserves a robust interface for smart contracts too, and we want to contribute back to the community with the internal tools we created while working with it.

Because we believe the project can be useful for anyone in the blockchain ecosystem, Convector has joined the Hyperledger Labs initiative. We are committed to building a community around Convector, which has already surpassed 27,000 downloads, and we welcome the input of the Hyperledger community. If you are looking to get involved in an open source project, check it out on GitHub.

The coordinates for the project are:

About the author
Diego Barahona is the CTO and Architect of WorldSibu, a startup dedicated to creating blockchain tools and platforms for non-blockchain experts, making the technology more accessible for solving business challenges.