This repository was archived by the owner on Mar 30, 2018. It is now read-only.

Fabric Next

Binh Q Nguyen edited this page Jun 28, 2016 · 34 revisions

Status: Draft (not ready for review; not yet prioritized)

This page contains a list of high-level items proposed for development of the next generation Fabric. It includes both existing feature enhancements and new features.

1. Introduction

The key motivation for the next generation is modularization to achieve pluggability and concurrency. As outlined in the next consensus architecture and the next ledger structure, we want to enable different implementations to plug in at different points, including consensus, storage, and membership services. We can achieve concurrency at endorsers and consenters based on the transaction endorsement policy, which allows driving multiple channels of consensus as a service.

1.2. Deployment Scenarios

The new architecture offers great flexibility in deployment. As in the current generation, the simple 1-peer development environment, where a developer may quickly iterate over edit-compile-debug cycles of the fabric code or a chaincode, remains intact. What changes is the variety of network deployment options, from a simple network of a few peers to a complex network of hundreds of peers in different possible configurations.

Conceptually, a peer may perform all of the functions (submitting, endorsing, ordering, and committing transactions); this is called a full peer, or simply a peer. A partial or light peer may perform a specific task such as consensus (ordering transactions), or a combination of submitting, endorsing, and committing transactions. A member of a network generally owns a peer and/or some light peers. For high availability, a member may deploy multiple peers, but they carry only 1 endorsement vote. Currently we have 2 types of peers: the validating peer (VP) and the non-validating peer (NVP). We may roughly think of a VP as a full peer and an NVP as a light peer with submitting and committing tasks.

1.3. Network Deployment

A simple network could be made up of several full peers, where each peer is owned by a member with a unique enrollment identity in a permissioned network. Building on this, a member may add more peers to increase availability and to support fail-over; however, these single-member peers must operate as a cluster with 1 vote on endorsement. A cluster of peers could be achieved with a shared ledger and a proxy that dispatches messages to the member's peers, or with a leader-followers pattern.

1.4. Consensus Service

Consenters may operate independently to form a consensus network, isolated from the other peer functions (submitting, endorsing, and committing transactions). This capability allows members to set up a consensus network that serves multiple blockchains, each of which only needs to deploy light peers (without the consenter function enabled). The consensus network can be agnostic to the transactions it orders, enabling data isolation: submitters may replace the transaction payload with its hash.

2. Detailed Development Items

The following items either come from the proposed next architecture or are known enhancements to existing functions. Each item will be associated with 1 or more issues, but they are all listed here for the big picture.

2.1. Consensus
  1. Decouple consensus from peer: We have always wanted to separate the consensus module into its own gRPC process with a simple peer interface, enabling other implementations to easily plug in.

  2. Dynamic membership: Allow peers to come and go while the blockchain network continues to operate according to the consensus algorithm.

  3. Consensus channels: A blockchain network has 1 consensus channel by default that every peer listens on; this is called the public channel. We can establish a separate channel per confidential domain, which only permitted peers may subscribe to. Transactions sent to a channel are ordered with respect to each other within that channel, so a batch only contains transactions from that channel, not from any other channel.

2.2. Ledger
  1. Transaction rw-set: A submitter composes a transaction consisting of [header, payload, rw-set], where the rw-set contains the state variables that the transaction reads (the read-set) and the state variables that the transaction writes (the write-set). The rw-set is created by simulating transaction execution (without writing to the database). The endorsers of the transaction also perform this simulation to fulfill the transaction's endorsement policy.

  2. Enhance API to enable pluggable datastore: Decouple the current API implementation from RocksDB to enable plugging in different datastores.

  3. File-based datastore: This is the default (reference) implementation, where blocks (transaction log) are stored in structured files as marshaled objects such that data replication is a matter of copying files.

  4. World state cache

  5. Ability to archive/prune "spent" transactions

  6. SQL-like queries (point in time, filter)

2.3. Chaincode

Currently we have 2 types of chaincode: system and user chaincodes. A system chaincode is built with the peer code and initialized during peer startup, whereas a user chaincode is built during the deploy transaction and runs in a sandbox (by default, a Docker container).

The system chaincode concept allows us to reorganize the peer's chaincode services into system chaincodes. For example, chaincode life-cycle (deploy, upgrade, terminate), chaincode naming, key management, etc. can be implemented as system chaincodes, since these are "trusted" operations that require access to the peer.

  1. Life-cycle management issue 1127: A chaincode begins life at deployment and ends at termination (out of service). Along this timeline, there might be several upgrades, either to fix bugs or to amend functions. We have implemented deployment (via the Deploy transaction) but have not completed upgrade and terminate. Upgrading a chaincode involves not only code changes but also data (and potentially security) migration. We can implement the chaincode life-cycle in a system chaincode, named "uber-chaincode", such that to deploy a chaincode, we would send an Invoke transaction to the uber-chaincode; similarly for upgrade and terminate. This means that our transaction types would then consist only of Invoke and Query, and the Deploy transaction would no longer be needed.

  2. Naming service issue 1077: The naming service is a system chaincode that maps a chaincode ID to a user-friendly name, allowing applications to use the name instead of the long ID to address a chaincode.

  3. Calling another chaincode with security: We currently have chaincode-calling-chaincode locally but without security. Security means both access control and transaction confidentiality, i.e., whether the callee is visible to the caller. Multiple confidential domains complicate the picture, so the first implementation should focus on a chaincode calling a chaincode within the same confidential domain and calling a chaincode in the "public" domain.

  4. Access control: TCert attributes provide a mechanism for the chaincode to control who may perform a function by matching the attributes against the intended permissions. For example, the uber-chaincode may control who can deploy chaincodes by adding access-control logic to the Deploy function.

2.4. Membership Services
  1. TCert attributes:
  2. Decentralization:
  3. Key rotation
  4. HSM support
  5. Auditability
2.5. Protocol
  1. Enhance message structures: Remove unused or duplicated fields; version number
  2. Status codes and messages
  3. Error handling
  4. Extensions
2.6. SDK
  1. Nodejs: Event handling, chaincode deployment API, REST API
  2. Java
  3. Go?
2.7. Transaction confidentiality
  1. Application read-access management
  2. HSM support
  3. Key rotation
  4. Event security
2.8. Upgrade Fabric
  1. No protocol changes
  2. Protocol changes
  3. Ledger migration?
2.9. Integration (SoR)
