diff --git a/README.md b/README.md
new file mode 100644
index 0000000..689cace
--- /dev/null
+++ b/README.md
@@ -0,0 +1,132 @@
+# I. INTRODUCTION
+
+A git repo that provides working examples of various Hyperledger Fabric network topologies, leveraging a variety of features provided by the product, for production-ready deployments.
+
+Each folder in the topologies directory represents a different type of network that can be created. All of the available topologies are based on T1, which contains three orgs: org1 contains 3 orderers, while org2 and org3 contain 2 peers each. Each org has its own 2 CAs: an Identities CA (for network component identities) and a TLS CA (for secure TLS communications between network components).
+
+## Individual Topology Documentation
+| Topology | Description |
+| ------------------------------- | ------------------------------------------------ |
+| [T0](./topologies/t0/README.md) | T1 minus 1 Org -- 1 Peer, 1 Orderer per Org |
+| [T1](./topologies/t1/README.md) | Base Topology |
+| [T2](./topologies/t2/README.md) | 2 orgs with orderers and nginx proxies |
+| [T3](./topologies/t3/README.md) | External Chaincode |
+| [T4](./topologies/t4/README.md) | Local LDAP Server |
+| [T5](./topologies/t5/README.md) | Clustered CAs for one of the Orgs |
+| [T6](./topologies/t6/README.md) | Mutual TLS |
+| [T7](./topologies/t7/README.md) | Private Data Collections |
+| [T8](./topologies/t8/README.md) | Clustered CouchDB Datastore For One of the Peers |
+| [T9](./topologies/t9/README.md) | Channel Participation API |
+
+
+# II. RUNNING A NETWORK
+
+## ***A. Prerequisites***
+
+- Docker & Docker Compose Plugin (v2) need to be installed locally
+  - docker needs to be executable by a non-root (logged-in) user; i.e. no need to use *sudo* to run docker commands
+  - The Docker Compose Plugin allows *docker compose (instead of docker-compose) commands* to be executed.
This is leveraged extensively in the setup scripts
+- Linux or Mac environment (all setup/destroy scripts are bash scripts)
+
+## ***B. Creating and Destroying a Network Topology***
+
+1. Clone this repo locally
+2. From the root folder of the repo navigate to the topology of interest
+   ```sh
+   cd topologies/t<n> # where <n> is the topology number
+   ```
+3. To create a network topology
+```sh
+./setup-network.sh # from the topology folder
+```
+4. How do you know the network has started successfully?
+```sh
+******* NETWORK SETUP COMPLETED ******* # this should be the last message displayed once the setup-network.sh script has completed
+```
+5. To tear down (destroy) a network topology
+```sh
+./teardown-network.sh # from the topology folder
+```
+6. To restart a network topology
+```sh
+./setup-network.sh # from the topology folder
+
+# towards the beginning of this script the teardown-network.sh script is run to clean up resources from a prior run, and then the network start-up occurs
+```
+
+## ***C. What do you get with a running network topology?***
+
+1. A functional network with all components needed to execute some chaincode. The root folder of each topology has a README.md that explains the purpose of that topology and its constituent components.
+2. The scripts create all the necessary crypto material (CA certs, public/signed certs, private keys, etc.) for all entities (peer and orderer nodes and admin/user/client accounts) participating in the network
+3. The scripts also create a genesis block for the system channel, as well as an application channel joined by the peers, on which the chaincode is installed.
+4. Each component/server that is needed for the network runs as a Docker container. No processes run outside of Docker - therefore no Hyperledger Fabric binaries have to be installed directly in the host environment.
+5. All docker containers for the same topology run within the context of the same Docker project and have unique names associated with that topology.
+6. All docker containers, for a single topology, run within the same Docker network (name scoped using the topology id)
+7. Points 5 and 6 provide the isolation that allows for multiple topology networks to be up and running at the same time, even running with different versions of the Hyperledger Fabric binaries.
+8. The network setup completes with the installation of the asset transfer basic chaincode from the main product Git repo fabric-samples
+    - to complete the verification process this chaincode is also executed by performing a write operation and then a read operation
+9. When the network is torn down all the following resources are destroyed:
+    - all docker containers created by the setup script are stopped and deleted
+    - all crypto material created for the network topology
+    - all configurations and state files (blocks) for the system and application channels
+    - any state maintained in databases (e.g. blockchain world state, Fabric CA db, etc.)
+10. When the network is destroyed, the docker images pulled to create the containers for the network ARE NOT DELETED. This allows for faster start-up the next time. For any further cleanup, they'd need to be deleted through other mechanisms.
+11. Please note that most of the docker containers have volumes mounted from the host file system (the most notable mounted folder is the crypto-material folder, present inside each topology folder).
+    - the docker containers run with a **root user id** and as such files created inside these containers will be owned by root. Trying to delete/modify these files/folders from the volumes mapped into the host (w/o using the teardown-network.sh script) could result in errors if the user id trying to delete/modify these files does not have the proper permissions. The teardown-network.sh script runs from inside a topology container and thus has the permissions (i.e. root) to delete these files.
+
+## ***D. Available Configurations***
+
+**1. Binaries Versions**
+
+  Each topology has a .env file at the root of its folder that allows changing the versions of the Hyperledger binaries through environment variables. The name of the docker project that houses the topology containers can be changed in this file as well.
+
+**2. Ports exposed to the host**
+
+  With one exception (for topology T8) none of the ports used by processes running inside the topology containers are exposed to the host, to avoid any conflicts with other processes using these port numbers.
+
+  The docker-compose.yml file at the root of each topology folder can be referenced though for these purposes:
+  - to get an idea about all containers running for the topology and the Docker images used by them
+  - the port numbers used by processes inside the containers are also documented as comments. To expose access to these processes from the host, these port mappings can be uncommented and configured with the desired host ports. This then allows, e.g., for peers and orderers to be accessed by external applications, or for other components (e.g. OpenLDAP, CouchDB, MySQL, etc.) to be accessed and have their data inspected from the host.
+
+**3. Other configurations**
+
+To make any other network configurations (e.g. change the number of peers, orderers, chaincodes, customize features, etc.) one would need to study the various shell scripts and configuration files that exist in the topology folder. *A good starting point is the **setup-network.sh***.
+
+There is a considerable amount of repetition in terms of configuration files and code in shell scripts between the different topologies.
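Because of that repetition, the delta that implements a given feature can be surfaced with any recursive folder compare. The sketch below illustrates the idea with two fabricated toy folders standing in for real topology folders; the folder layout, the file name peer.env, and the env var shown are assumptions for illustration only (against the actual repo one would run something like `diff -qr topologies/t1 topologies/t6`):

```shell
#!/bin/sh
# Toy stand-in for comparing two topology folders.
base=$(mktemp -d)
mkdir -p "$base/t1" "$base/t6"
# t1: baseline TLS config; t6: same file plus a mutual-TLS setting (fabricated example)
echo "CORE_PEER_TLS_ENABLED=true" > "$base/t1/peer.env"
printf 'CORE_PEER_TLS_ENABLED=true\nCORE_PEER_TLS_CLIENTAUTHREQUIRED=true\n' > "$base/t6/peer.env"
# Unified diff shows exactly what the second topology adds;
# diff exits non-zero when files differ, hence the || true
diff -u "$base/t1/peer.env" "$base/t6/peer.env" || true
rm -rf "$base"
```

Any graphical file/folder compare tool works equally well; the point is that the per-topology folders are self-contained, so a plain recursive diff is enough.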
We purposely did not pursue creating more common code/functions to increase reuse, for the following reason:
+
+We wanted to make it easy to compare the code and configurations between different topologies, to allow us to understand what changes were made to get from one state of a network to another.
+
+With the exception of the t0 topology, all other topologies use the t1 topology as a base (were built upon it). As such, if one would like to understand, for example, what is needed to configure mutual TLS (topology t6), a t1 to t6 folder compare - using any file/folder compare tool - can be done to visualize all the differences and study them.
+
+
+# III. REASONS FOR DEVELOPING THESE TOPOLOGIES
+
+
+## **A. Production Readiness**
+
+In order to deploy a Hyperledger Fabric network to production in an enterprise setting with multiple participant organizations (sometimes not connected to the same point-to-point networks, i.e. connected over the public internet) a few considerations need to be kept in mind:
+  - Ability to provide secure and authenticated communications between all nodes: all communications between network nodes should be encrypted and authenticated with mutual TLS
+  - identities associated with nodes and actors that connect to the network should have their credentials stored in a secure store, using solutions frequently deployed in enterprises: e.g. Fabric CA should be integrated with an LDAP compliant store instead of using its own SQL database to store account credentials
+  - There should be no single point of failure: this means that not only must there be multiple instances of peers and orderers, but other components too: Fabric CA, world state storage databases, and external chaincode servers need to be replicated as well. Also, wherever possible, access should be done through a proxy which can load balance and provide failover for access to each set of components. Nginx proxies have been used in some of the topologies for this purpose.
+
+## ***B. Lack of sufficient online examples and tutorials***
+
+- The official Hyperledger Fabric documentation of architecture, features and concepts is detailed and for the most part very comprehensive. Given the complexities of a blockchain network and the number of technologies involved, though, this is simply not enough.
+
+- There are some operations guides that show examples of how to set up certain components of the network, but we often found mistakes in some of the commands or steps described in them. For a new person starting to use Hyperledger it is very easy to get lost as soon as one hits the first few setup/execution errors: there are many moving parts and a network administrator needs to be familiar with many technologies.
+
+- The scripts available in the official product code base on GitHub are useful in getting an initial network up and running, but they don't go into sufficient depth to demonstrate setup examples for capabilities needed for a production deployment, such as those described above.
+
+- There are a few online resources that demonstrate certain more complex aspects of Hyperledger Fabric, but they frequently suffer from the same shortcomings present in the official documentation and the examples in the HL code base.
+
+- In general there is a significant lack of working examples of network topologies that leverage many of the sophisticated capabilities of Hyperledger Fabric.
+
+## ***C. Working examples that are easy to set up***
+
+To speed up our understanding of Hyperledger Fabric and flatten the learning curve for new developers working with these technologies, we wanted to create working examples of Hyperledger Fabric topologies that:
+
+- are easy to get up and running with a minimal amount of installs and configuration
+- facilitate the comparison of different features, by making it possible to study, for each of the main demonstrated features, what network configuration, code, and script changes had to be made to enable those capabilities
+- provide isolation for network topologies, with the ability to run more than one network topology at the same time on the same machine, even with different versions of the binaries
+- are easy to connect to with various client applications for local development
+
diff --git a/topologies/docker-compose-base.yml b/topologies/docker-compose-base.yml
new file mode 100644
index 0000000..bf8b9b7
--- /dev/null
+++ b/topologies/docker-compose-base.yml
@@ -0,0 +1,5 @@
+version: "3.9"
+networks:
+  hl-fabric:
+    driver: bridge
+    name: hl-fabric-${CURRENT_HL_TOPOLOGY}
\ No newline at end of file
diff --git a/topologies/docker-compose-shell-cmd.yml b/topologies/docker-compose-shell-cmd.yml
new file mode 100644
index 0000000..fcb31f2
--- /dev/null
+++ b/topologies/docker-compose-shell-cmd.yml
@@ -0,0 +1,14 @@
+services:
+  org-shell-cmd:
+    container_name: ${CURRENT_HL_TOPOLOGY}-shell-cmd
+    image: alpine
+    tty: true
+    stdin_open: true
+    command: sh
+    environment:
+      - HL_TOPOLOGIES_BASE_FOLDER=${HL_TOPOLOGIES_BASE_FOLDER}
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
+    networks:
+      - hl-fabric
\ No newline at end of file
diff --git a/topologies/image_store/T0.png b/topologies/image_store/T0.png
new file mode 100644
index 0000000..adedb43
Binary files /dev/null and b/topologies/image_store/T0.png differ
diff --git a/topologies/image_store/T1.png
b/topologies/image_store/T1.png
new file mode 100644
index 0000000..7865a43
Binary files /dev/null and b/topologies/image_store/T1.png differ
diff --git a/topologies/image_store/T2.png b/topologies/image_store/T2.png
new file mode 100644
index 0000000..c8ab6ef
Binary files /dev/null and b/topologies/image_store/T2.png differ
diff --git a/topologies/image_store/T3.png b/topologies/image_store/T3.png
new file mode 100644
index 0000000..6c07d7e
Binary files /dev/null and b/topologies/image_store/T3.png differ
diff --git a/topologies/image_store/T4.png b/topologies/image_store/T4.png
new file mode 100644
index 0000000..e74c1b7
Binary files /dev/null and b/topologies/image_store/T4.png differ
diff --git a/topologies/image_store/T5.png b/topologies/image_store/T5.png
new file mode 100644
index 0000000..c7c45cd
Binary files /dev/null and b/topologies/image_store/T5.png differ
diff --git a/topologies/image_store/T6.png b/topologies/image_store/T6.png
new file mode 100644
index 0000000..d4f50c1
Binary files /dev/null and b/topologies/image_store/T6.png differ
diff --git a/topologies/image_store/T7.png b/topologies/image_store/T7.png
new file mode 100644
index 0000000..d0b5dfd
Binary files /dev/null and b/topologies/image_store/T7.png differ
diff --git a/topologies/image_store/T8.png b/topologies/image_store/T8.png
new file mode 100644
index 0000000..c37e8ca
Binary files /dev/null and b/topologies/image_store/T8.png differ
diff --git a/topologies/image_store/T9.png b/topologies/image_store/T9.png
new file mode 100644
index 0000000..2706b5d
Binary files /dev/null and b/topologies/image_store/T9.png differ
diff --git a/topologies/t0/.env b/topologies/t0/.env
new file mode 100644
index 0000000..94edb7f
--- /dev/null
+++ b/topologies/t0/.env
@@ -0,0 +1,5 @@
+COMPOSE_PROJECT_NAME=hl-fabric-topology-t0
+FABRIC_CA_VERSION=1.5
+FABRIC_PEER_VERSION=2.2.3
+FABRIC_TOOLS_VERSION=2.2.3
+PEER_ORDERER_VERSION=2.2.3
\ No newline at end of file
diff --git a/topologies/t0/.gitignore
b/topologies/t0/.gitignore
new file mode 100644
index 0000000..ee0881a
--- /dev/null
+++ b/topologies/t0/.gitignore
@@ -0,0 +1,2 @@
+crypto-material/*/**
+homefolders/*/**
\ No newline at end of file
diff --git a/topologies/t0/README.md b/topologies/t0/README.md
new file mode 100644
index 0000000..46d8443
--- /dev/null
+++ b/topologies/t0/README.md
@@ -0,0 +1,25 @@
+# T0: Stripped Network
+## Description
+---
+A basic network with two organizations: one for the orderer and one with a peer. This topology is a stripped-down version of T1, created mostly for understanding one of the most basic Fabric network setups.
+## Diagram
+---
+![Diagram of components](../image_store/T0.png)
+
+## Components List
+---
+* Org 1
+    * Orderer 1
+    * TLS CA
+    * Identities CA
+* Org 2
+    * Peer 1
+    * TLS CA
+    * Identities CA
+    * Peer 1 CLI
+
+## Characteristics
+
+- World State Database Instance (LevelDB) embedded (in peer containers)
+- Chaincode installed directly on peers
+- Communication between all components done via TLS
\ No newline at end of file
diff --git a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go
new file mode 100644
index 0000000..9c619d5
--- /dev/null
+++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go
@@ -0,0 +1,23 @@
+/*
+SPDX-License-Identifier: Apache-2.0
+*/
+
+package main
+
+import (
+	"log"
+
+	"github.com/hyperledger/fabric-contract-api-go/contractapi"
+	"github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode"
+)
+
+func main() {
+	assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{})
+	if err != nil {
+		log.Panicf("Error creating asset-transfer-basic chaincode: %v", err)
+	}
+
+	if err := assetChaincode.Start(); err != nil {
+		log.Panicf("Error starting asset-transfer-basic chaincode: %v", err)
+	}
+}
diff --git
a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, 
error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + 
result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 
string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 
[]string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub 
func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 
error
+	}
+	PutStateStub func(string, []byte) error
+	putStateMutex sync.RWMutex
+	putStateArgsForCall []struct {
+		arg1 string
+		arg2 []byte
+	}
+	putStateReturns struct {
+		result1 error
+	}
+	putStateReturnsOnCall map[int]struct {
+		result1 error
+	}
+	SetEventStub func(string, []byte) error
+	setEventMutex sync.RWMutex
+	setEventArgsForCall []struct {
+		arg1 string
+		arg2 []byte
+	}
+	setEventReturns struct {
+		result1 error
+	}
+	setEventReturnsOnCall map[int]struct {
+		result1 error
+	}
+	SetPrivateDataValidationParameterStub func(string, string, []byte) error
+	setPrivateDataValidationParameterMutex sync.RWMutex
+	setPrivateDataValidationParameterArgsForCall []struct {
+		arg1 string
+		arg2 string
+		arg3 []byte
+	}
+	setPrivateDataValidationParameterReturns struct {
+		result1 error
+	}
+	setPrivateDataValidationParameterReturnsOnCall map[int]struct {
+		result1 error
+	}
+	SetStateValidationParameterStub func(string, []byte) error
+	setStateValidationParameterMutex sync.RWMutex
+	setStateValidationParameterArgsForCall []struct {
+		arg1 string
+		arg2 []byte
+	}
+	setStateValidationParameterReturns struct {
+		result1 error
+	}
+	setStateValidationParameterReturnsOnCall map[int]struct {
+		result1 error
+	}
+	SplitCompositeKeyStub func(string) (string, []string, error)
+	splitCompositeKeyMutex sync.RWMutex
+	splitCompositeKeyArgsForCall []struct {
+		arg1 string
+	}
+	splitCompositeKeyReturns struct {
+		result1 string
+		result2 []string
+		result3 error
+	}
+	splitCompositeKeyReturnsOnCall map[int]struct {
+		result1 string
+		result2 []string
+		result3 error
+	}
+	invocations map[string][][]interface{}
+	invocationsMutex sync.RWMutex
+}
+
+func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.createCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)]
+	fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.createCompositeKeyMutex.Unlock()
+	if fake.CreateCompositeKeyStub != nil {
+		return fake.CreateCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.createCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int {
+	fake.createCompositeKeyMutex.RLock()
+	defer fake.createCompositeKeyMutex.RUnlock()
+	return len(fake.createCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) {
+	fake.createCompositeKeyMutex.Lock()
+	defer fake.createCompositeKeyMutex.Unlock()
+	fake.CreateCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) {
+	fake.createCompositeKeyMutex.RLock()
+	defer fake.createCompositeKeyMutex.RUnlock()
+	argsForCall := fake.createCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) {
+	fake.createCompositeKeyMutex.Lock()
+	defer fake.createCompositeKeyMutex.Unlock()
+	fake.CreateCompositeKeyStub = nil
+	fake.createCompositeKeyReturns = struct {
+		result1 string
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) {
+	fake.createCompositeKeyMutex.Lock()
+	defer fake.createCompositeKeyMutex.Unlock()
+	fake.CreateCompositeKeyStub = nil
+	if fake.createCompositeKeyReturnsOnCall == nil {
+		fake.createCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 error
+		})
+	}
+	fake.createCompositeKeyReturnsOnCall[i] = struct {
+		result1 string
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error {
+	fake.delPrivateDataMutex.Lock()
+	ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)]
+	fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2})
+	fake.delPrivateDataMutex.Unlock()
+	if fake.DelPrivateDataStub != nil {
+		return fake.DelPrivateDataStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.delPrivateDataReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) DelPrivateDataCallCount() int {
+	fake.delPrivateDataMutex.RLock()
+	defer fake.delPrivateDataMutex.RUnlock()
+	return len(fake.delPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) {
+	fake.delPrivateDataMutex.Lock()
+	defer fake.delPrivateDataMutex.Unlock()
+	fake.DelPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) {
+	fake.delPrivateDataMutex.RLock()
+	defer fake.delPrivateDataMutex.RUnlock()
+	argsForCall := fake.delPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) {
+	fake.delPrivateDataMutex.Lock()
+	defer fake.delPrivateDataMutex.Unlock()
+	fake.DelPrivateDataStub = nil
+	fake.delPrivateDataReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) {
+	fake.delPrivateDataMutex.Lock()
+	defer fake.delPrivateDataMutex.Unlock()
+	fake.DelPrivateDataStub = nil
+	if fake.delPrivateDataReturnsOnCall == nil {
+		fake.delPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.delPrivateDataReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) DelState(arg1 string) error {
+	fake.delStateMutex.Lock()
+	ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)]
+	fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("DelState", []interface{}{arg1})
+	fake.delStateMutex.Unlock()
+	if fake.DelStateStub != nil {
+		return fake.DelStateStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.delStateReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) DelStateCallCount() int {
+	fake.delStateMutex.RLock()
+	defer fake.delStateMutex.RUnlock()
+	return len(fake.delStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) {
+	fake.delStateMutex.Lock()
+	defer fake.delStateMutex.Unlock()
+	fake.DelStateStub = stub
+}
+
+func (fake *ChaincodeStub) DelStateArgsForCall(i int) string {
+	fake.delStateMutex.RLock()
+	defer fake.delStateMutex.RUnlock()
+	argsForCall := fake.delStateArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) DelStateReturns(result1 error) {
+	fake.delStateMutex.Lock()
+	defer fake.delStateMutex.Unlock()
+	fake.DelStateStub = nil
+	fake.delStateReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) {
+	fake.delStateMutex.Lock()
+	defer fake.delStateMutex.Unlock()
+	fake.DelStateStub = nil
+	if fake.delStateReturnsOnCall == nil {
+		fake.delStateReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.delStateReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgs() [][]byte {
+	fake.getArgsMutex.Lock()
+	ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)]
+	fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetArgs", []interface{}{})
+	fake.getArgsMutex.Unlock()
+	if fake.GetArgsStub != nil {
+		return fake.GetArgsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getArgsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetArgsCallCount() int {
+	fake.getArgsMutex.RLock()
+	defer fake.getArgsMutex.RUnlock()
+	return len(fake.getArgsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	fake.getArgsReturns = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	if fake.getArgsReturnsOnCall == nil {
+		fake.getArgsReturnsOnCall = make(map[int]struct {
+			result1 [][]byte
+		})
+	}
+	fake.getArgsReturnsOnCall[i] = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) {
+	fake.getArgsSliceMutex.Lock()
+	ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)]
+	fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetArgsSlice", []interface{}{})
+	fake.getArgsSliceMutex.Unlock()
+	if fake.GetArgsSliceStub != nil {
+		return fake.GetArgsSliceStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getArgsSliceReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCallCount() int {
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	return len(fake.getArgsSliceArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	fake.getArgsSliceReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	if fake.getArgsSliceReturnsOnCall == nil {
+		fake.getArgsSliceReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getArgsSliceReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBinding() ([]byte, error) {
+	fake.getBindingMutex.Lock()
+	ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)]
+	fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetBinding", []interface{}{})
+	fake.getBindingMutex.Unlock()
+	if fake.GetBindingStub != nil {
+		return fake.GetBindingStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getBindingReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetBindingCallCount() int {
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	return len(fake.getBindingArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = stub
+}
+
+func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	fake.getBindingReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	if fake.getBindingReturnsOnCall == nil {
+		fake.getBindingReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getBindingReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetChannelID() string {
+	fake.getChannelIDMutex.Lock()
+	ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)]
+	fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetChannelID", []interface{}{})
+	fake.getChannelIDMutex.Unlock()
+	if fake.GetChannelIDStub != nil {
+		return fake.GetChannelIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getChannelIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetChannelIDCallCount() int {
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	return len(fake.getChannelIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = stub
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	fake.getChannelIDReturns = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	if fake.getChannelIDReturnsOnCall == nil {
+		fake.getChannelIDReturnsOnCall = make(map[int]struct {
+			result1 string
+		})
+	}
+	fake.getChannelIDReturnsOnCall[i] = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetCreator() ([]byte, error) {
+	fake.getCreatorMutex.Lock()
+	ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)]
+	fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetCreator", []interface{}{})
+	fake.getCreatorMutex.Unlock()
+	if fake.GetCreatorStub != nil {
+		return fake.GetCreatorStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getCreatorReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetCreatorCallCount() int {
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	return len(fake.getCreatorArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = stub
+}
+
+func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	fake.getCreatorReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	if fake.getCreatorReturnsOnCall == nil {
+		fake.getCreatorReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getCreatorReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetDecorations() map[string][]byte {
+	fake.getDecorationsMutex.Lock()
+	ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)]
+	fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetDecorations", []interface{}{})
+	fake.getDecorationsMutex.Unlock()
+	if fake.GetDecorationsStub != nil {
+		return fake.GetDecorationsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getDecorationsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetDecorationsCallCount() int {
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	return len(fake.getDecorationsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = stub
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	fake.getDecorationsReturns = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	if fake.getDecorationsReturnsOnCall == nil {
+		fake.getDecorationsReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+		})
+	}
+	fake.getDecorationsReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)]
+	fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetFunctionAndParameters", []interface{}{})
+	fake.getFunctionAndParametersMutex.Unlock()
+	if fake.GetFunctionAndParametersStub != nil {
+		return fake.GetFunctionAndParametersStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getFunctionAndParametersReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int {
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	return len(fake.getFunctionAndParametersArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = stub
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	fake.getFunctionAndParametersReturns = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	if fake.getFunctionAndParametersReturnsOnCall == nil {
+		fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+		})
+	}
+	fake.getFunctionAndParametersReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) {
+	fake.getHistoryForKeyMutex.Lock()
+	ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)]
+	fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetHistoryForKey", []interface{}{arg1})
+	fake.getHistoryForKeyMutex.Unlock()
+	if fake.GetHistoryForKeyStub != nil {
+		return fake.GetHistoryForKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getHistoryForKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	return len(fake.getHistoryForKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	argsForCall := fake.getHistoryForKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	fake.getHistoryForKeyReturns = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	if fake.getHistoryForKeyReturnsOnCall == nil {
+		fake.getHistoryForKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.HistoryQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getHistoryForKeyReturnsOnCall[i] = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)]
+	fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2})
+	fake.getPrivateDataMutex.Unlock()
+	if fake.GetPrivateDataStub != nil {
+		return fake.GetPrivateDataStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCallCount() int {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	return len(fake.getPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	argsForCall := fake.getPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	fake.getPrivateDataReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	if fake.getPrivateDataReturnsOnCall == nil {
+		fake.getPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg3Copy []string
+	if arg3 != nil {
+		arg3Copy = make([]string, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)]
+	fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []string
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy})
+	fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	if fake.GetPrivateDataByPartialCompositeKeyStub != nil {
+		return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	fake.getPrivateDataByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)]
+	fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3})
+	fake.getPrivateDataByRangeMutex.Unlock()
+	if fake.GetPrivateDataByRangeStub != nil {
+		return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	return len(fake.getPrivateDataByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, string, string) {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	fake.getPrivateDataByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	if fake.getPrivateDataByRangeReturnsOnCall == nil {
+		fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataHashMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)]
+	fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2})
+	fake.getPrivateDataHashMutex.Unlock()
+	if fake.GetPrivateDataHashStub != nil {
+		return fake.GetPrivateDataHashStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataHashReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCallCount() int {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	return len(fake.getPrivateDataHashArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	argsForCall := fake.getPrivateDataHashArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	fake.getPrivateDataHashReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	if fake.getPrivateDataHashReturnsOnCall == nil {
+		fake.getPrivateDataHashReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataHashReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)]
+	fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2})
+	fake.getPrivateDataQueryResultMutex.Unlock()
+	if fake.GetPrivateDataQueryResultStub != nil {
+		return fake.GetPrivateDataQueryResultStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	return len(fake.getPrivateDataQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	argsForCall := fake.getPrivateDataQueryResultArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	fake.getPrivateDataQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	if fake.getPrivateDataQueryResultReturnsOnCall == nil {
+		fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)]
+	fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2})
+	fake.getPrivateDataValidationParameterMutex.Unlock()
+	if fake.GetPrivateDataValidationParameterStub != nil {
+		return fake.GetPrivateDataValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.getPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	fake.getPrivateDataValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	if fake.getPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getQueryResultMutex.Lock()
+	ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)]
+	fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetQueryResult", []interface{}{arg1})
+	fake.getQueryResultMutex.Unlock()
+	if fake.GetQueryResultStub != nil {
+		return fake.GetQueryResultStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetQueryResultCallCount() int {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	return len(fake.getQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	argsForCall := fake.getQueryResultArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	fake.getQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	if fake.getQueryResultReturnsOnCall == nil {
+		fake.getQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)]
+	fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 int32
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3})
+	fake.getQueryResultWithPaginationMutex.Unlock()
+	if fake.GetQueryResultWithPaginationStub != nil {
+		return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getQueryResultWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	return len(fake.getQueryResultWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	argsForCall := fake.getQueryResultWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+
fake.getQueryResultWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getQueryResultWithPaginationMutex.Lock() + defer fake.getQueryResultWithPaginationMutex.Unlock() + fake.GetQueryResultWithPaginationStub = nil + if fake.getQueryResultWithPaginationReturnsOnCall == nil { + fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getQueryResultWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) { + fake.getSignedProposalMutex.Lock() + ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)] + fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct { + }{}) + fake.recordInvocation("GetSignedProposal", []interface{}{}) + fake.getSignedProposalMutex.Unlock() + if fake.GetSignedProposalStub != nil { + return fake.GetSignedProposalStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getSignedProposalReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetSignedProposalCallCount() int { + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + return len(fake.getSignedProposalArgsForCall) +} + +func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + 
fake.GetSignedProposalStub = stub +} + +func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + fake.GetSignedProposalStub = nil + fake.getSignedProposalReturns = struct { + result1 *peer.SignedProposal + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + fake.GetSignedProposalStub = nil + if fake.getSignedProposalReturnsOnCall == nil { + fake.getSignedProposalReturnsOnCall = make(map[int]struct { + result1 *peer.SignedProposal + result2 error + }) + } + fake.getSignedProposalReturnsOnCall[i] = struct { + result1 *peer.SignedProposal + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) { + fake.getStateMutex.Lock() + ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)] + fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetState", []interface{}{arg1}) + fake.getStateMutex.Unlock() + if fake.GetStateStub != nil { + return fake.GetStateStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateCallCount() int { + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + return len(fake.getStateArgsForCall) +} + +func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = stub +} + +func (fake *ChaincodeStub) GetStateArgsForCall(i int) string { + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + argsForCall := fake.getStateArgsForCall[i] + 
return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = nil + fake.getStateReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = nil + if fake.getStateReturnsOnCall == nil { + fake.getStateReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.getStateByPartialCompositeKeyMutex.Lock() + ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)] + fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy}) + fake.getStateByPartialCompositeKeyMutex.Unlock() + if fake.GetStateByPartialCompositeKeyStub != nil { + return fake.GetStateByPartialCompositeKeyStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByPartialCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int { + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + return len(fake.getStateByPartialCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) 
GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) { + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + fake.getStateByPartialCompositeKeyReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + if fake.getStateByPartialCompositeKeyReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + 
fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)] + fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + }{arg1, arg2Copy, arg3, arg4}) + fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4}) + fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil { + return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + argsForCall := 
fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + fake.getStateByPartialCompositeKeyWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getStateByRangeMutex.Lock() + ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)] + fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + 
fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2}) + fake.getStateByRangeMutex.Unlock() + if fake.GetStateByRangeStub != nil { + return fake.GetStateByRangeStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByRangeReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByRangeCallCount() int { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + return len(fake.getStateByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + argsForCall := fake.getStateByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + fake.getStateByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + if fake.getStateByRangeReturnsOnCall == nil { + fake.getStateByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) 
GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getStateByRangeWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)] + fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + }{arg1, arg2, arg3, arg4}) + fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4}) + fake.getStateByRangeWithPaginationMutex.Unlock() + if fake.GetStateByRangeWithPaginationStub != nil { + return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByRangeWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + return len(fake.getStateByRangeWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) 
GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + fake.getStateByRangeWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + if fake.getStateByRangeWithPaginationReturnsOnCall == nil { + fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) { + fake.getStateValidationParameterMutex.Lock() + ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)] + fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1}) + fake.getStateValidationParameterMutex.Unlock() + if fake.GetStateValidationParameterStub != nil { + return fake.GetStateValidationParameterStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateValidationParameterReturns + return 
fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + return len(fake.getStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + argsForCall := fake.getStateValidationParameterArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + fake.getStateValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + if fake.getStateValidationParameterReturnsOnCall == nil { + fake.getStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStringArgs() []string { + fake.getStringArgsMutex.Lock() + ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)] + fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct { + }{}) + 
fake.recordInvocation("GetStringArgs", []interface{}{}) + fake.getStringArgsMutex.Unlock() + if fake.GetStringArgsStub != nil { + return fake.GetStringArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStringArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetStringArgsCallCount() int { + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + return len(fake.getStringArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = stub +} + +func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + fake.getStringArgsReturns = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + if fake.getStringArgsReturnsOnCall == nil { + fake.getStringArgsReturnsOnCall = make(map[int]struct { + result1 []string + }) + } + fake.getStringArgsReturnsOnCall[i] = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) { + fake.getTransientMutex.Lock() + ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)] + fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct { + }{}) + fake.recordInvocation("GetTransient", []interface{}{}) + fake.getTransientMutex.Unlock() + if fake.GetTransientStub != nil { + return fake.GetTransientStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTransientReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTransientCallCount() int { + 
fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + return len(fake.getTransientArgsForCall) +} + +func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = stub +} + +func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + fake.getTransientReturns = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + if fake.getTransientReturnsOnCall == nil { + fake.getTransientReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + result2 error + }) + } + fake.getTransientReturnsOnCall[i] = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxID() string { + fake.getTxIDMutex.Lock() + ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)] + fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxID", []interface{}{}) + fake.getTxIDMutex.Unlock() + if fake.GetTxIDStub != nil { + return fake.GetTxIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getTxIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetTxIDCallCount() int { + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + return len(fake.getTxIDArgsForCall) +} + +func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + 
defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer 
fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy [][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() 
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = nil
+	fake.invokeChaincodeReturns = struct {
+		result1 peer.Response
+	}{result1}
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) {
+	fake.invokeChaincodeMutex.Lock()
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = nil
+	if fake.invokeChaincodeReturnsOnCall == nil {
+		fake.invokeChaincodeReturnsOnCall = make(map[int]struct {
+			result1 peer.Response
+		})
+	}
+	fake.invokeChaincodeReturnsOnCall[i] = struct {
+		result1 peer.Response
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error {
+	var arg3Copy []byte
+	if arg3 != nil {
+		arg3Copy = make([]byte, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.putPrivateDataMutex.Lock()
+	ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)]
+	fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []byte
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy})
+	fake.putPrivateDataMutex.Unlock()
+	if fake.PutPrivateDataStub != nil {
+		return fake.PutPrivateDataStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.putPrivateDataReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) PutPrivateDataCallCount() int {
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	return len(fake.putPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) {
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	argsForCall := fake.putPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = nil
+	fake.putPrivateDataReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = nil
+	if fake.putPrivateDataReturnsOnCall == nil {
+		fake.putPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.putPrivateDataReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.putStateMutex.Lock()
+	ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)]
+	fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy})
+	fake.putStateMutex.Unlock()
+	if fake.PutStateStub != nil {
+		return fake.PutStateStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.putStateReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) PutStateCallCount() int {
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	return len(fake.putStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = stub
+}
+
+func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) {
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	argsForCall := fake.putStateArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) PutStateReturns(result1 error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = nil
+	fake.putStateReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = nil
+	if fake.putStateReturnsOnCall == nil {
+		fake.putStateReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.putStateReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.setEventMutex.Lock()
+	ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)]
+	fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy})
+	fake.setEventMutex.Unlock()
+	if fake.SetEventStub != nil {
+		return fake.SetEventStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setEventReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetEventCallCount() int {
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	return len(fake.setEventArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = stub
+}
+
+func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) {
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	argsForCall := fake.setEventArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) SetEventReturns(result1 error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = nil
+	fake.setEventReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = nil
+	if fake.setEventReturnsOnCall == nil {
+		fake.setEventReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setEventReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error {
+	var arg3Copy []byte
+	if arg3 != nil {
+		arg3Copy = make([]byte, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)]
+	fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []byte
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy})
+	fake.setPrivateDataValidationParameterMutex.Unlock()
+	if fake.SetPrivateDataValidationParameterStub != nil {
+		return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setPrivateDataValidationParameterReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int {
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.setPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) {
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = nil
+	fake.setPrivateDataValidationParameterReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = nil
+	if fake.setPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.setStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)]
+	fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy})
+	fake.setStateValidationParameterMutex.Unlock()
+	if fake.SetStateValidationParameterStub != nil {
+		return fake.SetStateValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setStateValidationParameterReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int {
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	return len(fake.setStateValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) {
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	argsForCall := fake.setStateValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = nil
+	fake.setStateValidationParameterReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = nil
+	if fake.setStateValidationParameterReturnsOnCall == nil {
+		fake.setStateValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setStateValidationParameterReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) {
+	fake.splitCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)]
+	fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("SplitCompositeKey", []interface{}{arg1})
+	fake.splitCompositeKeyMutex.Unlock()
+	if fake.SplitCompositeKeyStub != nil {
+		return fake.SplitCompositeKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.splitCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int {
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	return len(fake.splitCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string {
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	argsForCall := fake.splitCompositeKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = nil
+	fake.splitCompositeKeyReturns = struct {
+		result1 string
+		result2 []string
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = nil
+	if fake.splitCompositeKeyReturnsOnCall == nil {
+		fake.splitCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+			result3 error
+		})
+	}
+	fake.splitCompositeKeyReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.createCompositeKeyMutex.RLock()
+	defer fake.createCompositeKeyMutex.RUnlock()
+	fake.delPrivateDataMutex.RLock()
+	defer fake.delPrivateDataMutex.RUnlock()
+	fake.delStateMutex.RLock()
+	defer fake.delStateMutex.RUnlock()
+	fake.getArgsMutex.RLock()
+	defer fake.getArgsMutex.RUnlock()
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	fake.getStringArgsMutex.RLock()
+	defer fake.getStringArgsMutex.RUnlock()
+	fake.getTransientMutex.RLock()
+	defer fake.getTransientMutex.RUnlock()
+	fake.getTxIDMutex.RLock()
+	defer fake.getTxIDMutex.RUnlock()
+	fake.getTxTimestampMutex.RLock()
+	defer fake.getTxTimestampMutex.RUnlock()
+	fake.invokeChaincodeMutex.RLock()
+	defer fake.invokeChaincodeMutex.RUnlock()
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go
new file mode 100644
index 0000000..27e3034
--- /dev/null
+++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go
@@ -0,0 +1,232 @@
+// Code generated by counterfeiter. DO NOT EDIT.
+package mocks
+
+import (
+	"sync"
+
+	"github.com/hyperledger/fabric-protos-go/ledger/queryresult"
+)
+
+type StateQueryIterator struct {
+	CloseStub        func() error
+	closeMutex       sync.RWMutex
+	closeArgsForCall []struct {
+	}
+	closeReturns struct {
+		result1 error
+	}
+	closeReturnsOnCall map[int]struct {
+		result1 error
+	}
+	HasNextStub        func() bool
+	hasNextMutex       sync.RWMutex
+	hasNextArgsForCall []struct {
+	}
+	hasNextReturns struct {
+		result1 bool
+	}
+	hasNextReturnsOnCall map[int]struct {
+		result1 bool
+	}
+	NextStub        func() (*queryresult.KV, error)
+	nextMutex       sync.RWMutex
+	nextArgsForCall []struct {
+	}
+	nextReturns struct {
+		result1 *queryresult.KV
+		result2 error
+	}
+	nextReturnsOnCall map[int]struct {
+		result1 *queryresult.KV
+		result2 error
+	}
+	invocations      map[string][][]interface{}
+	invocationsMutex sync.RWMutex
+}
+
+func (fake *StateQueryIterator) Close() error {
+	fake.closeMutex.Lock()
+	ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)]
+	fake.closeArgsForCall = append(fake.closeArgsForCall, struct {
+	}{})
+	fake.recordInvocation("Close", []interface{}{})
+	fake.closeMutex.Unlock()
+	if fake.CloseStub != nil {
+		return fake.CloseStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.closeReturns
+	return fakeReturns.result1
+}
+
+func (fake *StateQueryIterator) CloseCallCount() int {
+	fake.closeMutex.RLock()
+	defer fake.closeMutex.RUnlock()
+	return len(fake.closeArgsForCall)
+}
+
+func (fake *StateQueryIterator) CloseCalls(stub func() error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = stub
+}
+
+func (fake *StateQueryIterator) CloseReturns(result1 error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = nil
+	fake.closeReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = nil
+	if fake.closeReturnsOnCall == nil {
+		fake.closeReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.closeReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *StateQueryIterator) HasNext() bool {
+	fake.hasNextMutex.Lock()
+	ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)]
+	fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct {
+	}{})
+	fake.recordInvocation("HasNext", []interface{}{})
+	fake.hasNextMutex.Unlock()
+	if fake.HasNextStub != nil {
+		return fake.HasNextStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.hasNextReturns
+	return fakeReturns.result1
+}
+
+func (fake *StateQueryIterator) HasNextCallCount() int {
+	fake.hasNextMutex.RLock()
+	defer fake.hasNextMutex.RUnlock()
+	return len(fake.hasNextArgsForCall)
+}
+
+func (fake *StateQueryIterator) HasNextCalls(stub func() bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = stub
+}
+
+func (fake *StateQueryIterator) HasNextReturns(result1 bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = nil
+	fake.hasNextReturns = struct {
+		result1 bool
+	}{result1}
+}
+
+func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = nil
+	if fake.hasNextReturnsOnCall == nil {
+		fake.hasNextReturnsOnCall = make(map[int]struct {
+			result1 bool
+		})
+	}
+	fake.hasNextReturnsOnCall[i] = struct {
+		result1 bool
+	}{result1}
+}
+
+func (fake *StateQueryIterator) Next() (*queryresult.KV, error) {
+	fake.nextMutex.Lock()
+	ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)]
+	fake.nextArgsForCall = append(fake.nextArgsForCall, struct {
+	}{})
+	fake.recordInvocation("Next", []interface{}{})
+	fake.nextMutex.Unlock()
+	if fake.NextStub != nil {
+		return fake.NextStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.nextReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *StateQueryIterator) NextCallCount() int {
+	fake.nextMutex.RLock()
+	defer fake.nextMutex.RUnlock()
+	return len(fake.nextArgsForCall)
+}
+
+func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = stub
+}
+
+func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = nil
+	fake.nextReturns = struct {
+		result1 *queryresult.KV
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = nil
+	if fake.nextReturnsOnCall == nil {
+		fake.nextReturnsOnCall = make(map[int]struct {
+			result1 *queryresult.KV
+			result2 error
+		})
+	}
+	fake.nextReturnsOnCall[i] = struct {
+		result1 *queryresult.KV
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *StateQueryIterator) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.closeMutex.RLock()
+	defer fake.closeMutex.RUnlock()
+	fake.hasNextMutex.RLock()
+	defer fake.hasNextMutex.RUnlock()
+	fake.nextMutex.RLock()
+	defer fake.nextMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go
new file mode 100644
index 0000000..eea37db
--- /dev/null
+++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go
@@ -0,0 +1,164 @@
+// Code generated by counterfeiter. DO NOT EDIT.
+package mocks
+
+import (
+	"sync"
+
+	"github.com/hyperledger/fabric-chaincode-go/pkg/cid"
+	"github.com/hyperledger/fabric-chaincode-go/shim"
+)
+
+type TransactionContext struct {
+	GetClientIdentityStub        func() cid.ClientIdentity
+	getClientIdentityMutex       sync.RWMutex
+	getClientIdentityArgsForCall []struct {
+	}
+	getClientIdentityReturns struct {
+		result1 cid.ClientIdentity
+	}
+	getClientIdentityReturnsOnCall map[int]struct {
+		result1 cid.ClientIdentity
+	}
+	GetStubStub        func() shim.ChaincodeStubInterface
+	getStubMutex       sync.RWMutex
+	getStubArgsForCall []struct {
+	}
+	getStubReturns struct {
+		result1 shim.ChaincodeStubInterface
+	}
+	getStubReturnsOnCall map[int]struct {
+		result1 shim.ChaincodeStubInterface
+	}
+	invocations      map[string][][]interface{}
+	invocationsMutex sync.RWMutex
+}
+
+func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity {
+	fake.getClientIdentityMutex.Lock()
+	ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)]
+	fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetClientIdentity", []interface{}{})
+	fake.getClientIdentityMutex.Unlock()
+	if fake.GetClientIdentityStub != nil {
+		return fake.GetClientIdentityStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getClientIdentityReturns
+	return fakeReturns.result1
+}
+
+func (fake *TransactionContext) GetClientIdentityCallCount() int {
+	fake.getClientIdentityMutex.RLock()
+	defer fake.getClientIdentityMutex.RUnlock()
+	return len(fake.getClientIdentityArgsForCall)
+}
+
+func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = stub
+}
+
+func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = nil
+	fake.getClientIdentityReturns = struct {
+		result1 cid.ClientIdentity
+	}{result1}
+}
+
+func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = nil
+	if fake.getClientIdentityReturnsOnCall == nil {
+		fake.getClientIdentityReturnsOnCall = make(map[int]struct {
+			result1 cid.ClientIdentity
+		})
+	}
+	fake.getClientIdentityReturnsOnCall[i] = struct {
+		result1 cid.ClientIdentity
+	}{result1}
+}
+
+func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface {
+	fake.getStubMutex.Lock()
+	ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)]
+	fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetStub", []interface{}{})
+	fake.getStubMutex.Unlock()
+	if fake.GetStubStub != nil {
+		return fake.GetStubStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getStubReturns
+	return fakeReturns.result1
+}
+
+func (fake *TransactionContext) GetStubCallCount() int {
+	fake.getStubMutex.RLock()
+	defer fake.getStubMutex.RUnlock()
+	return len(fake.getStubArgsForCall)
+}
+
+func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = stub
+}
+
+func (fake *TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = nil
+	fake.getStubReturns = struct {
+		result1 shim.ChaincodeStubInterface
+	}{result1}
+}
+
+func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = nil
+	if fake.getStubReturnsOnCall == nil {
+		fake.getStubReturnsOnCall = make(map[int]struct {
+			result1 shim.ChaincodeStubInterface
+		})
+	}
+	fake.getStubReturnsOnCall[i] = struct {
+		result1 shim.ChaincodeStubInterface
+	}{result1}
+}
+
+func (fake *TransactionContext) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.getClientIdentityMutex.RLock()
+	defer fake.getClientIdentityMutex.RUnlock()
+	fake.getStubMutex.RLock()
+	defer fake.getStubMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *TransactionContext) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go
new file mode 100644
index 0000000..71e8dd8
--- /dev/null
+++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go
@@ -0,0 +1,185 @@
+package chaincode
+
+import (
+	"encoding/json"
+	"fmt"
+
+	"github.com/hyperledger/fabric-contract-api-go/contractapi"
+)
+
+// SmartContract provides functions for managing an Asset
+type SmartContract struct {
+	contractapi.Contract
+}
+
+// Asset describes basic details of what makes up a simple asset
+type Asset struct {
+	ID             string `json:"ID"`
+	Color          string `json:"color"`
+	Size           int    `json:"size"`
+	Owner          string `json:"owner"`
+	AppraisedValue int    `json:"appraisedValue"`
+}
+
+// InitLedger adds a base set of assets to the ledger
+func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
+	assets := []Asset{
+		{ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300},
+		{ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400},
+		{ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500},
+		{ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600},
+		{ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700},
+		{ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800},
+	}
+
+	for _, asset := range assets {
+		assetJSON, err := json.Marshal(asset)
+		if err != nil {
+			return err
+		}
+
+		err = ctx.GetStub().PutState(asset.ID, assetJSON)
+		if err != nil {
+			return fmt.Errorf("failed to put to world state. %v", err)
+		}
+	}
+
+	return nil
+}
+
+// CreateAsset issues a new asset to the world state with given details.
+func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if exists {
+		return fmt.Errorf("the asset %s already exists", id)
+	}
+
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// ReadAsset returns the asset stored in the world state with given id.
+func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) {
+	assetJSON, err := ctx.GetStub().GetState(id)
+	if err != nil {
+		return nil, fmt.Errorf("failed to read from world state: %v", err)
+	}
+	if assetJSON == nil {
+		return nil, fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	var asset Asset
+	err = json.Unmarshal(assetJSON, &asset)
+	if err != nil {
+		return nil, err
+	}
+
+	return &asset, nil
+}
+
+// UpdateAsset updates an existing asset in the world state with provided parameters.
+func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if !exists {
+		return fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	// overwriting original asset with new asset
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// DeleteAsset deletes a given asset from the world state.
+func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if !exists {
+		return fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	return ctx.GetStub().DelState(id)
+}
+
+// AssetExists returns true when asset with given ID exists in world state
+func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) {
+	assetJSON, err := ctx.GetStub().GetState(id)
+	if err != nil {
+		return false, fmt.Errorf("failed to read from world state: %v", err)
+	}
+
+	return assetJSON != nil, nil
+}
+
+// TransferAsset updates the owner field of asset with given id in world state.
+func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error {
+	asset, err := s.ReadAsset(ctx, id)
+	if err != nil {
+		return err
+	}
+
+	asset.Owner = newOwner
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// GetAllAssets returns all assets found in world state
+func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) {
+	// range query with empty string for startKey and endKey does an
+	// open-ended query of all assets in the chaincode namespace.
+ resultsIterator, err := ctx.GetStub().GetStateByRange("", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . 
stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, 
err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, 
fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + chaincodeStub.GetStateByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = 
assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t0/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod 
h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= +github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= 
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= 
+github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty 
v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod 
h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto 
v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= +google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 
v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t0/config/config.yaml b/topologies/t0/config/config.yaml new file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t0/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t0/config/configtx.yaml b/topologies/t0/config/configtx.yaml new file mode 100644 index 0000000..d0a4a06 --- /dev/null +++ b/topologies/t0/config/configtx.yaml @@ -0,0 +1,379 @@ +# Copyright IBM Corp. All Rights Reserved. +# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. 
It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. 
This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. 
Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. + # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers 
+
+        # ACL policy for qscc's "GetBlockByHash" function
+        qscc/GetBlockByHash: /Channel/Application/Readers
+
+        # ACL policy for qscc's "GetTransactionByID" function
+        qscc/GetTransactionByID: /Channel/Application/Readers
+
+        # ACL policy for qscc's "GetBlockByTxID" function
+        qscc/GetBlockByTxID: /Channel/Application/Readers
+
+        #---Configuration System Chaincode (cscc) function to policy mapping for access control---#
+
+        # ACL policy for cscc's "GetConfigBlock" function
+        cscc/GetConfigBlock: /Channel/Application/Readers
+
+        # ACL policy for cscc's "GetConfigTree" function
+        cscc/GetConfigTree: /Channel/Application/Readers
+
+        # ACL policy for cscc's "SimulateConfigTreeUpdate" function
+        cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers
+
+        #---Miscellaneous peer function to policy mapping for access control---#
+
+        # ACL policy for invoking chaincodes on peer
+        peer/Propose: /Channel/Application/Writers
+
+        # ACL policy for chaincode to chaincode invocation
+        peer/ChaincodeToChaincode: /Channel/Application/Readers
+
+        #---Events resource to policy mapping for access control---#
+
+        # ACL policy for sending block events
+        event/Block: /Channel/Application/Readers
+
+        # ACL policy for sending filtered block events
+        event/FilteredBlock: /Channel/Application/Readers
+
+        # Chaincode Lifecycle Policies introduced in Fabric 2.x
+        # ACL policy for _lifecycle's "CheckCommitReadiness" function
+        _lifecycle/CheckCommitReadiness: /Channel/Application/Writers
+
+        # ACL policy for _lifecycle's "CommitChaincodeDefinition" function
+        _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers
+
+        # ACL policy for _lifecycle's "QueryChaincodeDefinition" function
+        _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers
+
+    # Organizations is the list of orgs which are defined as participants on
+    # the application side of the network
+    Organizations:
+
+    # Policies defines the set of policies at this level of the config tree
+    # For Application policies, their canonical path is
+    # /Channel/Application/
+    Policies:
+        Readers:
+            Type: ImplicitMeta
+            Rule: "ANY Readers"
+        Writers:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+        Admins:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Admins"
+        LifecycleEndorsement:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Endorsement"
+        Endorsement:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Endorsement"
+
+    Capabilities:
+        <<: *ApplicationCapabilities
+################################################################################
+#
+#   SECTION: Orderer
+#
+#   - This section defines the values to encode into a config transaction or
+#   genesis block for orderer related parameters
+#
+################################################################################
+Orderer: &OrdererDefaults
+
+    # Orderer Type: The orderer implementation to start
+    OrdererType: etcdraft
+
+    # Addresses used to be the list of orderer addresses that clients and peers
+    # could connect to. However, this does not allow clients to associate orderer
+    # addresses and orderer organizations which can be useful for things such
+    # as TLS validation. The preferred way to specify orderer addresses is now
+    # to include the OrdererEndpoints item in your org definition
+    Addresses:
+        - <>-org1-orderer1:7050
+
+    EtcdRaft:
+        Consenters:
+            - Host: <>-org1-orderer1
+              Port: 7050
+              ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem
+              ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem
+
+    # Batch Timeout: The amount of time to wait before creating a batch
+    BatchTimeout: 2s
+
+    # Batch Size: Controls the number of messages batched into a block
+    BatchSize:
+
+        # Max Message Count: The maximum number of messages to permit in a batch
+        MaxMessageCount: 10
+
+        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
+        # the serialized messages in a batch.
+        AbsoluteMaxBytes: 99 MB
+
+        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
+        # the serialized messages in a batch. A message larger than the preferred
+        # max bytes will result in a batch larger than preferred max bytes.
+        PreferredMaxBytes: 512 KB
+
+    # Organizations is the list of orgs which are defined as participants on
+    # the orderer side of the network
+    Organizations:
+
+    # Policies defines the set of policies at this level of the config tree
+    # For Orderer policies, their canonical path is
+    # /Channel/Orderer/
+    Policies:
+        Readers:
+            Type: ImplicitMeta
+            Rule: "ANY Readers"
+        Writers:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+        Admins:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Admins"
+        # BlockValidation specifies what signatures must be included in the block
+        # from the orderer for the peer to validate it.
+        BlockValidation:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+
+################################################################################
+#
+#   CHANNEL
+#
+#   This section defines the values to encode into a config transaction or
+#   genesis block for channel related parameters.
+#
+################################################################################
+Channel: &ChannelDefaults
+    # Policies defines the set of policies at this level of the config tree
+    # For Channel policies, their canonical path is
+    # /Channel/
+    Policies:
+        # Who may invoke the 'Deliver' API
+        Readers:
+            Type: ImplicitMeta
+            Rule: "ANY Readers"
+        # Who may invoke the 'Broadcast' API
+        Writers:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+        # By default, who may modify elements at this config level
+        Admins:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Admins"
+
+    # Capabilities describes the channel level capabilities, see the
+    # dedicated Capabilities section elsewhere in this file for a full
+    # description
+    Capabilities:
+        <<: *ChannelCapabilities
+
+################################################################################
+#
+#   Profile
+#
+#   - Different configuration profiles may be encoded here to be specified
+#   as parameters to the configtxgen tool
+#
+################################################################################
+Profiles:
+    OrgsOrdererGenesis:
+        <<: *ChannelDefaults
+        Orderer:
+            <<: *OrdererDefaults
+            Organizations:
+                - *org1
+            Capabilities:
+                <<: *OrdererCapabilities
+        Consortiums:
+            MainConsortium:
+                Organizations:
+                    - *org2
+
+    OrgsChannel:
+        Consortium: MainConsortium
+        <<: *ChannelDefaults
+        Application:
+            <<: *ApplicationDefaults
+            Organizations:
+                - *org2
+            Capabilities:
+                <<: *ApplicationCapabilities
+
diff --git a/topologies/t0/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t0/containers/cas/org1-cas/docker-compose-org1-cas.yml
new file mode 100644
index 0000000..1d941c5
--- /dev/null
+++ b/topologies/t0/containers/cas/org1-cas/docker-compose-org1-cas.yml
@@ -0,0 +1,29 @@
+version: "3.9"
+services:
+  org1-ca-tls:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls
+
+    command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
+  org1-ca-identities:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities
+    command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
\ No newline at end of file
diff --git a/topologies/t0/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t0/containers/cas/org2-cas/docker-compose-org2-cas.yml
new file mode 100644
index 0000000..e22813e
--- /dev/null
+++ b/topologies/t0/containers/cas/org2-cas/docker-compose-org2-cas.yml
@@ -0,0 +1,28 @@
+version: "3.9"
+services:
+  org2-ca-tls:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls
+    command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
+  org2-ca-identities:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities
+    command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
\ No newline at end of file
diff --git a/topologies/t0/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t0/containers/clis/org2-clis/docker-compose-org2-clis.yml
new file mode 100644
index 0000000..4ceb8a0
--- /dev/null
+++ b/topologies/t0/containers/clis/org2-clis/docker-compose-org2-clis.yml
@@ -0,0 +1,21 @@
+services:
+  org2-cli-peer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1
+    tty: true
+    stdin_open: true
+    environment:
+      - GOPATH=/opt/gopath
+      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
+      - FABRIC_LOGGING_SPEC=DEBUG
+      - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1
+      - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051
+      - CORE_PEER_LOCALMSPID=org2MSP
+      - CORE_PEER_TLS_ENABLED=true
+      - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem
+      - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2
+    command: sh
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode
+      - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp
diff --git a/topologies/t0/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t0/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml
new file mode 100644
index 0000000..4119dca
--- /dev/null
+++ b/topologies/t0/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml
@@ -0,0 +1,28 @@
+services:
+  org1-orderer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1
+    environment:
+      - ORDERER_HOME=/tmp/hyperledger/orderer/home
+      - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger
+      - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_GENERAL_LISTENPORT=7050
+      - ORDERER_GENERAL_GENESISMETHOD=file
+      # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_LOCALMSPID=org1MSP
+      - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp
+      - ORDERER_GENERAL_TLS_ENABLED=true
+      - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem
+      - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem
+      # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_LOGLEVEL=debug
+      - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs
+      - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051
+      - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052
+      - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer
\ No newline at end of file
diff --git a/topologies/t0/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t0/containers/peers/org2-peers/docker-compose-org2-peers.yml
new file mode 100644
index 0000000..aef6a59
--- /dev/null
+++ b/topologies/t0/containers/peers/org2-peers/docker-compose-org2-peers.yml
@@ -0,0 +1,25 @@
+services:
+  org2-peer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1
+    environment:
+      - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1
+      - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051
+      - CORE_PEER_LOCALMSPID=org2MSP
+      - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp
+      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
+      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY}
+      # - FABRIC_LOGGING_SPEC=debug
+      - CORE_PEER_TLS_ENABLED=true
+      - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem
+      - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem
+      - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem
+      - CORE_PEER_GOSSIP_USELEADERELECTION=true
+      - CORE_PEER_GOSSIP_ORGLEADER=false
+      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051
+      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+      # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
+    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1
+    volumes:
+      - /var/run:/host/var/run
+      - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp
\ No newline at end of file
diff --git a/topologies/t0/crypto-material/.gitkeep b/topologies/t0/crypto-material/.gitkeep
new file mode 100644
index 0000000..e69de29
diff --git a/topologies/t0/docker-compose.yml b/topologies/t0/docker-compose.yml
new file mode 100644
index 0000000..7c9939c
--- /dev/null
+++ b/topologies/t0/docker-compose.yml
@@ -0,0 +1,45 @@
+services:
+  org-shell-cmd:
+    image: alpine
+    networks:
+      - hl-fabric
+  org1-ca-tls:
+    image: hyperledger/fabric-ca:${FABRIC_CA_VERSION}
+    # ports:
+    #   - :7054
+    networks:
+      - hl-fabric
+  org1-ca-identities:
+    image: hyperledger/fabric-ca:${FABRIC_CA_VERSION}
+    # ports:
+    #   - :7054
+    networks:
+      - hl-fabric
+  org2-ca-tls:
+    image: hyperledger/fabric-ca:${FABRIC_CA_VERSION}
+    # ports:
+    #   - :7054
+    networks:
+      - hl-fabric
+  org2-ca-identities:
+    image: hyperledger/fabric-ca:${FABRIC_CA_VERSION}
+    # ports:
+    #   - :7054
+    networks:
+      - hl-fabric
+  org2-peer1:
+    image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION}
+    # ports:
+    #   - :7051
+    networks:
+      - hl-fabric
+  org2-cli-peer1:
+    image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION}
+    networks:
+      - hl-fabric
+  org1-orderer1:
+    image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION}
+    # ports:
+    #   - :7050
+    networks:
+      - hl-fabric
\ No newline at end of file
diff --git a/topologies/t0/homefolders/.gitkeep b/topologies/t0/homefolders/.gitkeep
new file mode 100644
index 0000000..e69de29
diff --git a/topologies/t0/scripts/all-org-peers-commit-chaincode.sh b/topologies/t0/scripts/all-org-peers-commit-chaincode.sh
new file mode 100755
index 0000000..55c1f09
--- /dev/null
+++ b/topologies/t0/scripts/all-org-peers-commit-chaincode.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+set -e
+set -x
+
+peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1
+
+export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051
+export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp
+QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1`
+IFS=' ' read -r -a array <<< $QUERY_INSTALLED
+PACKAGE_ID=${array[2]}
+PACKAGE_ID=${PACKAGE_ID::-1}
+echo "The Package ID for the installed chaincode is: $PACKAGE_ID"
+
+peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \
+    --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \
+    --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \
+    --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem
+
+peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1
\ No newline at end of file
diff --git a/topologies/t0/scripts/all-org-peers-execute-chaincode.sh b/topologies/t0/scripts/all-org-peers-execute-chaincode.sh
new file mode 100755
index 0000000..5149ec1
--- /dev/null
+++ b/topologies/t0/scripts/all-org-peers-execute-chaincode.sh
@@ -0,0 +1,12 @@
+#!/bin/sh
+
+set -e
+set -x
+
+export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051
+export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp
+peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \
+    --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \
+    --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem
+
+peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}'
diff --git a/topologies/t0/scripts/channels-setup.sh b/topologies/t0/scripts/channels-setup.sh
new file mode 100755
index 0000000..230aac8
--- /dev/null
+++ b/topologies/t0/scripts/channels-setup.sh
@@ -0,0 +1,7 @@
+#!/bin/sh
+set -e
+set -x
+
+export FABRIC_CFG_PATH=/tmp/crypto-material/config
+configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel
+configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel
\ No newline at end of file
diff --git a/topologies/t0/scripts/delete-state-data.sh b/topologies/t0/scripts/delete-state-data.sh
new file mode 100755
index 0000000..9062431
--- /dev/null
+++ b/topologies/t0/scripts/delete-state-data.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+set -e
+set -x
+
+# remove crypto material files
+rm -rf /tmp/crypto-material/*
+touch /tmp/crypto-material/.gitkeep
+
+rm -rf /tmp/homefolders/*
+touch /tmp/homefolders/.gitkeep
diff --git a/topologies/t0/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t0/scripts/org1-enroll-identities-with-ca-identities.sh
new file mode 100755
index 0000000..55f0c8e
--- /dev/null
+++ b/topologies/t0/scripts/org1-enroll-identities-with-ca-identities.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+set -e
+set -x
+
+# enroll orderer1 node
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem
+fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054
+mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem
+cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp
+
+# enroll org1 admin
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem
+fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054
+mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem
+cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp
+
+mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem
+
+# setup org1 msp
+mkdir -p /tmp/crypto-material/orgs/org1/msp
+cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp
+mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts
+cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem
+mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts
+cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem
+cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem
+mkdir -p /tmp/crypto-material/orgs/org1/msp/users
\ No newline at end of file
diff --git a/topologies/t0/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t0/scripts/org1-enroll-identities-with-ca-tls.sh
new file mode 100755
index 0000000..25ae375
--- /dev/null
+++ b/topologies/t0/scripts/org1-enroll-identities-with-ca-tls.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+set -e
+set -x
+
+# enroll orderer1 node-tls
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem
+fabric-ca-client enroll -d -u https://org1-orderer1:or%24derer%5E1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1
+
+mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem
+
+mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem
\ No newline at end of file
diff --git a/topologies/t0/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t0/scripts/org1-register-identities-with-ca-identities.sh
new file mode 100755
index 0000000..8e4d241
--- /dev/null
+++ b/topologies/t0/scripts/org1-register-identities-with-ca-identities.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+set -e
+set -x
+
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin
+fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054
+fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054
+fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs "hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054
\ No newline at end of file
diff --git a/topologies/t0/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t0/scripts/org1-register-identities-with-ca-tls.sh
new file mode 100755
index 0000000..93e2efd
--- /dev/null
+++ b/topologies/t0/scripts/org1-register-identities-with-ca-tls.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+set -e
+set -x
+
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin
+fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054
+fabric-ca-client register -d --id.name org1-orderer1 --id.secret "or\$derer^1PW" --id.type orderer -u https://0.0.0.0:7054
\ No newline at end of file
diff --git a/topologies/t0/scripts/org2-approve-chaincode.sh b/topologies/t0/scripts/org2-approve-chaincode.sh
new file mode 100755
index 0000000..8eea440
--- /dev/null
+++ b/topologies/t0/scripts/org2-approve-chaincode.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+set -e
+set -x
+
+export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051
+export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp
+QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1`
+IFS=' ' read -r -a array <<< $QUERY_INSTALLED
+PACKAGE_ID=${array[2]}
+PACKAGE_ID=${PACKAGE_ID::-1}
+echo "The Package ID for the installed chaincode is: $PACKAGE_ID"
+
+
+peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \
+    --package-id $PACKAGE_ID \
+    --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \
+    --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \
+    --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem
+
+peer lifecycle chaincode queryapproved -C mychannel --name myccv1
\ No newline at end of file
diff --git a/topologies/t0/scripts/org2-create-and-join-channels.sh b/topologies/t0/scripts/org2-create-and-join-channels.sh
new file mode 100755
index 0000000..6b61f78
--- /dev/null
+++ b/topologies/t0/scripts/org2-create-and-join-channels.sh
@@ -0,0 +1,9 @@
+#!/bin/sh
+set -e
+set -x
+
+export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp
+peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem
+
+export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051
+peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block
\ No newline at end of file
diff --git a/topologies/t0/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t0/scripts/org2-enroll-identities-with-ca-identities.sh
new file mode 100755
index 0000000..ad932ac
--- /dev/null
+++ b/topologies/t0/scripts/org2-enroll-identities-with-ca-identities.sh
@@ -0,0 +1,41 @@
+#!/bin/bash
+set -e
+set -x
+
+# enroll peer1 node
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem
+fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054
+mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem
+cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp
+
+# enroll org2 admin
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem
+fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054
+mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem
+cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp
+
+# enroll org2 user
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem
+fabric-ca-client enroll -d -u https://user-org2:org2UserPW@0.0.0.0:7054
+mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem
+cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp
+
+# enroll org2 client
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem
+fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054
+mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem
+cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp
+
+# setup org2 msp
+mkdir -p /tmp/crypto-material/orgs/org2/msp
+cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp
+mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts
+cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem
+mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts
+cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem
+cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem
+mkdir -p /tmp/crypto-material/orgs/org2/msp/users
diff --git a/topologies/t0/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t0/scripts/org2-enroll-identities-with-ca-tls.sh
new file mode 100755
index 0000000..684a0c4
--- /dev/null
+++ b/topologies/t0/scripts/org2-enroll-identities-with-ca-tls.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+set -e
+set -x
+
+# enroll peer1 node-tls
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem
+fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1
+
+mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem
+
+mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem
\ No newline at end of file
diff --git a/topologies/t0/scripts/org2-install-chaincode.sh b/topologies/t0/scripts/org2-install-chaincode.sh
new file mode 100755
index 0000000..8918ab6
--- /dev/null
+++ b/topologies/t0/scripts/org2-install-chaincode.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+set -e
+set -x
+
+export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go
+
+export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051
+export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp
+peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1
+peer lifecycle chaincode install mycc.tar.gz
diff --git a/topologies/t0/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t0/scripts/org2-register-identities-with-ca-identities.sh
new file mode 100755
index 0000000..0e72e72
--- /dev/null
+++ b/topologies/t0/scripts/org2-register-identities-with-ca-identities.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+set -e
+set -x
+
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin
+fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054
+fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054
+fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054
+fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054
+fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054
\ No newline at end of file
diff --git a/topologies/t0/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t0/scripts/org2-register-identities-with-ca-tls.sh
new file mode 100755
index 0000000..3b199c0
--- /dev/null
+++ b/topologies/t0/scripts/org2-register-identities-with-ca-tls.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+set -e
+set -x
+
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin
+fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054
+fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054
\ No newline at end of file
diff --git a/topologies/t0/scripts/patch-configtx.sh b/topologies/t0/scripts/patch-configtx.sh
new file mode 100755
index 0000000..a4359d7
--- /dev/null
+++ b/topologies/t0/scripts/patch-configtx.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+set -e
+set -x
+
+mkdir -p /tmp/crypto-material/config
+cp /tmp/config/configtx.yaml /tmp/crypto-material/config
+
+cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null
\ No newline at end of file
diff --git a/topologies/t0/setup-network.sh b/topologies/t0/setup-network.sh
new file mode 100755
index 0000000..93c5b02
--- /dev/null
+++ b/topologies/t0/setup-network.sh
@@ -0,0 +1,109 @@
+#!/bin/bash
+set -e
+set -x
+
+# get the folder of where the current script is located
+export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P )
+rm -rf ./topologies
+
+export CURRENT_HL_TOPOLOGY=t0
+
+# -----Delete old network-----
+echo "Deleting the old network..."
+./teardown-network.sh + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +echo "Starting the setup of the new network..." + +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + + +# -----Setup CAs ----- + +# org1 CAs + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# org2 CAs + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker 
exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----Setup Peers ----- + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +# org2 single peer + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +# -----Setup CLIs ----- + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + + +# -----Setup Orderers ----- + +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----Setup Channel Artifacts ----- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +# Need to wait until Raft leader election has completed for the orderers +sleep 4 +# TODO: this sleep may be removable + +# -----Setup Channels ----- + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "
"/tmp/scripts/org2-create-and-join-channels.sh" + +# -----Setup Chaincode ----- + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t0/teardown-network.sh b/topologies/t0/teardown-network.sh new file mode 100755 index 0000000..d4c93ff --- /dev/null +++ b/topologies/t0/teardown-network.sh @@ -0,0 +1,23 @@ +#!/bin/bash +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t0 + +# remove the current topology's containers +# the chaincode containers report an error on deletion even though they are in fact deleted; ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." 
+fi + +# remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t1/.env b/topologies/t1/.env new file mode 100644 index 0000000..87754d9 --- /dev/null +++ b/topologies/t1/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t1 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.2.8 +FABRIC_TOOLS_VERSION=2.2.8 +PEER_ORDERER_VERSION=2.2.8 \ No newline at end of file diff --git a/topologies/t1/.gitignore b/topologies/t1/.gitignore new file mode 100644 index 0000000..ee0881a --- /dev/null +++ b/topologies/t1/.gitignore @@ -0,0 +1,2 @@ +crypto-material/*/** +homefolders/*/** \ No newline at end of file diff --git a/topologies/t1/README.md b/topologies/t1/README.md new file mode 100644 index 0000000..806c3e6 --- /dev/null +++ b/topologies/t1/README.md @@ -0,0 +1,34 @@ +# T1: Base Topology +## Description +--- +The base topology, used as a starting point for all the other topologies (aside from T0). +## Diagram +--- +![Diagram of components](../image_store/T1.png) + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA + * Identities CA +* Org 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA + +## Characteristics + +- World State Database Instance (LevelDB) embedded (in peer containers) +- Chaincode installed directly on peers +- Communication between all components done via TLS \ No newline at end of file diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ 
b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. 
+package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + 
result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + 
getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 
shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + 
getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + 
getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + 
setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + 
return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + 
fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + 
return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetArgsCallCount() int { + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + return len(fake.getArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) { + 
fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = stub +} + +func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + fake.getArgsReturns = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + if fake.getArgsReturnsOnCall == nil { + fake.getArgsReturnsOnCall = make(map[int]struct { + result1 [][]byte + }) + } + fake.getArgsReturnsOnCall[i] = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) { + fake.getArgsSliceMutex.Lock() + ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)] + fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgsSlice", []interface{}{}) + fake.getArgsSliceMutex.Unlock() + if fake.GetArgsSliceStub != nil { + return fake.GetArgsSliceStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getArgsSliceReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetArgsSliceCallCount() int { + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + return len(fake.getArgsSliceArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = stub +} + +func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + fake.getArgsSliceReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 
error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + if fake.getArgsSliceReturnsOnCall == nil { + fake.getArgsSliceReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getArgsSliceReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBinding() ([]byte, error) { + fake.getBindingMutex.Lock() + ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)] + fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct { + }{}) + fake.recordInvocation("GetBinding", []interface{}{}) + fake.getBindingMutex.Unlock() + if fake.GetBindingStub != nil { + return fake.GetBindingStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getBindingReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetBindingCallCount() int { + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + return len(fake.getBindingArgsForCall) +} + +func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = stub +} + +func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + fake.getBindingReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + if fake.getBindingReturnsOnCall == nil { + fake.getBindingReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getBindingReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake 
> *(The remainder of the source is machine-generated mock code: a counterfeiter-style `ChaincodeStub` fake implementing the Fabric `shim` chaincode stub interface — `GetChannelID`, `GetCreator`, `GetDecorations`, `GetFunctionAndParameters`, `GetHistoryForKey`, the `GetPrivateData*` family, `GetQueryResult*`, `GetSignedProposal`, and `GetState*` — each method with a per-method mutex, recorded call arguments, and configurable `Returns`/`ReturnsOnCall` values. It is generated boilerplate and is truncated mid-function in the source.)*
struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getStateByRangeMutex.Lock() + ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)] + fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2}) + fake.getStateByRangeMutex.Unlock() + if fake.GetStateByRangeStub != nil { + return fake.GetStateByRangeStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByRangeReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByRangeCallCount() int { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + return len(fake.getStateByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, 
string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + argsForCall := fake.getStateByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + fake.getStateByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + if fake.getStateByRangeReturnsOnCall == nil { + fake.getStateByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getStateByRangeWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)] + fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + }{arg1, arg2, arg3, arg4}) + fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4}) + 
fake.getStateByRangeWithPaginationMutex.Unlock() + if fake.GetStateByRangeWithPaginationStub != nil { + return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByRangeWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + return len(fake.getStateByRangeWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + fake.getStateByRangeWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, 
result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + if fake.getStateByRangeWithPaginationReturnsOnCall == nil { + fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) { + fake.getStateValidationParameterMutex.Lock() + ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)] + fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1}) + fake.getStateValidationParameterMutex.Unlock() + if fake.GetStateValidationParameterStub != nil { + return fake.GetStateValidationParameterStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateValidationParameterReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + return len(fake.getStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string { + 
fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + argsForCall := fake.getStateValidationParameterArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + fake.getStateValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + if fake.getStateValidationParameterReturnsOnCall == nil { + fake.getStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStringArgs() []string { + fake.getStringArgsMutex.Lock() + ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)] + fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetStringArgs", []interface{}{}) + fake.getStringArgsMutex.Unlock() + if fake.GetStringArgsStub != nil { + return fake.GetStringArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStringArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetStringArgsCallCount() int { + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + return len(fake.getStringArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + 
fake.GetStringArgsStub = stub +} + +func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + fake.getStringArgsReturns = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + if fake.getStringArgsReturnsOnCall == nil { + fake.getStringArgsReturnsOnCall = make(map[int]struct { + result1 []string + }) + } + fake.getStringArgsReturnsOnCall[i] = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) { + fake.getTransientMutex.Lock() + ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)] + fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct { + }{}) + fake.recordInvocation("GetTransient", []interface{}{}) + fake.getTransientMutex.Unlock() + if fake.GetTransientStub != nil { + return fake.GetTransientStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTransientReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTransientCallCount() int { + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + return len(fake.getTransientArgsForCall) +} + +func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = stub +} + +func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + fake.getTransientReturns = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) 
GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + if fake.getTransientReturnsOnCall == nil { + fake.getTransientReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + result2 error + }) + } + fake.getTransientReturnsOnCall[i] = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxID() string { + fake.getTxIDMutex.Lock() + ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)] + fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxID", []interface{}{}) + fake.getTxIDMutex.Unlock() + if fake.GetTxIDStub != nil { + return fake.GetTxIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getTxIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetTxIDCallCount() int { + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + return len(fake.getTxIDArgsForCall) +} + +func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := 
fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy [][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := 
fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + fake.invokeChaincodeReturns = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + if fake.invokeChaincodeReturnsOnCall == nil { + fake.invokeChaincodeReturnsOnCall = make(map[int]struct { + result1 peer.Response + }) + } + fake.invokeChaincodeReturnsOnCall[i] = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) 
PutPrivateData(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.putPrivateDataMutex.Lock() + ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)] + fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy}) + fake.putPrivateDataMutex.Unlock() + if fake.PutPrivateDataStub != nil { + return fake.PutPrivateDataStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutPrivateDataCallCount() int { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + return len(fake.putPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = stub +} + +func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + argsForCall := fake.putPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + fake.putPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + if fake.putPrivateDataReturnsOnCall == nil { + fake.putPrivateDataReturnsOnCall = make(map[int]struct 
{ + result1 error + }) + } + fake.putPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.putStateMutex.Lock() + ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)] + fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy}) + fake.putStateMutex.Unlock() + if fake.PutStateStub != nil { + return fake.PutStateStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutStateCallCount() int { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + return len(fake.putStateArgsForCall) +} + +func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = stub +} + +func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + argsForCall := fake.putStateArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) PutStateReturns(result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + fake.putStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + if fake.putStateReturnsOnCall == nil { + fake.putStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEvent(arg1 
string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setEventMutex.Lock() + ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)] + fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy}) + fake.setEventMutex.Unlock() + if fake.SetEventStub != nil { + return fake.SetEventStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setEventReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetEventCallCount() int { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + return len(fake.setEventArgsForCall) +} + +func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = stub +} + +func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + argsForCall := fake.setEventArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetEventReturns(result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + fake.setEventReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + if fake.setEventReturnsOnCall == nil { + fake.setEventReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setEventReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + 
copy(arg3Copy, arg3) + } + fake.setPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)] + fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy}) + fake.setPrivateDataValidationParameterMutex.Unlock() + if fake.SetPrivateDataValidationParameterStub != nil { + return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setPrivateDataValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + return len(fake.setPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + 
fake.setPrivateDataValidationParameterReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = nil
+	if fake.setPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.setStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)]
+	fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy})
+	fake.setStateValidationParameterMutex.Unlock()
+	if fake.SetStateValidationParameterStub != nil {
+		return fake.SetStateValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setStateValidationParameterReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int {
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	return len(fake.setStateValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) {
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	argsForCall := fake.setStateValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = nil
+	fake.setStateValidationParameterReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = nil
+	if fake.setStateValidationParameterReturnsOnCall == nil {
+		fake.setStateValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setStateValidationParameterReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) {
+	fake.splitCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)]
+	fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("SplitCompositeKey", []interface{}{arg1})
+	fake.splitCompositeKeyMutex.Unlock()
+	if fake.SplitCompositeKeyStub != nil {
+		return fake.SplitCompositeKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.splitCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int {
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	return len(fake.splitCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string {
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	argsForCall := fake.splitCompositeKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = nil
+	fake.splitCompositeKeyReturns = struct {
+		result1 string
+		result2 []string
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = nil
+	if fake.splitCompositeKeyReturnsOnCall == nil {
+		fake.splitCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+			result3 error
+		})
+	}
+	fake.splitCompositeKeyReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.createCompositeKeyMutex.RLock()
+	defer fake.createCompositeKeyMutex.RUnlock()
+	fake.delPrivateDataMutex.RLock()
+	defer fake.delPrivateDataMutex.RUnlock()
+	fake.delStateMutex.RLock()
+	defer fake.delStateMutex.RUnlock()
+	fake.getArgsMutex.RLock()
+	defer fake.getArgsMutex.RUnlock()
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	fake.getStringArgsMutex.RLock()
+	defer fake.getStringArgsMutex.RUnlock()
+	fake.getTransientMutex.RLock()
+	defer fake.getTransientMutex.RUnlock()
+	fake.getTxIDMutex.RLock()
+	defer fake.getTxIDMutex.RUnlock()
+	fake.getTxTimestampMutex.RLock()
+	defer fake.getTxTimestampMutex.RUnlock()
+	fake.invokeChaincodeMutex.RLock()
+	defer fake.invokeChaincodeMutex.RUnlock()
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go
new file mode 100644
index 0000000..27e3034
--- /dev/null
+++ b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go
@@ -0,0 +1,232 @@
+// Code generated by counterfeiter. DO NOT EDIT.
+package mocks
+
+import (
+	"sync"
+
+	"github.com/hyperledger/fabric-protos-go/ledger/queryresult"
+)
+
+type StateQueryIterator struct {
+	CloseStub        func() error
+	closeMutex       sync.RWMutex
+	closeArgsForCall []struct {
+	}
+	closeReturns struct {
+		result1 error
+	}
+	closeReturnsOnCall map[int]struct {
+		result1 error
+	}
+	HasNextStub        func() bool
+	hasNextMutex       sync.RWMutex
+	hasNextArgsForCall []struct {
+	}
+	hasNextReturns struct {
+		result1 bool
+	}
+	hasNextReturnsOnCall map[int]struct {
+		result1 bool
+	}
+	NextStub        func() (*queryresult.KV, error)
+	nextMutex       sync.RWMutex
+	nextArgsForCall []struct {
+	}
+	nextReturns struct {
+		result1 *queryresult.KV
+		result2 error
+	}
+	nextReturnsOnCall map[int]struct {
+		result1 *queryresult.KV
+		result2 error
+	}
+	invocations      map[string][][]interface{}
+	invocationsMutex sync.RWMutex
+}
+
+func (fake *StateQueryIterator) Close() error {
+	fake.closeMutex.Lock()
+	ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)]
+	fake.closeArgsForCall = append(fake.closeArgsForCall, struct {
+	}{})
+	fake.recordInvocation("Close", []interface{}{})
+	fake.closeMutex.Unlock()
+	if fake.CloseStub != nil {
+		return fake.CloseStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.closeReturns
+	return fakeReturns.result1
+}
+
+func (fake *StateQueryIterator) CloseCallCount() int {
+	fake.closeMutex.RLock()
+	defer fake.closeMutex.RUnlock()
+	return len(fake.closeArgsForCall)
+}
+
+func (fake *StateQueryIterator) CloseCalls(stub func() error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = stub
+}
+
+func (fake *StateQueryIterator) CloseReturns(result1 error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = nil
+	fake.closeReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = nil
+	if fake.closeReturnsOnCall == nil {
+		fake.closeReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.closeReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *StateQueryIterator) HasNext() bool {
+	fake.hasNextMutex.Lock()
+	ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)]
+	fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct {
+	}{})
+	fake.recordInvocation("HasNext", []interface{}{})
+	fake.hasNextMutex.Unlock()
+	if fake.HasNextStub != nil {
+		return fake.HasNextStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.hasNextReturns
+	return fakeReturns.result1
+}
+
+func (fake *StateQueryIterator) HasNextCallCount() int {
+	fake.hasNextMutex.RLock()
+	defer fake.hasNextMutex.RUnlock()
+	return len(fake.hasNextArgsForCall)
+}
+
+func (fake *StateQueryIterator) HasNextCalls(stub func() bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = stub
+}
+
+func (fake *StateQueryIterator) HasNextReturns(result1 bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = nil
+	fake.hasNextReturns = struct {
+		result1 bool
+	}{result1}
+}
+
+func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = nil
+	if fake.hasNextReturnsOnCall == nil {
+		fake.hasNextReturnsOnCall = make(map[int]struct {
+			result1 bool
+		})
+	}
+	fake.hasNextReturnsOnCall[i] = struct {
+		result1 bool
+	}{result1}
+}
+
+func (fake *StateQueryIterator) Next() (*queryresult.KV, error) {
+	fake.nextMutex.Lock()
+	ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)]
+	fake.nextArgsForCall = append(fake.nextArgsForCall, struct {
+	}{})
+	fake.recordInvocation("Next", []interface{}{})
+	fake.nextMutex.Unlock()
+	if fake.NextStub != nil {
+		return fake.NextStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.nextReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *StateQueryIterator) NextCallCount() int {
+	fake.nextMutex.RLock()
+	defer fake.nextMutex.RUnlock()
+	return len(fake.nextArgsForCall)
+}
+
+func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = stub
+}
+
+func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = nil
+	fake.nextReturns = struct {
+		result1 *queryresult.KV
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = nil
+	if fake.nextReturnsOnCall == nil {
+		fake.nextReturnsOnCall = make(map[int]struct {
+			result1 *queryresult.KV
+			result2 error
+		})
+	}
+	fake.nextReturnsOnCall[i] = struct {
+		result1 *queryresult.KV
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *StateQueryIterator) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.closeMutex.RLock()
+	defer fake.closeMutex.RUnlock()
+	fake.hasNextMutex.RLock()
+	defer fake.hasNextMutex.RUnlock()
+	fake.nextMutex.RLock()
+	defer fake.nextMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go
new file mode 100644
index 0000000..eea37db
--- /dev/null
+++ b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go
@@ -0,0 +1,164 @@
+// Code generated by counterfeiter. DO NOT EDIT.
+package mocks
+
+import (
+	"sync"
+
+	"github.com/hyperledger/fabric-chaincode-go/pkg/cid"
+	"github.com/hyperledger/fabric-chaincode-go/shim"
+)
+
+type TransactionContext struct {
+	GetClientIdentityStub        func() cid.ClientIdentity
+	getClientIdentityMutex       sync.RWMutex
+	getClientIdentityArgsForCall []struct {
+	}
+	getClientIdentityReturns struct {
+		result1 cid.ClientIdentity
+	}
+	getClientIdentityReturnsOnCall map[int]struct {
+		result1 cid.ClientIdentity
+	}
+	GetStubStub        func() shim.ChaincodeStubInterface
+	getStubMutex       sync.RWMutex
+	getStubArgsForCall []struct {
+	}
+	getStubReturns struct {
+		result1 shim.ChaincodeStubInterface
+	}
+	getStubReturnsOnCall map[int]struct {
+		result1 shim.ChaincodeStubInterface
+	}
+	invocations      map[string][][]interface{}
+	invocationsMutex sync.RWMutex
+}
+
+func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity {
+	fake.getClientIdentityMutex.Lock()
+	ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)]
+	fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetClientIdentity", []interface{}{})
+	fake.getClientIdentityMutex.Unlock()
+	if fake.GetClientIdentityStub != nil {
+		return fake.GetClientIdentityStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getClientIdentityReturns
+	return fakeReturns.result1
+}
+
+func (fake *TransactionContext) GetClientIdentityCallCount() int {
+	fake.getClientIdentityMutex.RLock()
+	defer fake.getClientIdentityMutex.RUnlock()
+	return len(fake.getClientIdentityArgsForCall)
+}
+
+func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = stub
+}
+
+func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = nil
+	fake.getClientIdentityReturns = struct {
+		result1 cid.ClientIdentity
+	}{result1}
+}
+
+func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = nil
+	if fake.getClientIdentityReturnsOnCall == nil {
+		fake.getClientIdentityReturnsOnCall = make(map[int]struct {
+			result1 cid.ClientIdentity
+		})
+	}
+	fake.getClientIdentityReturnsOnCall[i] = struct {
+		result1 cid.ClientIdentity
+	}{result1}
+}
+
+func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface {
+	fake.getStubMutex.Lock()
+	ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)]
+	fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetStub", []interface{}{})
+	fake.getStubMutex.Unlock()
+	if fake.GetStubStub != nil {
+		return fake.GetStubStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getStubReturns
+	return fakeReturns.result1
+}
+
+func (fake *TransactionContext) GetStubCallCount() int {
+	fake.getStubMutex.RLock()
+	defer fake.getStubMutex.RUnlock()
+	return len(fake.getStubArgsForCall)
+}
+
+func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = stub
+}
+
+func (fake *TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = nil
+	fake.getStubReturns = struct {
+		result1 shim.ChaincodeStubInterface
+	}{result1}
+}
+
+func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = nil
+	if fake.getStubReturnsOnCall == nil {
+		fake.getStubReturnsOnCall = make(map[int]struct {
+			result1 shim.ChaincodeStubInterface
+		})
+	}
+	fake.getStubReturnsOnCall[i] = struct {
+		result1 shim.ChaincodeStubInterface
+	}{result1}
+}
+
+func (fake *TransactionContext) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.getClientIdentityMutex.RLock()
+	defer fake.getClientIdentityMutex.RUnlock()
+	fake.getStubMutex.RLock()
+	defer fake.getStubMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *TransactionContext) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go
new file mode 100644
index 0000000..71e8dd8
--- /dev/null
+++ b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go
@@ -0,0 +1,185 @@
+package chaincode
+
+import (
+	"encoding/json"
+	"fmt"
+
+	"github.com/hyperledger/fabric-contract-api-go/contractapi"
+)
+
+//
SmartContract provides functions for managing an Asset
+type SmartContract struct {
+	contractapi.Contract
+}
+
+// Asset describes basic details of what makes up a simple asset
+type Asset struct {
+	ID             string `json:"ID"`
+	Color          string `json:"color"`
+	Size           int    `json:"size"`
+	Owner          string `json:"owner"`
+	AppraisedValue int    `json:"appraisedValue"`
+}
+
+// InitLedger adds a base set of assets to the ledger
+func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
+	assets := []Asset{
+		{ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300},
+		{ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400},
+		{ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500},
+		{ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600},
+		{ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700},
+		{ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800},
+	}
+
+	for _, asset := range assets {
+		assetJSON, err := json.Marshal(asset)
+		if err != nil {
+			return err
+		}
+
+		err = ctx.GetStub().PutState(asset.ID, assetJSON)
+		if err != nil {
+			return fmt.Errorf("failed to put to world state. %v", err)
+		}
+	}
+
+	return nil
+}
+
+// CreateAsset issues a new asset to the world state with given details.
+func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if exists {
+		return fmt.Errorf("the asset %s already exists", id)
+	}
+
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// ReadAsset returns the asset stored in the world state with given id.
+func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) {
+	assetJSON, err := ctx.GetStub().GetState(id)
+	if err != nil {
+		return nil, fmt.Errorf("failed to read from world state: %v", err)
+	}
+	if assetJSON == nil {
+		return nil, fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	var asset Asset
+	err = json.Unmarshal(assetJSON, &asset)
+	if err != nil {
+		return nil, err
+	}
+
+	return &asset, nil
+}
+
+// UpdateAsset updates an existing asset in the world state with provided parameters.
+func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if !exists {
+		return fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	// overwriting original asset with new asset
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// DeleteAsset deletes a given asset from the world state.
+func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if !exists {
+		return fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	return ctx.GetStub().DelState(id)
+}
+
+// AssetExists returns true when asset with given ID exists in world state
+func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) {
+	assetJSON, err := ctx.GetStub().GetState(id)
+	if err != nil {
+		return false, fmt.Errorf("failed to read from world state: %v", err)
+	}
+
+	return assetJSON != nil, nil
+}
+
+// TransferAsset updates the owner field of asset with given id in world state.
+func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error {
+	asset, err := s.ReadAsset(ctx, id)
+	if err != nil {
+		return err
+	}
+
+	asset.Owner = newOwner
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// GetAllAssets returns all assets found in world state
+func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) {
+	// range query with empty string for startKey and endKey does an
+	// open-ended query of all assets in the chaincode namespace.
+	resultsIterator, err := ctx.GetStub().GetStateByRange("", "")
+	if err != nil {
+		return nil, err
+	}
+	defer resultsIterator.Close()
+
+	var assets []*Asset
+	for resultsIterator.HasNext() {
+		queryResponse, err := resultsIterator.Next()
+		if err != nil {
+			return nil, err
+		}
+
+		var asset Asset
+		err = json.Unmarshal(queryResponse.Value, &asset)
+		if err != nil {
+			return nil, err
+		}
+		assets = append(assets, &asset)
+	}
+
+	return assets, nil
+}
diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go
new file mode 100644
index 0000000..cb001de
--- /dev/null
+++ b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go
@@ -0,0 +1,184 @@
+package chaincode_test
+
+import (
+	"encoding/json"
+	"fmt"
+	"testing"
+
+	"github.com/hyperledger/fabric-chaincode-go/shim"
+	"github.com/hyperledger/fabric-contract-api-go/contractapi"
+	"github.com/hyperledger/fabric-protos-go/ledger/queryresult"
+	"github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode"
+	"github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks"
+	"github.com/stretchr/testify/require"
+)
+
+//go:generate counterfeiter -o
mocks/transaction.go -fake-name TransactionContext . transactionContext
+type transactionContext interface {
+	contractapi.TransactionContextInterface
+}
+
+//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub
+type chaincodeStub interface {
+	shim.ChaincodeStubInterface
+}
+
+//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . stateQueryIterator
+type stateQueryIterator interface {
+	shim.StateQueryIteratorInterface
+}
+
+func TestInitLedger(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	assetTransfer := chaincode.SmartContract{}
+	err := assetTransfer.InitLedger(transactionContext)
+	require.NoError(t, err)
+
+	chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key"))
+	err = assetTransfer.InitLedger(transactionContext)
+	require.EqualError(t, err, "failed to put to world state. failed inserting key")
+}
+
+func TestCreateAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	assetTransfer := chaincode.SmartContract{}
+	err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns([]byte{}, nil)
+	err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "the asset asset1 already exists")
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestReadAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	expectedAsset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(expectedAsset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	assetTransfer := chaincode.SmartContract{}
+	asset, err := assetTransfer.ReadAsset(transactionContext, "")
+	require.NoError(t, err)
+	require.Equal(t, expectedAsset, asset)
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	_, err = assetTransfer.ReadAsset(transactionContext, "")
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+
+	chaincodeStub.GetStateReturns(nil, nil)
+	asset, err = assetTransfer.ReadAsset(transactionContext, "asset1")
+	require.EqualError(t, err, "the asset asset1 does not exist")
+	require.Nil(t, asset)
+}
+
+func TestUpdateAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	expectedAsset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(expectedAsset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	assetTransfer := chaincode.SmartContract{}
+	err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(nil, nil)
+	err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "the asset asset1 does not exist")
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestDeleteAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	asset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(asset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	chaincodeStub.DelStateReturns(nil)
+	assetTransfer := chaincode.SmartContract{}
+	err = assetTransfer.DeleteAsset(transactionContext, "")
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(nil, nil)
+	err = assetTransfer.DeleteAsset(transactionContext, "asset1")
+	require.EqualError(t, err, "the asset asset1 does not exist")
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.DeleteAsset(transactionContext, "")
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestTransferAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	asset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(asset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	assetTransfer := chaincode.SmartContract{}
+	err = assetTransfer.TransferAsset(transactionContext, "", "")
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.TransferAsset(transactionContext, "", "")
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestGetAllAssets(t *testing.T) {
+	asset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(asset)
+	require.NoError(t, err)
+
+	iterator := &mocks.StateQueryIterator{}
+	iterator.HasNextReturnsOnCall(0, true)
+	iterator.HasNextReturnsOnCall(1, false)
+	iterator.NextReturns(&queryresult.KV{Value: bytes}, nil)
+
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	chaincodeStub.GetStateByRangeReturns(iterator, nil)
+	assetTransfer := &chaincode.SmartContract{}
+	assets, err := assetTransfer.GetAllAssets(transactionContext)
+	require.NoError(t, err)
+	require.Equal(t, []*chaincode.Asset{asset}, assets)
+
+	iterator.HasNextReturns(true)
+	iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item"))
+	assets, err = assetTransfer.GetAllAssets(transactionContext)
+	require.EqualError(t, err, "failed retrieving next item")
+	require.Nil(t, assets)
+
+	chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets"))
+	assets, err = assetTransfer.GetAllAssets(transactionContext)
+	require.EqualError(t, err, "failed retrieving all assets")
+	require.Nil(t, assets)
+}
diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod
new file mode 100644
index 0000000..630a157
--- /dev/null
+++ b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod
@@ -0,0 +1,11 @@
+module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go
+
+go 1.14
+
+require (
+	github.com/golang/protobuf v1.3.2
+	github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9
+	github.com/hyperledger/fabric-contract-api-go v1.1.1
+	github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354
+	github.com/stretchr/testify v1.5.1
+)
diff --git a/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum
new file mode 100644
index 0000000..577c18b
--- /dev/null
+++ b/topologies/t1/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum
@@ -0,0 +1,154 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0=
+github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
+github.com/PuerkitoBio/purell v1.1.1/go.mod
h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
+github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
+github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
+github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
+github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w=
+github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
+github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w=
+github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
+github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo=
+github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
+github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
+github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY=
+github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
+github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
+github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU=
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI=
+github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs=
+github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4=
+github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q=
+github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg=
+github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk=
+github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc=
+github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4=
+github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E=
+github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0=
+github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo=
+github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E=
+github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY=
+github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM=
+github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
+github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc=
+github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
+github.com/karrick/godirwalk v1.10.12/go.mod
h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod 
h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference 
v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod 
h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= 
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t1/config/config.yaml b/topologies/t1/config/config.yaml new file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t1/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t1/config/configtx.yaml b/topologies/t1/config/configtx.yaml new file mode 100644 index 0000000..1264040 --- /dev/null +++ b/topologies/t1/config/configtx.yaml @@ -0,0 +1,428 @@ +# Copyright IBM Corp. All Rights Reserved. 
+# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: 
"OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. 
Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. 
Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. + # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's "GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: 
/Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + #---Miscellaneous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities +################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config 
transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer: &OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. + AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. 
A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. + PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. +# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/ + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# 
+################################################################################ +Profiles: + OrgsOrdererGenesis: + <<: *ChannelDefaults + Orderer: + <<: *OrdererDefaults + Organizations: + - *org1 + Capabilities: + <<: *OrdererCapabilities + Consortiums: + MainConsortium: + Organizations: + - *org2 + - *org3 + + OrgsChannel: + Consortium: MainConsortium + <<: *ChannelDefaults + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 + Capabilities: + <<: *ApplicationCapabilities + diff --git a/topologies/t1/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t1/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 0000000..1d941c5 --- /dev/null +++ b/topologies/t1/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,29 @@ +version: "3.9" +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - 
"${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t1/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t1/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..e22813e --- /dev/null +++ b/topologies/t1/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t1/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t1/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..b18ea4d --- /dev/null +++ 
b/topologies/t1/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + command: sh -c 'fabric-ca-server start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t1/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t1/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..4ceb8a0 --- /dev/null +++ b/topologies/t1/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,21 @@ +services: + org2-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - 
CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t1/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t1/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..68d2ec0 --- /dev/null +++ b/topologies/t1/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,21 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t1/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t1/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..0045893 --- /dev/null +++ 
b/topologies/t1/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,82 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - 
ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - 
ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t1/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t1/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..f115b6d --- /dev/null +++ b/topologies/t1/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,50 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - 
CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git 
a/topologies/t1/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t1/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..c58de7b --- /dev/null +++ b/topologies/t1/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,50 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug 
+ - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t1/crypto-material/.gitkeep b/topologies/t1/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t1/docker-compose.yml b/topologies/t1/docker-compose.yml new file mode 100644 index 0000000..f4fbbce --- /dev/null +++ b/topologies/t1/docker-compose.yml @@ -0,0 +1,91 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + 
image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric diff --git a/topologies/t1/homefolders/.gitkeep b/topologies/t1/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t1/scripts/all-org-peers-commit-chaincode.sh b/topologies/t1/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..7810f33 --- /dev/null +++ b/topologies/t1/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit 
--channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t1/scripts/all-org-peers-execute-chaincode.sh b/topologies/t1/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..d8be52e --- /dev/null +++ b/topologies/t1/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,18 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git a/topologies/t1/scripts/channels-setup.sh b/topologies/t1/scripts/channels-setup.sh new file mode 100755 index 
0000000..230aac8 --- /dev/null +++ b/topologies/t1/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t1/scripts/delete-state-data.sh b/topologies/t1/scripts/delete-state-data.sh new file mode 100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t1/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t1/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t1/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..5c5dec7 --- /dev/null +++ b/topologies/t1/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 +mv 
/tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp 
/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t1/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t1/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..67e742b --- /dev/null +++ b/topologies/t1/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* 
/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t1/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t1/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t1/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs "hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git 
a/topologies/t1/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t1/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..ab095a5 --- /dev/null +++ b/topologies/t1/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t1/scripts/org2-approve-chaincode.sh b/topologies/t1/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..fb8ac61 --- /dev/null +++ b/topologies/t1/scripts/org2-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls 
--cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t1/scripts/org2-create-and-join-channels.sh b/topologies/t1/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..ed6a189 --- /dev/null +++ b/topologies/t1/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,12 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t1/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t1/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..ad0ea07 --- /dev/null +++ b/topologies/t1/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp 
/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t1/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t1/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..ea37b7e --- /dev/null +++ b/topologies/t1/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv 
/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t1/scripts/org2-install-chaincode.sh b/topologies/t1/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t1/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz \ No newline at end of file diff --git a/topologies/t1/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t1/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t1/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type 
user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t1/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t1/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..3a99816 --- /dev/null +++ b/topologies/t1/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t1/scripts/org3-approve-chaincode.sh b/topologies/t1/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..094c94f --- /dev/null +++ b/topologies/t1/scripts/org3-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles 
/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t1/scripts/org3-create-and-join-channels.sh b/topologies/t1/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..ebf8b48 --- /dev/null +++ b/topologies/t1/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t1/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t1/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..36f317d --- /dev/null +++ b/topologies/t1/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv 
/tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp
/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t1/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t1/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..2bbb3b3 --- /dev/null +++ b/topologies/t1/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git 
a/topologies/t1/scripts/org3-install-chaincode.sh b/topologies/t1/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..f6b8789 --- /dev/null +++ b/topologies/t1/scripts/org3-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz diff --git a/topologies/t1/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t1/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1c56144 --- /dev/null +++ b/topologies/t1/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git 
a/topologies/t1/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t1/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..a5ccd7f --- /dev/null +++ b/topologies/t1/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t1/scripts/patch-configtx.sh b/topologies/t1/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t1/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t1/setup-network.sh b/topologies/t1/setup-network.sh new file mode 100755 index 0000000..5728969 --- /dev/null +++ b/topologies/t1/setup-network.sh @@ -0,0 +1,163 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t1 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." 
+./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." + +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs and register with these the TLS-CA and Identities-CA users, such as admins, clients, etc...----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d 
org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec 
${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# ----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----need to wait until raft leader selection is completed for the orderers +sleep 4 + +# 
----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-and-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t1/teardown-network.sh b/topologies/t1/teardown-network.sh new file mode 100755 index 0000000..c86a94b --- /dev/null +++ b/topologies/t1/teardown-network.sh @@ -0,0 +1,33 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t1 + +# ----------------------------------------------------------------------------- +# -----remove current topology containers +#
----------------------------------------------------------------------------- +# -----deleting the chaincode containers reports an error even though they are in fact removed, so errors are ignored here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec will throw an error if no running container is found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." +fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t2/.env b/topologies/t2/.env new file mode 100644 index 0000000..f6ad858 --- /dev/null +++ b/topologies/t2/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t2 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.2.3 +FABRIC_TOOLS_VERSION=2.2.3 +PEER_ORDERER_VERSION=2.2.3 \ No newline at end of file diff --git a/topologies/t2/.gitignore b/topologies/t2/.gitignore new file mode 100644 index 0000000..ee0881a --- /dev/null +++ b/topologies/t2/.gitignore @@ -0,0 +1,2 @@
+crypto-material/*/** +homefolders/*/** \ No newline at end of file diff --git a/topologies/t2/README.md b/topologies/t2/README.md new file mode 100644 index 0000000..eeb23e1 --- /dev/null +++ b/topologies/t2/README.md @@ -0,0 +1,41 @@ +# T2: More orderers and peers +## Description +--- +T1 plus additional orderers and peers. Nginx proxies are set up in front of the peers and orderers. Client applications execute chaincode via the proxies rather than by direct access to peer or orderer nodes. The proxies are also used for some of the chaincode lifecycle operations (approve, commit) +## Diagram +--- +![Diagram of components](../image_store/T2.png) + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA + * Identities CA +* Org 2 + * Orderer 1 + * Orderer 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * Peers Proxy (load balances between Org 2 Peer 1 and Org 2 Peer 2) + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * Peer 3 + * Peers Proxy (load balances between Org 3 Peer 1, Org 3 Peer 2 and Org 3 Peer 3) + * TLS CA + * Identities CA +* Orderers Proxy (load balances across all Orderers: Org 1 Orderer 1-3 and Org 2 Orderer 1-2) + + +## Characteristics + +- World state database instance (LevelDB) embedded in peer containers +- Chaincode installed directly on peers +- Communication between all components done via TLS \ No newline at end of file diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" +
"github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + 
} + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + 
getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + 
getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex 
sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + 
getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + 
arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 
string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + 
+func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + 
result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 
error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetArgsCallCount() int { + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + return len(fake.getArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = stub +} + +func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + fake.getArgsReturns = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + if fake.getArgsReturnsOnCall == nil { + fake.getArgsReturnsOnCall = make(map[int]struct { + result1 [][]byte + }) + } + fake.getArgsReturnsOnCall[i] = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) { + fake.getArgsSliceMutex.Lock() + ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)] + fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgsSlice", []interface{}{}) + fake.getArgsSliceMutex.Unlock() + if fake.GetArgsSliceStub != nil { + return fake.GetArgsSliceStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := 
fake.getArgsSliceReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetArgsSliceCallCount() int { + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + return len(fake.getArgsSliceArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = stub +} + +func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + fake.getArgsSliceReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + if fake.getArgsSliceReturnsOnCall == nil { + fake.getArgsSliceReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getArgsSliceReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBinding() ([]byte, error) { + fake.getBindingMutex.Lock() + ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)] + fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct { + }{}) + fake.recordInvocation("GetBinding", []interface{}{}) + fake.getBindingMutex.Unlock() + if fake.GetBindingStub != nil { + return fake.GetBindingStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getBindingReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetBindingCallCount() int { + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + return len(fake.getBindingArgsForCall) +} + +func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) { + 
fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = stub +} + +func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + fake.getBindingReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + if fake.getBindingReturnsOnCall == nil { + fake.getBindingReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getBindingReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetChannelID() string { + fake.getChannelIDMutex.Lock() + ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)] + fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetChannelID", []interface{}{}) + fake.getChannelIDMutex.Unlock() + if fake.GetChannelIDStub != nil { + return fake.GetChannelIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getChannelIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetChannelIDCallCount() int { + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + return len(fake.getChannelIDArgsForCall) +} + +func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = stub +} + +func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + fake.getChannelIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) 
GetChannelIDReturnsOnCall(i int, result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + if fake.getChannelIDReturnsOnCall == nil { + fake.getChannelIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getChannelIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetCreator() ([]byte, error) { + fake.getCreatorMutex.Lock() + ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)] + fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct { + }{}) + fake.recordInvocation("GetCreator", []interface{}{}) + fake.getCreatorMutex.Unlock() + if fake.GetCreatorStub != nil { + return fake.GetCreatorStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getCreatorReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetCreatorCallCount() int { + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + return len(fake.getCreatorArgsForCall) +} + +func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = stub +} + +func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + fake.getCreatorReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + if fake.getCreatorReturnsOnCall == nil { + fake.getCreatorReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getCreatorReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake 
*ChaincodeStub) GetDecorations() map[string][]byte { + fake.getDecorationsMutex.Lock() + ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)] + fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct { + }{}) + fake.recordInvocation("GetDecorations", []interface{}{}) + fake.getDecorationsMutex.Unlock() + if fake.GetDecorationsStub != nil { + return fake.GetDecorationsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getDecorationsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetDecorationsCallCount() int { + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + return len(fake.getDecorationsArgsForCall) +} + +func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = stub +} + +func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + fake.getDecorationsReturns = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + if fake.getDecorationsReturnsOnCall == nil { + fake.getDecorationsReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + }) + } + fake.getDecorationsReturnsOnCall[i] = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) { + fake.getFunctionAndParametersMutex.Lock() + ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)] + fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct 
{ + }{}) + fake.recordInvocation("GetFunctionAndParameters", []interface{}{}) + fake.getFunctionAndParametersMutex.Unlock() + if fake.GetFunctionAndParametersStub != nil { + return fake.GetFunctionAndParametersStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getFunctionAndParametersReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int { + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + return len(fake.getFunctionAndParametersArgsForCall) +} + +func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + fake.GetFunctionAndParametersStub = stub +} + +func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + fake.GetFunctionAndParametersStub = nil + fake.getFunctionAndParametersReturns = struct { + result1 string + result2 []string + }{result1, result2} +} + +func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + fake.GetFunctionAndParametersStub = nil + if fake.getFunctionAndParametersReturnsOnCall == nil { + fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + }) + } + fake.getFunctionAndParametersReturnsOnCall[i] = struct { + result1 string + result2 []string + }{result1, result2} +} + +func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) { + fake.getHistoryForKeyMutex.Lock() + ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)] + 
fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetHistoryForKey", []interface{}{arg1}) + fake.getHistoryForKeyMutex.Unlock() + if fake.GetHistoryForKeyStub != nil { + return fake.GetHistoryForKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getHistoryForKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int { + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + return len(fake.getHistoryForKeyArgsForCall) +} + +func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) { + fake.getHistoryForKeyMutex.Lock() + defer fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = stub +} + +func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string { + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + argsForCall := fake.getHistoryForKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) { + fake.getHistoryForKeyMutex.Lock() + defer fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = nil + fake.getHistoryForKeyReturns = struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) { + fake.getHistoryForKeyMutex.Lock() + defer fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = nil + if fake.getHistoryForKeyReturnsOnCall == nil { + fake.getHistoryForKeyReturnsOnCall = make(map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }) + } + fake.getHistoryForKeyReturnsOnCall[i] = struct { + result1 
shim.HistoryQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) { + fake.getPrivateDataMutex.Lock() + ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)] + fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2}) + fake.getPrivateDataMutex.Unlock() + if fake.GetPrivateDataStub != nil { + return fake.GetPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataCallCount() int { + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + return len(fake.getPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) { + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + argsForCall := fake.getPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = nil + fake.getPrivateDataReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = nil + if fake.getPrivateDataReturnsOnCall == nil { + 
fake.getPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg3Copy []string
+	if arg3 != nil {
+		arg3Copy = make([]string, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)]
+	fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []string
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy})
+	fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	if fake.GetPrivateDataByPartialCompositeKeyStub != nil {
+		return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	fake.getPrivateDataByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)]
+	fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3})
+	fake.getPrivateDataByRangeMutex.Unlock()
+	if fake.GetPrivateDataByRangeStub != nil {
+		return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	return len(fake.getPrivateDataByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, string, string) {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	fake.getPrivateDataByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	if fake.getPrivateDataByRangeReturnsOnCall == nil {
+		fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataHashMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)]
+	fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2})
+	fake.getPrivateDataHashMutex.Unlock()
+	if fake.GetPrivateDataHashStub != nil {
+		return fake.GetPrivateDataHashStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataHashReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCallCount() int {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	return len(fake.getPrivateDataHashArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	argsForCall := fake.getPrivateDataHashArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	fake.getPrivateDataHashReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	if fake.getPrivateDataHashReturnsOnCall == nil {
+		fake.getPrivateDataHashReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataHashReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)]
+	fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2})
+	fake.getPrivateDataQueryResultMutex.Unlock()
+	if fake.GetPrivateDataQueryResultStub != nil {
+		return fake.GetPrivateDataQueryResultStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	return len(fake.getPrivateDataQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	argsForCall := fake.getPrivateDataQueryResultArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	fake.getPrivateDataQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	if fake.getPrivateDataQueryResultReturnsOnCall == nil {
+		fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)]
+	fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2})
+	fake.getPrivateDataValidationParameterMutex.Unlock()
+	if fake.GetPrivateDataValidationParameterStub != nil {
+		return fake.GetPrivateDataValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.getPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	fake.getPrivateDataValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	if fake.getPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getQueryResultMutex.Lock()
+	ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)]
+	fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetQueryResult", []interface{}{arg1})
+	fake.getQueryResultMutex.Unlock()
+	if fake.GetQueryResultStub != nil {
+		return fake.GetQueryResultStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetQueryResultCallCount() int {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	return len(fake.getQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	argsForCall := fake.getQueryResultArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	fake.getQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	if fake.getQueryResultReturnsOnCall == nil {
+		fake.getQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)]
+	fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 int32
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3})
+	fake.getQueryResultWithPaginationMutex.Unlock()
+	if fake.GetQueryResultWithPaginationStub != nil {
+		return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getQueryResultWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	return len(fake.getQueryResultWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	argsForCall := fake.getQueryResultWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	fake.getQueryResultWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	if fake.getQueryResultWithPaginationReturnsOnCall == nil {
+		fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getQueryResultWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) {
+	fake.getSignedProposalMutex.Lock()
+	ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)]
+	fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetSignedProposal", []interface{}{})
+	fake.getSignedProposalMutex.Unlock()
+	if fake.GetSignedProposalStub != nil {
+		return fake.GetSignedProposalStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getSignedProposalReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCallCount() int {
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	return len(fake.getSignedProposalArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = stub
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	fake.getSignedProposalReturns = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	if fake.getSignedProposalReturnsOnCall == nil {
+		fake.getSignedProposalReturnsOnCall = make(map[int]struct {
+			result1 *peer.SignedProposal
+			result2 error
+		})
+	}
+	fake.getSignedProposalReturnsOnCall[i] = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) {
+	fake.getStateMutex.Lock()
+	ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)]
+	fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetState", []interface{}{arg1})
+	fake.getStateMutex.Unlock()
+	if fake.GetStateStub != nil {
+		return fake.GetStateStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateCallCount() int {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	return len(fake.getStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateArgsForCall(i int) string {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	argsForCall := fake.getStateArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	fake.getStateReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	if fake.getStateReturnsOnCall == nil {
+		fake.getStateReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)]
+	fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.getStateByPartialCompositeKeyMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyStub != nil {
+		return fake.GetStateByPartialCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	fake.getStateByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	if fake.getStateByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)]
+	fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 []string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2Copy, arg3, arg4})
+	fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4})
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil {
+		return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	fake.getStateByPartialCompositeKeyWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getStateByRangeMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)]
+	fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2})
+	fake.getStateByRangeMutex.Unlock()
+	if fake.GetStateByRangeStub != nil {
+		return fake.GetStateByRangeStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCallCount() int {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	return len(fake.getStateByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	argsForCall := fake.getStateByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	fake.getStateByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	if fake.getStateByRangeReturnsOnCall == nil {
+		fake.getStateByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)]
+	fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2, arg3, arg4})
+	fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4})
+	fake.getStateByRangeWithPaginationMutex.Unlock()
+	if fake.GetStateByRangeWithPaginationStub != nil {
+		return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByRangeWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	return len(fake.getStateByRangeWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	fake.getStateByRangeWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	if fake.getStateByRangeWithPaginationReturnsOnCall == nil {
+		fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) {
+	fake.getStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)]
+	fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1})
+	fake.getStateValidationParameterMutex.Unlock()
+	if fake.GetStateValidationParameterStub != nil {
+		return fake.GetStateValidationParameterStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int {
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	return len(fake.getStateValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string {
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	argsForCall := fake.getStateValidationParameterArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = nil
+	fake.getStateValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = nil
+	if fake.getStateValidationParameterReturnsOnCall == nil {
+		fake.getStateValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStringArgs() []string {
+	fake.getStringArgsMutex.Lock()
+	ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)]
+	fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetStringArgs", []interface{}{})
+	fake.getStringArgsMutex.Unlock()
+	if fake.GetStringArgsStub != nil {
+		return fake.GetStringArgsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getStringArgsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetStringArgsCallCount() int {
+	fake.getStringArgsMutex.RLock()
+	defer fake.getStringArgsMutex.RUnlock()
+	return len(fake.getStringArgsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = nil
+	fake.getStringArgsReturns = struct {
+		result1 []string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = nil
+	if fake.getStringArgsReturnsOnCall == nil {
+		fake.getStringArgsReturnsOnCall = make(map[int]struct {
+			result1 []string
+		})
+	}
+	fake.getStringArgsReturnsOnCall[i] = struct {
+		result1 []string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) {
+	fake.getTransientMutex.Lock()
+	ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)]
+	fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTransient", []interface{}{})
+	fake.getTransientMutex.Unlock()
+	if fake.GetTransientStub != nil {
+		return fake.GetTransientStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getTransientReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetTransientCallCount() int {
+	fake.getTransientMutex.RLock()
+	defer fake.getTransientMutex.RUnlock()
+	return len(fake.getTransientArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = stub
+}
+
+func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = nil
+	fake.getTransientReturns = struct {
+		result1 map[string][]byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = nil
+	if fake.getTransientReturnsOnCall == nil {
+		fake.getTransientReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+			result2 error
+		})
+	}
+	fake.getTransientReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTxID() string {
+	fake.getTxIDMutex.Lock()
+	ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)]
+	fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTxID", []interface{}{})
+	fake.getTxIDMutex.Unlock()
+	if fake.GetTxIDStub != nil {
+		return fake.GetTxIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getTxIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetTxIDCallCount() int {
+	fake.getTxIDMutex.RLock()
+	defer fake.getTxIDMutex.RUnlock()
+	return len(fake.getTxIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) 
GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + 
result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy [][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := 
fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + fake.invokeChaincodeReturns = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + if fake.invokeChaincodeReturnsOnCall == nil { + fake.invokeChaincodeReturnsOnCall = make(map[int]struct { + result1 peer.Response + }) + } + fake.invokeChaincodeReturnsOnCall[i] = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.putPrivateDataMutex.Lock() + ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)] + fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy}) + fake.putPrivateDataMutex.Unlock() + if fake.PutPrivateDataStub != nil { + return fake.PutPrivateDataStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutPrivateDataCallCount() int { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + return len(fake.putPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = 
stub +} + +func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + argsForCall := fake.putPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + fake.putPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + if fake.putPrivateDataReturnsOnCall == nil { + fake.putPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.putStateMutex.Lock() + ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)] + fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy}) + fake.putStateMutex.Unlock() + if fake.PutStateStub != nil { + return fake.PutStateStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutStateCallCount() int { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + return len(fake.putStateArgsForCall) +} + +func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = stub +} + +func (fake 
*ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + argsForCall := fake.putStateArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) PutStateReturns(result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + fake.putStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + if fake.putStateReturnsOnCall == nil { + fake.putStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setEventMutex.Lock() + ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)] + fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy}) + fake.setEventMutex.Unlock() + if fake.SetEventStub != nil { + return fake.SetEventStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setEventReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetEventCallCount() int { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + return len(fake.setEventArgsForCall) +} + +func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = stub +} + +func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + argsForCall := 
fake.setEventArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetEventReturns(result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + fake.setEventReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + if fake.setEventReturnsOnCall == nil { + fake.setEventReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setEventReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.setPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)] + fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy}) + fake.setPrivateDataValidationParameterMutex.Unlock() + if fake.SetPrivateDataValidationParameterStub != nil { + return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setPrivateDataValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + return len(fake.setPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) 
SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + fake.setPrivateDataValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + if fake.setPrivateDataValidationParameterReturnsOnCall == nil { + fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setStateValidationParameterMutex.Lock() + ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)] + fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct { + arg1 string + arg2 []byte + 
}{arg1, arg2Copy}) + fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy}) + fake.setStateValidationParameterMutex.Unlock() + if fake.SetStateValidationParameterStub != nil { + return fake.SetStateValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setStateValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + return len(fake.setStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + argsForCall := fake.setStateValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + fake.setStateValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + if fake.setStateValidationParameterReturnsOnCall == nil { + fake.setStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setStateValidationParameterReturnsOnCall[i] = struct { + result1 error + 
}{result1} +} + +func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) { + fake.splitCompositeKeyMutex.Lock() + ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)] + fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("SplitCompositeKey", []interface{}{arg1}) + fake.splitCompositeKeyMutex.Unlock() + if fake.SplitCompositeKeyStub != nil { + return fake.SplitCompositeKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.splitCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + return len(fake.splitCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + argsForCall := fake.splitCompositeKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + fake.splitCompositeKeyReturns = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + if 
fake.splitCompositeKeyReturnsOnCall == nil { + fake.splitCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + result3 error + }) + } + fake.splitCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + 
fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if 
fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go new file mode 100644 index 0000000..27e3034 --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go @@ -0,0 +1,232 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" +) + +type StateQueryIterator struct { + CloseStub func() error + closeMutex sync.RWMutex + closeArgsForCall []struct { + } + closeReturns struct { + result1 error + } + closeReturnsOnCall map[int]struct { + result1 error + } + HasNextStub func() bool + hasNextMutex sync.RWMutex + hasNextArgsForCall []struct { + } + hasNextReturns struct { + result1 bool + } + hasNextReturnsOnCall map[int]struct { + result1 bool + } + NextStub func() (*queryresult.KV, error) + nextMutex sync.RWMutex + nextArgsForCall []struct { + } + nextReturns struct { + result1 *queryresult.KV + result2 error + } + nextReturnsOnCall map[int]struct { + result1 *queryresult.KV + result2 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *StateQueryIterator) Close() error { + fake.closeMutex.Lock() + ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)] + fake.closeArgsForCall = append(fake.closeArgsForCall, struct { + }{}) + fake.recordInvocation("Close", []interface{}{}) + fake.closeMutex.Unlock() + if fake.CloseStub != nil { + return fake.CloseStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.closeReturns + return 
fakeReturns.result1 +} + +func (fake *StateQueryIterator) CloseCallCount() int { + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + return len(fake.closeArgsForCall) +} + +func (fake *StateQueryIterator) CloseCalls(stub func() error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = stub +} + +func (fake *StateQueryIterator) CloseReturns(result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + fake.closeReturns = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + if fake.closeReturnsOnCall == nil { + fake.closeReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.closeReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) HasNext() bool { + fake.hasNextMutex.Lock() + ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)] + fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct { + }{}) + fake.recordInvocation("HasNext", []interface{}{}) + fake.hasNextMutex.Unlock() + if fake.HasNextStub != nil { + return fake.HasNextStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.hasNextReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) HasNextCallCount() int { + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + return len(fake.hasNextArgsForCall) +} + +func (fake *StateQueryIterator) HasNextCalls(stub func() bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = stub +} + +func (fake *StateQueryIterator) HasNextReturns(result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + fake.hasNextReturns = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) 
{ + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + if fake.hasNextReturnsOnCall == nil { + fake.hasNextReturnsOnCall = make(map[int]struct { + result1 bool + }) + } + fake.hasNextReturnsOnCall[i] = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) Next() (*queryresult.KV, error) { + fake.nextMutex.Lock() + ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)] + fake.nextArgsForCall = append(fake.nextArgsForCall, struct { + }{}) + fake.recordInvocation("Next", []interface{}{}) + fake.nextMutex.Unlock() + if fake.NextStub != nil { + return fake.NextStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.nextReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *StateQueryIterator) NextCallCount() int { + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + return len(fake.nextArgsForCall) +} + +func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = stub +} + +func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + fake.nextReturns = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + if fake.nextReturnsOnCall == nil { + fake.nextReturnsOnCall = make(map[int]struct { + result1 *queryresult.KV + result2 error + }) + } + fake.nextReturnsOnCall[i] = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.closeMutex.RLock() + defer 
fake.closeMutex.RUnlock() + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go new file mode 100644 index 0000000..eea37db --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go @@ -0,0 +1,164 @@ +// Code generated by counterfeiter. DO NOT EDIT. 
+package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-chaincode-go/pkg/cid" + "github.com/hyperledger/fabric-chaincode-go/shim" +) + +type TransactionContext struct { + GetClientIdentityStub func() cid.ClientIdentity + getClientIdentityMutex sync.RWMutex + getClientIdentityArgsForCall []struct { + } + getClientIdentityReturns struct { + result1 cid.ClientIdentity + } + getClientIdentityReturnsOnCall map[int]struct { + result1 cid.ClientIdentity + } + GetStubStub func() shim.ChaincodeStubInterface + getStubMutex sync.RWMutex + getStubArgsForCall []struct { + } + getStubReturns struct { + result1 shim.ChaincodeStubInterface + } + getStubReturnsOnCall map[int]struct { + result1 shim.ChaincodeStubInterface + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity { + fake.getClientIdentityMutex.Lock() + ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)] + fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct { + }{}) + fake.recordInvocation("GetClientIdentity", []interface{}{}) + fake.getClientIdentityMutex.Unlock() + if fake.GetClientIdentityStub != nil { + return fake.GetClientIdentityStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getClientIdentityReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetClientIdentityCallCount() int { + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + return len(fake.getClientIdentityArgsForCall) +} + +func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = stub +} + +func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer 
fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + fake.getClientIdentityReturns = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + if fake.getClientIdentityReturnsOnCall == nil { + fake.getClientIdentityReturnsOnCall = make(map[int]struct { + result1 cid.ClientIdentity + }) + } + fake.getClientIdentityReturnsOnCall[i] = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface { + fake.getStubMutex.Lock() + ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)] + fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct { + }{}) + fake.recordInvocation("GetStub", []interface{}{}) + fake.getStubMutex.Unlock() + if fake.GetStubStub != nil { + return fake.GetStubStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStubReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetStubCallCount() int { + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + return len(fake.getStubArgsForCall) +} + +func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = stub +} + +func (fake *TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + fake.getStubReturns = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + if fake.getStubReturnsOnCall == nil { + 
fake.getStubReturnsOnCall = make(map[int]struct { + result1 shim.ChaincodeStubInterface + }) + } + fake.getStubReturnsOnCall[i] = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *TransactionContext) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go new file mode 100644 index 0000000..71e8dd8 --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go @@ -0,0 +1,185 @@ +package chaincode + +import ( + "encoding/json" + "fmt" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" +) + +// SmartContract provides functions for managing an Asset +type SmartContract struct { + contractapi.Contract +} + +// Asset describes basic details of what makes up a simple asset +type Asset struct { + ID string `json:"ID"` + Color string `json:"color"` + Size int `json:"size"` + Owner string `json:"owner"` + AppraisedValue int `json:"appraisedValue"` +} + +// InitLedger adds a base set of assets to the ledger +func (s *SmartContract) 
InitLedger(ctx contractapi.TransactionContextInterface) error { + assets := []Asset{ + {ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300}, + {ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400}, + {ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500}, + {ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600}, + {ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700}, + {ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800}, + } + + for _, asset := range assets { + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + err = ctx.GetStub().PutState(asset.ID, assetJSON) + if err != nil { + return fmt.Errorf("failed to put to world state. %v", err) + } + } + + return nil +} + +// CreateAsset issues a new asset to the world state with given details. +func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if exists { + return fmt.Errorf("the asset %s already exists", id) + } + + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// ReadAsset returns the asset stored in the world state with given id. 
+func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return nil, fmt.Errorf("failed to read from world state: %v", err) + } + if assetJSON == nil { + return nil, fmt.Errorf("the asset %s does not exist", id) + } + + var asset Asset + err = json.Unmarshal(assetJSON, &asset) + if err != nil { + return nil, err + } + + return &asset, nil +} + +// UpdateAsset updates an existing asset in the world state with provided parameters. +func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + // overwriting original asset with new asset + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// DeleteAsset deletes a given asset from the world state. +func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + return ctx.GetStub().DelState(id) +} + +// AssetExists returns true when asset with given ID exists in world state +func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return false, fmt.Errorf("failed to read from world state: %v", err) + } + + return assetJSON != nil, nil +} + +// TransferAsset updates the owner field of asset with given id in world state. 
+func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error { + asset, err := s.ReadAsset(ctx, id) + if err != nil { + return err + } + + asset.Owner = newOwner + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// GetAllAssets returns all assets found in world state +func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) { + // range query with empty string for startKey and endKey does an + // open-ended query of all assets in the chaincode namespace. + resultsIterator, err := ctx.GetStub().GetStateByRange("", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o 
mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset 
:= &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + 
chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + chaincodeStub.GetStateByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + 
require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t2/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod 
h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= 
+github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= +github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go 
v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= +github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod 
h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod 
h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference 
v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
+github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74=
+github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
+github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM=
+golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ=
+golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw=
+google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A=
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
+gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
diff --git a/topologies/t2/config/config.yaml b/topologies/t2/config/config.yaml
new file mode 100644
index 0000000..1f4cc49
--- /dev/null
+++ b/topologies/t2/config/config.yaml
@@ -0,0 +1,14 @@
+NodeOUs:
+  Enable: true
+  ClientOUIdentifier:
+    Certificate: cacerts/ca-cert.pem
+    OrganizationalUnitIdentifier: client
+  PeerOUIdentifier:
+    Certificate: cacerts/ca-cert.pem
+    OrganizationalUnitIdentifier: peer
+  AdminOUIdentifier:
+    Certificate: cacerts/ca-cert.pem
+    OrganizationalUnitIdentifier: admin
+  OrdererOUIdentifier:
+    Certificate: cacerts/ca-cert.pem
+    OrganizationalUnitIdentifier: orderer
\ No newline at end of file
diff --git a/topologies/t2/config/configtx.yaml b/topologies/t2/config/configtx.yaml
new file mode 100644
index 0000000..756900a
--- /dev/null
+++ b/topologies/t2/config/configtx.yaml
@@ -0,0 +1,440 @@
+# Copyright IBM Corp. All Rights Reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+
+---
+################################################################################
+#
+# Section: Organizations
+#
+# - This section defines the different organizational identities which will
+# be referenced later in the configuration.
+#
+################################################################################
+Organizations:
+
+    # SampleOrg defines an MSP using the sampleconfig. It should never be used
+    # in production but may be used as a template for other definitions
+    - &org1
+        # DefaultOrg defines the organization which is used in the sampleconfig
+        # of the fabric.git development environment
+        Name: org1
+
+        # ID to load the MSP definition as
+        ID: org1MSP
+
+        # MSPDir is the filesystem path which contains the MSP configuration
+        MSPDir: /tmp/crypto-material/orgs/org1/msp
+
+        # Policies defines the set of policies at this level of the config tree
+        # For organization policies, their canonical path is usually
+        # /Channel///
+        Policies:
+            Readers:
+                Type: Signature
+                Rule: "OR('org1MSP.member')"
+            Writers:
+                Type: Signature
+                Rule: "OR('org1MSP.member')"
+            Admins:
+                Type: Signature
+                Rule: "OR('org1MSP.admin')"
+            Endorsement:
+                Type: Signature
+                Rule: "OR('org1MSP.member')"
+
+        OrdererEndpoints:
+            - <>-org1-orderer1:7050
+            - <>-org1-orderer2:7050
+            - <>-org1-orderer3:7050
+
+    - &org2
+        # DefaultOrg defines the organization which is used in the sampleconfig
+        # of the fabric.git development environment
+        Name: org2MSP
+
+        # ID to load the MSP definition as
+        ID: org2MSP
+
+        MSPDir: /tmp/crypto-material/orgs/org2/msp
+
+        # Policies defines the set of policies at this level of the config tree
+        # For organization policies, their canonical path is usually
+        # /Channel///
+        Policies:
+            Readers:
+                Type: Signature
+                Rule: "OR('org2MSP.member')"
+            Writers:
+                Type: Signature
+                Rule: "OR('org2MSP.member')"
+            Admins:
+                Type: Signature
+                Rule: "OR('org2MSP.admin')"
+            Endorsement:
+                Type: Signature
+                Rule: "OR('org2MSP.peer')"
+
+        OrdererEndpoints:
+            - <>-org2-orderer1:7050
+            - <>-org2-orderer2:7050
+
+        # leave this flag set to true.
+        AnchorPeers:
+            # AnchorPeers defines the location of peers which can be used
+            # for cross org gossip communication. Note, this value is only
+            # encoded in the genesis block in the Application section context
+            - Host: <>-org2-peer1
+              Port: 7051
+
+    - &org3
+        # DefaultOrg defines the organization which is used in the sampleconfig
+        # of the fabric.git development environment
+        Name: org3MSP
+
+        # ID to load the MSP definition as
+        ID: org3MSP
+
+        MSPDir: /tmp/crypto-material/orgs/org3/msp
+
+        # Policies defines the set of policies at this level of the config tree
+        # For organization policies, their canonical path is usually
+        # /Channel///
+        Policies:
+            Readers:
+                Type: Signature
+                Rule: "OR('org3MSP.member')"
+            Writers:
+                Type: Signature
+                Rule: "OR('org3MSP.member')"
+            Admins:
+                Type: Signature
+                Rule: "OR('org3MSP.admin')"
+            Endorsement:
+                Type: Signature
+                Rule: "OR('org3MSP.peer')"
+
+        # leave this flag set to true.
+        AnchorPeers:
+            # AnchorPeers defines the location of peers which can be used
+            # for cross org gossip communication. Note, this value is only
+            # encoded in the genesis block in the Application section context
+            - Host: <>-org3-peer1
+              Port: 7051
+
+################################################################################
+#
+# SECTION: Capabilities
+#
+# - This section defines the capabilities of fabric network. This is a new
+# concept as of v1.1.0 and should not be utilized in mixed networks with
+# v1.0.x peers and orderers. Capabilities define features which must be
+# present in a fabric binary for that binary to safely participate in the
+# fabric network. For instance, if a new MSP type is added, newer binaries
+# might recognize and validate the signatures from this type, while older
+# binaries without this support would be unable to validate those
+# transactions. This could lead to different versions of the fabric binaries
+# having different world states. Instead, defining a capability for a channel
+# informs those binaries without this capability that they must cease
+# processing transactions until they have been upgraded. For v1.0.x if any
+# capabilities are defined (including a map with all capabilities turned off)
+# then the v1.0.x peer will deliberately crash.
+#
+################################################################################
+Capabilities:
+    # Channel capabilities apply to both the orderers and the peers and must be
+    # supported by both.
+    # Set the value of the capability to true to require it.
+    Channel: &ChannelCapabilities
+        # V2_0 capability ensures that orderers and peers behave according
+        # to v2.0 channel capabilities. Orderers and peers from
+        # prior releases would behave in an incompatible way, and are therefore
+        # not able to participate in channels at v2.0 capability.
+        # Prior to enabling V2.0 channel capabilities, ensure that all
+        # orderers and peers on a channel are at v2.0.0 or later.
+        V2_0: true
+
+    # Orderer capabilities apply only to the orderers, and may be safely
+    # used with prior release peers.
+    # Set the value of the capability to true to require it.
+    Orderer: &OrdererCapabilities
+        # V2_0 orderer capability ensures that orderers behave according
+        # to v2.0 orderer capabilities. Orderers from
+        # prior releases would behave in an incompatible way, and are therefore
+        # not able to participate in channels at v2.0 orderer capability.
+        # Prior to enabling V2.0 orderer capabilities, ensure that all
+        # orderers on channel are at v2.0.0 or later.
+        V2_0: true
+
+    # Application capabilities apply only to the peer network, and may be safely
+    # used with prior release orderers.
+    # Set the value of the capability to true to require it.
+    Application: &ApplicationCapabilities
+        # V2_0 application capability ensures that peers behave according
+        # to v2.0 application capabilities. Peers from
+        # prior releases would behave in an incompatible way, and are therefore
+        # not able to participate in channels at v2.0 application capability.
+        # Prior to enabling V2.0 application capabilities, ensure that all
+        # peers on channel are at v2.0.0 or later.
+        V2_0: true
+
+################################################################################
+#
+# SECTION: Application
+#
+# - This section defines the values to encode into a config transaction or
+# genesis block for application related parameters
+#
+################################################################################
+Application: &ApplicationDefaults
+    ACLs: &ACLsDefault
+
+        # ACL policy for lscc's "getid" function
+        lscc/ChaincodeExists: /Channel/Application/Readers
+
+        # ACL policy for lscc's "getdepspec" function
+        lscc/GetDeploymentSpec: /Channel/Application/Readers
+
+        # ACL policy for lscc's "getccdata" function
+        lscc/GetChaincodeData: /Channel/Application/Readers
+
+        # ACL Policy for lscc's "getchaincodes" function
+        lscc/GetInstantiatedChaincodes: /Channel/Application/Readers
+
+
+        #---Query System Chaincode (qscc) function to policy mapping for access control---#
+
+        # ACL policy for qscc's "GetChainInfo" function
+        qscc/GetChainInfo: /Channel/Application/Readers
+
+
+        # ACL policy for qscc's "GetBlockByNumber" function
+        qscc/GetBlockByNumber: /Channel/Application/Readers
+
+        # ACL policy for qscc's "GetBlockByHash" function
+        qscc/GetBlockByHash: /Channel/Application/Readers
+
+        # ACL policy for qscc's "GetTransactionByID" function
+        qscc/GetTransactionByID: /Channel/Application/Readers
+
+        # ACL policy for qscc's "GetBlockByTxID" function
+        qscc/GetBlockByTxID: /Channel/Application/Readers
+
+        #---Configuration System Chaincode (cscc) function to policy mapping for access control---#
+
+        # ACL policy for cscc's "GetConfigBlock" function
+        cscc/GetConfigBlock: /Channel/Application/Readers
+
+        # ACL policy for cscc's "GetConfigTree" function
+        cscc/GetConfigTree: /Channel/Application/Readers
+
+        # ACL policy for cscc's "SimulateConfigTreeUpdate" function
+        cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers
+
+        #---Miscellaneous peer function to policy mapping for access control---#
+
+        # ACL policy for invoking chaincodes on peer
+        peer/Propose: /Channel/Application/Writers
+
+        # ACL policy for chaincode to chaincode invocation
+        peer/ChaincodeToChaincode: /Channel/Application/Readers
+
+        #---Events resource to policy mapping for access control###---#
+
+        # ACL policy for sending block events
+        event/Block: /Channel/Application/Readers
+
+        # ACL policy for sending filtered block events
+        event/FilteredBlock: /Channel/Application/Readers
+
+        # Chaincode Lifecycle Policies introduced in Fabric 2.x
+        # ACL policy for _lifecycle's "CheckCommitReadiness" function
+        _lifecycle/CheckCommitReadiness: /Channel/Application/Writers
+
+        # ACL policy for _lifecycle's "CommitChaincodeDefinition" function
+        _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers
+
+        # ACL policy for _lifecycle's "QueryChaincodeDefinition" function
+        _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers
+
+
+    # Organizations is the list of orgs which are defined as participants on
+    # the application side of the network
+    Organizations:
+
+    # Policies defines the set of policies at this level of the config tree
+    # For Application policies, their canonical path is
+    # /Channel/Application/
+    Policies:
+        Readers:
+            Type: ImplicitMeta
+            Rule: "ANY Readers"
+        Writers:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+        Admins:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Admins"
+        LifecycleEndorsement:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Endorsement"
+        Endorsement:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Endorsement"
+
+    Capabilities:
+        <<: *ApplicationCapabilities
+################################################################################
+#
+# SECTION: Orderer
+#
+# - This section defines the values to encode into a config transaction or
+# genesis block for orderer related parameters
+#
+################################################################################
+Orderer: &OrdererDefaults
+
+    # Orderer Type: The orderer implementation to start
+    OrdererType: etcdraft
+
+    # Addresses used to be the list of orderer addresses that clients and peers
+    # could connect to. However, this does not allow clients to associate orderer
+    # addresses and orderer organizations which can be useful for things such
+    # as TLS validation. The preferred way to specify orderer addresses is now
+    # to include the OrdererEndpoints item in your org definition
+    # Addresses:
+    # - <>-org1-orderer1:7050
+    # - <>-org1-orderer2:7050
+    # - <>-org1-orderer3:7050
+
+    EtcdRaft:
+        Consenters:
+            - Host: <>-org1-orderer1
+              Port: 7050
+              ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem
+              ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem
+            - Host: <>-org1-orderer2
+              Port: 7050
+              ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem
+              ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem
+            - Host: <>-org1-orderer3
+              Port: 7050
+              ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem
+              ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem
+            - Host: <>-org2-orderer1
+              Port: 7050
+              ClientTLSCert: /tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/signcerts/cert.pem
+              ServerTLSCert: /tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/signcerts/cert.pem
+            - Host: <>-org2-orderer2
+              Port: 7050
+              ClientTLSCert: /tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/signcerts/cert.pem
+              ServerTLSCert: /tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/signcerts/cert.pem
+
+    # Batch Timeout: The amount of time to wait before creating a batch
+    BatchTimeout: 2s
+
+    # Batch Size: Controls the number of messages batched into a block
+    BatchSize:
+
+        # Max Message Count: The maximum number of messages to permit in a batch
+        MaxMessageCount: 10
+
+        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
+        # the serialized messages in a batch.
+        AbsoluteMaxBytes: 99 MB
+
+        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
+        # the serialized messages in a batch. A message larger than the preferred
+        # max bytes will result in a batch larger than preferred max bytes.
+        PreferredMaxBytes: 512 KB
+
+    # Organizations is the list of orgs which are defined as participants on
+    # the orderer side of the network
+    Organizations:
+
+    # Policies defines the set of policies at this level of the config tree
+    # For Orderer policies, their canonical path is
+    # /Channel/Orderer/
+    Policies:
+        Readers:
+            Type: ImplicitMeta
+            Rule: "ANY Readers"
+        Writers:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+        Admins:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Admins"
+        # BlockValidation specifies what signatures must be included in the block
+        # from the orderer for the peer to validate it.
+        BlockValidation:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+
+################################################################################
+#
+# CHANNEL
+#
+# This section defines the values to encode into a config transaction or
+# genesis block for channel related parameters.
+#
+################################################################################
+Channel: &ChannelDefaults
+    # Policies defines the set of policies at this level of the config tree
+    # For Channel policies, their canonical path is
+    # /Channel/
+    Policies:
+        # Who may invoke the 'Deliver' API
+        Readers:
+            Type: ImplicitMeta
+            Rule: "ANY Readers"
+        # Who may invoke the 'Broadcast' API
+        Writers:
+            Type: ImplicitMeta
+            Rule: "ANY Writers"
+        # By default, who may modify elements at this config level
+        Admins:
+            Type: ImplicitMeta
+            Rule: "MAJORITY Admins"
+
+    # Capabilities describes the channel level capabilities, see the
+    # dedicated Capabilities section elsewhere in this file for a full
+    # description
+    Capabilities:
+        <<: *ChannelCapabilities
+
+################################################################################
+#
+# Profile
+#
+# - Different configuration profiles may be encoded here to be specified
+# as parameters to the configtxgen tool
+#
+################################################################################
+Profiles:
+    OrgsOrdererGenesis:
+        <<: *ChannelDefaults
+        Orderer:
+            <<: *OrdererDefaults
+            Organizations:
+                - *org1
+            Capabilities:
+                <<: *OrdererCapabilities
+        Consortiums:
+            MainConsortium:
+                Organizations:
+                    - *org2
+                    - *org3
+
+    OrgsChannel:
+        Consortium: MainConsortium
+        <<: *ChannelDefaults
+        Application:
+            <<: *ApplicationDefaults
+            Organizations:
+                - *org2
+                - *org3
+            Capabilities:
+                <<: *ApplicationCapabilities
+
diff --git a/topologies/t2/config/orderers-proxy/nginx/nginx.conf b/topologies/t2/config/orderers-proxy/nginx/nginx.conf
new file mode 100644
index 0000000..a4e759d
--- /dev/null
+++ b/topologies/t2/config/orderers-proxy/nginx/nginx.conf
@@ -0,0 +1,32 @@
+worker_processes auto;
+pid /run/nginx.pid;
+include /etc/nginx/modules-enabled/*.conf;
+
+events {
+    worker_connections 768;
+    # multi_accept on;
+}
+
+stream {
+    upstream backend_servers {
+        server t2-org1-orderer1:7050 max_fails=3 fail_timeout=10s;
+        server t2-org1-orderer2:7050 max_fails=3 fail_timeout=10s;
+        server t2-org1-orderer3:7050 max_fails=3 fail_timeout=10s;
+        server t2-org2-orderer1:7050 max_fails=3 fail_timeout=10s;
+        server t2-org2-orderer2:7050 max_fails=3 fail_timeout=10s;
+    }
+
+    log_format basic '$remote_addr [$time_local] '
+                     '$protocol $status $bytes_sent $bytes_received '
+                     '$session_time "$upstream_addr" '
+                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
+
+    access_log /var/log/nginx/access.log basic;
+    error_log /var/log/nginx/error.log;
+
+    server {
+        listen 8443;
+        proxy_pass backend_servers;
+        proxy_next_upstream on;
+    }
+}
\ No newline at end of file
diff --git a/topologies/t2/config/peers-proxies/nginx/org2-nginx.conf b/topologies/t2/config/peers-proxies/nginx/org2-nginx.conf
new file mode 100644
index 0000000..2d4fde1
--- /dev/null
+++ b/topologies/t2/config/peers-proxies/nginx/org2-nginx.conf
@@ -0,0 +1,29 @@
+worker_processes auto;
+pid /run/nginx.pid;
+include /etc/nginx/modules-enabled/*.conf;
+
+events {
+    worker_connections 768;
+    # multi_accept on;
+}
+
+stream {
+    upstream backend_servers {
+        server t2-org2-peer1:7051 max_fails=3 fail_timeout=10s;
+        server t2-org2-peer2:7051 max_fails=3 fail_timeout=10s;
+    }
+
+    log_format basic '$remote_addr [$time_local] '
+                     '$protocol $status $bytes_sent $bytes_received '
+                     '$session_time "$upstream_addr" '
+                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
+
+    access_log /var/log/nginx/access.log basic;
+    error_log /var/log/nginx/error.log;
+
+    server {
+        listen 8443;
+        proxy_pass backend_servers;
+        proxy_next_upstream on;
+    }
+}
\ No newline at end of file
diff --git a/topologies/t2/config/peers-proxies/nginx/org3-nginx.conf b/topologies/t2/config/peers-proxies/nginx/org3-nginx.conf
new file mode 100644
index 0000000..653e25c
--- /dev/null
+++ b/topologies/t2/config/peers-proxies/nginx/org3-nginx.conf
@@ -0,0 +1,30 @@
+worker_processes auto;
+pid /run/nginx.pid;
+include /etc/nginx/modules-enabled/*.conf;
+
+events {
+    worker_connections 768;
+    # multi_accept on;
+}
+
+stream {
+    upstream backend_servers {
+        server t2-org3-peer1:7051 max_fails=3 fail_timeout=10s;
+        server t2-org3-peer2:7051 max_fails=3 fail_timeout=10s;
+        server t2-org3-peer3:7051 max_fails=3 fail_timeout=10s;
+    }
+
+    log_format basic '$remote_addr [$time_local] '
+                     '$protocol $status $bytes_sent $bytes_received '
+                     '$session_time "$upstream_addr" '
+                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
+
+    access_log /var/log/nginx/access.log basic;
+    error_log /var/log/nginx/error.log;
+
+    server {
+        listen 8443;
+        proxy_pass backend_servers;
+        proxy_next_upstream on;
+    }
+}
\ No newline at end of file
diff --git a/topologies/t2/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t2/containers/cas/org1-cas/docker-compose-org1-cas.yml
new file mode 100644
index 0000000..1d941c5
--- /dev/null
+++ b/topologies/t2/containers/cas/org1-cas/docker-compose-org1-cas.yml
@@ -0,0 +1,29 @@
+version: "3.9"
+services:
+  org1-ca-tls:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls
+
+    command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
+  org1-ca-identities:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities
+    command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
\ No newline at end of file
diff --git a/topologies/t2/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t2/containers/cas/org2-cas/docker-compose-org2-cas.yml
new file mode 100644
index 0000000..e22813e
--- /dev/null
+++ b/topologies/t2/containers/cas/org2-cas/docker-compose-org2-cas.yml
@@ -0,0 +1,28 @@
+version: "3.9"
+services:
+  org2-ca-tls:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls
+    command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
+  org2-ca-identities:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities
+    command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
\ No newline at end of file
diff --git a/topologies/t2/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t2/containers/cas/org3-cas/docker-compose-org3-cas.yml
new file mode 100644
index 0000000..b18ea4d
--- /dev/null
+++ b/topologies/t2/containers/cas/org3-cas/docker-compose-org3-cas.yml
@@ -0,0 +1,28 @@
+version: "3.9"
+services:
+  org3-ca-tls:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls
+    command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
+  org3-ca-identities:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities
+    command: sh -c 'fabric-ca-server start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054'
+    environment:
+      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/
+      - FABRIC_CA_SERVER_TLS_ENABLED=true
+      - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities
+      - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
+      - FABRIC_CA_SERVER_DEBUG=true
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca"
+      - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp"
\ No newline at end of file
diff --git a/topologies/t2/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t2/containers/clis/org2-clis/docker-compose-org2-clis.yml
new file mode 100644
index 0000000..4ceb8a0
--- /dev/null
+++ b/topologies/t2/containers/clis/org2-clis/docker-compose-org2-clis.yml
@@ -0,0 +1,21 @@
+services:
+  org2-cli-peer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1
+    tty: true
+    stdin_open: true
+    environment:
+      - GOPATH=/opt/gopath
+      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
+      - FABRIC_LOGGING_SPEC=DEBUG
+      - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1
+      - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051
+      - CORE_PEER_LOCALMSPID=org2MSP
+      - CORE_PEER_TLS_ENABLED=true
+      - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem
+      - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2
+    command: sh
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode
+      - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp
diff --git a/topologies/t2/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t2/containers/clis/org3-clis/docker-compose-org3-clis.yml
new file mode 100644
index 0000000..68d2ec0
--- /dev/null
+++ b/topologies/t2/containers/clis/org3-clis/docker-compose-org3-clis.yml
@@ -0,0 +1,21 @@
+services:
+  org3-cli-peer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1
+    tty: true
+    stdin_open: true
+    environment:
+      - GOPATH=/opt/gopath
+      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
+      - FABRIC_LOGGING_SPEC=DEBUG
+      - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1
+      - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051
+      - CORE_PEER_LOCALMSPID=org3MSP
+      - CORE_PEER_TLS_ENABLED=true
+      - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem
+      - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3
+    command: sh
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode
+      - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp
\ No newline at end of file
diff --git a/topologies/t2/containers/orderers/orderers-proxy/docker-compose-orderers-proxy.yml b/topologies/t2/containers/orderers/orderers-proxy/docker-compose-orderers-proxy.yml
new file mode 100644
index 0000000..d45a868
--- /dev/null
+++ b/topologies/t2/containers/orderers/orderers-proxy/docker-compose-orderers-proxy.yml
@@ -0,0 +1,7 @@
+services:
+  orderers-proxy:
+    container_name: ${CURRENT_HL_TOPOLOGY}-orderers-proxy
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/config/orderers-proxy/nginx/nginx.conf:/etc/nginx/nginx.conf
+    ports:
+      - 8443:8443
\ No newline at end of file
diff --git a/topologies/t2/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t2/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml
new file mode 100644
index 0000000..0045893
--- /dev/null
+++ b/topologies/t2/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml
@@ -0,0 +1,82 @@
+services:
+  org1-orderer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1
+    environment:
+      - ORDERER_HOME=/tmp/hyperledger/orderer/home
+      - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger
+      - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_GENERAL_LISTENPORT=7050
+      - ORDERER_GENERAL_GENESISMETHOD=file
+      # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_LOCALMSPID=org1MSP
+      - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp
+      - ORDERER_GENERAL_TLS_ENABLED=true
+      - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem
+      - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem
+      # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_LOGLEVEL=debug
+      - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs
+      - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051
+      - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052
+      - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer
+  org1-orderer2:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2
+    environment:
+      - ORDERER_HOME=/tmp/hyperledger/orderer/home
+      - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger
+      - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_GENERAL_LISTENPORT=7050
+      - ORDERER_GENERAL_GENESISMETHOD=file
+      # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_LOCALMSPID=org1MSP
+      - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp
+      - ORDERER_GENERAL_TLS_ENABLED=true
+      - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem
+      - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem
+      # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_LOGLEVEL=debug
+      - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs
+      - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051
+      - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052
+      - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer
+  org1-orderer3:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3
+    environment:
+      - ORDERER_HOME=/tmp/hyperledger/orderer/home
+      - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger
+      - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_GENERAL_LISTENPORT=7050
+      - ORDERER_GENERAL_GENESISMETHOD=file
+      # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_LOCALMSPID=org1MSP
+      - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp
+      - ORDERER_GENERAL_TLS_ENABLED=true
+      - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem
+      - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem
+      # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_LOGLEVEL=debug
+      - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs
+      - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051
+      - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052
+      - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer
diff --git a/topologies/t2/containers/orderers/org2-orderers/docker-compose-org2-orderers.yml b/topologies/t2/containers/orderers/org2-orderers/docker-compose-org2-orderers.yml
new file mode 100644
index 0000000..2c90e3a
--- /dev/null
+++ b/topologies/t2/containers/orderers/org2-orderers/docker-compose-org2-orderers.yml
@@ -0,0 +1,55 @@
+services:
+  org2-orderer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-orderer1
+    environment:
+      - ORDERER_HOME=/tmp/hyperledger/orderer/home
+      - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger
+      - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org2-orderer1
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_GENERAL_LISTENPORT=7050
+      - ORDERER_GENERAL_GENESISMETHOD=file
+      # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_LOCALMSPID=org2MSP
+      - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org2/orderer1/node/msp
+      - ORDERER_GENERAL_TLS_ENABLED=true
+      - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/signcerts/cert.pem
+      - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/keystore/key.pem
+      # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_LOGLEVEL=debug
+      - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs
+      - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051
+      - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052
+      - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org2/orderer1:/tmp/hyperledger/orderer
+  org2-orderer2:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-orderer2
+    environment:
+      - ORDERER_HOME=/tmp/hyperledger/orderer/home
+      - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger
+      - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org2-orderer2
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_GENERAL_LISTENPORT=7050
+      - ORDERER_GENERAL_GENESISMETHOD=file
+      # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block
+      - ORDERER_GENERAL_LOCALMSPID=org2MSP
+      - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org2/orderer2/node/msp
+      - ORDERER_GENERAL_TLS_ENABLED=true
+      - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/signcerts/cert.pem
+      - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/keystore/key.pem
+      # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem]
+      - ORDERER_GENERAL_LOGLEVEL=debug
+      - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs
+      - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051
+      - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052
+      - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053
+      - TOPOLOGY=${CURRENT_HL_TOPOLOGY}
+    volumes:
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material
+      - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org2/orderer2:/tmp/hyperledger/orderer
\ No newline at end of file
diff --git a/topologies/t2/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t2/containers/peers/org2-peers/docker-compose-org2-peers.yml
new file mode 100644
index 0000000..c8184c4
--- /dev/null
+++ b/topologies/t2/containers/peers/org2-peers/docker-compose-org2-peers.yml
@@ -0,0 +1,50 @@
+services:
+  org2-peer1:
+    container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1
+    environment:
+      -
CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - 
CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t2/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t2/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..c99024b --- /dev/null +++ b/topologies/t2/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,75 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - 
${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer3 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer3 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer3:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer3/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer3/node-tls/msp/signcerts/cert.pem + - 
CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer3/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer3/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer3:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer3 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t2/containers/peers/peers-proxies/docker-compose-peers-proxies.yml b/topologies/t2/containers/peers/peers-proxies/docker-compose-peers-proxies.yml new file mode 100644 index 0000000..0341c78 --- /dev/null +++ b/topologies/t2/containers/peers/peers-proxies/docker-compose-peers-proxies.yml @@ -0,0 +1,9 @@ +services: + org2-peers-proxy: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peers-proxy + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/peers-proxies/nginx/org2-nginx.conf:/etc/nginx/nginx.conf + org3-peers-proxy: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peers-proxy + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/peers-proxies/nginx/org3-nginx.conf:/etc/nginx/nginx.conf diff --git a/topologies/t2/crypto-material/.gitkeep b/topologies/t2/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t2/docker-compose.yml b/topologies/t2/docker-compose.yml new file mode 100644 index 0000000..353db36 --- /dev/null +++ b/topologies/t2/docker-compose.yml @@ -0,0 +1,127 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: 
hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer3: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + orderers-proxy: + image: nginx-hl-fabric + # ports: + # - :8443 + networks: + - hl-fabric + org2-peers-proxy: + image: nginx-hl-fabric + # ports: + # - :8443 + networks: + - hl-fabric + org3-peers-proxy: + image: nginx-hl-fabric + # ports: + # - 
:8443 + networks: + - hl-fabric + org2-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org2-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric \ No newline at end of file diff --git a/topologies/t2/homefolders/.gitkeep b/topologies/t2/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t2/images/nginx/Dockerfile b/topologies/t2/images/nginx/Dockerfile new file mode 100644 index 0000000..bd4aa06 --- /dev/null +++ b/topologies/t2/images/nginx/Dockerfile @@ -0,0 +1,4 @@ +FROM nginx +RUN apt update +RUN apt install -y nginx-common +RUN apt install -y libnginx-mod-stream \ No newline at end of file diff --git a/topologies/t2/scripts/all-org-peers-commit-chaincode.sh b/topologies/t2/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..f558519 --- /dev/null +++ b/topologies/t2/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit --channelID mychannel -o ${TOPOLOGY}-orderers-proxy:8443 --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peers-proxy:8443 \ + --peerAddresses ${TOPOLOGY}-org3-peers-proxy:8443 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile 
/tmp/crypto-material/cas/orgs/tls/org1-org2-ca-cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t2/scripts/all-org-peers-execute-chaincode.sh b/topologies/t2/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..f6edb07 --- /dev/null +++ b/topologies/t2/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,15 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -o ${TOPOLOGY}-orderers-proxy:8443 -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/orgs/tls/org1-org2-ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peers-proxy:8443 \ + --peerAddresses ${TOPOLOGY}-org3-peers-proxy:8443 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git a/topologies/t2/scripts/channels-setup.sh b/topologies/t2/scripts/channels-setup.sh new file mode 100755 index 0000000..230aac8 --- /dev/null +++ b/topologies/t2/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t2/scripts/delete-state-data.sh b/topologies/t2/scripts/delete-state-data.sh new file mode 100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t2/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm 
-rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t2/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t2/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..5c5dec7 --- /dev/null +++ b/topologies/t2/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export 
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t2/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t2/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..dea4770 --- /dev/null +++ b/topologies/t2/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 
node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 --csr.hosts ${TOPOLOGY}-orderers-proxy + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 --csr.hosts ${TOPOLOGY}-orderers-proxy + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 --csr.hosts ${TOPOLOGY}-orderers-proxy + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* 
/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t2/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t2/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t2/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs "hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t2/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t2/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..ab095a5 --- /dev/null +++ b/topologies/t2/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW 
--id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t2/scripts/org2-approve-chaincode.sh b/topologies/t2/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..db790cf --- /dev/null +++ b/topologies/t2/scripts/org2-approve-chaincode.sh @@ -0,0 +1,20 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel -o ${TOPOLOGY}-orderers-proxy:8443 --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peers-proxy:8443 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/orgs/tls/org1-org2-ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t2/scripts/org2-create-and-join-channels.sh b/topologies/t2/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..5c0d3e2 --- /dev/null +++ b/topologies/t2/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,12 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-orderers-proxy:8443 --outputBlock 
/tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/orgs/tls/org1-org2-ca-cert.pem + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t2/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t2/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..3645dd1 --- /dev/null +++ b/topologies/t2/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,63 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org2/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org2-orderer1:orderer1PW@0.0.0.0:7054 +mv 
/tmp/crypto-material/orderers/org2/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org2/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org2/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org2/orderer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org2-orderer2:orderer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org2/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org2/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org2/orderer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user (use the registered user-org2 identity, not the admin credentials) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client (use the registered client-org2 identity, not the admin credentials) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/*
/tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t2/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t2/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..f590a98 --- /dev/null +++ b/topologies/t2/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,35 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 --csr.hosts ${TOPOLOGY}-org2-peers-proxy + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 --csr.hosts ${TOPOLOGY}-org2-peers-proxy + +# enroll orderer1 node-tls +export 
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org2/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org2-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-orderer1 --csr.hosts ${TOPOLOGY}-orderers-proxy + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org2/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org2-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-orderer2 --csr.hosts ${TOPOLOGY}-orderers-proxy + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + +mv /tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org2/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org2/orderer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t2/scripts/org2-install-chaincode.sh 
b/topologies/t2/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t2/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz \ No newline at end of file diff --git a/topologies/t2/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t2/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..8b9efc1 --- /dev/null +++ b/topologies/t2/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,14 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org2-orderer1 --id.secret orderer1PW 
--id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org2-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t2/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t2/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..fd70710 --- /dev/null +++ b/topologies/t2/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org2-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org2-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t2/scripts/org3-approve-chaincode.sh b/topologies/t2/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..413bc64 --- /dev/null +++ b/topologies/t2/scripts/org3-approve-chaincode.sh @@ -0,0 +1,20 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel -o 
${TOPOLOGY}-orderers-proxy:8443 --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peers-proxy:8443 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/orgs/tls/org1-org2-ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t2/scripts/org3-create-and-join-channels.sh b/topologies/t2/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..4b5ab32 --- /dev/null +++ b/topologies/t2/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,14 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer3:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t2/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t2/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..1050f89 --- /dev/null +++ b/topologies/t2/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,56 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# 
enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll peer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer3-org3:peer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer3/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer3/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user (use the user's own registered credentials, not the admin's) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client (use the client's own registered credentials, not the admin's) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t2/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t2/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..45ba7d0 --- /dev/null +++ b/topologies/t2/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 --csr.hosts ${TOPOLOGY}-org3-peers-proxy + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem 
+fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 --csr.hosts ${TOPOLOGY}-org3-peers-proxy + +# enroll peer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer3-org3:peer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer3 --csr.hosts ${TOPOLOGY}-org3-peers-proxy + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer3/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t2/scripts/org3-install-chaincode.sh b/topologies/t2/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..6241901 --- /dev/null +++ b/topologies/t2/scripts/org3-install-chaincode.sh @@ -0,0 +1,16 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang 
golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer3:7051 +peer lifecycle chaincode install mycc.tar.gz diff --git a/topologies/t2/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t2/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..6e62c25 --- /dev/null +++ b/topologies/t2/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer3-org3 --id.secret peer3PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t2/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t2/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..fd78194 --- /dev/null +++ b/topologies/t2/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export 
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer3-org3 --id.secret peer3PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t2/scripts/orgs-create-ca-certs.sh b/topologies/t2/scripts/orgs-create-ca-certs.sh new file mode 100755 index 0000000..643f553 --- /dev/null +++ b/topologies/t2/scripts/orgs-create-ca-certs.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/cas/orgs/tls +cat /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem > \ + /tmp/crypto-material/cas/orgs/tls/org1-org2-ca-cert.pem diff --git a/topologies/t2/scripts/patch-configtx.sh b/topologies/t2/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t2/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t2/scripts/setup-docker-images.sh b/topologies/t2/scripts/setup-docker-images.sh new file mode 100755 index 0000000..096ef35 --- /dev/null +++ b/topologies/t2/scripts/setup-docker-images.sh @@ -0,0 +1,2 @@ +cd ./images/nginx +docker build -t nginx-hl-fabric . 
\ No newline at end of file diff --git a/topologies/t2/setup-network.sh b/topologies/t2/setup-network.sh new file mode 100755 index 0000000..283d197 --- /dev/null +++ b/topologies/t2/setup-network.sh @@ -0,0 +1,193 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t2 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." +./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." 
+ +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# -----Setup Docker Images ----- +./scripts/setup-docker-images.sh + +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs and register with these the TLS-CA and Identities-CA users, such as admins, clients, etc...----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer3 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/peers-proxies/docker-compose-peers-proxies.yml up -d org2-peers-proxy + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/peers-proxies/docker-compose-peers-proxies.yml up -d org3-peers-proxy + +# 
----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# ----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----bring up org2 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org2-orderers/docker-compose-org2-orderers.yml up -d org2-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org2-orderers/docker-compose-org2-orderers.yml up -d org2-orderer2 + +# -----need to wait until raft leader selection is completed for the orderers +sleep 4 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/orderers-proxy/docker-compose-orderers-proxy.yml up -d orderers-proxy + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/orgs-create-ca-certs.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-and-join-channels.sh" + +# 
----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t2/teardown-network.sh b/topologies/t2/teardown-network.sh new file mode 100755 index 0000000..502210a --- /dev/null +++ b/topologies/t2/teardown-network.sh @@ -0,0 +1,35 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t2 + +# ----------------------------------------------------------------------------- +# -----remove current topology containers +# ----------------------------------------------------------------------------- +# -----the chaincode dockers throw an error as not being able to be deleted, although they are being deleted. 
ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-orderers --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-orderers --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec will throw error if no running container found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." 
+fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t3/.env b/topologies/t3/.env new file mode 100644 index 0000000..da02980 --- /dev/null +++ b/topologies/t3/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t3 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.4.6 +FABRIC_TOOLS_VERSION=2.4.6 +PEER_ORDERER_VERSION=2.4.6 \ No newline at end of file diff --git a/topologies/t3/.gitignore b/topologies/t3/.gitignore new file mode 100644 index 0000000..2728686 --- /dev/null +++ b/topologies/t3/.gitignore @@ -0,0 +1,4 @@ +crypto-material/*/** +homefolders/*/** +code.tar.gz +myccv1.tgz \ No newline at end of file diff --git a/topologies/t3/README.md b/topologies/t3/README.md new file mode 100644 index 0000000..179da8d --- /dev/null +++ b/topologies/t3/README.md @@ -0,0 +1,40 @@ +# T3: External chaincode +## Description +--- +T1 Network plus external chaincode server for each org that has peers +## Diagram +--- +![Diagram of components](../image_store/T3.png) + +## Relevant Documentation + +- https://hyperledger-fabric.readthedocs.io/en/latest/cc_service.html + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA + * Identities CA +* Org 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * External Chaincode Server + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * External Chaincode Server + * TLS CA + * Identities CA + +## Characteristics + +- World State Database Instance (LevelDB) embedded (in peer containers) +- Chaincode running as a container, external to 
peers. Chaincode installation on peers mostly involves a shim, a way to establish a connection to the external server. For each org the communication between peers and the chaincode server is done via TLS. +- Communication between all components done via TLS \ No newline at end of file diff --git a/topologies/t3/assets/Dockerfile b/topologies/t3/assets/Dockerfile new file mode 100644 index 0000000..d026483 --- /dev/null +++ b/topologies/t3/assets/Dockerfile @@ -0,0 +1,20 @@ +# This image is a microservice in golang for the asset-transfer-basic chaincode +FROM golang:1.17-alpine AS build + +RUN mkdir /tmp/hl-fabric +COPY ./chaincode/assets-transfer-basic/chaincode-go /tmp/hl-fabric +WORKDIR /tmp/hl-fabric + +# Build application +RUN go build -o assettransfer -v . + +# Production-ready image +# Pass the binary to the prod image +FROM alpine:3.11 as prod + +COPY --from=build /tmp/hl-fabric/assettransfer /app/assettransfer + +USER 1000 + +WORKDIR /app +CMD ./assettransfer \ No newline at end of file diff --git a/topologies/t3/assets/buildpacks/externalchaincode/bin/build b/topologies/t3/assets/buildpacks/externalchaincode/bin/build new file mode 100755 index 0000000..d8e65b6 --- /dev/null +++ b/topologies/t3/assets/buildpacks/externalchaincode/bin/build @@ -0,0 +1,34 @@ +#!/bin/sh + +# The bin/build script is responsible for building, compiling, or transforming the contents +# of a chaincode package into artifacts that can be used by release and run. +# +# The peer invokes build with three arguments: +# bin/build CHAINCODE_SOURCE_DIR CHAINCODE_METADATA_DIR BUILD_OUTPUT_DIR +# +# When build is invoked, CHAINCODE_SOURCE_DIR contains the chaincode source and +# CHAINCODE_METADATA_DIR contains the metadata.json file from the chaincode package installed to the peer. +# BUILD_OUTPUT_DIR is the directory where build must place artifacts needed by release and run. 
+# The build script should treat the input directories CHAINCODE_SOURCE_DIR and +# CHAINCODE_METADATA_DIR as read only, but the BUILD_OUTPUT_DIR is writeable. + +CHAINCODE_SOURCE_DIR="$1" +CHAINCODE_METADATA_DIR="$2" +BUILD_OUTPUT_DIR="$3" + +set -euo pipefail + +# external chaincodes expect a connection.json file in the chaincode package +if [ ! -f "$CHAINCODE_SOURCE_DIR/connection.json" ]; then + >&2 echo "$CHAINCODE_SOURCE_DIR/connection.json not found" + exit 1 +fi + +# simply copy the endpoint information to the specified output location +cp "$CHAINCODE_SOURCE_DIR/connection.json" "$BUILD_OUTPUT_DIR/connection.json" + +if [ -d "$CHAINCODE_SOURCE_DIR/metadata" ]; then + cp -a "$CHAINCODE_SOURCE_DIR/metadata" "$BUILD_OUTPUT_DIR/metadata" +fi + +exit 0 \ No newline at end of file diff --git a/topologies/t3/assets/buildpacks/externalchaincode/bin/detect b/topologies/t3/assets/buildpacks/externalchaincode/bin/detect new file mode 100755 index 0000000..8ba529d --- /dev/null +++ b/topologies/t3/assets/buildpacks/externalchaincode/bin/detect @@ -0,0 +1,25 @@ +#!/bin/sh + +# The bin/detect script is responsible for determining whether or not a buildpack +# should be used to build a chaincode package and launch it. +# +# The peer invokes detect with two arguments: +# bin/detect CHAINCODE_SOURCE_DIR CHAINCODE_METADATA_DIR +# +# When detect is invoked, CHAINCODE_SOURCE_DIR contains the chaincode source and +# CHAINCODE_METADATA_DIR contains the metadata.json file from the chaincode package installed to the peer. +# The CHAINCODE_SOURCE_DIR and CHAINCODE_METADATA_DIR should be treated as read only inputs. +# If the buildpack should be applied to the chaincode source package, detect must return an exit code of 0; +# any other exit code will indicate that the buildpack should not be applied. 
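+# For reference, the metadata.json read below is generated by the
+# "peer lifecycle chaincode package" command. For a package aimed at this
+# buildpack it looks roughly like the following (the label value is
+# illustrative, not taken from this repo):
+# {
+#   "path": "",
+#   "type": "external",
+#   "label": "basic_1.0"
+# }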
+ +CHAINCODE_METADATA_DIR="$2" + +set -euo pipefail + +# extract the chaincode type from metadata.json (using sed/awk, to avoid a jq +# dependency) and exit with success if the chaincode type is external +if [ "$(cat "$CHAINCODE_METADATA_DIR/metadata.json" | sed -e 's/[{}]/''/g' | awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/'type'\042/){print $(i+1)}}}' | tr -d '"')" = "external" ]; then + exit 0 +fi + +exit 1 \ No newline at end of file diff --git a/topologies/t3/assets/buildpacks/externalchaincode/bin/release b/topologies/t3/assets/buildpacks/externalchaincode/bin/release new file mode 100755 index 0000000..84519f4 --- /dev/null +++ b/topologies/t3/assets/buildpacks/externalchaincode/bin/release @@ -0,0 +1,33 @@ +#!/bin/sh + +# The bin/release script is responsible for providing chaincode metadata to the peer. +# bin/release is optional. If it is not provided, this step is skipped. +# +# The peer invokes release with two arguments: +# bin/release BUILD_OUTPUT_DIR RELEASE_OUTPUT_DIR +# +# When release is invoked, BUILD_OUTPUT_DIR contains the artifacts +# populated by the build program and should be treated as read only input. +# RELEASE_OUTPUT_DIR is the directory where release must place artifacts to be consumed by the peer. 
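+# For reference, the connection.json that bin/build copied into
+# BUILD_OUTPUT_DIR follows the shape documented for Fabric external chaincode
+# (the address below is illustrative, not taken from this repo):
+# {
+#   "address": "org2-chaincode-server:9999",
+#   "dial_timeout": "10s",
+#   "tls_required": true
+# }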
+ +set -euo pipefail + +BUILD_OUTPUT_DIR="$1" +RELEASE_OUTPUT_DIR="$2" + +# copy indexes from metadata/* to the output directory +# if [ -d "$BUILD_OUTPUT_DIR/metadata" ] ; then +# cp -a "$BUILD_OUTPUT_DIR/metadata/"* "$RELEASE_OUTPUT_DIR/" +# fi + +# external chaincodes expect artifacts to be placed under "$RELEASE_OUTPUT_DIR"/chaincode/server +if [ -f "$BUILD_OUTPUT_DIR/connection.json" ]; then + mkdir -p "$RELEASE_OUTPUT_DIR"/chaincode/server + cp "$BUILD_OUTPUT_DIR/connection.json" "$RELEASE_OUTPUT_DIR"/chaincode/server + + # if tls_required is true, copy TLS files (using the above example, the fully qualified path for these files would be "$RELEASE_OUTPUT_DIR"/chaincode/server/tls) + + exit 0 +fi + +exit 1 \ No newline at end of file diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..c6fd937 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,80 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "io/ioutil" + "log" + "os" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + // The ccid is assigned to the chaincode on install, e.g. by the "peer lifecycle chaincode install" command + ccid := os.Getenv("CHACINCODE_ID") + address := os.Getenv("CHACINCODE_ADDRESS") + + server := &shim.ChaincodeServer{ + CCID: ccid, + Address: address, + CC: assetChaincode, + TLSProps: getTLSProperties(), + } + log.Printf("Starting the chaincode server ...") + if err := server.Start(); err != nil { +
log.Panicf("error starting the chaincode server: %s", err) + } +} + +func getTLSProperties() shim.TLSProperties { + + key := getEnvOrDefault("CHAINCODE_TLS_KEY", "") + cert := getEnvOrDefault("CHAINCODE_TLS_CERT", "") + clientCACert := getEnvOrDefault("CHAINCODE_CLIENT_CA_CERT", "") + + tlsDisabled := false + var keyBytes, certBytes, clientCACertBytes []byte + var err error + + if !tlsDisabled { + keyBytes, err = ioutil.ReadFile(key) + if err != nil { + log.Panicf("error while reading the crypto file: %s", err) + } + certBytes, err = ioutil.ReadFile(cert) + if err != nil { + log.Panicf("error while reading the crypto file: %s", err) + } + } + // The client CA cert is only needed when verification of the peer's client certificate is requested + if clientCACert != "" { + clientCACertBytes, err = ioutil.ReadFile(clientCACert) + if err != nil { + log.Panicf("error while reading the crypto file: %s", err) + } + } + + return shim.TLSProperties{ + Disabled: tlsDisabled, + Key: keyBytes, + Cert: certBytes, + ClientCACerts: clientCACertBytes, + } +} + +func getEnvOrDefault(env, defaultVal string) string { + value, ok := os.LookupEnv(env) + if !ok { + value = defaultVal + } + return value +} diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. 
+package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + 
result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + 
getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 
shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + 
getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + 
getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + 
setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + 
return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + 
fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + 
return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetArgsCallCount() int { + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + return len(fake.getArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) { + 
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	fake.getArgsReturns = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	if fake.getArgsReturnsOnCall == nil {
+		fake.getArgsReturnsOnCall = make(map[int]struct {
+			result1 [][]byte
+		})
+	}
+	fake.getArgsReturnsOnCall[i] = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) {
+	fake.getArgsSliceMutex.Lock()
+	ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)]
+	fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetArgsSlice", []interface{}{})
+	fake.getArgsSliceMutex.Unlock()
+	if fake.GetArgsSliceStub != nil {
+		return fake.GetArgsSliceStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getArgsSliceReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCallCount() int {
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	return len(fake.getArgsSliceArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	fake.getArgsSliceReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	if fake.getArgsSliceReturnsOnCall == nil {
+		fake.getArgsSliceReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getArgsSliceReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBinding() ([]byte, error) {
+	fake.getBindingMutex.Lock()
+	ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)]
+	fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetBinding", []interface{}{})
+	fake.getBindingMutex.Unlock()
+	if fake.GetBindingStub != nil {
+		return fake.GetBindingStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getBindingReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetBindingCallCount() int {
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	return len(fake.getBindingArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = stub
+}
+
+func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	fake.getBindingReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	if fake.getBindingReturnsOnCall == nil {
+		fake.getBindingReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getBindingReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetChannelID() string {
+	fake.getChannelIDMutex.Lock()
+	ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)]
+	fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetChannelID", []interface{}{})
+	fake.getChannelIDMutex.Unlock()
+	if fake.GetChannelIDStub != nil {
+		return fake.GetChannelIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getChannelIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetChannelIDCallCount() int {
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	return len(fake.getChannelIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = stub
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	fake.getChannelIDReturns = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	if fake.getChannelIDReturnsOnCall == nil {
+		fake.getChannelIDReturnsOnCall = make(map[int]struct {
+			result1 string
+		})
+	}
+	fake.getChannelIDReturnsOnCall[i] = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetCreator() ([]byte, error) {
+	fake.getCreatorMutex.Lock()
+	ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)]
+	fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetCreator", []interface{}{})
+	fake.getCreatorMutex.Unlock()
+	if fake.GetCreatorStub != nil {
+		return fake.GetCreatorStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getCreatorReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetCreatorCallCount() int {
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	return len(fake.getCreatorArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = stub
+}
+
+func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	fake.getCreatorReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	if fake.getCreatorReturnsOnCall == nil {
+		fake.getCreatorReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getCreatorReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetDecorations() map[string][]byte {
+	fake.getDecorationsMutex.Lock()
+	ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)]
+	fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetDecorations", []interface{}{})
+	fake.getDecorationsMutex.Unlock()
+	if fake.GetDecorationsStub != nil {
+		return fake.GetDecorationsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getDecorationsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetDecorationsCallCount() int {
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	return len(fake.getDecorationsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = stub
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	fake.getDecorationsReturns = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	if fake.getDecorationsReturnsOnCall == nil {
+		fake.getDecorationsReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+		})
+	}
+	fake.getDecorationsReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)]
+	fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetFunctionAndParameters", []interface{}{})
+	fake.getFunctionAndParametersMutex.Unlock()
+	if fake.GetFunctionAndParametersStub != nil {
+		return fake.GetFunctionAndParametersStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getFunctionAndParametersReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int {
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	return len(fake.getFunctionAndParametersArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = stub
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	fake.getFunctionAndParametersReturns = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	if fake.getFunctionAndParametersReturnsOnCall == nil {
+		fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+		})
+	}
+	fake.getFunctionAndParametersReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) {
+	fake.getHistoryForKeyMutex.Lock()
+	ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)]
+	fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetHistoryForKey", []interface{}{arg1})
+	fake.getHistoryForKeyMutex.Unlock()
+	if fake.GetHistoryForKeyStub != nil {
+		return fake.GetHistoryForKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getHistoryForKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	return len(fake.getHistoryForKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	argsForCall := fake.getHistoryForKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	fake.getHistoryForKeyReturns = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	if fake.getHistoryForKeyReturnsOnCall == nil {
+		fake.getHistoryForKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.HistoryQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getHistoryForKeyReturnsOnCall[i] = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)]
+	fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2})
+	fake.getPrivateDataMutex.Unlock()
+	if fake.GetPrivateDataStub != nil {
+		return fake.GetPrivateDataStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCallCount() int {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	return len(fake.getPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	argsForCall := fake.getPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	fake.getPrivateDataReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	if fake.getPrivateDataReturnsOnCall == nil {
+		fake.getPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg3Copy []string
+	if arg3 != nil {
+		arg3Copy = make([]string, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)]
+	fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []string
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy})
+	fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	if fake.GetPrivateDataByPartialCompositeKeyStub != nil {
+		return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	fake.getPrivateDataByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)]
+	fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3})
+	fake.getPrivateDataByRangeMutex.Unlock()
+	if fake.GetPrivateDataByRangeStub != nil {
+		return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	return len(fake.getPrivateDataByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, string, string) {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	fake.getPrivateDataByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	if fake.getPrivateDataByRangeReturnsOnCall == nil {
+		fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataHashMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)]
+	fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2})
+	fake.getPrivateDataHashMutex.Unlock()
+	if fake.GetPrivateDataHashStub != nil {
+		return fake.GetPrivateDataHashStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataHashReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCallCount() int {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	return len(fake.getPrivateDataHashArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	argsForCall := fake.getPrivateDataHashArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	fake.getPrivateDataHashReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	if fake.getPrivateDataHashReturnsOnCall == nil {
+		fake.getPrivateDataHashReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataHashReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)]
+	fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2})
+	fake.getPrivateDataQueryResultMutex.Unlock()
+	if fake.GetPrivateDataQueryResultStub != nil {
+		return fake.GetPrivateDataQueryResultStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	return len(fake.getPrivateDataQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	argsForCall := fake.getPrivateDataQueryResultArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	fake.getPrivateDataQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	if fake.getPrivateDataQueryResultReturnsOnCall == nil {
+		fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)]
+	fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2})
+	fake.getPrivateDataValidationParameterMutex.Unlock()
+	if fake.GetPrivateDataValidationParameterStub != nil {
+		return fake.GetPrivateDataValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.getPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	fake.getPrivateDataValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	if fake.getPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getQueryResultMutex.Lock()
+	ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)]
+	fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetQueryResult", []interface{}{arg1})
+	fake.getQueryResultMutex.Unlock()
+	if fake.GetQueryResultStub != nil {
+		return fake.GetQueryResultStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetQueryResultCallCount() int {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	return len(fake.getQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	argsForCall := fake.getQueryResultArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	fake.getQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	if fake.getQueryResultReturnsOnCall == nil {
+		fake.getQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)]
+	fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 int32
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3})
+	fake.getQueryResultWithPaginationMutex.Unlock()
+	if fake.GetQueryResultWithPaginationStub != nil {
+		return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getQueryResultWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	return len(fake.getQueryResultWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	argsForCall := fake.getQueryResultWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	fake.getQueryResultWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	if fake.getQueryResultWithPaginationReturnsOnCall == nil {
+		fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getQueryResultWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) {
+	fake.getSignedProposalMutex.Lock()
+	ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)]
+	fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetSignedProposal", []interface{}{})
+	fake.getSignedProposalMutex.Unlock()
+	if fake.GetSignedProposalStub != nil {
+		return fake.GetSignedProposalStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getSignedProposalReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCallCount() int {
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	return len(fake.getSignedProposalArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = stub
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	fake.getSignedProposalReturns = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	if fake.getSignedProposalReturnsOnCall == nil {
+		fake.getSignedProposalReturnsOnCall = make(map[int]struct {
+			result1 *peer.SignedProposal
+			result2 error
+		})
+	}
+	fake.getSignedProposalReturnsOnCall[i] = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) {
+	fake.getStateMutex.Lock()
+	ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)]
+	fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetState", []interface{}{arg1})
+	fake.getStateMutex.Unlock()
+	if fake.GetStateStub != nil {
+		return fake.GetStateStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateCallCount() int {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	return len(fake.getStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateArgsForCall(i int) string {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	argsForCall := fake.getStateArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	fake.getStateReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	if fake.getStateReturnsOnCall == nil {
+		fake.getStateReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)]
+	fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.getStateByPartialCompositeKeyMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyStub != nil {
+		return fake.GetStateByPartialCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	fake.getStateByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	if fake.getStateByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)]
+	fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 []string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2Copy, arg3, arg4})
+	fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4})
fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil { + return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + fake.getStateByPartialCompositeKeyWithPaginationReturns = 
struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getStateByRangeMutex.Lock() + ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)] + fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2}) + fake.getStateByRangeMutex.Unlock() + if fake.GetStateByRangeStub != nil { + return fake.GetStateByRangeStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByRangeReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByRangeCallCount() int { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + return len(fake.getStateByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, 
string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + argsForCall := fake.getStateByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + fake.getStateByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + if fake.getStateByRangeReturnsOnCall == nil { + fake.getStateByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getStateByRangeWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)] + fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + }{arg1, arg2, arg3, arg4}) + fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4}) + 
fake.getStateByRangeWithPaginationMutex.Unlock() + if fake.GetStateByRangeWithPaginationStub != nil { + return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByRangeWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + return len(fake.getStateByRangeWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + fake.getStateByRangeWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, 
result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + if fake.getStateByRangeWithPaginationReturnsOnCall == nil { + fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) { + fake.getStateValidationParameterMutex.Lock() + ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)] + fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1}) + fake.getStateValidationParameterMutex.Unlock() + if fake.GetStateValidationParameterStub != nil { + return fake.GetStateValidationParameterStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateValidationParameterReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + return len(fake.getStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string { + 
fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + argsForCall := fake.getStateValidationParameterArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + fake.getStateValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + if fake.getStateValidationParameterReturnsOnCall == nil { + fake.getStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStringArgs() []string { + fake.getStringArgsMutex.Lock() + ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)] + fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetStringArgs", []interface{}{}) + fake.getStringArgsMutex.Unlock() + if fake.GetStringArgsStub != nil { + return fake.GetStringArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStringArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetStringArgsCallCount() int { + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + return len(fake.getStringArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + 
fake.GetStringArgsStub = stub +} + +func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + fake.getStringArgsReturns = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + if fake.getStringArgsReturnsOnCall == nil { + fake.getStringArgsReturnsOnCall = make(map[int]struct { + result1 []string + }) + } + fake.getStringArgsReturnsOnCall[i] = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) { + fake.getTransientMutex.Lock() + ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)] + fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct { + }{}) + fake.recordInvocation("GetTransient", []interface{}{}) + fake.getTransientMutex.Unlock() + if fake.GetTransientStub != nil { + return fake.GetTransientStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTransientReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTransientCallCount() int { + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + return len(fake.getTransientArgsForCall) +} + +func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = stub +} + +func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + fake.getTransientReturns = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) 
GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + if fake.getTransientReturnsOnCall == nil { + fake.getTransientReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + result2 error + }) + } + fake.getTransientReturnsOnCall[i] = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxID() string { + fake.getTxIDMutex.Lock() + ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)] + fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxID", []interface{}{}) + fake.getTxIDMutex.Unlock() + if fake.GetTxIDStub != nil { + return fake.GetTxIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getTxIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetTxIDCallCount() int { + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + return len(fake.getTxIDArgsForCall) +} + +func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := 
fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy [][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := 
fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + fake.invokeChaincodeReturns = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + if fake.invokeChaincodeReturnsOnCall == nil { + fake.invokeChaincodeReturnsOnCall = make(map[int]struct { + result1 peer.Response + }) + } + fake.invokeChaincodeReturnsOnCall[i] = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) 
PutPrivateData(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.putPrivateDataMutex.Lock() + ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)] + fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy}) + fake.putPrivateDataMutex.Unlock() + if fake.PutPrivateDataStub != nil { + return fake.PutPrivateDataStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutPrivateDataCallCount() int { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + return len(fake.putPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = stub +} + +func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + argsForCall := fake.putPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + fake.putPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + if fake.putPrivateDataReturnsOnCall == nil { + fake.putPrivateDataReturnsOnCall = make(map[int]struct 
{ + result1 error + }) + } + fake.putPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.putStateMutex.Lock() + ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)] + fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy}) + fake.putStateMutex.Unlock() + if fake.PutStateStub != nil { + return fake.PutStateStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutStateCallCount() int { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + return len(fake.putStateArgsForCall) +} + +func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = stub +} + +func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + argsForCall := fake.putStateArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) PutStateReturns(result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + fake.putStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + if fake.putStateReturnsOnCall == nil { + fake.putStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEvent(arg1 
string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setEventMutex.Lock() + ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)] + fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy}) + fake.setEventMutex.Unlock() + if fake.SetEventStub != nil { + return fake.SetEventStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setEventReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetEventCallCount() int { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + return len(fake.setEventArgsForCall) +} + +func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = stub +} + +func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + argsForCall := fake.setEventArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetEventReturns(result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + fake.setEventReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + if fake.setEventReturnsOnCall == nil { + fake.setEventReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setEventReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + 
copy(arg3Copy, arg3) + } + fake.setPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)] + fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy}) + fake.setPrivateDataValidationParameterMutex.Unlock() + if fake.SetPrivateDataValidationParameterStub != nil { + return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setPrivateDataValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + return len(fake.setPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + 
fake.setPrivateDataValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + if fake.setPrivateDataValidationParameterReturnsOnCall == nil { + fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setStateValidationParameterMutex.Lock() + ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)] + fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy}) + fake.setStateValidationParameterMutex.Unlock() + if fake.SetStateValidationParameterStub != nil { + return fake.SetStateValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setStateValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + return len(fake.setStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + 
fake.SetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + argsForCall := fake.setStateValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + fake.setStateValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + if fake.setStateValidationParameterReturnsOnCall == nil { + fake.setStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setStateValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) { + fake.splitCompositeKeyMutex.Lock() + ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)] + fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("SplitCompositeKey", []interface{}{arg1}) + fake.splitCompositeKeyMutex.Unlock() + if fake.SplitCompositeKeyStub != nil { + return fake.SplitCompositeKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.splitCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int { + fake.splitCompositeKeyMutex.RLock() + 
defer fake.splitCompositeKeyMutex.RUnlock() + return len(fake.splitCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + argsForCall := fake.splitCompositeKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + fake.splitCompositeKeyReturns = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + if fake.splitCompositeKeyReturnsOnCall == nil { + fake.splitCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + result3 error + }) + } + fake.splitCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + 
fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + fake.getStringArgsMutex.RLock() + defer 
fake.getStringArgsMutex.RUnlock() + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go new file mode 100644 index 0000000..27e3034 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go @@ -0,0 +1,232 @@ +// Code generated by counterfeiter. DO NOT EDIT. 
+package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" +) + +type StateQueryIterator struct { + CloseStub func() error + closeMutex sync.RWMutex + closeArgsForCall []struct { + } + closeReturns struct { + result1 error + } + closeReturnsOnCall map[int]struct { + result1 error + } + HasNextStub func() bool + hasNextMutex sync.RWMutex + hasNextArgsForCall []struct { + } + hasNextReturns struct { + result1 bool + } + hasNextReturnsOnCall map[int]struct { + result1 bool + } + NextStub func() (*queryresult.KV, error) + nextMutex sync.RWMutex + nextArgsForCall []struct { + } + nextReturns struct { + result1 *queryresult.KV + result2 error + } + nextReturnsOnCall map[int]struct { + result1 *queryresult.KV + result2 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *StateQueryIterator) Close() error { + fake.closeMutex.Lock() + ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)] + fake.closeArgsForCall = append(fake.closeArgsForCall, struct { + }{}) + fake.recordInvocation("Close", []interface{}{}) + fake.closeMutex.Unlock() + if fake.CloseStub != nil { + return fake.CloseStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.closeReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) CloseCallCount() int { + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + return len(fake.closeArgsForCall) +} + +func (fake *StateQueryIterator) CloseCalls(stub func() error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = stub +} + +func (fake *StateQueryIterator) CloseReturns(result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + fake.closeReturns = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = 
nil + if fake.closeReturnsOnCall == nil { + fake.closeReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.closeReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) HasNext() bool { + fake.hasNextMutex.Lock() + ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)] + fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct { + }{}) + fake.recordInvocation("HasNext", []interface{}{}) + fake.hasNextMutex.Unlock() + if fake.HasNextStub != nil { + return fake.HasNextStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.hasNextReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) HasNextCallCount() int { + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + return len(fake.hasNextArgsForCall) +} + +func (fake *StateQueryIterator) HasNextCalls(stub func() bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = stub +} + +func (fake *StateQueryIterator) HasNextReturns(result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + fake.hasNextReturns = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + if fake.hasNextReturnsOnCall == nil { + fake.hasNextReturnsOnCall = make(map[int]struct { + result1 bool + }) + } + fake.hasNextReturnsOnCall[i] = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) Next() (*queryresult.KV, error) { + fake.nextMutex.Lock() + ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)] + fake.nextArgsForCall = append(fake.nextArgsForCall, struct { + }{}) + fake.recordInvocation("Next", []interface{}{}) + fake.nextMutex.Unlock() + if fake.NextStub != nil { + return fake.NextStub() + } + if specificReturn { + return ret.result1, 
ret.result2 + } + fakeReturns := fake.nextReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *StateQueryIterator) NextCallCount() int { + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + return len(fake.nextArgsForCall) +} + +func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = stub +} + +func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + fake.nextReturns = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + if fake.nextReturnsOnCall == nil { + fake.nextReturnsOnCall = make(map[int]struct { + result1 *queryresult.KV + result2 error + }) + } + fake.nextReturnsOnCall[i] = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} 
diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go new file mode 100644 index 0000000..eea37db --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go @@ -0,0 +1,164 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-chaincode-go/pkg/cid" + "github.com/hyperledger/fabric-chaincode-go/shim" +) + +type TransactionContext struct { + GetClientIdentityStub func() cid.ClientIdentity + getClientIdentityMutex sync.RWMutex + getClientIdentityArgsForCall []struct { + } + getClientIdentityReturns struct { + result1 cid.ClientIdentity + } + getClientIdentityReturnsOnCall map[int]struct { + result1 cid.ClientIdentity + } + GetStubStub func() shim.ChaincodeStubInterface + getStubMutex sync.RWMutex + getStubArgsForCall []struct { + } + getStubReturns struct { + result1 shim.ChaincodeStubInterface + } + getStubReturnsOnCall map[int]struct { + result1 shim.ChaincodeStubInterface + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity { + fake.getClientIdentityMutex.Lock() + ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)] + fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct { + }{}) + fake.recordInvocation("GetClientIdentity", []interface{}{}) + fake.getClientIdentityMutex.Unlock() + if fake.GetClientIdentityStub != nil { + return fake.GetClientIdentityStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getClientIdentityReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetClientIdentityCallCount() int { + fake.getClientIdentityMutex.RLock() + defer 
fake.getClientIdentityMutex.RUnlock() + return len(fake.getClientIdentityArgsForCall) +} + +func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = stub +} + +func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + fake.getClientIdentityReturns = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + if fake.getClientIdentityReturnsOnCall == nil { + fake.getClientIdentityReturnsOnCall = make(map[int]struct { + result1 cid.ClientIdentity + }) + } + fake.getClientIdentityReturnsOnCall[i] = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface { + fake.getStubMutex.Lock() + ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)] + fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct { + }{}) + fake.recordInvocation("GetStub", []interface{}{}) + fake.getStubMutex.Unlock() + if fake.GetStubStub != nil { + return fake.GetStubStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStubReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetStubCallCount() int { + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + return len(fake.getStubArgsForCall) +} + +func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = stub +} + +func (fake *TransactionContext) GetStubReturns(result1 
shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + fake.getStubReturns = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + if fake.getStubReturnsOnCall == nil { + fake.getStubReturnsOnCall = make(map[int]struct { + result1 shim.ChaincodeStubInterface + }) + } + fake.getStubReturnsOnCall[i] = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *TransactionContext) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go new file mode 100644 index 0000000..71e8dd8 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go @@ -0,0 +1,185 @@ +package chaincode + +import ( + "encoding/json" + "fmt" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" +) + +// 
SmartContract provides functions for managing an Asset +type SmartContract struct { + contractapi.Contract +} + +// Asset describes basic details of what makes up a simple asset +type Asset struct { + ID string `json:"ID"` + Color string `json:"color"` + Size int `json:"size"` + Owner string `json:"owner"` + AppraisedValue int `json:"appraisedValue"` +} + +// InitLedger adds a base set of assets to the ledger +func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error { + assets := []Asset{ + {ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300}, + {ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400}, + {ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500}, + {ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600}, + {ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700}, + {ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800}, + } + + for _, asset := range assets { + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + err = ctx.GetStub().PutState(asset.ID, assetJSON) + if err != nil { + return fmt.Errorf("failed to put to world state. %v", err) + } + } + + return nil +} + +// CreateAsset issues a new asset to the world state with given details. +func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if exists { + return fmt.Errorf("the asset %s already exists", id) + } + + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// ReadAsset returns the asset stored in the world state with given id. 
+func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return nil, fmt.Errorf("failed to read from world state: %v", err) + } + if assetJSON == nil { + return nil, fmt.Errorf("the asset %s does not exist", id) + } + + var asset Asset + err = json.Unmarshal(assetJSON, &asset) + if err != nil { + return nil, err + } + + return &asset, nil +} + +// UpdateAsset updates an existing asset in the world state with provided parameters. +func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + // overwriting original asset with new asset + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// DeleteAsset deletes a given asset from the world state. +func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + return ctx.GetStub().DelState(id) +} + +// AssetExists returns true when asset with given ID exists in world state +func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return false, fmt.Errorf("failed to read from world state: %v", err) + } + + return assetJSON != nil, nil +} + +// TransferAsset updates the owner field of asset with given id in world state. 
+func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error { + asset, err := s.ReadAsset(ctx, id) + if err != nil { + return err + } + + asset.Owner = newOwner + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// GetAllAssets returns all assets found in world state +func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) { + // range query with empty string for startKey and endKey does an + // open-ended query of all assets in the chaincode namespace. + resultsIterator, err := ctx.GetStub().GetStateByRange("", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o 
mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset 
:= &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + 
chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + chaincodeStub.GetStateByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + 
require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod 
h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= 
+github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= +github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go 
v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= +github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod 
h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod 
h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference 
v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod 
h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= 
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/config/.gitignore b/topologies/t3/assets/chaincode/assets-transfer-basic/config/.gitignore new file mode 100644 index 0000000..21158f8 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/config/.gitignore @@ -0,0 +1 @@ +connection.json \ No newline at end of file diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/connection.json.template b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/connection.json.template new file mode 100644 index 0000000..ec14f3b --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/connection.json.template @@ -0,0 +1,9 @@ +{ + "address": "t3-org2-chaincode:7075", + "dial_timeout": "10s", + "tls_required": true, + "client_auth_required": false, + "client_key": "-----BEGIN EC PRIVATE KEY----- ... -----END EC PRIVATE KEY-----", + "client_cert": "-----BEGIN CERTIFICATE----- ... 
-----END CERTIFICATE-----", + "root_cert": "ROOT_CERT_VALUE" +} \ No newline at end of file diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/metadata.json b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/metadata.json new file mode 100644 index 0000000..db93b65 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/metadata.json @@ -0,0 +1 @@ +{"path":"","type":"external","label":"myccv1"} \ No newline at end of file diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/peers.pem b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/peers.pem new file mode 100644 index 0000000..3034d00 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org2/peers.pem @@ -0,0 +1 @@ +-----BEGIN CERTIFICATE-----\\nMIICJDCCAcqgAwIBAgIUfyeV+8QiTZRK/fjcVVckjZuHPLswCgYIKoZIzj0EAwIw\\nZjELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMRQwEgYDVQQK\\nEwtIeXBlcmxlZGdlcjEPMA0GA1UECxMGRmFicmljMRcwFQYDVQQDEw50My1vcmcy\\nLWNhLXRsczAeFw0yMjA5MDYyMTIwMDBaFw0zNzA5MDIyMTIwMDBaMGYxCzAJBgNV\\nBAYTAlVTMRcwFQYDVQQIEw5Ob3J0aCBDYXJvbGluYTEUMBIGA1UEChMLSHlwZXJs\\nZWRnZXIxDzANBgNVBAsTBkZhYnJpYzEXMBUGA1UEAxMOdDMtb3JnMi1jYS10bHMw\\nWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQvEtS3LH/LQ5Rz444uWqxZ/Koc/FJ7\\n2W++qSttyot0d6n8gftHSZ2EDuIXVqCIKYzTBwZg/wfcGhkKTZyLvRO/o1YwVDAO\\nBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBATAdBgNVHQ4EFgQUXk9B\\ncTDRl39Xr0zl/zrmc/Tx0YcwDwYDVR0RBAgwBocEAAAAADAKBggqhkjOPQQDAgNI\\nADBFAiEA3HUpVs/ae8mqZm9j97zm2SV/G/1fWXch1vERpVcSY8QCIAE6VhDEjDvh\\nMmASlZvgmC2N83+zpDfPH/BMvCQXJRL8\\n-----END CERTIFICATE----- diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/connection.json.template b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/connection.json.template new file mode 100644 index 0000000..39edfdd --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/connection.json.template @@ 
-0,0 +1,9 @@ +{ + "address": "t3-org3-chaincode:7075", + "dial_timeout": "10s", + "tls_required": true, + "client_auth_required": false, + "client_key": "-----BEGIN EC PRIVATE KEY----- ... -----END EC PRIVATE KEY-----", + "client_cert": "-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----", + "root_cert": "ROOT_CERT_VALUE" +} \ No newline at end of file diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/metadata.json b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/metadata.json new file mode 100644 index 0000000..db93b65 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/metadata.json @@ -0,0 +1 @@ +{"path":"","type":"external","label":"myccv1"} \ No newline at end of file diff --git a/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/peers.pem b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/peers.pem new file mode 100644 index 0000000..8bb7086 --- /dev/null +++ b/topologies/t3/assets/chaincode/assets-transfer-basic/config/org3/peers.pem @@ -0,0 +1 @@ +-----BEGIN CERTIFICATE-----\\nMIICJDCCAcqgAwIBAgIUExcdHfixgyLOWUYxR54M/Bin6oswCgYIKoZIzj0EAwIw\\nZjELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMRQwEgYDVQQK\\nEwtIeXBlcmxlZGdlcjEPMA0GA1UECxMGRmFicmljMRcwFQYDVQQDEw50My1vcmcz\\nLWNhLXRsczAeFw0yMjA5MDYyMTIwMDBaFw0zNzA5MDIyMTIwMDBaMGYxCzAJBgNV\\nBAYTAlVTMRcwFQYDVQQIEw5Ob3J0aCBDYXJvbGluYTEUMBIGA1UEChMLSHlwZXJs\\nZWRnZXIxDzANBgNVBAsTBkZhYnJpYzEXMBUGA1UEAxMOdDMtb3JnMy1jYS10bHMw\\nWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASrnzdmkcGZjBPxfp+Ug1vk5zcy5WS4\\nIszgZ0ucqXlqJgKacYb+d/4Q01ORW0mecRf92XnlHvUm4j71staHoLMwo1YwVDAO\\nBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBATAdBgNVHQ4EFgQUpJKM\\nmndkrXFWy0czhkrkrQTr3rMwDwYDVR0RBAgwBocEAAAAADAKBggqhkjOPQQDAgNI\\nADBFAiEAr0N1u6LZeHKnh78QLUIBdtc/jW/lemIwmY07hO9fDZsCIAkD3/yc/gpH\\nUpVNflW2pOVOyBQcpKb5zDbb0lvCZONf\\n-----END CERTIFICATE----- diff --git a/topologies/t3/config/config.yaml b/topologies/t3/config/config.yaml new 
file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t3/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t3/config/configtx.yaml b/topologies/t3/config/configtx.yaml new file mode 100644 index 0000000..1264040 --- /dev/null +++ b/topologies/t3/config/configtx.yaml @@ -0,0 +1,428 @@ +# Copyright IBM Corp. All Rights Reserved. +# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. 
It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. 
Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. 
For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. 
+ # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's "GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: /Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + 
#---Miscellaneous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities +################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer: 
&OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. + AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. 
+ PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. +# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/ + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# +################################################################################ +Profiles: + OrgsOrdererGenesis: + <<: 
*ChannelDefaults + Orderer: + <<: *OrdererDefaults + Organizations: + - *org1 + Capabilities: + <<: *OrdererCapabilities + Consortiums: + MainConsortium: + Organizations: + - *org2 + - *org3 + + OrgsChannel: + Consortium: MainConsortium + <<: *ChannelDefaults + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 + Capabilities: + <<: *ApplicationCapabilities + diff --git a/topologies/t3/config/core.yaml b/topologies/t3/config/core.yaml new file mode 100644 index 0000000..61af3e2 --- /dev/null +++ b/topologies/t3/config/core.yaml @@ -0,0 +1,751 @@ +# Copyright IBM Corp. All Rights Reserved. +# +# SPDX-License-Identifier: Apache-2.0 +# + +############################################################################### +# +# Peer section +# +############################################################################### +peer: + + # The peer id provides a name for this peer instance and is used when + # naming docker resources. + id: jdoe + + # The networkId allows for logical separation of networks and is used when + # naming docker resources. + networkId: dev + + # The Address at local network interface this Peer will listen on. + # By default, it will listen on all network interfaces + listenAddress: 0.0.0.0:7051 + + # The endpoint this peer uses to listen for inbound chaincode connections. + # If this is commented-out, the listen address is selected to be + # the peer's address (see below) with port 7052 + # chaincodeListenAddress: 0.0.0.0:7052 + + # The endpoint the chaincode for this peer uses to connect to the peer. + # If this is not specified, the chaincodeListenAddress address is selected. + # And if chaincodeListenAddress is not specified, address is selected from + # peer address (see below). If specified peer address is invalid then it + # will fallback to the auto detected IP (local IP) regardless of the peer + # addressAutoDetect value. 
+ # chaincodeAddress: 0.0.0.0:7052 + + # When used as peer config, this represents the endpoint to other peers + # in the same organization. For peers in other organizations, see + # gossip.externalEndpoint for more info. + # When used as CLI config, this means the peer's endpoint to interact with + address: 0.0.0.0:7051 + + # Whether the Peer should programmatically determine its address + # This case is useful for docker containers. + # When set to true, will override peer address. + addressAutoDetect: false + + # Keepalive settings for peer server and clients + keepalive: + # Interval is the duration after which if the server does not see + # any activity from the client it pings the client to see if it's alive + interval: 7200s + # Timeout is the duration the server waits for a response + # from the client after sending a ping before closing the connection + timeout: 20s + # MinInterval is the minimum permitted time between client pings. + # If clients send pings more frequently, the peer server will + # disconnect them + minInterval: 60s + # Client keepalive settings for communicating with other peer nodes + client: + # Interval is the time between pings to peer nodes. This must be + # greater than or equal to the minInterval specified by peer + # nodes + interval: 60s + # Timeout is the duration the client waits for a response from + # peer nodes before closing the connection + timeout: 20s + # DeliveryClient keepalive settings for communication with ordering + # nodes. + deliveryClient: + # Interval is the time between pings to ordering nodes. This must be + # greater than or equal to the minInterval specified by ordering + # nodes. + interval: 60s + # Timeout is the duration the client waits for a response from + # ordering nodes before closing the connection + timeout: 20s + + + # Gossip related configuration + gossip: + # Bootstrap set to initialize gossip with. + # This is a list of other peers that this peer reaches out to at startup. 
+ # Important: The endpoints here have to be endpoints of peers in the same + # organization, because the peer would refuse connecting to these endpoints + # unless they are in the same organization as the peer. + bootstrap: 127.0.0.1:7051 + + # NOTE: orgLeader and useLeaderElection parameters are mutually exclusive. + # Setting both to true would result in the termination of the peer + # since this is an undefined state. If the peers are configured with + # useLeaderElection=false, make sure there is at least one peer in the + # organization whose orgLeader is set to true. + + # Defines whether the peer will initialize the dynamic algorithm for + # "leader" selection, where the leader is the peer that establishes + # a connection with the ordering service and uses the delivery protocol + # to pull ledger blocks from the ordering service. + useLeaderElection: false + # Statically defines the peer to be an organization "leader", + # meaning that the current peer will maintain a connection + # with the ordering service and disseminate blocks across peers in + # its own organization. Multiple peers or all peers in an organization + # may be configured as org leaders, so that they all pull + # blocks directly from the ordering service. + orgLeader: true + + # Interval for membershipTracker polling + membershipTrackerInterval: 5s + + # Overrides the endpoint that the peer publishes to peers + # in its organization. 
For peers in foreign organizations + # see 'externalEndpoint' + endpoint: + # Maximum count of blocks stored in memory + maxBlockCountToStore: 10 + # Max time between consecutive message pushes(unit: millisecond) + maxPropagationBurstLatency: 10ms + # Max number of messages stored until a push is triggered to remote peers + maxPropagationBurstSize: 10 + # Number of times a message is pushed to remote peers + propagateIterations: 1 + # Number of peers selected to push messages to + propagatePeerNum: 3 + # Determines frequency of pull phases(unit: second) + # Must be greater than digestWaitTime + responseWaitTime + pullInterval: 4s + # Number of peers to pull from + pullPeerNum: 3 + # Determines frequency of pulling state info messages from peers(unit: second) + requestStateInfoInterval: 4s + # Determines frequency of pushing state info messages to peers(unit: second) + publishStateInfoInterval: 4s + # Maximum time a stateInfo message is kept until expired + stateInfoRetentionInterval: + # Time from startup certificates are included in Alive messages(unit: second) + publishCertPeriod: 10s + # Should we skip verifying block messages or not (currently not in use) + skipBlockVerification: false + # Dial timeout(unit: second) + dialTimeout: 3s + # Connection timeout(unit: second) + connTimeout: 2s + # Buffer size of received messages + recvBuffSize: 20 + # Buffer size of sending messages + sendBuffSize: 200 + # Time to wait before pull engine processes incoming digests (unit: second) + # Should be slightly smaller than requestWaitTime + digestWaitTime: 1s + # Time to wait before pull engine removes incoming nonce (unit: milliseconds) + # Should be slightly bigger than digestWaitTime + requestWaitTime: 1500ms + # Time to wait before pull engine ends pull (unit: second) + responseWaitTime: 2s + # Alive check interval(unit: second) + aliveTimeInterval: 5s + # Alive expiration timeout(unit: second) + aliveExpirationTimeout: 25s + # Reconnect interval(unit: second) + 
reconnectInterval: 25s + # Max number of attempts to connect to a peer + maxConnectionAttempts: 120 + # Message expiration factor for alive messages + msgExpirationFactor: 20 + # This is an endpoint that is published to peers outside of the organization. + # If this isn't set, the peer will not be known to other organizations. + externalEndpoint: + # Leader election service configuration + election: + # Longest time peer waits for stable membership during leader election startup (unit: second) + startupGracePeriod: 15s + # Interval gossip membership samples to check its stability (unit: second) + membershipSampleInterval: 1s + # Time passes since last declaration message before peer decides to perform leader election (unit: second) + leaderAliveThreshold: 10s + # Time between peer sends propose message and declares itself as a leader (sends declaration message) (unit: second) + leaderElectionDuration: 5s + + pvtData: + # pullRetryThreshold determines the maximum duration of time private data corresponding for a given block + # would be attempted to be pulled from peers until the block would be committed without the private data + pullRetryThreshold: 60s + # As private data enters the transient store, it is associated with the peer's ledger's height at that time. + # transientstoreMaxBlockRetention defines the maximum difference between the current ledger's height upon commit, + # and the private data residing inside the transient store that is guaranteed not to be purged. + # Private data is purged from the transient store when blocks with sequences that are multiples + # of transientstoreMaxBlockRetention are committed. + transientstoreMaxBlockRetention: 1000 + # pushAckTimeout is the maximum time to wait for an acknowledgement from each peer + # at private data push at endorsement time. 
+ pushAckTimeout: 3s + # Block to live pulling margin, used as a buffer + # to prevent the peer from trying to pull private data + # from peers when it is soon to be purged in the next N blocks. + # This helps a newly joined peer catch up to the current + # blockchain height quicker. + btlPullMargin: 10 + # The process of reconciliation is done in an endless loop; in each iteration the reconciler tries to + # pull from the other peers the most recent missing blocks with a maximum batch size limitation. + # reconcileBatchSize determines the maximum batch size of missing private data that will be reconciled in a + # single iteration. + reconcileBatchSize: 10 + # reconcileSleepInterval determines the time the reconciler sleeps from the end of an iteration until the beginning + # of the next reconciliation iteration. + reconcileSleepInterval: 1m + # reconciliationEnabled is a flag that indicates whether private data reconciliation is enabled or not. + reconciliationEnabled: true + # skipPullingInvalidTransactionsDuringCommit is a flag that indicates whether pulling of an invalid + # transaction's private data from other peers needs to be skipped at commit time and pulled + # only through the reconciler. + skipPullingInvalidTransactionsDuringCommit: false + # implicitCollectionDisseminationPolicy specifies the dissemination policy for the peer's own implicit collection. + # When a peer endorses a proposal that writes to its own implicit collection, the values below override the default values + # for disseminating private data. + # Note that it is applicable to all channels the peer has joined. The implication is that requiredPeerCount has to + # be smaller than the number of peers in the channel that has the lowest number of peers from the organization. + implicitCollectionDisseminationPolicy: + # requiredPeerCount defines the minimum number of eligible peers to which the peer must successfully + # disseminate private data for its own implicit collection during endorsement. Default value is 0. 
+ requiredPeerCount: 0 + # maxPeerCount defines the maximum number of eligible peers to which the peer will attempt to + # disseminate private data for its own implicit collection during endorsement. Default value is 1. + maxPeerCount: 1 + + # Gossip state transfer related configuration + state: + # indicates whenever state transfer is enabled or not + # default value is true, i.e. state transfer is active + # and takes care to sync up missing blocks allowing + # lagging peer to catch up to speed with rest network. + # Keep in mind that when peer.gossip.useLeaderElection is true + # and there are several peers in the organization, + # or peer.gossip.useLeaderElection is false alongside with + # peer.gossip.orgleader being false, the peer's ledger may lag behind + # the rest of the peers and will never catch up due to state transfer + # being disabled. + enabled: false + # checkInterval interval to check whether peer is lagging behind enough to + # request blocks via state transfer from another peer. + checkInterval: 10s + # responseTimeout amount of time to wait for state transfer response from + # other peers + responseTimeout: 3s + # batchSize the number of blocks to request via state transfer from another peer + batchSize: 10 + # blockBufferSize reflects the size of the re-ordering buffer + # which captures blocks and takes care to deliver them in order + # down to the ledger layer. The actual buffer size is bounded between + # 0 and 2*blockBufferSize, each channel maintains its own buffer + blockBufferSize: 20 + # maxRetries maximum number of re-tries to ask + # for single state transfer request + maxRetries: 3 + + # TLS Settings + tls: + # Require server-side TLS + enabled: false + # Require client certificates / mutual TLS. + # Note that clients that are not configured to use a certificate will + # fail to connect to the peer. 
+ clientAuthRequired: false + # X.509 certificate used for TLS server + cert: + file: tls/server.crt + # Private key used for TLS server (and client if clientAuthEnabled + # is set to true + key: + file: tls/server.key + # Trusted root certificate chain for tls.cert + rootcert: + file: tls/ca.crt + # Set of root certificate authorities used to verify client certificates + clientRootCAs: + files: + - tls/ca.crt + # Private key used for TLS when making client connections. If + # not set, peer.tls.key.file will be used instead + clientKey: + file: + # X.509 certificate used for TLS when making client connections. + # If not set, peer.tls.cert.file will be used instead + clientCert: + file: + + # Authentication contains configuration parameters related to authenticating + # client messages + authentication: + # the acceptable difference between the current server time and the + # client's time as specified in a client request message + timewindow: 15m + + # Path on the file system where peer will store data (eg ledger). This + # location must be access control protected to prevent unintended + # modification that might corrupt the peer operations. + fileSystemPath: /var/hyperledger/production + + # BCCSP (Blockchain crypto provider): Select which crypto implementation or + # library to use + BCCSP: + Default: SW + # Settings for the SW crypto provider (i.e. when DEFAULT: SW) + SW: + # TODO: The default Hash and Security level needs refactoring to be + # fully configurable. Changing these defaults requires coordination + # SHA2 is hardcoded in several places, not only BCCSP + Hash: SHA2 + Security: 256 + # Location of Key Store + FileKeyStore: + # If "", defaults to 'mspConfigPath'/keystore + KeyStore: + # Settings for the PKCS#11 crypto provider (i.e. 
when DEFAULT: PKCS11) + PKCS11: + # Location of the PKCS11 module library + Library: + # Token Label + Label: + # User PIN + Pin: + Hash: + Security: + + # Path on the file system where peer will find MSP local configurations + mspConfigPath: msp + + # Identifier of the local MSP + # ----!!!!IMPORTANT!!!-!!!IMPORTANT!!!-!!!IMPORTANT!!!!---- + # Deployers need to change the value of the localMspId string. + # In particular, the name of the local MSP ID of a peer needs + # to match the name of one of the MSPs in each of the channels + # that this peer is a member of. Otherwise this peer's messages + # will not be identified as valid by other nodes. + localMspId: SampleOrg + + # CLI common client config options + client: + # connection timeout + connTimeout: 3s + + # Delivery service related config + deliveryclient: + # It sets the total time the delivery service may spend in reconnection + # attempts until its retry logic gives up and returns an error + reconnectTotalTimeThreshold: 3600s + + # It sets the delivery service <-> ordering service node connection timeout + connTimeout: 3s + + # It sets the delivery service maximal delay between consecutive retries + reConnectBackoffThreshold: 3600s + + # A list of orderer endpoint addresses which should be overridden + # when found in channel configurations. + addressOverrides: + # - from: + # to: + # caCertsFile: + # - from: + # to: + # caCertsFile: + + # Type for the local MSP - by default it's of type bccsp + localMspType: bccsp + + # Used with Go profiling tools only in a non-production environment. 
In + # production, it should be disabled (eg enabled: false) + profile: + enabled: false + listenAddress: 0.0.0.0:6060 + + # Handlers defines custom handlers that can filter and mutate + # objects passing within the peer, such as: + # Auth filter - reject or forward proposals from clients + # Decorators - append or mutate the chaincode input passed to the chaincode + # Endorsers - Custom signing over proposal response payload and its mutation + # Valid handler definition contains: + # - A name which is a factory method name defined in + # core/handlers/library/library.go for statically compiled handlers + # - library path to shared object binary for pluggable filters + # Auth filters and decorators are chained and executed in the order that + # they are defined. For example: + # authFilters: + # - + # name: FilterOne + # library: /opt/lib/filter.so + # - + # name: FilterTwo + # decorators: + # - + # name: DecoratorOne + # - + # name: DecoratorTwo + # library: /opt/lib/decorator.so + # Endorsers are configured as a map that its keys are the endorsement system chaincodes that are being overridden. + # Below is an example that overrides the default ESCC and uses an endorsement plugin that has the same functionality + # as the default ESCC. + # If the 'library' property is missing, the name is used as the constructor method in the builtin library similar + # to auth filters and decorators. + # endorsers: + # escc: + # name: DefaultESCC + # library: /etc/hyperledger/fabric/plugin/escc.so + handlers: + authFilters: + - + name: DefaultAuth + - + name: ExpirationCheck # This filter checks identity x509 certificate expiration + decorators: + - + name: DefaultDecorator + endorsers: + escc: + name: DefaultEndorsement + library: + validators: + vscc: + name: DefaultValidation + library: + + # library: /etc/hyperledger/fabric/plugin/escc.so + # Number of goroutines that will execute transaction validation in parallel. 
+ # By default, the peer chooses the number of CPUs on the machine. Set this + # variable to override that choice. + # NOTE: overriding this value might negatively influence the performance of + # the peer so please change this value only if you know what you're doing + validatorPoolSize: + + # The discovery service is used by clients to query information about peers, + # such as - which peers have joined a certain channel, what is the latest + # channel config, and most importantly - given a chaincode and a channel, + # what possible sets of peers satisfy the endorsement policy. + discovery: + enabled: true + # Whether the authentication cache is enabled or not. + authCacheEnabled: true + # The maximum size of the cache, after which a purge takes place + authCacheMaxSize: 1000 + # The proportion (0 to 1) of entries that remain in the cache after the cache is purged due to overpopulation + authCachePurgeRetentionRatio: 0.75 + # Whether to allow non-admins to perform non channel scoped queries. + # When this is false, it means that only peer admins can perform non channel scoped queries. + orgMembersAllowedAccess: false + + # Limits is used to configure some internal resource limits. + limits: + # Concurrency limits the number of concurrently running requests to a service on each peer. + # Currently this option is only applied to endorser service and deliver service. + # When the property is missing or the value is 0, the concurrency limit is disabled for the service. + concurrency: + # endorserService limits concurrent requests to endorser service that handles chaincode deployment, query and invocation, + # including both user chaincodes and system chaincodes. + endorserService: 2500 + # deliverService limits concurrent event listeners registered to deliver service for blocks and transaction events. 
+ deliverService: 2500 + +############################################################################### +# +# VM section +# +############################################################################### +vm: + + # Endpoint of the vm management system. For docker can be one of the following in general + # unix:///var/run/docker.sock + # http://localhost:2375 + # https://localhost:2376 + endpoint: unix:///var/run/docker.sock + + # settings for docker vms + docker: + tls: + enabled: false + ca: + file: docker/ca.crt + cert: + file: docker/tls.crt + key: + file: docker/tls.key + + # Enables/disables the standard out/err from chaincode containers for + # debugging purposes + attachStdout: false + + # Parameters on creating docker container. + # Container may be efficiently created using ipam & dns-server for cluster + # NetworkMode - sets the networking mode for the container. Supported + # standard values are: `host`(default),`bridge`,`ipvlan`,`none`. + # Dns - a list of DNS servers for the container to use. + # Note: `Privileged` `Binds` `Links` and `PortBindings` properties of + # Docker Host Config are not supported and will not be used if set. + # LogConfig - sets the logging driver (Type) and related options + # (Config) for Docker. For more info, + # https://docs.docker.com/engine/admin/logging/overview/ + # Note: Set LogConfig using Environment Variables is not supported. + hostConfig: + NetworkMode: host + Dns: + # - 192.168.0.1 + LogConfig: + Type: json-file + Config: + max-size: "50m" + max-file: "5" + Memory: 2147483648 + +############################################################################### +# +# Chaincode section +# +############################################################################### +chaincode: + + # The id is used by the Chaincode stub to register the executing Chaincode + # ID with the Peer and is generally supplied through ENV variables + # the `path` form of ID is provided when installing the chaincode. 
+ # The `name` is used for all other requests and can be any string. + id: + path: + name: + + # Generic builder environment, suitable for most chaincode types + builder: $(DOCKER_NS)/fabric-ccenv:$(TWO_DIGIT_VERSION) + + # Enables/disables force pulling of the base docker images (listed below) + # during user chaincode instantiation. + # Useful when using moving image tags (such as :latest) + pull: false + + golang: + # golang will never need more than baseos + runtime: $(DOCKER_NS)/fabric-baseos:$(TWO_DIGIT_VERSION) + + # whether or not golang chaincode should be linked dynamically + dynamicLink: false + + java: + # This is an image based on java:openjdk-8 with addition compiler + # tools added for java shim layer packaging. + # This image is packed with shim layer libraries that are necessary + # for Java chaincode runtime. + runtime: $(DOCKER_NS)/fabric-javaenv:$(TWO_DIGIT_VERSION) + + node: + # This is an image based on node:$(NODE_VER)-alpine + runtime: $(DOCKER_NS)/fabric-nodeenv:$(TWO_DIGIT_VERSION) + + # List of directories to treat as external builders and launchers for + # chaincode. The external builder detection processing will iterate over the + # builders in the order specified below. + externalBuilders: + - path: /tmp/assets/buildpacks/externalchaincode + name: myccv1-builder + environmentWhitelist: + - GOPROXY + # propagateEnvironment: + # - ENVVAR_NAME_TO_PROPAGATE_FROM_PEER + # - GOPROXY + + # The maximum duration to wait for the chaincode build and install process + # to complete. + installTimeout: 300s + + # Timeout duration for starting up a container and waiting for Register + # to come through. + startuptimeout: 300s + + # Timeout duration for Invoke and Init calls to prevent runaway. + # This timeout is used by all chaincodes in all the channels, including + # system chaincodes. + # Note that during Invoke, if the image is not available (e.g. 
being + # cleaned up when in development environment), the peer will automatically + # build the image, which might take more time. In production environment, + # the chaincode image is unlikely to be deleted, so the timeout could be + # reduced accordingly. + executetimeout: 30s + + # There are 2 modes: "dev" and "net". + # In dev mode, user runs the chaincode after starting peer from + # command line on local machine. + # In net mode, peer will run chaincode in a docker container. + mode: net + + # keepalive in seconds. In situations where the communication goes through a + # proxy that does not support keep-alive, this parameter will maintain connection + # between peer and chaincode. + # A value <= 0 turns keepalive off + keepalive: 0 + + # enabled system chaincodes + system: + _lifecycle: enable + cscc: enable + lscc: enable + qscc: enable + + # Logging section for the chaincode container + logging: + # Default level for all loggers within the chaincode container + level: info + # Override default level for the 'shim' logger + shim: warning + # Format for the chaincode container logs + format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}' + +############################################################################### +# +# Ledger section - ledger configuration encompasses both the blockchain +# and the state +# +############################################################################### +ledger: + + blockchain: + + state: + # stateDatabase - options are "goleveldb", "CouchDB" + # goleveldb - default state database stored in goleveldb. + # CouchDB - store state database in CouchDB + stateDatabase: goleveldb + # Limit on the number of records to return per query + totalQueryLimit: 100000 + couchDBConfig: + # It is recommended to run CouchDB on the same server as the peer, and + # not map the CouchDB container port to a server port in docker-compose. 
+ # Otherwise proper security must be provided on the connection between + # CouchDB client (on the peer) and server. + couchDBAddress: 127.0.0.1:5984 + # This username must have read and write authority on CouchDB + username: + # The password is recommended to pass as an environment variable + # during start up (eg CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD). + # If it is stored here, the file must be access control protected + # to prevent unintended users from discovering the password. + password: + # Number of retries for CouchDB errors + maxRetries: 3 + # Number of retries for CouchDB errors during peer startup. + # The delay between retries doubles for each attempt. + # Default of 10 retries results in 11 attempts over 2 minutes. + maxRetriesOnStartup: 10 + # CouchDB request timeout (unit: duration, e.g. 20s) + requestTimeout: 35s + # Limit on the number of records per each CouchDB query + # Note that chaincode queries are only bound by totalQueryLimit. + # Internally the chaincode may execute multiple CouchDB queries, + # each of size internalQueryLimit. + internalQueryLimit: 1000 + # Limit on the number of records per CouchDB bulk update batch + maxBatchUpdateSize: 1000 + # Warm indexes after every N blocks. + # This option warms any indexes that have been + # deployed to CouchDB after every N blocks. + # A value of 1 will warm indexes after every block commit, + # to ensure fast selector queries. + # Increasing the value may improve write efficiency of peer and CouchDB, + # but may degrade query response time. + warmIndexesAfterNBlocks: 1 + # Create the _global_changes system database + # This is optional. Creating the global changes database will require + # additional system resources to track changes and maintain the database + createGlobalChangesDB: false + # CacheSize denotes the maximum mega bytes (MB) to be allocated for the in-memory state + # cache. Note that CacheSize needs to be a multiple of 32 MB. 
If it is not a multiple + # of 32 MB, the peer would round the size to the next multiple of 32 MB. + # To disable the cache, 0 MB needs to be assigned to the cacheSize. + cacheSize: 64 + + history: + # enableHistoryDatabase - options are true or false + # Indicates if the history of key updates should be stored. + # All history 'index' will be stored in goleveldb, regardless if using + # CouchDB or alternate database for the state. + enableHistoryDatabase: true + + pvtdataStore: + # the maximum db batch size for converting + # the ineligible missing data entries to eligible missing data entries + collElgProcMaxDbBatchSize: 5000 + # the minimum duration (in milliseconds) between writing + # two consecutive db batches for converting the ineligible missing data entries to eligible missing data entries + collElgProcDbBatchesInterval: 1000 + # The missing data entries are classified into two categories: + # (1) prioritized + # (2) deprioritized + # Initially, all missing data are in the prioritized list. When the + # reconciler is unable to fetch the missing data from other peers, + # the unreconciled missing data would be moved to the deprioritized list. + # The reconciler would retry deprioritized missing data after every + # deprioritizedDataReconcilerInterval (unit: minutes). 
Note that the + # interval needs to be greater than the reconcileSleepInterval + deprioritizedDataReconcilerInterval: 60m + +############################################################################### +# +# Operations section +# +############################################################################### +operations: + # host and port for the operations server + listenAddress: 127.0.0.1:9443 + + # TLS configuration for the operations endpoint + tls: + # TLS enabled + enabled: false + + # path to PEM encoded server certificate for the operations server + cert: + file: + + # path to PEM encoded server key for the operations server + key: + file: + + # most operations service endpoints require client authentication when TLS + # is enabled. clientAuthRequired requires client certificate authentication + # at the TLS layer to access all resources. + clientAuthRequired: false + + # paths to PEM encoded ca certificates to trust for client authentication + clientRootCAs: + files: [] + +############################################################################### +# +# Metrics section +# +############################################################################### +metrics: + # metrics provider is one of statsd, prometheus, or disabled + provider: disabled + + # statsd configuration + statsd: + # network type: tcp or udp + network: udp + + # statsd server address + address: 127.0.0.1:8125 + + # the interval at which locally cached counters and gauges are pushed + # to statsd; timings are pushed immediately + writeInterval: 10s + + # prefix is prepended to all emitted statsd metrics + prefix: \ No newline at end of file diff --git a/topologies/t3/connection.json b/topologies/t3/connection.json new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t3/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t3/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 0000000..1d941c5 --- /dev/null +++ 
b/topologies/t3/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,29 @@ +version: "3.9" +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t3/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t3/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..e22813e --- /dev/null +++ b/topologies/t3/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - 
FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t3/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t3/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..b18ea4d --- /dev/null +++ b/topologies/t3/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + command: sh -c 'fabric-ca-server 
start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t3/containers/chaincode/org2-chaincode/docker-compose-org2-chaincode.yml b/topologies/t3/containers/chaincode/org2-chaincode/docker-compose-org2-chaincode.yml new file mode 100644 index 0000000..5f40448 --- /dev/null +++ b/topologies/t3/containers/chaincode/org2-chaincode/docker-compose-org2-chaincode.yml @@ -0,0 +1,11 @@ +services: + org2-chaincode: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-chaincode + environment: + - CHAINCODE_ID=${CC_ID} + - CHAINCODE_ADDRESS=0.0.0.0:7075 + - CHAINCODE_TLS_KEY=/tmp/crypto-material/orgs/org2/chaincode/msp/keystore/key.pem + - CHAINCODE_TLS_CERT=/tmp/crypto-material/orgs/org2/chaincode/msp/signcerts/cert.pem + # - CHAINCODE_CLIENT_CA_CERT=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t3/containers/chaincode/org3-chaincode/docker-compose-org3-chaincode.yml b/topologies/t3/containers/chaincode/org3-chaincode/docker-compose-org3-chaincode.yml new file mode 100644 index 0000000..3bf81c9 --- /dev/null +++ b/topologies/t3/containers/chaincode/org3-chaincode/docker-compose-org3-chaincode.yml @@ -0,0 +1,11 @@ +services: + org3-chaincode: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-chaincode + environment: + - CHAINCODE_ID=${CC_ID} + - CHAINCODE_ADDRESS=0.0.0.0:7075 + - CHAINCODE_TLS_KEY=/tmp/crypto-material/orgs/org3/chaincode/msp/keystore/key.pem + - 
CHAINCODE_TLS_CERT=/tmp/crypto-material/orgs/org3/chaincode/msp/signcerts/cert.pem + # - CHAINCODE_CLIENT_CA_CERT=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t3/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t3/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..4ceb8a0 --- /dev/null +++ b/topologies/t3/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,21 @@ +services: + org2-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t3/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t3/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..68d2ec0 --- /dev/null +++ b/topologies/t3/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,21 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - 
CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t3/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t3/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..0045893 --- /dev/null +++ b/topologies/t3/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,82 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - 
ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - 
ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t3/containers/peers/org2-peers/docker-compose-org2-peers.yml 
b/topologies/t3/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..68d93cf --- /dev/null +++ b/topologies/t3/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,52 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/core.yaml:/etc/hyperledger/fabric/core.yaml + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - 
FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/core.yaml:/etc/hyperledger/fabric/core.yaml \ No newline at end of file diff --git a/topologies/t3/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t3/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..460903d --- /dev/null +++ b/topologies/t3/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,52 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - 
CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/core.yaml:/etc/hyperledger/fabric/core.yaml + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/core.yaml:/etc/hyperledger/fabric/core.yaml 
\ No newline at end of file diff --git a/topologies/t3/crypto-material/.gitkeep b/topologies/t3/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t3/docker-compose.yml b/topologies/t3/docker-compose.yml new file mode 100644 index 0000000..0894505 --- /dev/null +++ b/topologies/t3/docker-compose.yml @@ -0,0 +1,103 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: 
hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org2-chaincode: + image: chaincode-hl-fabric:latest + # ports: + # - :7075 + networks: + - hl-fabric + org3-chaincode: + image: chaincode-hl-fabric:latest + # ports: + # - :7075 + networks: + - hl-fabric \ No newline at end of file diff --git a/topologies/t3/homefolders/.gitkeep b/topologies/t3/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t3/scripts/all-org-peers-commit-chaincode.sh b/topologies/t3/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..7810f33 --- /dev/null +++ b/topologies/t3/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile 
/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t3/scripts/all-org-peers-execute-chaincode.sh b/topologies/t3/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..d8be52e --- /dev/null +++ b/topologies/t3/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,18 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git a/topologies/t3/scripts/channels-setup.sh b/topologies/t3/scripts/channels-setup.sh new file mode 100755 index 0000000..230aac8 --- /dev/null +++ b/topologies/t3/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t3/scripts/delete-state-data.sh b/topologies/t3/scripts/delete-state-data.sh new file mode 
100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t3/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t3/scripts/find-ca-private-key.sh b/topologies/t3/scripts/find-ca-private-key.sh new file mode 100755 index 0000000..29b3f95 --- /dev/null +++ b/topologies/t3/scripts/find-ca-private-key.sh @@ -0,0 +1,21 @@ +#!/bin/bash + +set -e +set -x + +find_private_key_path() { + CA_HOME=$1 + CA_CERTFILE=$CA_HOME/tls-cert.pem + CA_HASH=`openssl x509 -noout -pubkey -in $CA_CERTFILE | openssl md5` + + for x in $CA_HOME/msp/keystore/*_sk; do + CA_KEYFILE_HASH=`openssl pkey -pubout -in "$x" | openssl md5` + if [[ "${CA_KEYFILE_HASH}" == "${CA_HASH}" ]] + then + echo "$x" + return 0 + fi + done + + return 1 +} \ No newline at end of file diff --git a/topologies/t3/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t3/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..5c5dec7 --- /dev/null +++ b/topologies/t3/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem
+fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem 
/tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t3/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t3/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..67e742b --- /dev/null +++ b/topologies/t3/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* 
/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t3/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t3/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t3/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs 
"hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t3/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t3/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..ab095a5 --- /dev/null +++ b/topologies/t3/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t3/scripts/org2-approve-chaincode.sh b/topologies/t3/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..fb8ac61 --- /dev/null +++ b/topologies/t3/scripts/org2-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses 
${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t3/scripts/org2-create-and-join-channels.sh b/topologies/t3/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..ed6a189 --- /dev/null +++ b/topologies/t3/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,12 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t3/scripts/org2-create-cas-tls-private-tls-key-file.sh b/topologies/t3/scripts/org2-create-cas-tls-private-tls-key-file.sh new file mode 100755 index 0000000..517a5d5 --- /dev/null +++ b/topologies/t3/scripts/org2-create-cas-tls-private-tls-key-file.sh @@ -0,0 +1,9 @@ +#!/bin/bash + +set -e +set -x + +source /tmp/scripts/find-ca-private-key.sh + +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org2/ca-tls` +cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org2/ca-tls/msp/keystore/key.pem \ No newline at end of file diff --git a/topologies/t3/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t3/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 
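The approve scripts above extract the package ID from `peer lifecycle chaincode queryinstalled` output by splitting on spaces and trimming a trailing comma. A minimal sketch of that parsing in isolation, assuming the grep'd line has the usual `Package ID: <label>:<hash>, Label: <label>` shape (the hash below is made up):

```shell
#!/bin/bash
# Sample line as printed by `peer lifecycle chaincode queryinstalled` (hash is made up)
QUERY_INSTALLED="Package ID: myccv1:abc123, Label: myccv1"

# Split on spaces; the third field holds the package ID plus a trailing comma
IFS=' ' read -r -a array <<< "$QUERY_INSTALLED"
PACKAGE_ID=${array[2]}
PACKAGE_ID=${PACKAGE_ID::-1}   # strip the trailing comma

echo "$PACKAGE_ID"             # prints: myccv1:abc123
```

Note that `${PACKAGE_ID::-1}` requires bash 4.2+, which is why these scripts use a bash shebang.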
index 0000000..ad0ea07 --- /dev/null +++ b/topologies/t3/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml
/tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t3/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t3/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..0f9610c --- /dev/null +++ b/topologies/t3/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,27 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +# enroll org2 chaincode +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/chaincode +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://chaincode-org2:chaincodeOrg2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-chaincode +mv /tmp/crypto-material/orgs/org2/chaincode/msp/keystore/* /tmp/crypto-material/orgs/org2/chaincode/msp/keystore/key.pem +chmod -R 777 /tmp/crypto-material/orgs/org2/chaincode/msp + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t3/scripts/org2-get-installed-cc-id.sh b/topologies/t3/scripts/org2-get-installed-cc-id.sh new file mode 100755 index 0000000..be0d871 --- /dev/null +++ b/topologies/t3/scripts/org2-get-installed-cc-id.sh @@ -0,0 +1,11 @@ +#!/bin/bash + +set -e + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} 
+PACKAGE_ID=${PACKAGE_ID::-1} +echo $PACKAGE_ID \ No newline at end of file diff --git a/topologies/t3/scripts/org2-install-chaincode.sh b/topologies/t3/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..7257dba --- /dev/null +++ b/topologies/t3/scripts/org2-install-chaincode.sh @@ -0,0 +1,27 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +cd /tmp/assets/chaincode/assets-transfer-basic/config/org2 + +rm -rf code.tar.gz +rm -rf myccv1.tgz +rm -rf connection.json +rm -rf ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem peers.pem +#cat /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem /tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem > peers.pem + +sed -i ':a;N;$!ba;s/\n/\\\\n/g' peers.pem +ROOT_CERT=`cat peers.pem` +cat connection.json.template | sed -r "s;ROOT_CERT_VALUE;${ROOT_CERT};g" | tee connection.json > /dev/null +tar cfz code.tar.gz connection.json +tar cfz myccv1.tgz code.tar.gz metadata.json + +peer lifecycle chaincode install ./myccv1.tgz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install ./myccv1.tgz \ No newline at end of file diff --git a/topologies/t3/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t3/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t3/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d 
-u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t3/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t3/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..d6c79f8 --- /dev/null +++ b/topologies/t3/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name chaincode-org2 --id.secret chaincodeOrg2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t3/scripts/org3-approve-chaincode.sh b/topologies/t3/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..094c94f --- /dev/null +++ b/topologies/t3/scripts/org3-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export 
CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t3/scripts/org3-create-and-join-channels.sh b/topologies/t3/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..ebf8b48 --- /dev/null +++ b/topologies/t3/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t3/scripts/org3-create-cas-tls-private-tls-key-file.sh b/topologies/t3/scripts/org3-create-cas-tls-private-tls-key-file.sh new file mode 100755 index 0000000..c949870 --- /dev/null +++ b/topologies/t3/scripts/org3-create-cas-tls-private-tls-key-file.sh @@ -0,0 +1,9 @@ +#!/bin/bash + +set -e +set -x + +source /tmp/scripts/find-ca-private-key.sh + +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org3/ca-tls` 
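The `org*-create-cas-tls-private-tls-key-file.sh` scripts source `/tmp/scripts/find-ca-private-key.sh`, which is not included in this diff. A plausible sketch of such a helper, under the assumption (this function body is hypothetical, not the repo's actual implementation) that fabric-ca-server writes a single private key with a generated filename under the CA home's `msp/keystore`:

```shell
#!/bin/bash
# Hypothetical sketch of find-ca-private-key.sh (the real helper is not in this diff).
# Assumes exactly one generated key file lives under <ca-home>/msp/keystore and
# returns its path.
find_private_key_path() {
  find "$1/msp/keystore" -type f | head -n 1
}
```

The callers then copy that file to a stable name (`key.pem`) so the compose files can reference it deterministically.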
+cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org3/ca-tls/msp/keystore/key.pem \ No newline at end of file diff --git a/topologies/t3/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t3/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..36f317d --- /dev/null +++ b/topologies/t3/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t3/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t3/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..700ffd6 --- /dev/null +++ b/topologies/t3/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,27 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 + +# enroll org3 chaincode +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/chaincode +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://chaincode-org3:chaincodeOrg3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-chaincode +mv /tmp/crypto-material/orgs/org3/chaincode/msp/keystore/* /tmp/crypto-material/orgs/org3/chaincode/msp/keystore/key.pem +chmod -R 777 /tmp/crypto-material/orgs/org3/chaincode/msp + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t3/scripts/org3-get-installed-cc-id.sh b/topologies/t3/scripts/org3-get-installed-cc-id.sh new file mode 100755 index 0000000..e371eab --- /dev/null +++ b/topologies/t3/scripts/org3-get-installed-cc-id.sh @@ -0,0 +1,11 @@ 
+#!/bin/bash + +set -e + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo $PACKAGE_ID \ No newline at end of file diff --git a/topologies/t3/scripts/org3-install-chaincode.sh b/topologies/t3/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..9a48e53 --- /dev/null +++ b/topologies/t3/scripts/org3-install-chaincode.sh @@ -0,0 +1,28 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +cd /tmp/assets/chaincode/assets-transfer-basic/config/org3 + +rm -rf code.tar.gz +rm -rf myccv1.tgz +rm -rf connection.json +rm -rf ca-cert.pem +cp connection.json.template connection.json +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem peers.pem +#cat /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem /tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem > peers.pem + +sed -i ':a;N;$!ba;s/\n/\\\\n/g' peers.pem +ROOT_CERT=`cat peers.pem` +cat connection.json.template | sed -r "s;ROOT_CERT_VALUE;${ROOT_CERT};g" | tee connection.json > /dev/null +tar cfz code.tar.gz connection.json +tar cfz myccv1.tgz code.tar.gz metadata.json + +peer lifecycle chaincode install ./myccv1.tgz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install ./myccv1.tgz diff --git a/topologies/t3/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t3/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 
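The install-chaincode scripts above flatten the multi-line `peers.pem` into a single line so it can be substituted into the `connection.json` template as a JSON string value. A minimal demonstration of that sed transformation in isolation (GNU sed is assumed, since the scripts already rely on GNU-style `sed -i`):

```shell
#!/bin/bash
# Join all lines into one, replacing each newline with two literal backslashes
# and an "n"; the later template substitution collapses "\\n" to "\n" in the JSON.
printf 'line1\nline2\n' | sed ':a;N;$!ba;s/\n/\\\\n/g'
# prints: line1\\nline2
```

The `:a;N;$!ba` loop accumulates the whole file into the pattern space so the `s///g` can see the embedded newlines.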
0000000..1c56144 --- /dev/null +++ b/topologies/t3/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t3/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t3/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..82d6cf2 --- /dev/null +++ b/topologies/t3/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name chaincode-org3 --id.secret chaincodeOrg3PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git 
a/topologies/t3/scripts/patch-configtx.sh b/topologies/t3/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t3/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t3/scripts/setup-docker-images.sh b/topologies/t3/scripts/setup-docker-images.sh new file mode 100755 index 0000000..9b0dc84 --- /dev/null +++ b/topologies/t3/scripts/setup-docker-images.sh @@ -0,0 +1,2 @@ +cd $1/assets +docker build -t chaincode-hl-fabric:latest . \ No newline at end of file diff --git a/topologies/t3/setup-network.sh b/topologies/t3/setup-network.sh new file mode 100755 index 0000000..ef93ef5 --- /dev/null +++ b/topologies/t3/setup-network.sh @@ -0,0 +1,178 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t3 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." +./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." 
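The `patch-configtx.sh` script above rewrites the configtx template by substituting the topology name for a placeholder token. A small reproduction of that substitution on a sample line, assuming (as the script's sed expression implies) that the template embeds a literal `<>` marker wherever the topology prefix belongs:

```shell
#!/bin/bash
# Reproduce the patch-configtx.sh substitution on one sample line;
# "<>" is the placeholder token the sed expression in that script targets.
TOPOLOGY=t3
echo 'Host: <>-org1-orderer1:7050' | sed -r "s;<>;${TOPOLOGY};g"
# prints: Host: t3-org1-orderer1:7050
```

Using `;` as the sed delimiter avoids escaping the `/` characters that appear in paths elsewhere in the template.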
+ +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# ----------------------------------------------------------------------------- +# -----set up the CAs for all orgs and register the TLS-CA and Identities-CA identities (admins, peers, orderers, clients, etc.) with them----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker
exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# ----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA 
and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----need to wait until raft leader selection is completed for the orderers +sleep 4 + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-and-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup 
Chaincode ----- +# ----------------------------------------------------------------------------- + +./scripts/setup-docker-images.sh ${HL_TOPOLOGIES_BASE_FOLDER} + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +export CC_ID=`docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-get-installed-cc-id.sh"` +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-cas-tls-private-tls-key-file.sh" +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/chaincode/org2-chaincode/docker-compose-org2-chaincode.yml up -d org2-chaincode + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" +export CC_ID=`docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-get-installed-cc-id.sh"` +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-cas-tls-private-tls-key-file.sh" +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/chaincode/org3-chaincode/docker-compose-org3-chaincode.yml up -d org3-chaincode + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# 
-----------------------------------------------------------------------------
+docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh"
+
+echo "******* NETWORK SETUP COMPLETED *******"
diff --git a/topologies/t3/teardown-network.sh b/topologies/t3/teardown-network.sh
new file mode 100755
index 0000000..32bf0ac
--- /dev/null
+++ b/topologies/t3/teardown-network.sh
@@ -0,0 +1,33 @@
+#!/bin/bash
+# -----stop script execution on error and log commands
+set -e
+set -x
+
+export CURRENT_HL_TOPOLOGY=t3
+
+# -----------------------------------------------------------------------------
+# -----remove the current topology's containers
+# -----------------------------------------------------------------------------
+# -----the chaincode containers report an error on removal even though they are removed; ignore errors here
+set +e
+docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm
+docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm
+set -e
+
+# -----------------------------------------------------------------------------
+# -----clear any state data written to disk
+# -----------------------------------------------------------------------------
+# -----docker exec will throw an error if no running container is found
+if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ]
+then
+ docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh"
+else
+ echo "Shell Cmd Container is not running."
+fi
+
+# -----------------------------------------------------------------------------
+# -----remove the bootstrap shell container
+# -----------------------------------------------------------------------------
+# -----stop the shell command container if running, then remove it
+docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm
+docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm
diff --git a/topologies/t4/.env b/topologies/t4/.env
new file mode 100644
index 0000000..be48327
--- /dev/null
+++ b/topologies/t4/.env
@@ -0,0 +1,5 @@
+COMPOSE_PROJECT_NAME=hl-fabric-topology-t4
+FABRIC_CA_VERSION=1.5
+FABRIC_PEER_VERSION=2.2.3
+FABRIC_TOOLS_VERSION=2.2.3
+PEER_ORDERER_VERSION=2.2.3
\ No newline at end of file
diff --git a/topologies/t4/.gitignore b/topologies/t4/.gitignore
new file mode 100644
index 0000000..ee0881a
--- /dev/null
+++ b/topologies/t4/.gitignore
@@ -0,0 +1,2 @@
+crypto-material/*/**
+homefolders/*/**
\ No newline at end of file
diff --git a/topologies/t4/README.md b/topologies/t4/README.md
new file mode 100644
index 0000000..603058c
--- /dev/null
+++ b/topologies/t4/README.md
@@ -0,0 +1,41 @@
+# T4: LDAP for Credentials Store
+## Description
+---
+The T1 network plus an OpenLDAP server per org, from which identities (user names and passwords) are read to authenticate enrollment requests
+## Diagram
+---
+![Diagram of components](../image_store/T4.png)
+
+## Relevant Documentation
+
+- https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-ldap
+
+## Components List
+---
+* Org 1
+ * Orderer 1
+ * Orderer 2
+ * Orderer 3
+ * TLS CA
+ * Identities CA
+ * OpenLDAP
+* Org 2
+ * Peer 1
+ * Peer 1 CLI
+ * Peer 2
+ * TLS CA
+ * Identities CA
+ * OpenLDAP
+* Org 3
+ * Peer 1
+ * Peer 1 CLI
+ * Peer 2
+ * TLS CA
+ * Identities CA
+ * OpenLDAP
+
+## Characteristics
+
+- World State Database Instance (LevelDB) embedded (in peer containers)
+- Chaincode 
installed directly on peers +- Communication between all components done via TLS \ No newline at end of file diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. 
+package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + 
result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + 
getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 
shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + 
getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + 
getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + 
setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + 
return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + 
fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + 
return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetArgsCallCount() int { + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + return len(fake.getArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) { + 
fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = stub +} + +func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + fake.getArgsReturns = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + if fake.getArgsReturnsOnCall == nil { + fake.getArgsReturnsOnCall = make(map[int]struct { + result1 [][]byte + }) + } + fake.getArgsReturnsOnCall[i] = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) { + fake.getArgsSliceMutex.Lock() + ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)] + fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgsSlice", []interface{}{}) + fake.getArgsSliceMutex.Unlock() + if fake.GetArgsSliceStub != nil { + return fake.GetArgsSliceStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getArgsSliceReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetArgsSliceCallCount() int { + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + return len(fake.getArgsSliceArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = stub +} + +func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + fake.getArgsSliceReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 
error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + if fake.getArgsSliceReturnsOnCall == nil { + fake.getArgsSliceReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getArgsSliceReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBinding() ([]byte, error) { + fake.getBindingMutex.Lock() + ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)] + fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct { + }{}) + fake.recordInvocation("GetBinding", []interface{}{}) + fake.getBindingMutex.Unlock() + if fake.GetBindingStub != nil { + return fake.GetBindingStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getBindingReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetBindingCallCount() int { + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + return len(fake.getBindingArgsForCall) +} + +func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = stub +} + +func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + fake.getBindingReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + if fake.getBindingReturnsOnCall == nil { + fake.getBindingReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getBindingReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake 
*ChaincodeStub) GetChannelID() string { + fake.getChannelIDMutex.Lock() + ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)] + fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetChannelID", []interface{}{}) + fake.getChannelIDMutex.Unlock() + if fake.GetChannelIDStub != nil { + return fake.GetChannelIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getChannelIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetChannelIDCallCount() int { + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + return len(fake.getChannelIDArgsForCall) +} + +func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = stub +} + +func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + fake.getChannelIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + if fake.getChannelIDReturnsOnCall == nil { + fake.getChannelIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getChannelIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetCreator() ([]byte, error) { + fake.getCreatorMutex.Lock() + ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)] + fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct { + }{}) + fake.recordInvocation("GetCreator", []interface{}{}) + fake.getCreatorMutex.Unlock() + if fake.GetCreatorStub != nil { + return fake.GetCreatorStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + 
fakeReturns := fake.getCreatorReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetCreatorCallCount() int { + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + return len(fake.getCreatorArgsForCall) +} + +func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = stub +} + +func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + fake.getCreatorReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + if fake.getCreatorReturnsOnCall == nil { + fake.getCreatorReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getCreatorReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetDecorations() map[string][]byte { + fake.getDecorationsMutex.Lock() + ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)] + fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct { + }{}) + fake.recordInvocation("GetDecorations", []interface{}{}) + fake.getDecorationsMutex.Unlock() + if fake.GetDecorationsStub != nil { + return fake.GetDecorationsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getDecorationsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetDecorationsCallCount() int { + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + return len(fake.getDecorationsArgsForCall) +} + +func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) { + 
fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = stub +} + +func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + fake.getDecorationsReturns = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + if fake.getDecorationsReturnsOnCall == nil { + fake.getDecorationsReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + }) + } + fake.getDecorationsReturnsOnCall[i] = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) { + fake.getFunctionAndParametersMutex.Lock() + ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)] + fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct { + }{}) + fake.recordInvocation("GetFunctionAndParameters", []interface{}{}) + fake.getFunctionAndParametersMutex.Unlock() + if fake.GetFunctionAndParametersStub != nil { + return fake.GetFunctionAndParametersStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getFunctionAndParametersReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int { + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + return len(fake.getFunctionAndParametersArgsForCall) +} + +func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + 
fake.GetFunctionAndParametersStub = stub +} + +func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + fake.GetFunctionAndParametersStub = nil + fake.getFunctionAndParametersReturns = struct { + result1 string + result2 []string + }{result1, result2} +} + +func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + fake.GetFunctionAndParametersStub = nil + if fake.getFunctionAndParametersReturnsOnCall == nil { + fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + }) + } + fake.getFunctionAndParametersReturnsOnCall[i] = struct { + result1 string + result2 []string + }{result1, result2} +} + +func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) { + fake.getHistoryForKeyMutex.Lock() + ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)] + fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetHistoryForKey", []interface{}{arg1}) + fake.getHistoryForKeyMutex.Unlock() + if fake.GetHistoryForKeyStub != nil { + return fake.GetHistoryForKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getHistoryForKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int { + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + return len(fake.getHistoryForKeyArgsForCall) +} + +func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) { + fake.getHistoryForKeyMutex.Lock() + defer 
fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = stub +} + +func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string { + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + argsForCall := fake.getHistoryForKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) { + fake.getHistoryForKeyMutex.Lock() + defer fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = nil + fake.getHistoryForKeyReturns = struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) { + fake.getHistoryForKeyMutex.Lock() + defer fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = nil + if fake.getHistoryForKeyReturnsOnCall == nil { + fake.getHistoryForKeyReturnsOnCall = make(map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }) + } + fake.getHistoryForKeyReturnsOnCall[i] = struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) { + fake.getPrivateDataMutex.Lock() + ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)] + fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2}) + fake.getPrivateDataMutex.Unlock() + if fake.GetPrivateDataStub != nil { + return fake.GetPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataCallCount() int { + 
fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + return len(fake.getPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) { + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + argsForCall := fake.getPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = nil + fake.getPrivateDataReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = nil + if fake.getPrivateDataReturnsOnCall == nil { + fake.getPrivateDataReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getPrivateDataReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) { + var arg3Copy []string + if arg3 != nil { + arg3Copy = make([]string, len(arg3)) + copy(arg3Copy, arg3) + } + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)] + fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct { + arg1 string + arg2 string + arg3 []string + }{arg1, arg2, arg3Copy}) + 
fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy}) + fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + if fake.GetPrivateDataByPartialCompositeKeyStub != nil { + return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int { + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) { + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + fake.GetPrivateDataByPartialCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) { + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + fake.GetPrivateDataByPartialCompositeKeyStub = nil + fake.getPrivateDataByPartialCompositeKeyReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 
shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + fake.GetPrivateDataByPartialCompositeKeyStub = nil + if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil { + fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) { + fake.getPrivateDataByRangeMutex.Lock() + ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)] + fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct { + arg1 string + arg2 string + arg3 string + }{arg1, arg2, arg3}) + fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3}) + fake.getPrivateDataByRangeMutex.Unlock() + if fake.GetPrivateDataByRangeStub != nil { + return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataByRangeReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int { + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + return len(fake.getPrivateDataByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getPrivateDataByRangeMutex.Lock() + defer fake.getPrivateDataByRangeMutex.Unlock() + fake.GetPrivateDataByRangeStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, 
string, string) { + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + argsForCall := fake.getPrivateDataByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByRangeMutex.Lock() + defer fake.getPrivateDataByRangeMutex.Unlock() + fake.GetPrivateDataByRangeStub = nil + fake.getPrivateDataByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByRangeMutex.Lock() + defer fake.getPrivateDataByRangeMutex.Unlock() + fake.GetPrivateDataByRangeStub = nil + if fake.getPrivateDataByRangeReturnsOnCall == nil { + fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getPrivateDataByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) { + fake.getPrivateDataHashMutex.Lock() + ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)] + fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2}) + fake.getPrivateDataHashMutex.Unlock() + if fake.GetPrivateDataHashStub != nil { + return fake.GetPrivateDataHashStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataHashReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataHashCallCount() 
int { + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + return len(fake.getPrivateDataHashArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) { + fake.getPrivateDataHashMutex.Lock() + defer fake.getPrivateDataHashMutex.Unlock() + fake.GetPrivateDataHashStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) { + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + argsForCall := fake.getPrivateDataHashArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) { + fake.getPrivateDataHashMutex.Lock() + defer fake.getPrivateDataHashMutex.Unlock() + fake.GetPrivateDataHashStub = nil + fake.getPrivateDataHashReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getPrivateDataHashMutex.Lock() + defer fake.getPrivateDataHashMutex.Unlock() + fake.GetPrivateDataHashStub = nil + if fake.getPrivateDataHashReturnsOnCall == nil { + fake.getPrivateDataHashReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getPrivateDataHashReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getPrivateDataQueryResultMutex.Lock() + ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)] + fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2}) + 
fake.getPrivateDataQueryResultMutex.Unlock() + if fake.GetPrivateDataQueryResultStub != nil { + return fake.GetPrivateDataQueryResultStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataQueryResultReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int { + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + return len(fake.getPrivateDataQueryResultArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getPrivateDataQueryResultMutex.Lock() + defer fake.getPrivateDataQueryResultMutex.Unlock() + fake.GetPrivateDataQueryResultStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) { + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + argsForCall := fake.getPrivateDataQueryResultArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataQueryResultMutex.Lock() + defer fake.getPrivateDataQueryResultMutex.Unlock() + fake.GetPrivateDataQueryResultStub = nil + fake.getPrivateDataQueryResultReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataQueryResultMutex.Lock() + defer fake.getPrivateDataQueryResultMutex.Unlock() + fake.GetPrivateDataQueryResultStub = nil + if fake.getPrivateDataQueryResultReturnsOnCall == nil { + fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) 
+ } + fake.getPrivateDataQueryResultReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) { + fake.getPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)] + fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2}) + fake.getPrivateDataValidationParameterMutex.Unlock() + if fake.GetPrivateDataValidationParameterStub != nil { + return fake.GetPrivateDataValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataValidationParameterReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int { + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + return len(fake.getPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) { + fake.getPrivateDataValidationParameterMutex.Lock() + defer fake.getPrivateDataValidationParameterMutex.Unlock() + fake.GetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) { + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) 
GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) { + fake.getPrivateDataValidationParameterMutex.Lock() + defer fake.getPrivateDataValidationParameterMutex.Unlock() + fake.GetPrivateDataValidationParameterStub = nil + fake.getPrivateDataValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getPrivateDataValidationParameterMutex.Lock() + defer fake.getPrivateDataValidationParameterMutex.Unlock() + fake.GetPrivateDataValidationParameterStub = nil + if fake.getPrivateDataValidationParameterReturnsOnCall == nil { + fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) { + fake.getQueryResultMutex.Lock() + ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)] + fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetQueryResult", []interface{}{arg1}) + fake.getQueryResultMutex.Unlock() + if fake.GetQueryResultStub != nil { + return fake.GetQueryResultStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getQueryResultReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetQueryResultCallCount() int { + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + return len(fake.getQueryResultArgsForCall) +} + +func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) { + fake.getQueryResultMutex.Lock() + defer 
fake.getQueryResultMutex.Unlock() + fake.GetQueryResultStub = stub +} + +func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string { + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + argsForCall := fake.getQueryResultArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getQueryResultMutex.Lock() + defer fake.getQueryResultMutex.Unlock() + fake.GetQueryResultStub = nil + fake.getQueryResultReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getQueryResultMutex.Lock() + defer fake.getQueryResultMutex.Unlock() + fake.GetQueryResultStub = nil + if fake.getQueryResultReturnsOnCall == nil { + fake.getQueryResultReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getQueryResultReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getQueryResultWithPaginationMutex.Lock() + ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)] + fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct { + arg1 string + arg2 int32 + arg3 string + }{arg1, arg2, arg3}) + fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3}) + fake.getQueryResultWithPaginationMutex.Unlock() + if fake.GetQueryResultWithPaginationStub != nil { + return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3) + } + if specificReturn { + return 
ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getQueryResultWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int { + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + return len(fake.getQueryResultWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getQueryResultWithPaginationMutex.Lock() + defer fake.getQueryResultWithPaginationMutex.Unlock() + fake.GetQueryResultWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) { + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + argsForCall := fake.getQueryResultWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getQueryResultWithPaginationMutex.Lock() + defer fake.getQueryResultWithPaginationMutex.Unlock() + fake.GetQueryResultWithPaginationStub = nil + fake.getQueryResultWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getQueryResultWithPaginationMutex.Lock() + defer fake.getQueryResultWithPaginationMutex.Unlock() + fake.GetQueryResultWithPaginationStub = nil + if fake.getQueryResultWithPaginationReturnsOnCall == nil { + 
fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getQueryResultWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) { + fake.getSignedProposalMutex.Lock() + ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)] + fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct { + }{}) + fake.recordInvocation("GetSignedProposal", []interface{}{}) + fake.getSignedProposalMutex.Unlock() + if fake.GetSignedProposalStub != nil { + return fake.GetSignedProposalStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getSignedProposalReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetSignedProposalCallCount() int { + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + return len(fake.getSignedProposalArgsForCall) +} + +func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + fake.GetSignedProposalStub = stub +} + +func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + fake.GetSignedProposalStub = nil + fake.getSignedProposalReturns = struct { + result1 *peer.SignedProposal + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + 
fake.GetSignedProposalStub = nil + if fake.getSignedProposalReturnsOnCall == nil { + fake.getSignedProposalReturnsOnCall = make(map[int]struct { + result1 *peer.SignedProposal + result2 error + }) + } + fake.getSignedProposalReturnsOnCall[i] = struct { + result1 *peer.SignedProposal + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) { + fake.getStateMutex.Lock() + ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)] + fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetState", []interface{}{arg1}) + fake.getStateMutex.Unlock() + if fake.GetStateStub != nil { + return fake.GetStateStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateCallCount() int { + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + return len(fake.getStateArgsForCall) +} + +func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = stub +} + +func (fake *ChaincodeStub) GetStateArgsForCall(i int) string { + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + argsForCall := fake.getStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = nil + fake.getStateReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = nil + if fake.getStateReturnsOnCall == nil { + fake.getStateReturnsOnCall = make(map[int]struct { + 
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)]
+	fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.getStateByPartialCompositeKeyMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyStub != nil {
+		return fake.GetStateByPartialCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	fake.getStateByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	if fake.getStateByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)]
+	fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 []string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2Copy, arg3, arg4})
+	fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4})
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil {
+		return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	fake.getStateByPartialCompositeKeyWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getStateByRangeMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)]
+	fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2})
+	fake.getStateByRangeMutex.Unlock()
+	if fake.GetStateByRangeStub != nil {
+		return fake.GetStateByRangeStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCallCount() int {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	return len(fake.getStateByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	argsForCall := fake.getStateByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	fake.getStateByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	if fake.getStateByRangeReturnsOnCall == nil {
+		fake.getStateByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)]
+	fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2, arg3, arg4})
+	fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4})
+	fake.getStateByRangeWithPaginationMutex.Unlock()
+	if fake.GetStateByRangeWithPaginationStub != nil {
+		return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByRangeWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	return len(fake.getStateByRangeWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	fake.getStateByRangeWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	if fake.getStateByRangeWithPaginationReturnsOnCall == nil {
+		fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) {
+	fake.getStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)]
+	fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1})
+	fake.getStateValidationParameterMutex.Unlock()
+	if fake.GetStateValidationParameterStub != nil {
+		return fake.GetStateValidationParameterStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int {
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	return len(fake.getStateValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string {
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	argsForCall := fake.getStateValidationParameterArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = nil
+	fake.getStateValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = nil
+	if fake.getStateValidationParameterReturnsOnCall == nil {
+		fake.getStateValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStringArgs() []string {
+	fake.getStringArgsMutex.Lock()
+	ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)]
+	fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetStringArgs", []interface{}{})
+	fake.getStringArgsMutex.Unlock()
+	if fake.GetStringArgsStub != nil {
+		return fake.GetStringArgsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getStringArgsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetStringArgsCallCount() int {
+	fake.getStringArgsMutex.RLock()
+	defer fake.getStringArgsMutex.RUnlock()
+	return len(fake.getStringArgsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = nil
+	fake.getStringArgsReturns = struct {
+		result1 []string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = nil
+	if fake.getStringArgsReturnsOnCall == nil {
+		fake.getStringArgsReturnsOnCall = make(map[int]struct {
+			result1 []string
+		})
+	}
+	fake.getStringArgsReturnsOnCall[i] = struct {
+		result1 []string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) {
+	fake.getTransientMutex.Lock()
+	ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)]
+	fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTransient", []interface{}{})
+	fake.getTransientMutex.Unlock()
+	if fake.GetTransientStub != nil {
+		return fake.GetTransientStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getTransientReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetTransientCallCount() int {
+	fake.getTransientMutex.RLock()
+	defer fake.getTransientMutex.RUnlock()
+	return len(fake.getTransientArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = stub
+}
+
+func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = nil
+	fake.getTransientReturns = struct {
+		result1 map[string][]byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = nil
+	if fake.getTransientReturnsOnCall == nil {
+		fake.getTransientReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+			result2 error
+		})
+	}
+	fake.getTransientReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTxID() string {
+	fake.getTxIDMutex.Lock()
+	ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)]
+	fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTxID", []interface{}{})
+	fake.getTxIDMutex.Unlock()
+	if fake.GetTxIDStub != nil {
+		return fake.GetTxIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getTxIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetTxIDCallCount() int {
+	fake.getTxIDMutex.RLock()
+	defer fake.getTxIDMutex.RUnlock()
+	return len(fake.getTxIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) {
+	fake.getTxIDMutex.Lock()
+	defer fake.getTxIDMutex.Unlock()
+	fake.GetTxIDStub = stub
+}
+
+func (fake *ChaincodeStub) GetTxIDReturns(result1 string) {
+	fake.getTxIDMutex.Lock()
+	defer fake.getTxIDMutex.Unlock()
+	fake.GetTxIDStub = nil
+	fake.getTxIDReturns = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) {
+	fake.getTxIDMutex.Lock()
+	defer fake.getTxIDMutex.Unlock()
+	fake.GetTxIDStub = nil
+	if fake.getTxIDReturnsOnCall == nil {
+		fake.getTxIDReturnsOnCall = make(map[int]struct {
+			result1 string
+		})
+	}
+	fake.getTxIDReturnsOnCall[i] = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) {
+	fake.getTxTimestampMutex.Lock()
+	ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)]
+	fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTxTimestamp", []interface{}{})
+	fake.getTxTimestampMutex.Unlock()
+	if fake.GetTxTimestampStub != nil {
+		return fake.GetTxTimestampStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getTxTimestampReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetTxTimestampCallCount() int {
+	fake.getTxTimestampMutex.RLock()
+	defer fake.getTxTimestampMutex.RUnlock()
+	return len(fake.getTxTimestampArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) {
+	fake.getTxTimestampMutex.Lock()
+	defer fake.getTxTimestampMutex.Unlock()
+	fake.GetTxTimestampStub = stub
+}
+
+func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) {
+	fake.getTxTimestampMutex.Lock()
+	defer fake.getTxTimestampMutex.Unlock()
+	fake.GetTxTimestampStub = nil
+	fake.getTxTimestampReturns = struct {
+		result1 *timestamp.Timestamp
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) {
+	fake.getTxTimestampMutex.Lock()
+	defer fake.getTxTimestampMutex.Unlock()
+	fake.GetTxTimestampStub = nil
+	if fake.getTxTimestampReturnsOnCall == nil {
+		fake.getTxTimestampReturnsOnCall = make(map[int]struct {
+			result1 *timestamp.Timestamp
+			result2 error
+		})
+	}
+	fake.getTxTimestampReturnsOnCall[i] = struct {
+		result1 *timestamp.Timestamp
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response {
+	var arg2Copy [][]byte
+	if arg2 != nil {
+		arg2Copy = make([][]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.invokeChaincodeMutex.Lock()
+	ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)]
+	fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct {
+		arg1 string
+		arg2 [][]byte
+		arg3 string
+	}{arg1, arg2Copy, arg3})
+	fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3})
+	fake.invokeChaincodeMutex.Unlock()
+	if fake.InvokeChaincodeStub != nil {
+		return fake.InvokeChaincodeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.invokeChaincodeReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeCallCount() int {
+	fake.invokeChaincodeMutex.RLock()
+	defer fake.invokeChaincodeMutex.RUnlock()
+	return len(fake.invokeChaincodeArgsForCall)
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) {
+	fake.invokeChaincodeMutex.Lock()
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = stub
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) {
+	fake.invokeChaincodeMutex.RLock()
+	defer fake.invokeChaincodeMutex.RUnlock()
+	argsForCall := fake.invokeChaincodeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) {
+	fake.invokeChaincodeMutex.Lock()
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = nil
+	fake.invokeChaincodeReturns = struct {
+		result1 peer.Response
+	}{result1}
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) {
+	fake.invokeChaincodeMutex.Lock()
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = nil
+	if fake.invokeChaincodeReturnsOnCall == nil {
+		fake.invokeChaincodeReturnsOnCall = make(map[int]struct {
+			result1 peer.Response
+		})
+	}
+	fake.invokeChaincodeReturnsOnCall[i] = struct {
+		result1 peer.Response
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error {
+	var arg3Copy []byte
+	if arg3 != nil {
+		arg3Copy = make([]byte, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.putPrivateDataMutex.Lock()
+	ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)]
+	fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []byte
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy})
+	fake.putPrivateDataMutex.Unlock()
+	if fake.PutPrivateDataStub != nil {
+		return fake.PutPrivateDataStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.putPrivateDataReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) PutPrivateDataCallCount() int {
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	return len(fake.putPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) {
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	argsForCall := fake.putPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = nil
+	fake.putPrivateDataReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = nil
+	if fake.putPrivateDataReturnsOnCall == nil {
+		fake.putPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.putPrivateDataReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.putStateMutex.Lock()
+	ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)]
+	fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy})
+	fake.putStateMutex.Unlock()
+	if fake.PutStateStub != nil {
+		return fake.PutStateStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.putStateReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) PutStateCallCount() int {
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	return len(fake.putStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = stub
+}
+
+func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) {
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	argsForCall := fake.putStateArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) PutStateReturns(result1 error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = nil
+	fake.putStateReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = nil
+	if fake.putStateReturnsOnCall == nil {
+		fake.putStateReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.putStateReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.setEventMutex.Lock()
+	ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)]
+	fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy})
+	fake.setEventMutex.Unlock()
+	if fake.SetEventStub != nil {
+		return fake.SetEventStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setEventReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetEventCallCount() int {
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	return len(fake.setEventArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = stub
+}
+
+func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) {
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	argsForCall := fake.setEventArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) SetEventReturns(result1 error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = nil
+	fake.setEventReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = nil
+	if fake.setEventReturnsOnCall == nil {
+		fake.setEventReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setEventReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error {
+	var arg3Copy []byte
+	if arg3 != nil {
+		arg3Copy = make([]byte, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)]
+	fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []byte
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy})
+	fake.setPrivateDataValidationParameterMutex.Unlock()
+	if fake.SetPrivateDataValidationParameterStub != nil {
+		return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setPrivateDataValidationParameterReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int {
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.setPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) {
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = nil
+	fake.setPrivateDataValidationParameterReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = nil
+	if fake.setPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.setStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)]
+	fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy})
+	fake.setStateValidationParameterMutex.Unlock()
+	if fake.SetStateValidationParameterStub != nil {
+		return fake.SetStateValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setStateValidationParameterReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int {
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	return len(fake.setStateValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
fake.SetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + argsForCall := fake.setStateValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + fake.setStateValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + if fake.setStateValidationParameterReturnsOnCall == nil { + fake.setStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setStateValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) { + fake.splitCompositeKeyMutex.Lock() + ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)] + fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("SplitCompositeKey", []interface{}{arg1}) + fake.splitCompositeKeyMutex.Unlock() + if fake.SplitCompositeKeyStub != nil { + return fake.SplitCompositeKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.splitCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int { + fake.splitCompositeKeyMutex.RLock() + 
defer fake.splitCompositeKeyMutex.RUnlock() + return len(fake.splitCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + argsForCall := fake.splitCompositeKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + fake.splitCompositeKeyReturns = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + if fake.splitCompositeKeyReturnsOnCall == nil { + fake.splitCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + result3 error + }) + } + fake.splitCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + 
fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + fake.getStringArgsMutex.RLock() + defer 
fake.getStringArgsMutex.RUnlock() + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go new file mode 100644 index 0000000..27e3034 --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go @@ -0,0 +1,232 @@ +// Code generated by counterfeiter. DO NOT EDIT. 
+package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" +) + +type StateQueryIterator struct { + CloseStub func() error + closeMutex sync.RWMutex + closeArgsForCall []struct { + } + closeReturns struct { + result1 error + } + closeReturnsOnCall map[int]struct { + result1 error + } + HasNextStub func() bool + hasNextMutex sync.RWMutex + hasNextArgsForCall []struct { + } + hasNextReturns struct { + result1 bool + } + hasNextReturnsOnCall map[int]struct { + result1 bool + } + NextStub func() (*queryresult.KV, error) + nextMutex sync.RWMutex + nextArgsForCall []struct { + } + nextReturns struct { + result1 *queryresult.KV + result2 error + } + nextReturnsOnCall map[int]struct { + result1 *queryresult.KV + result2 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *StateQueryIterator) Close() error { + fake.closeMutex.Lock() + ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)] + fake.closeArgsForCall = append(fake.closeArgsForCall, struct { + }{}) + fake.recordInvocation("Close", []interface{}{}) + fake.closeMutex.Unlock() + if fake.CloseStub != nil { + return fake.CloseStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.closeReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) CloseCallCount() int { + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + return len(fake.closeArgsForCall) +} + +func (fake *StateQueryIterator) CloseCalls(stub func() error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = stub +} + +func (fake *StateQueryIterator) CloseReturns(result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + fake.closeReturns = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = 
nil + if fake.closeReturnsOnCall == nil { + fake.closeReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.closeReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) HasNext() bool { + fake.hasNextMutex.Lock() + ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)] + fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct { + }{}) + fake.recordInvocation("HasNext", []interface{}{}) + fake.hasNextMutex.Unlock() + if fake.HasNextStub != nil { + return fake.HasNextStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.hasNextReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) HasNextCallCount() int { + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + return len(fake.hasNextArgsForCall) +} + +func (fake *StateQueryIterator) HasNextCalls(stub func() bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = stub +} + +func (fake *StateQueryIterator) HasNextReturns(result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + fake.hasNextReturns = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + if fake.hasNextReturnsOnCall == nil { + fake.hasNextReturnsOnCall = make(map[int]struct { + result1 bool + }) + } + fake.hasNextReturnsOnCall[i] = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) Next() (*queryresult.KV, error) { + fake.nextMutex.Lock() + ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)] + fake.nextArgsForCall = append(fake.nextArgsForCall, struct { + }{}) + fake.recordInvocation("Next", []interface{}{}) + fake.nextMutex.Unlock() + if fake.NextStub != nil { + return fake.NextStub() + } + if specificReturn { + return ret.result1, 
ret.result2 + } + fakeReturns := fake.nextReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *StateQueryIterator) NextCallCount() int { + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + return len(fake.nextArgsForCall) +} + +func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = stub +} + +func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + fake.nextReturns = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + if fake.nextReturnsOnCall == nil { + fake.nextReturnsOnCall = make(map[int]struct { + result1 *queryresult.KV + result2 error + }) + } + fake.nextReturnsOnCall[i] = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} 
diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go new file mode 100644 index 0000000..eea37db --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go @@ -0,0 +1,164 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-chaincode-go/pkg/cid" + "github.com/hyperledger/fabric-chaincode-go/shim" +) + +type TransactionContext struct { + GetClientIdentityStub func() cid.ClientIdentity + getClientIdentityMutex sync.RWMutex + getClientIdentityArgsForCall []struct { + } + getClientIdentityReturns struct { + result1 cid.ClientIdentity + } + getClientIdentityReturnsOnCall map[int]struct { + result1 cid.ClientIdentity + } + GetStubStub func() shim.ChaincodeStubInterface + getStubMutex sync.RWMutex + getStubArgsForCall []struct { + } + getStubReturns struct { + result1 shim.ChaincodeStubInterface + } + getStubReturnsOnCall map[int]struct { + result1 shim.ChaincodeStubInterface + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity { + fake.getClientIdentityMutex.Lock() + ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)] + fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct { + }{}) + fake.recordInvocation("GetClientIdentity", []interface{}{}) + fake.getClientIdentityMutex.Unlock() + if fake.GetClientIdentityStub != nil { + return fake.GetClientIdentityStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getClientIdentityReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetClientIdentityCallCount() int { + fake.getClientIdentityMutex.RLock() + defer 
fake.getClientIdentityMutex.RUnlock() + return len(fake.getClientIdentityArgsForCall) +} + +func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = stub +} + +func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + fake.getClientIdentityReturns = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + if fake.getClientIdentityReturnsOnCall == nil { + fake.getClientIdentityReturnsOnCall = make(map[int]struct { + result1 cid.ClientIdentity + }) + } + fake.getClientIdentityReturnsOnCall[i] = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface { + fake.getStubMutex.Lock() + ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)] + fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct { + }{}) + fake.recordInvocation("GetStub", []interface{}{}) + fake.getStubMutex.Unlock() + if fake.GetStubStub != nil { + return fake.GetStubStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStubReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetStubCallCount() int { + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + return len(fake.getStubArgsForCall) +} + +func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = stub +} + +func (fake *TransactionContext) GetStubReturns(result1 
shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + fake.getStubReturns = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + if fake.getStubReturnsOnCall == nil { + fake.getStubReturnsOnCall = make(map[int]struct { + result1 shim.ChaincodeStubInterface + }) + } + fake.getStubReturnsOnCall[i] = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *TransactionContext) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go new file mode 100644 index 0000000..71e8dd8 --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go @@ -0,0 +1,185 @@ +package chaincode + +import ( + "encoding/json" + "fmt" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" +) + +// 
SmartContract provides functions for managing an Asset +type SmartContract struct { + contractapi.Contract +} + +// Asset describes basic details of what makes up a simple asset +type Asset struct { + ID string `json:"ID"` + Color string `json:"color"` + Size int `json:"size"` + Owner string `json:"owner"` + AppraisedValue int `json:"appraisedValue"` +} + +// InitLedger adds a base set of assets to the ledger +func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error { + assets := []Asset{ + {ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300}, + {ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400}, + {ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500}, + {ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600}, + {ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700}, + {ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800}, + } + + for _, asset := range assets { + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + err = ctx.GetStub().PutState(asset.ID, assetJSON) + if err != nil { + return fmt.Errorf("failed to put to world state. %v", err) + } + } + + return nil +} + +// CreateAsset issues a new asset to the world state with given details. +func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if exists { + return fmt.Errorf("the asset %s already exists", id) + } + + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// ReadAsset returns the asset stored in the world state with given id. 
+func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return nil, fmt.Errorf("failed to read from world state: %v", err) + } + if assetJSON == nil { + return nil, fmt.Errorf("the asset %s does not exist", id) + } + + var asset Asset + err = json.Unmarshal(assetJSON, &asset) + if err != nil { + return nil, err + } + + return &asset, nil +} + +// UpdateAsset updates an existing asset in the world state with provided parameters. +func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + // overwriting original asset with new asset + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// DeleteAsset deletes a given asset from the world state. +func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + return ctx.GetStub().DelState(id) +} + +// AssetExists returns true when an asset with the given ID exists in the world state +func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return false, fmt.Errorf("failed to read from world state: %v", err) + } + + return assetJSON != nil, nil +} + +// TransferAsset updates the owner field of the asset with the given id in the world state. 
+func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error { + asset, err := s.ReadAsset(ctx, id) + if err != nil { + return err + } + + asset.Owner = newOwner + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// GetAllAssets returns all assets found in world state +func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) { + // range query with empty string for startKey and endKey does an + // open-ended query of all assets in the chaincode namespace. + resultsIterator, err := ctx.GetStub().GetStateByRange("", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o 
mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset 
:= &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + 
chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + chaincodeStub.GetStateByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + 
require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t4/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod 
h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= 
+github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= +github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go 
v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= +github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod 
h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod 
h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference 
v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod 
h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= 
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t4/config/config.yaml b/topologies/t4/config/config.yaml new file mode 100644 index 0000000..0caf0ff --- /dev/null +++ b/topologies/t4/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: hlclient + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: hlpeer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: hladmin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: hlorderer diff --git a/topologies/t4/config/configtx.yaml b/topologies/t4/config/configtx.yaml new file mode 100644 index 0000000..1264040 --- /dev/null +++ b/topologies/t4/config/configtx.yaml @@ -0,0 +1,428 @@ +# Copyright IBM Corp. All Rights Reserved. 
+# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: 
"OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. 
Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. 
Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. + # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's "GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: 
/Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + #---Miscellaneous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/<PolicyName> + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities +################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config 
transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer: &OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. + AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. 
A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. + PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/<PolicyName> + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. +# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/<PolicyName> + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# 
+################################################################################ +Profiles: + OrgsOrdererGenesis: + <<: *ChannelDefaults + Orderer: + <<: *OrdererDefaults + Organizations: + - *org1 + Capabilities: + <<: *OrdererCapabilities + Consortiums: + MainConsortium: + Organizations: + - *org2 + - *org3 + + OrgsChannel: + Consortium: MainConsortium + <<: *ChannelDefaults + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 + Capabilities: + <<: *ApplicationCapabilities + diff --git a/topologies/t4/config/fabric-ca/org1/identities/fabric-ca-server-config.yaml b/topologies/t4/config/fabric-ca/org1/identities/fabric-ca-server-config.yaml new file mode 100644 index 0000000..7f4b4b3 --- /dev/null +++ b/topologies/t4/config/fabric-ca/org1/identities/fabric-ca-server-config.yaml @@ -0,0 +1,544 @@ +############################################################################# +# This is a configuration file for the fabric-ca-server command. +# +# COMMAND LINE ARGUMENTS AND ENVIRONMENT VARIABLES +# ------------------------------------------------ +# Each configuration element can be overridden via command line +# arguments or environment variables. The precedence for determining +# the value of each element is as follows: +# 1) command line argument +# Examples: +# a) --port 443 +# To set the listening port +# b) --ca.keyfile ../mykey.pem +# To set the "keyfile" element in the "ca" section below; +# note the '.' separator character. +# 2) environment variable +# Examples: +# a) FABRIC_CA_SERVER_PORT=443 +# To set the listening port +# b) FABRIC_CA_SERVER_CA_KEYFILE="../mykey.pem" +# To set the "keyfile" element in the "ca" section below; +# note the '_' separator character. +# 3) configuration file +# 4) default value (if there is one) +# All default values are shown beside each element below. +# +# FILE NAME ELEMENTS +# ------------------ +# The value of all fields whose name ends with "file" or "files" are +# name or names of other files. 
+# For example, see "tls.certfile" and "tls.clientauth.certfiles". +# The value of each of these fields can be a simple filename, a +# relative path, or an absolute path. If the value is not an +# absolute path, it is interpreted as being relative to the location +# of this configuration file. +# +############################################################################# + +# Version of config file +version: 1.5.5 + +# Server's listening port (default: 7054) +port: 7054 + +# Cross-Origin Resource Sharing (CORS) +cors: + enabled: false + origins: + - "*" + +# Enables debug logging (default: false) +debug: false + +# Size limit of an acceptable CRL in bytes (default: 512000) +crlsizelimit: 512000 + +############################################################################# +# TLS section for the server's listening port +# +# The following types are supported for client authentication: NoClientCert, +# RequestClientCert, RequireAnyClientCert, VerifyClientCertIfGiven, +# and RequireAndVerifyClientCert. +# +# Certfiles is a list of root certificate authorities that the server uses +# when verifying client certificates. +############################################################################# +tls: + # Enable TLS (default: false) + enabled: false + # TLS for the server's listening port + certfile: + keyfile: + clientauth: + type: noclientcert + certfiles: + +############################################################################# +# The CA section contains information related to the Certificate Authority +# including the name of the CA, which should be unique for all members +# of a blockchain network. It also includes the key and certificate files +# used when issuing enrollment certificates (ECerts). +# The chainfile (if it exists) contains the certificate chain which +# should be trusted for this CA, where the 1st in the chain is always the +# root CA certificate. 
+############################################################################# +ca: + # Name of this CA + name: + # Key file (is only used to import a private key into BCCSP) + keyfile: + # Certificate file (default: ca-cert.pem) + certfile: + # Chain file + chainfile: + # Ignore Certificate Expiration in the case of re-enroll + reenrollIgnoreCertExpiry: false + +############################################################################# +# The gencrl REST endpoint is used to generate a CRL that contains revoked +# certificates. This section contains configuration options that are used +# during gencrl request processing. +############################################################################# +crl: + # Specifies expiration for the generated CRL. The number of hours + # specified by this property is added to the UTC time, the resulting time + # is used to set the 'Next Update' date of the CRL. + expiry: 24h + +############################################################################# +# The registry section controls how the fabric-ca-server does two things: +# 1) authenticates enrollment requests which contain a username and password +# (also known as an enrollment ID and secret). +# 2) once authenticated, retrieves the identity's attribute names and values. +# These attributes are useful for making access control decisions in +# chaincode. +# There are two main configuration options: +# 1) The fabric-ca-server is the registry. +# This is true if "ldap.enabled" in the ldap section below is false. +# 2) An LDAP server is the registry, in which case the fabric-ca-server +# calls the LDAP server to perform these tasks. +# This is true if "ldap.enabled" in the ldap section below is true, +# which means this "registry" section is ignored. 
+############################################################################# +registry: + # Maximum number of times a password/secret can be reused for enrollment + # (default: -1, which means there is no limit) + maxenrollments: -1 + + # Contains identity information which is used when LDAP is disabled + identities: + - name: org1-ca-tls-admin + pass: org1-ca-tls-adminpw + type: client + affiliation: "" + attrs: + hf.Registrar.Roles: "*" + hf.Registrar.DelegateRoles: "*" + hf.Revoker: true + hf.IntermediateCA: true + hf.GenCRL: true + hf.Registrar.Attributes: "*" + hf.AffiliationMgr: true + +############################################################################# +# Database section +# Supported types are: "sqlite3", "postgres", and "mysql". +# The datasource value depends on the type. +# If the type is "sqlite3", the datasource value is a file name to use +# as the database store. Since "sqlite3" is an embedded database, it +# may not be used if you want to run the fabric-ca-server in a cluster. +# To run the fabric-ca-server in a cluster, you must choose "postgres" +# or "mysql". +############################################################################# +db: + type: sqlite3 + datasource: fabric-ca-server.db + tls: + enabled: false + certfiles: + client: + certfile: + keyfile: +############################################################################# +# LDAP section +# If LDAP is enabled, the fabric-ca-server calls LDAP to: +# 1) authenticate enrollment ID and secret (i.e. username and password) +# for enrollment requests; +# 2) To retrieve identity attributes +############################################################################# +ldap: + # Enables or disables the LDAP client (default: false) + # If this is set to true, the "registry" section is ignored. 
+  enabled: true
+  # The URL of the LDAP server
+  url: ldap://uid=org1-ca-identities-admin,ou=ldap,ou=identities,ou=hlfabric,dc=org1,dc=org:org1-ca-identities-adminpw@org1-ca-openldap:1389/dc=org1,dc=org
+  userfilter: (uid=%s)
+  # TLS configuration for the client connection to the LDAP server
+  tls:
+    certfiles:
+    client:
+      certfile:
+      keyfile:
+  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
+  attribute:
+    # 'names' is an array of strings containing the LDAP attribute names which are
+    # requested from the LDAP server for an LDAP identity's entry
+    names:
+      [
+        "uid",
+        "cn",
+        "hfType",
+        "hfRegistrarRoles",
+        "hfRegistrarDelegateRoles",
+        "hfAffiliation",
+        "hfRevoker",
+        "hfGenCRL",
+        "hfAffiliationMgr",
+        "hfIntermediateCA",
+        "hfCustomField1",
+        "hfCustomField2",
+      ]
+    # The 'converters' section is used to convert an LDAP entry to the value of
+    # a fabric CA attribute.
+    # For example, the following converts an LDAP 'uid' attribute
+    # whose value begins with 'revoker' to a fabric CA attribute
+    # named "hf.Revoker" with a value of "true" (because the boolean expression
+    # evaluates to true).
+    # converters:
+    #   - name: hf.Revoker
+    #     value: attr("uid") =~ "revoker*"
+    converters:
+      - name: hf.EnrollmentID
+        value: attr("uid")
+      - name: hf.Type
+        value: attr("hfType")
+      - name: hf.Affiliation
+        value: 'attr("hfAffiliation") == "empty" ? 
"" : attr("hfAffiliation")'
+      - name: hf.Registrar.Roles
+        value: attr("hfRegistrarRoles")
+      - name: hf.Registrar.DelegateRoles
+        value: attr("hfRegistrarDelegateRoles")
+      - name: hf.Revoker
+        value: attr("hfRevoker") == "TRUE"
+      - name: hf.GenCRL
+        value: attr("hfGenCRL") == "TRUE"
+      - name: hf.AffiliationMgr
+        value: attr("hfAffiliationMgr") == "TRUE"
+      - name: hf.IntermediateCA
+        value: attr("hfIntermediateCA") == "TRUE"
+      - name: hf.CustomField1
+        value: attr("hfCustomField1")
+      - name: hf.CustomField2
+        value: attr("hfCustomField2")
+    # The 'maps' section contains named maps which may be referenced by the 'map'
+    # function in the 'converters' section to map LDAP responses to arbitrary values.
+    # For example, assume a user has an LDAP attribute named 'member' which has multiple
+    # values which are each a distinguished name (i.e. a DN). For simplicity, assume the
+    # values of the 'member' attribute are 'dn1', 'dn2', and 'dn3'.
+    # Further assume the following configuration.
+    # converters:
+    #   - name: hf.Registrar.Roles
+    #     value: map(attr("member"),"groups")
+    # maps:
+    #   groups:
+    #     - name: dn1
+    #       value: peer
+    #     - name: dn2
+    #       value: client
+    # The value of the user's 'hf.Registrar.Roles' attribute is then computed to be
+    # "peer,client,dn3". This is because the value of 'attr("member")' is
+    # "dn1,dn2,dn3", and the call to 'map' with a 2nd argument of
+    # "groups" replaces "dn1" with "peer" and "dn2" with "client".
+    maps:
+      groups:
+        - name:
+          value:
+
+#############################################################################
+# Affiliations section. Fabric CA server can be bootstrapped with the
+# affiliations specified in this section. Affiliations are specified as maps.
+# For example:
+#   businessunit1:
+#     department1:
+#       - team1
+#   businessunit2:
+#     - department2
+#     - department3
+#
+# Affiliations are hierarchical in nature. In the above example,
+# department1 (used as businessunit1.department1) is the child of businessunit1. 
+# team1 (used as businessunit1.department1.team1) is the child of department1. +# department2 (used as businessunit2.department2) and department3 (businessunit2.department3) +# are children of businessunit2. +# Note: Affiliations are case sensitive except for the non-leaf affiliations +# (like businessunit1, department1, businessunit2) that are specified in the configuration file, +# which are always stored in lower case. +############################################################################# +affiliations: + org1: + - department1 + - department2 + org2: + - department1 + +############################################################################# +# Signing section +# +# The "default" subsection is used to sign enrollment certificates; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +# +# The "ca" profile subsection is used to sign intermediate CA certificates; +# the default expiration ("expiry" field) is "43800h" which is 5 years in hours. +# Note that "isca" is true, meaning that it issues a CA certificate. +# A maxpathlen of 0 means that the intermediate CA cannot issue other +# intermediate CA certificates, though it can still issue end entity certificates. +# (See RFC 5280, section 4.2.1.9) +# +# The "tls" profile subsection is used to sign TLS certificate requests; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +############################################################################# +signing: + default: + usage: + - digital signature + expiry: 8760h + profiles: + ca: + usage: + - cert sign + - crl sign + expiry: 43800h + caconstraint: + isca: true + maxpathlen: 0 + tls: + usage: + - signing + - key encipherment + - server auth + - client auth + - key agreement + expiry: 8760h + +########################################################################### +# Certificate Signing Request (CSR) section. +# This controls the creation of the root CA certificate. 
+# The expiration for the root CA certificate is configured with the
+# "ca.expiry" field below, whose default value is "131400h" which is
+# 15 years in hours.
+# The pathlength field is used to limit CA certificate hierarchy as described
+# in section 4.2.1.9 of RFC 5280.
+# Examples:
+# 1) No pathlength value means no limit is requested.
+# 2) pathlength == 1 means a limit of 1 is requested, which is the default for
+# a root CA. This means the root CA can issue intermediate CA certificates,
+# but these intermediate CAs may not in turn issue other CA certificates,
+# though they can still issue end entity certificates.
+# 3) pathlength == 0 means a limit of 0 is requested;
+# this is the default for an intermediate CA, which means it cannot issue
+# CA certificates, though it can still issue end entity certificates.
+# The "hosts" field will be used to specify Subject Alternative Names
+# if the server creates a self-signed TLS certificate.
+###########################################################################
+csr:
+  cn: fabric-ca-server
+  keyrequest:
+    algo: ecdsa
+    size: 256
+  # names:
+  #   - C: US
+  #     ST: "North Carolina"
+  #     L:
+  #     O: Hyperledger
+  #     OU: Fabric
+  hosts:
+    - a4b94aae84ff
+    - localhost
+  ca:
+    expiry: 131400h
+    pathlength: 1
+
+###########################################################################
+# Each CA can issue both X509 enrollment certificates as well as Idemix
+# credentials. This section specifies configuration for the issuer component
+# that is responsible for issuing Idemix credentials.
+###########################################################################
+idemix:
+  # Specifies pool size for revocation handles. A revocation handle is a unique identifier of an
+  # Idemix credential. The issuer will create a pool of revocation handles of this specified size. When
+  # a credential is requested, the issuer will get a handle from the pool and assign it to the credential. 
+  # The issuer will repopulate the pool with new handles when the last handle in the pool is used.
+  # A revocation handle and credential revocation information (CRI) are used to create a non-revocation proof
+  # by the prover to prove to the verifier that her credential is not revoked.
+  rhpoolsize: 1000
+
+  # The Idemix credential issuance is a two-step process. The first step is to get a nonce from the issuer,
+  # and the second step is to send a credential request, constructed using the nonce, to the issuer to
+  # request a credential. This configuration property specifies the expiration for the nonces. By default,
+  # nonces expire after 15 seconds. The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration).
+  nonceexpiration: 15s
+
+  # Specifies the interval at which expired nonces are removed from the datastore. The default value is 15 minutes.
+  # The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration).
+  noncesweepinterval: 15m
+
+  # Specifies the Elliptic Curve used by Identity Mixer.
+  # It can be any of: {"amcl.Fp256bn", "gurvy.Bn254", "amcl.Fp256Miraclbn"}.
+  # If unspecified, it defaults to 'amcl.Fp256bn'.
+  curve: amcl.Fp256bn
+
+#############################################################################
+# BCCSP (BlockChain Crypto Service Provider) section is used to select which
+# crypto library implementation to use
+#############################################################################
+bccsp:
+  default: SW
+  sw:
+    hash: SHA2
+    security: 256
+    filekeystore:
+      # The directory used for the software file-based keystore
+      keystore: msp/keystore
+
+#############################################################################
+# Multi CA section
+#
+# Each Fabric CA server contains one CA by default. This section is used
+# to configure multiple CAs in a single server.
+#
+# 1) --cacount <number>
+# Automatically generate <number> non-default CAs. The names of these
+# additional CAs are "ca1", "ca2", ... 
"caN", where "N" is <number>
+# This is particularly useful in a development environment to quickly set up
+# multiple CAs. Note that this config option is not applicable to an intermediate
+# CA server, i.e., a Fabric CA server that is started with the
+# intermediate.parentserver.url config option (-u command line option).
+#
+# 2) --cafiles
+# For each CA config file in the list, generate a separate signing CA. Each CA
+# config file in this list MAY contain all of the same elements as are found in
+# the server config file except the port, debug, and tls sections.
+#
+# Examples:
+# fabric-ca-server start -b admin:adminpw --cacount 2
+#
+# fabric-ca-server start -b admin:adminpw --cafiles ca/ca1/fabric-ca-server-config.yaml
+# --cafiles ca/ca2/fabric-ca-server-config.yaml
+#
+#############################################################################
+
+cacount:
+
+cafiles:
+
+#############################################################################
+# Intermediate CA section
+#
+# The relationship between servers and CAs is as follows:
+# 1) A single server process may contain or function as one or more CAs.
+# This is configured by the "Multi CA section" above.
+# 2) Each CA is either a root CA or an intermediate CA.
+# 3) Each intermediate CA has a parent CA which is either a root CA or another intermediate CA.
+#
+# This section pertains to configuration of #2 and #3.
+# If the "intermediate.parentserver.url" property is set,
+# then this is an intermediate CA with the specified parent
+# CA. 
+#
+# parentserver section
+# url - The URL of the parent server
+# caname - Name of the CA to enroll within the server
+#
+# enrollment section used to enroll an intermediate CA with the parent CA
+# profile - Name of the signing profile to use in issuing the certificate
+# label - Label to use in HSM operations
+#
+# tls section for secure socket connection
+# certfiles - PEM-encoded list of trusted root certificate files
+# client:
+# certfile - PEM-encoded certificate file for when client authentication
+# is enabled on server
+# keyfile - PEM-encoded key file for when client authentication
+# is enabled on server
+#############################################################################
+intermediate:
+  parentserver:
+    url:
+    caname:
+
+  enrollment:
+    hosts:
+    profile:
+    label:
+
+  tls:
+    certfiles:
+    client:
+      certfile:
+      keyfile:
+
+#############################################################################
+# CA configuration section
+#
+# Configure the number of incorrect password attempts allowed for
+# identities. By default, the value of 'passwordattempts' is 10, which
+# means that 10 incorrect password attempts can be made before an identity gets
+# locked out. 
+############################################################################# +cfg: + identities: + passwordattempts: 10 + +############################################################################### +# +# Operations section +# +############################################################################### +operations: + # host and port for the operations server + listenAddress: 127.0.0.1:9443 + + # TLS configuration for the operations endpoint + tls: + # TLS enabled + enabled: false + + # path to PEM encoded server certificate for the operations server + cert: + file: + + # path to PEM encoded server key for the operations server + key: + file: + + # require client certificate authentication to access all resources + clientAuthRequired: false + + # paths to PEM encoded ca certificates to trust for client authentication + clientRootCAs: + files: [] + +############################################################################### +# +# Metrics section +# +############################################################################### +metrics: + # statsd, prometheus, or disabled + provider: disabled + + # statsd configuration + statsd: + # network type: tcp or udp + network: udp + + # statsd server address + address: 127.0.0.1:8125 + + # the interval at which locally cached counters and gauges are pushed + # to statsd; timings are pushed immediately + writeInterval: 10s + + # prefix is prepended to all emitted statsd metrics + prefix: server diff --git a/topologies/t4/config/fabric-ca/org1/tls/fabric-ca-server-config.yaml b/topologies/t4/config/fabric-ca/org1/tls/fabric-ca-server-config.yaml new file mode 100644 index 0000000..bbc1efa --- /dev/null +++ b/topologies/t4/config/fabric-ca/org1/tls/fabric-ca-server-config.yaml @@ -0,0 +1,544 @@ +############################################################################# +# This is a configuration file for the fabric-ca-server command. 
+# +# COMMAND LINE ARGUMENTS AND ENVIRONMENT VARIABLES +# ------------------------------------------------ +# Each configuration element can be overridden via command line +# arguments or environment variables. The precedence for determining +# the value of each element is as follows: +# 1) command line argument +# Examples: +# a) --port 443 +# To set the listening port +# b) --ca.keyfile ../mykey.pem +# To set the "keyfile" element in the "ca" section below; +# note the '.' separator character. +# 2) environment variable +# Examples: +# a) FABRIC_CA_SERVER_PORT=443 +# To set the listening port +# b) FABRIC_CA_SERVER_CA_KEYFILE="../mykey.pem" +# To set the "keyfile" element in the "ca" section below; +# note the '_' separator character. +# 3) configuration file +# 4) default value (if there is one) +# All default values are shown beside each element below. +# +# FILE NAME ELEMENTS +# ------------------ +# The value of all fields whose name ends with "file" or "files" are +# name or names of other files. +# For example, see "tls.certfile" and "tls.clientauth.certfiles". +# The value of each of these fields can be a simple filename, a +# relative path, or an absolute path. If the value is not an +# absolute path, it is interpreted as being relative to the location +# of this configuration file. 
+# +############################################################################# + +# Version of config file +version: 1.5.5 + +# Server's listening port (default: 7054) +port: 7054 + +# Cross-Origin Resource Sharing (CORS) +cors: + enabled: false + origins: + - "*" + +# Enables debug logging (default: false) +debug: false + +# Size limit of an acceptable CRL in bytes (default: 512000) +crlsizelimit: 512000 + +############################################################################# +# TLS section for the server's listening port +# +# The following types are supported for client authentication: NoClientCert, +# RequestClientCert, RequireAnyClientCert, VerifyClientCertIfGiven, +# and RequireAndVerifyClientCert. +# +# Certfiles is a list of root certificate authorities that the server uses +# when verifying client certificates. +############################################################################# +tls: + # Enable TLS (default: false) + enabled: false + # TLS for the server's listening port + certfile: + keyfile: + clientauth: + type: noclientcert + certfiles: + +############################################################################# +# The CA section contains information related to the Certificate Authority +# including the name of the CA, which should be unique for all members +# of a blockchain network. It also includes the key and certificate files +# used when issuing enrollment certificates (ECerts). +# The chainfile (if it exists) contains the certificate chain which +# should be trusted for this CA, where the 1st in the chain is always the +# root CA certificate. 
+############################################################################# +ca: + # Name of this CA + name: + # Key file (is only used to import a private key into BCCSP) + keyfile: + # Certificate file (default: ca-cert.pem) + certfile: + # Chain file + chainfile: + # Ignore Certificate Expiration in the case of re-enroll + reenrollIgnoreCertExpiry: false + +############################################################################# +# The gencrl REST endpoint is used to generate a CRL that contains revoked +# certificates. This section contains configuration options that are used +# during gencrl request processing. +############################################################################# +crl: + # Specifies expiration for the generated CRL. The number of hours + # specified by this property is added to the UTC time, the resulting time + # is used to set the 'Next Update' date of the CRL. + expiry: 24h + +############################################################################# +# The registry section controls how the fabric-ca-server does two things: +# 1) authenticates enrollment requests which contain a username and password +# (also known as an enrollment ID and secret). +# 2) once authenticated, retrieves the identity's attribute names and values. +# These attributes are useful for making access control decisions in +# chaincode. +# There are two main configuration options: +# 1) The fabric-ca-server is the registry. +# This is true if "ldap.enabled" in the ldap section below is false. +# 2) An LDAP server is the registry, in which case the fabric-ca-server +# calls the LDAP server to perform these tasks. +# This is true if "ldap.enabled" in the ldap section below is true, +# which means this "registry" section is ignored. 
+############################################################################# +registry: + # Maximum number of times a password/secret can be reused for enrollment + # (default: -1, which means there is no limit) + maxenrollments: -1 + + # Contains identity information which is used when LDAP is disabled + identities: + - name: org1-ca-tls-admin + pass: org1-ca-tls-adminpw + type: client + affiliation: "" + attrs: + hf.Registrar.Roles: "*" + hf.Registrar.DelegateRoles: "*" + hf.Revoker: true + hf.IntermediateCA: true + hf.GenCRL: true + hf.Registrar.Attributes: "*" + hf.AffiliationMgr: true + +############################################################################# +# Database section +# Supported types are: "sqlite3", "postgres", and "mysql". +# The datasource value depends on the type. +# If the type is "sqlite3", the datasource value is a file name to use +# as the database store. Since "sqlite3" is an embedded database, it +# may not be used if you want to run the fabric-ca-server in a cluster. +# To run the fabric-ca-server in a cluster, you must choose "postgres" +# or "mysql". +############################################################################# +db: + type: sqlite3 + datasource: fabric-ca-server.db + tls: + enabled: false + certfiles: + client: + certfile: + keyfile: +############################################################################# +# LDAP section +# If LDAP is enabled, the fabric-ca-server calls LDAP to: +# 1) authenticate enrollment ID and secret (i.e. username and password) +# for enrollment requests; +# 2) To retrieve identity attributes +############################################################################# +ldap: + # Enables or disables the LDAP client (default: false) + # If this is set to true, the "registry" section is ignored. 
+  enabled: true
+  # The URL of the LDAP server
+  url: ldap://uid=org1-ca-tls-admin,ou=ldap,ou=tls,ou=hlfabric,dc=org1,dc=org:org1-ca-tls-adminpw@org1-ca-openldap:1389/dc=org1,dc=org
+  userfilter: (uid=%s)
+  # TLS configuration for the client connection to the LDAP server
+  tls:
+    certfiles:
+    client:
+      certfile:
+      keyfile:
+  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
+  attribute:
+    # 'names' is an array of strings containing the LDAP attribute names which are
+    # requested from the LDAP server for an LDAP identity's entry
+    names:
+      [
+        "uid",
+        "cn",
+        "hfType",
+        "hfRegistrarRoles",
+        "hfRegistrarDelegateRoles",
+        "hfAffiliation",
+        "hfRevoker",
+        "hfGenCRL",
+        "hfAffiliationMgr",
+        "hfIntermediateCA",
+        "hfCustomField1",
+        "hfCustomField2",
+      ]
+    # The 'converters' section is used to convert an LDAP entry to the value of
+    # a fabric CA attribute.
+    # For example, the following converts an LDAP 'uid' attribute
+    # whose value begins with 'revoker' to a fabric CA attribute
+    # named "hf.Revoker" with a value of "true" (because the boolean expression
+    # evaluates to true).
+    # converters:
+    #   - name: hf.Revoker
+    #     value: attr("uid") =~ "revoker*"
+    converters:
+      - name: hf.EnrollmentID
+        value: attr("uid")
+      - name: hf.Type
+        value: attr("hfType")
+      - name: hf.Affiliation
+        value: 'attr("hfAffiliation") == "empty" ? 
"" : attr("hfAffiliation")'
+      - name: hf.Registrar.Roles
+        value: attr("hfRegistrarRoles")
+      - name: hf.Registrar.DelegateRoles
+        value: attr("hfRegistrarDelegateRoles")
+      - name: hf.Revoker
+        value: attr("hfRevoker") == "TRUE"
+      - name: hf.GenCRL
+        value: attr("hfGenCRL") == "TRUE"
+      - name: hf.AffiliationMgr
+        value: attr("hfAffiliationMgr") == "TRUE"
+      - name: hf.IntermediateCA
+        value: attr("hfIntermediateCA") == "TRUE"
+      - name: hf.CustomField1
+        value: attr("hfCustomField1")
+      - name: hf.CustomField2
+        value: attr("hfCustomField2")
+    # The 'maps' section contains named maps which may be referenced by the 'map'
+    # function in the 'converters' section to map LDAP responses to arbitrary values.
+    # For example, assume a user has an LDAP attribute named 'member' which has multiple
+    # values which are each a distinguished name (i.e. a DN). For simplicity, assume the
+    # values of the 'member' attribute are 'dn1', 'dn2', and 'dn3'.
+    # Further assume the following configuration.
+    # converters:
+    #   - name: hf.Registrar.Roles
+    #     value: map(attr("member"),"groups")
+    # maps:
+    #   groups:
+    #     - name: dn1
+    #       value: peer
+    #     - name: dn2
+    #       value: client
+    # The value of the user's 'hf.Registrar.Roles' attribute is then computed to be
+    # "peer,client,dn3". This is because the value of 'attr("member")' is
+    # "dn1,dn2,dn3", and the call to 'map' with a 2nd argument of
+    # "groups" replaces "dn1" with "peer" and "dn2" with "client".
+    maps:
+      groups:
+        - name:
+          value:
+
+#############################################################################
+# Affiliations section. Fabric CA server can be bootstrapped with the
+# affiliations specified in this section. Affiliations are specified as maps.
+# For example:
+#   businessunit1:
+#     department1:
+#       - team1
+#   businessunit2:
+#     - department2
+#     - department3
+#
+# Affiliations are hierarchical in nature. In the above example,
+# department1 (used as businessunit1.department1) is the child of businessunit1. 
+# team1 (used as businessunit1.department1.team1) is the child of department1. +# department2 (used as businessunit2.department2) and department3 (businessunit2.department3) +# are children of businessunit2. +# Note: Affiliations are case sensitive except for the non-leaf affiliations +# (like businessunit1, department1, businessunit2) that are specified in the configuration file, +# which are always stored in lower case. +############################################################################# +affiliations: + org1: + - department1 + - department2 + org2: + - department1 + +############################################################################# +# Signing section +# +# The "default" subsection is used to sign enrollment certificates; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +# +# The "ca" profile subsection is used to sign intermediate CA certificates; +# the default expiration ("expiry" field) is "43800h" which is 5 years in hours. +# Note that "isca" is true, meaning that it issues a CA certificate. +# A maxpathlen of 0 means that the intermediate CA cannot issue other +# intermediate CA certificates, though it can still issue end entity certificates. +# (See RFC 5280, section 4.2.1.9) +# +# The "tls" profile subsection is used to sign TLS certificate requests; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +############################################################################# +signing: + default: + usage: + - digital signature + expiry: 8760h + profiles: + ca: + usage: + - cert sign + - crl sign + expiry: 43800h + caconstraint: + isca: true + maxpathlen: 0 + tls: + usage: + - signing + - key encipherment + - server auth + - client auth + - key agreement + expiry: 8760h + +########################################################################### +# Certificate Signing Request (CSR) section. +# This controls the creation of the root CA certificate. 
+# The expiration for the root CA certificate is configured with the
+# "ca.expiry" field below, whose default value is "131400h" which is
+# 15 years in hours.
+# The pathlength field is used to limit CA certificate hierarchy as described
+# in section 4.2.1.9 of RFC 5280.
+# Examples:
+# 1) No pathlength value means no limit is requested.
+# 2) pathlength == 1 means a limit of 1 is requested, which is the default for
+# a root CA. This means the root CA can issue intermediate CA certificates,
+# but these intermediate CAs may not in turn issue other CA certificates,
+# though they can still issue end entity certificates.
+# 3) pathlength == 0 means a limit of 0 is requested;
+# this is the default for an intermediate CA, which means it cannot issue
+# CA certificates, though it can still issue end entity certificates.
+# The "hosts" field will be used to specify Subject Alternative Names
+# if the server creates a self-signed TLS certificate.
+###########################################################################
+csr:
+  cn: fabric-ca-server
+  keyrequest:
+    algo: ecdsa
+    size: 256
+  # names:
+  #   - C: US
+  #     ST: "North Carolina"
+  #     L:
+  #     O: Hyperledger
+  #     OU: Fabric
+  hosts:
+    - a4b94aae84ff
+    - localhost
+  ca:
+    expiry: 131400h
+    pathlength: 1
+
+###########################################################################
+# Each CA can issue both X509 enrollment certificates as well as Idemix
+# credentials. This section specifies configuration for the issuer component
+# that is responsible for issuing Idemix credentials.
+###########################################################################
+idemix:
+  # Specifies pool size for revocation handles. A revocation handle is a unique identifier of an
+  # Idemix credential. The issuer will create a pool of revocation handles of this specified size. When
+  # a credential is requested, the issuer will get a handle from the pool and assign it to the credential. 
+  # The issuer will repopulate the pool with new handles when the last handle in the pool is used.
+  # A revocation handle and credential revocation information (CRI) are used to create a non-revocation proof
+  # by the prover to prove to the verifier that her credential is not revoked.
+  rhpoolsize: 1000
+
+  # The Idemix credential issuance is a two-step process. The first step is to get a nonce from the issuer,
+  # and the second step is to send a credential request, constructed using the nonce, to the issuer to
+  # request a credential. This configuration property specifies the expiration for the nonces. By default,
+  # nonces expire after 15 seconds. The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration).
+  nonceexpiration: 15s
+
+  # Specifies the interval at which expired nonces are removed from the datastore. The default value is 15 minutes.
+  # The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration).
+  noncesweepinterval: 15m
+
+  # Specifies the Elliptic Curve used by Identity Mixer.
+  # It can be any of: {"amcl.Fp256bn", "gurvy.Bn254", "amcl.Fp256Miraclbn"}.
+  # If unspecified, it defaults to 'amcl.Fp256bn'.
+  curve: amcl.Fp256bn
+
+#############################################################################
+# BCCSP (BlockChain Crypto Service Provider) section is used to select which
+# crypto library implementation to use
+#############################################################################
+bccsp:
+  default: SW
+  sw:
+    hash: SHA2
+    security: 256
+    filekeystore:
+      # The directory used for the software file-based keystore
+      keystore: msp/keystore
+
+#############################################################################
+# Multi CA section
+#
+# Each Fabric CA server contains one CA by default. This section is used
+# to configure multiple CAs in a single server.
+#
+# 1) --cacount <number>
+# Automatically generate <number> non-default CAs. The names of these
+# additional CAs are "ca1", "ca2", ... 
"caN", where "N" is the value of --cacount.
+# This is particularly useful in a development environment to quickly set up
+# multiple CAs. Note that this config option is not applicable to an intermediate CA server,
+# i.e., a Fabric CA server that is started with the intermediate.parentserver.url config
+# option (-u command line option).
+#
+# 2) --cafiles
+# For each CA config file in the list, generate a separate signing CA. Each CA
+# config file in this list may contain all of the same elements as are found in
+# the server config file except the port, debug, and tls sections.
+#
+# Examples:
+# fabric-ca-server start -b admin:adminpw --cacount 2
+#
+# fabric-ca-server start -b admin:adminpw --cafiles ca/ca1/fabric-ca-server-config.yaml
+# --cafiles ca/ca2/fabric-ca-server-config.yaml
+#
+#############################################################################
+
+cacount:
+
+cafiles:
+
+#############################################################################
+# Intermediate CA section
+#
+# The relationship between servers and CAs is as follows:
+# 1) A single server process may contain or function as one or more CAs.
+#    This is configured by the "Multi CA section" above.
+# 2) Each CA is either a root CA or an intermediate CA.
+# 3) Each intermediate CA has a parent CA which is either a root CA or another intermediate CA.
+#
+# This section pertains to configuration of #2 and #3.
+# If the "intermediate.parentserver.url" property is set,
+# then this is an intermediate CA with the specified parent
+# CA.
+#
+# parentserver section
+#    url - The URL of the parent server
+#    caname - Name of the CA to enroll within the server
+#
+# enrollment section used to enroll intermediate CA with parent CA
+#    profile - Name of the signing profile to use in issuing the certificate
+#    label - Label to use in HSM operations
+#
+# tls section for secure socket connection
+#   certfiles - PEM-encoded list of trusted root certificate files
+#   client:
+#     certfile - PEM-encoded certificate file for when client authentication
+#     is enabled on server
+#     keyfile - PEM-encoded key file for when client authentication
+#     is enabled on server
+#############################################################################
+intermediate:
+  parentserver:
+    url:
+    caname:
+
+  enrollment:
+    hosts:
+    profile:
+    label:
+
+  tls:
+    certfiles:
+    client:
+      certfile:
+      keyfile:
+
+#############################################################################
+# CA configuration section
+#
+# Configures the number of incorrect password attempts that are allowed for
+# identities. By default, the value of 'passwordattempts' is 10, which
+# means that 10 incorrect password attempts can be made before an identity gets
+# locked out.
+############################################################################# +cfg: + identities: + passwordattempts: 10 + +############################################################################### +# +# Operations section +# +############################################################################### +operations: + # host and port for the operations server + listenAddress: 127.0.0.1:9443 + + # TLS configuration for the operations endpoint + tls: + # TLS enabled + enabled: false + + # path to PEM encoded server certificate for the operations server + cert: + file: + + # path to PEM encoded server key for the operations server + key: + file: + + # require client certificate authentication to access all resources + clientAuthRequired: false + + # paths to PEM encoded ca certificates to trust for client authentication + clientRootCAs: + files: [] + +############################################################################### +# +# Metrics section +# +############################################################################### +metrics: + # statsd, prometheus, or disabled + provider: disabled + + # statsd configuration + statsd: + # network type: tcp or udp + network: udp + + # statsd server address + address: 127.0.0.1:8125 + + # the interval at which locally cached counters and gauges are pushed + # to statsd; timings are pushed immediately + writeInterval: 10s + + # prefix is prepended to all emitted statsd metrics + prefix: server diff --git a/topologies/t4/config/fabric-ca/org2/identities/fabric-ca-server-config.yaml b/topologies/t4/config/fabric-ca/org2/identities/fabric-ca-server-config.yaml new file mode 100644 index 0000000..92ef13a --- /dev/null +++ b/topologies/t4/config/fabric-ca/org2/identities/fabric-ca-server-config.yaml @@ -0,0 +1,544 @@ +############################################################################# +# This is a configuration file for the fabric-ca-server command. 
+#
+# COMMAND LINE ARGUMENTS AND ENVIRONMENT VARIABLES
+# ------------------------------------------------
+# Each configuration element can be overridden via command line
+# arguments or environment variables. The precedence for determining
+# the value of each element is as follows:
+# 1) command line argument
+#    Examples:
+#    a) --port 443
+#       To set the listening port
+#    b) --ca.keyfile ../mykey.pem
+#       To set the "keyfile" element in the "ca" section below;
+#       note the '.' separator character.
+# 2) environment variable
+#    Examples:
+#    a) FABRIC_CA_SERVER_PORT=443
+#       To set the listening port
+#    b) FABRIC_CA_SERVER_CA_KEYFILE="../mykey.pem"
+#       To set the "keyfile" element in the "ca" section below;
+#       note the '_' separator character.
+# 3) configuration file
+# 4) default value (if there is one)
+#    All default values are shown beside each element below.
+#
+# FILE NAME ELEMENTS
+# ------------------
+# The value of any field whose name ends with "file" or "files" is the
+# name or names of other files.
+# For example, see "tls.certfile" and "tls.clientauth.certfiles".
+# The value of each of these fields can be a simple filename, a
+# relative path, or an absolute path. If the value is not an
+# absolute path, it is interpreted as being relative to the location
+# of this configuration file.
+# +############################################################################# + +# Version of config file +version: 1.5.5 + +# Server's listening port (default: 7054) +port: 7054 + +# Cross-Origin Resource Sharing (CORS) +cors: + enabled: false + origins: + - "*" + +# Enables debug logging (default: false) +debug: false + +# Size limit of an acceptable CRL in bytes (default: 512000) +crlsizelimit: 512000 + +############################################################################# +# TLS section for the server's listening port +# +# The following types are supported for client authentication: NoClientCert, +# RequestClientCert, RequireAnyClientCert, VerifyClientCertIfGiven, +# and RequireAndVerifyClientCert. +# +# Certfiles is a list of root certificate authorities that the server uses +# when verifying client certificates. +############################################################################# +tls: + # Enable TLS (default: false) + enabled: false + # TLS for the server's listening port + certfile: + keyfile: + clientauth: + type: noclientcert + certfiles: + +############################################################################# +# The CA section contains information related to the Certificate Authority +# including the name of the CA, which should be unique for all members +# of a blockchain network. It also includes the key and certificate files +# used when issuing enrollment certificates (ECerts). +# The chainfile (if it exists) contains the certificate chain which +# should be trusted for this CA, where the 1st in the chain is always the +# root CA certificate. 
+############################################################################# +ca: + # Name of this CA + name: + # Key file (is only used to import a private key into BCCSP) + keyfile: + # Certificate file (default: ca-cert.pem) + certfile: + # Chain file + chainfile: + # Ignore Certificate Expiration in the case of re-enroll + reenrollIgnoreCertExpiry: false + +############################################################################# +# The gencrl REST endpoint is used to generate a CRL that contains revoked +# certificates. This section contains configuration options that are used +# during gencrl request processing. +############################################################################# +crl: + # Specifies expiration for the generated CRL. The number of hours + # specified by this property is added to the UTC time, the resulting time + # is used to set the 'Next Update' date of the CRL. + expiry: 24h + +############################################################################# +# The registry section controls how the fabric-ca-server does two things: +# 1) authenticates enrollment requests which contain a username and password +# (also known as an enrollment ID and secret). +# 2) once authenticated, retrieves the identity's attribute names and values. +# These attributes are useful for making access control decisions in +# chaincode. +# There are two main configuration options: +# 1) The fabric-ca-server is the registry. +# This is true if "ldap.enabled" in the ldap section below is false. +# 2) An LDAP server is the registry, in which case the fabric-ca-server +# calls the LDAP server to perform these tasks. +# This is true if "ldap.enabled" in the ldap section below is true, +# which means this "registry" section is ignored. 
+############################################################################# +registry: + # Maximum number of times a password/secret can be reused for enrollment + # (default: -1, which means there is no limit) + maxenrollments: -1 + + # Contains identity information which is used when LDAP is disabled + identities: + - name: org2-ca-tls-admin + pass: org2-ca-tls-adminpw + type: client + affiliation: "" + attrs: + hf.Registrar.Roles: "*" + hf.Registrar.DelegateRoles: "*" + hf.Revoker: true + hf.IntermediateCA: true + hf.GenCRL: true + hf.Registrar.Attributes: "*" + hf.AffiliationMgr: true + +############################################################################# +# Database section +# Supported types are: "sqlite3", "postgres", and "mysql". +# The datasource value depends on the type. +# If the type is "sqlite3", the datasource value is a file name to use +# as the database store. Since "sqlite3" is an embedded database, it +# may not be used if you want to run the fabric-ca-server in a cluster. +# To run the fabric-ca-server in a cluster, you must choose "postgres" +# or "mysql". +############################################################################# +db: + type: sqlite3 + datasource: fabric-ca-server.db + tls: + enabled: false + certfiles: + client: + certfile: + keyfile: +############################################################################# +# LDAP section +# If LDAP is enabled, the fabric-ca-server calls LDAP to: +# 1) authenticate enrollment ID and secret (i.e. username and password) +# for enrollment requests; +# 2) To retrieve identity attributes +############################################################################# +ldap: + # Enables or disables the LDAP client (default: false) + # If this is set to true, the "registry" section is ignored. 
+  enabled: true
+  # The URL of the LDAP server
+  url: ldap://uid=org2-ca-identities-admin,ou=ldap,ou=identities,ou=hlfabric,dc=org2,dc=org:org2-ca-identities-adminpw@org2-ca-openldap:1389/dc=org2,dc=org
+  userfilter: (uid=%s)
+  # TLS configuration for the client connection to the LDAP server
+  tls:
+    certfiles:
+    client:
+      certfile:
+      keyfile:
+  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
+  attribute:
+    # 'names' is an array of strings containing the LDAP attribute names which are
+    # requested from the LDAP server for an LDAP identity's entry
+    names:
+      [
+        "uid",
+        "cn",
+        "hfType",
+        "hfRegistrarRoles",
+        "hfRegistrarDelegateRoles",
+        "hfAffiliation",
+        "hfRevoker",
+        "hfGenCRL",
+        "hfAffiliationMgr",
+        "hfIntermediateCA",
+        "hfCustomField1",
+        "hfCustomField2",
+      ]
+    # The 'converters' section is used to convert an LDAP entry to the value of
+    # a fabric CA attribute.
+    # For example, the following converts an LDAP 'uid' attribute
+    # whose value begins with 'revoker' to a fabric CA attribute
+    # named "hf.Revoker" with a value of "true" (because the boolean expression
+    # evaluates to true).
+    #    converters:
+    #       - name: hf.Revoker
+    #         value: attr("uid") =~ "revoker*"
+    converters:
+      - name: hf.EnrollmentID
+        value: attr("uid")
+      - name: hf.Type
+        value: attr("hfType")
+      - name: hf.Affiliation
+        value: 'attr("hfAffiliation") == "empty" ?
"" : attr("hfAffiliation")'
+      - name: hf.Registrar.Roles
+        value: attr("hfRegistrarRoles")
+      - name: hf.Registrar.DelegateRoles
+        value: attr("hfRegistrarDelegateRoles")
+      - name: hf.Revoker
+        value: attr("hfRevoker") == "TRUE"
+      - name: hf.GenCRL
+        value: attr("hfGenCRL") == "TRUE"
+      - name: hf.AffiliationMgr
+        value: attr("hfAffiliationMgr") == "TRUE"
+      - name: hf.IntermediateCA
+        value: attr("hfIntermediateCA") == "TRUE"
+      - name: hf.CustomField1
+        value: attr("hfCustomField1")
+      - name: hf.CustomField2
+        value: attr("hfCustomField2")
+    # The 'maps' section contains named maps which may be referenced by the 'map'
+    # function in the 'converters' section to map LDAP responses to arbitrary values.
+    # For example, assume a user has an LDAP attribute named 'member' which has multiple
+    # values which are each a distinguished name (i.e. a DN). For simplicity, assume the
+    # values of the 'member' attribute are 'dn1', 'dn2', and 'dn3'.
+    # Further assume the following configuration.
+    #    converters:
+    #       - name: hf.Registrar.Roles
+    #         value: map(attr("member"),"groups")
+    #    maps:
+    #       groups:
+    #          - name: dn1
+    #            value: peer
+    #          - name: dn2
+    #            value: client
+    # The value of the user's 'hf.Registrar.Roles' attribute is then computed to be
+    # "peer,client,dn3". This is because the value of 'attr("member")' is
+    # "dn1,dn2,dn3", and the call to 'map' with a 2nd argument of
+    # "groups" replaces "dn1" with "peer" and "dn2" with "client".
+    maps:
+      groups:
+        - name:
+          value:
+
+#############################################################################
+# Affiliations section. Fabric CA server can be bootstrapped with the
+# affiliations specified in this section. Affiliations are specified as maps.
+# For example:
+#   businessunit1:
+#     department1:
+#       - team1
+#   businessunit2:
+#     - department2
+#     - department3
+#
+# Affiliations are hierarchical in nature. In the above example,
+# department1 (used as businessunit1.department1) is the child of businessunit1.
+# team1 (used as businessunit1.department1.team1) is the child of department1. +# department2 (used as businessunit2.department2) and department3 (businessunit2.department3) +# are children of businessunit2. +# Note: Affiliations are case sensitive except for the non-leaf affiliations +# (like businessunit1, department1, businessunit2) that are specified in the configuration file, +# which are always stored in lower case. +############################################################################# +affiliations: + org2: + - department1 + - department2 + org3: + - department1 + +############################################################################# +# Signing section +# +# The "default" subsection is used to sign enrollment certificates; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +# +# The "ca" profile subsection is used to sign intermediate CA certificates; +# the default expiration ("expiry" field) is "43800h" which is 5 years in hours. +# Note that "isca" is true, meaning that it issues a CA certificate. +# A maxpathlen of 0 means that the intermediate CA cannot issue other +# intermediate CA certificates, though it can still issue end entity certificates. +# (See RFC 5280, section 4.2.1.9) +# +# The "tls" profile subsection is used to sign TLS certificate requests; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +############################################################################# +signing: + default: + usage: + - digital signature + expiry: 8760h + profiles: + ca: + usage: + - cert sign + - crl sign + expiry: 43800h + caconstraint: + isca: true + maxpathlen: 0 + tls: + usage: + - signing + - key encipherment + - server auth + - client auth + - key agreement + expiry: 8760h + +########################################################################### +# Certificate Signing Request (CSR) section. +# This controls the creation of the root CA certificate. 
+# The expiration for the root CA certificate is configured with the
+# "ca.expiry" field below, whose default value is "131400h" which is
+# 15 years in hours.
+# The pathlength field is used to limit CA certificate hierarchy as described
+# in section 4.2.1.9 of RFC 5280.
+# Examples:
+# 1) No pathlength value means no limit is requested.
+# 2) pathlength == 1 means a limit of 1 is requested, which is the default for
+#    a root CA. This means the root CA can issue intermediate CA certificates,
+#    but these intermediate CAs may not in turn issue other CA certificates,
+#    though they can still issue end entity certificates.
+# 3) pathlength == 0 means a limit of 0 is requested;
+#    this is the default for an intermediate CA, which means it cannot issue
+#    CA certificates though it can still issue end entity certificates.
+# The "hosts" field will be used to specify Subject Alternative Names
+# if the server creates a self-signed TLS certificate.
+###########################################################################
+csr:
+  cn: fabric-ca-server
+  keyrequest:
+    algo: ecdsa
+    size: 256
+  # names:
+  #   - C: US
+  #     ST: "North Carolina"
+  #     L:
+  #     O: Hyperledger
+  #     OU: Fabric
+  hosts:
+    - a4b94aae84ff
+    - localhost
+  ca:
+    expiry: 131400h
+    pathlength: 1
+
+###########################################################################
+# Each CA can issue both X509 enrollment certificates and Idemix
+# credentials. This section specifies configuration for the issuer component
+# that is responsible for issuing Idemix credentials.
+###########################################################################
+idemix:
+  # Specifies the pool size for revocation handles. A revocation handle is a unique identifier of an
+  # Idemix credential. The issuer will create a pool of revocation handles of the specified size. When
+  # a credential is requested, the issuer will get a handle from the pool and assign it to the credential.
+  # Issuer will repopulate the pool with new handles when the last handle in the pool is used.
+  # A revocation handle and credential revocation information (CRI) are used to create a non-revocation proof
+  # by the prover to prove to the verifier that her credential is not revoked.
+  rhpoolsize: 1000
+
+  # The Idemix credential issuance is a two-step process. The first step is to get a nonce from the issuer,
+  # and the second step is to send a credential request, constructed using the nonce, to the issuer to
+  # request a credential. This configuration property specifies the expiration for the nonces. By default,
+  # nonces expire after 15 seconds. The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration).
+  nonceexpiration: 15s
+
+  # Specifies the interval at which expired nonces are removed from the datastore. Default value is 15 minutes.
+  # The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration).
+  noncesweepinterval: 15m
+
+  # Specifies the Elliptic Curve used by Identity Mixer.
+  # It can be any of: {"amcl.Fp256bn", "gurvy.Bn254", "amcl.Fp256Miraclbn"}.
+  # If unspecified, it defaults to 'amcl.Fp256bn'.
+  curve: amcl.Fp256bn
+
+#############################################################################
+# BCCSP (BlockChain Crypto Service Provider) section is used to select which
+# crypto library implementation to use
+#############################################################################
+bccsp:
+  default: SW
+  sw:
+    hash: SHA2
+    security: 256
+    filekeystore:
+      # The directory used for the software file-based keystore
+      keystore: msp/keystore
+
+#############################################################################
+# Multi CA section
+#
+# Each Fabric CA server contains one CA by default. This section is used
+# to configure multiple CAs in a single server.
+#
+# 1) --cacount
+# Automatically generate non-default CAs. The names of these
+# additional CAs are "ca1", "ca2", ...
"caN", where "N" is the value of --cacount.
+# This is particularly useful in a development environment to quickly set up
+# multiple CAs. Note that this config option is not applicable to an intermediate CA server,
+# i.e., a Fabric CA server that is started with the intermediate.parentserver.url config
+# option (-u command line option).
+#
+# 2) --cafiles
+# For each CA config file in the list, generate a separate signing CA. Each CA
+# config file in this list may contain all of the same elements as are found in
+# the server config file except the port, debug, and tls sections.
+#
+# Examples:
+# fabric-ca-server start -b admin:adminpw --cacount 2
+#
+# fabric-ca-server start -b admin:adminpw --cafiles ca/ca1/fabric-ca-server-config.yaml
+# --cafiles ca/ca2/fabric-ca-server-config.yaml
+#
+#############################################################################
+
+cacount:
+
+cafiles:
+
+#############################################################################
+# Intermediate CA section
+#
+# The relationship between servers and CAs is as follows:
+# 1) A single server process may contain or function as one or more CAs.
+#    This is configured by the "Multi CA section" above.
+# 2) Each CA is either a root CA or an intermediate CA.
+# 3) Each intermediate CA has a parent CA which is either a root CA or another intermediate CA.
+#
+# This section pertains to configuration of #2 and #3.
+# If the "intermediate.parentserver.url" property is set,
+# then this is an intermediate CA with the specified parent
+# CA.
+#
+# parentserver section
+#    url - The URL of the parent server
+#    caname - Name of the CA to enroll within the server
+#
+# enrollment section used to enroll intermediate CA with parent CA
+#    profile - Name of the signing profile to use in issuing the certificate
+#    label - Label to use in HSM operations
+#
+# tls section for secure socket connection
+#   certfiles - PEM-encoded list of trusted root certificate files
+#   client:
+#     certfile - PEM-encoded certificate file for when client authentication
+#     is enabled on server
+#     keyfile - PEM-encoded key file for when client authentication
+#     is enabled on server
+#############################################################################
+intermediate:
+  parentserver:
+    url:
+    caname:
+
+  enrollment:
+    hosts:
+    profile:
+    label:
+
+  tls:
+    certfiles:
+    client:
+      certfile:
+      keyfile:
+
+#############################################################################
+# CA configuration section
+#
+# Configures the number of incorrect password attempts that are allowed for
+# identities. By default, the value of 'passwordattempts' is 10, which
+# means that 10 incorrect password attempts can be made before an identity gets
+# locked out.
+############################################################################# +cfg: + identities: + passwordattempts: 10 + +############################################################################### +# +# Operations section +# +############################################################################### +operations: + # host and port for the operations server + listenAddress: 127.0.0.1:9443 + + # TLS configuration for the operations endpoint + tls: + # TLS enabled + enabled: false + + # path to PEM encoded server certificate for the operations server + cert: + file: + + # path to PEM encoded server key for the operations server + key: + file: + + # require client certificate authentication to access all resources + clientAuthRequired: false + + # paths to PEM encoded ca certificates to trust for client authentication + clientRootCAs: + files: [] + +############################################################################### +# +# Metrics section +# +############################################################################### +metrics: + # statsd, prometheus, or disabled + provider: disabled + + # statsd configuration + statsd: + # network type: tcp or udp + network: udp + + # statsd server address + address: 127.0.0.1:8125 + + # the interval at which locally cached counters and gauges are pushed + # to statsd; timings are pushed immediately + writeInterval: 10s + + # prefix is prepended to all emitted statsd metrics + prefix: server diff --git a/topologies/t4/config/fabric-ca/org2/tls/fabric-ca-server-config.yaml b/topologies/t4/config/fabric-ca/org2/tls/fabric-ca-server-config.yaml new file mode 100644 index 0000000..6edecb9 --- /dev/null +++ b/topologies/t4/config/fabric-ca/org2/tls/fabric-ca-server-config.yaml @@ -0,0 +1,544 @@ +############################################################################# +# This is a configuration file for the fabric-ca-server command. 
+#
+# COMMAND LINE ARGUMENTS AND ENVIRONMENT VARIABLES
+# ------------------------------------------------
+# Each configuration element can be overridden via command line
+# arguments or environment variables. The precedence for determining
+# the value of each element is as follows:
+# 1) command line argument
+#    Examples:
+#    a) --port 443
+#       To set the listening port
+#    b) --ca.keyfile ../mykey.pem
+#       To set the "keyfile" element in the "ca" section below;
+#       note the '.' separator character.
+# 2) environment variable
+#    Examples:
+#    a) FABRIC_CA_SERVER_PORT=443
+#       To set the listening port
+#    b) FABRIC_CA_SERVER_CA_KEYFILE="../mykey.pem"
+#       To set the "keyfile" element in the "ca" section below;
+#       note the '_' separator character.
+# 3) configuration file
+# 4) default value (if there is one)
+#    All default values are shown beside each element below.
+#
+# FILE NAME ELEMENTS
+# ------------------
+# The value of any field whose name ends with "file" or "files" is the
+# name or names of other files.
+# For example, see "tls.certfile" and "tls.clientauth.certfiles".
+# The value of each of these fields can be a simple filename, a
+# relative path, or an absolute path. If the value is not an
+# absolute path, it is interpreted as being relative to the location
+# of this configuration file.
+# +############################################################################# + +# Version of config file +version: 1.5.5 + +# Server's listening port (default: 7054) +port: 7054 + +# Cross-Origin Resource Sharing (CORS) +cors: + enabled: false + origins: + - "*" + +# Enables debug logging (default: false) +debug: false + +# Size limit of an acceptable CRL in bytes (default: 512000) +crlsizelimit: 512000 + +############################################################################# +# TLS section for the server's listening port +# +# The following types are supported for client authentication: NoClientCert, +# RequestClientCert, RequireAnyClientCert, VerifyClientCertIfGiven, +# and RequireAndVerifyClientCert. +# +# Certfiles is a list of root certificate authorities that the server uses +# when verifying client certificates. +############################################################################# +tls: + # Enable TLS (default: false) + enabled: false + # TLS for the server's listening port + certfile: + keyfile: + clientauth: + type: noclientcert + certfiles: + +############################################################################# +# The CA section contains information related to the Certificate Authority +# including the name of the CA, which should be unique for all members +# of a blockchain network. It also includes the key and certificate files +# used when issuing enrollment certificates (ECerts). +# The chainfile (if it exists) contains the certificate chain which +# should be trusted for this CA, where the 1st in the chain is always the +# root CA certificate. 
+############################################################################# +ca: + # Name of this CA + name: + # Key file (is only used to import a private key into BCCSP) + keyfile: + # Certificate file (default: ca-cert.pem) + certfile: + # Chain file + chainfile: + # Ignore Certificate Expiration in the case of re-enroll + reenrollIgnoreCertExpiry: false + +############################################################################# +# The gencrl REST endpoint is used to generate a CRL that contains revoked +# certificates. This section contains configuration options that are used +# during gencrl request processing. +############################################################################# +crl: + # Specifies expiration for the generated CRL. The number of hours + # specified by this property is added to the UTC time, the resulting time + # is used to set the 'Next Update' date of the CRL. + expiry: 24h + +############################################################################# +# The registry section controls how the fabric-ca-server does two things: +# 1) authenticates enrollment requests which contain a username and password +# (also known as an enrollment ID and secret). +# 2) once authenticated, retrieves the identity's attribute names and values. +# These attributes are useful for making access control decisions in +# chaincode. +# There are two main configuration options: +# 1) The fabric-ca-server is the registry. +# This is true if "ldap.enabled" in the ldap section below is false. +# 2) An LDAP server is the registry, in which case the fabric-ca-server +# calls the LDAP server to perform these tasks. +# This is true if "ldap.enabled" in the ldap section below is true, +# which means this "registry" section is ignored. 
+############################################################################# +registry: + # Maximum number of times a password/secret can be reused for enrollment + # (default: -1, which means there is no limit) + maxenrollments: -1 + + # Contains identity information which is used when LDAP is disabled + identities: + - name: org2-ca-tls-admin + pass: org2-ca-tls-adminpw + type: client + affiliation: "" + attrs: + hf.Registrar.Roles: "*" + hf.Registrar.DelegateRoles: "*" + hf.Revoker: true + hf.IntermediateCA: true + hf.GenCRL: true + hf.Registrar.Attributes: "*" + hf.AffiliationMgr: true + +############################################################################# +# Database section +# Supported types are: "sqlite3", "postgres", and "mysql". +# The datasource value depends on the type. +# If the type is "sqlite3", the datasource value is a file name to use +# as the database store. Since "sqlite3" is an embedded database, it +# may not be used if you want to run the fabric-ca-server in a cluster. +# To run the fabric-ca-server in a cluster, you must choose "postgres" +# or "mysql". +############################################################################# +db: + type: sqlite3 + datasource: fabric-ca-server.db + tls: + enabled: false + certfiles: + client: + certfile: + keyfile: +############################################################################# +# LDAP section +# If LDAP is enabled, the fabric-ca-server calls LDAP to: +# 1) authenticate enrollment ID and secret (i.e. username and password) +# for enrollment requests; +# 2) To retrieve identity attributes +############################################################################# +ldap: + # Enables or disables the LDAP client (default: false) + # If this is set to true, the "registry" section is ignored. 
+  enabled: true
+  # The URL of the LDAP server
+  url: ldap://uid=org2-ca-tls-admin,ou=ldap,ou=tls,ou=hlfabric,dc=org2,dc=org:org2-ca-tls-adminpw@org2-ca-openldap:1389/dc=org2,dc=org
+  userfilter: (uid=%s)
+  # TLS configuration for the client connection to the LDAP server
+  tls:
+    certfiles:
+    client:
+      certfile:
+      keyfile:
+  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
+  attribute:
+    # 'names' is an array of strings containing the LDAP attribute names which are
+    # requested from the LDAP server for an LDAP identity's entry
+    names:
+      [
+        "uid",
+        "cn",
+        "hfType",
+        "hfRegistrarRoles",
+        "hfRegistrarDelegateRoles",
+        "hfAffiliation",
+        "hfRevoker",
+        "hfGenCRL",
+        "hfAffiliationMgr",
+        "hfIntermediateCA",
+        "hfCustomField1",
+        "hfCustomField2",
+      ]
+    # The 'converters' section is used to convert an LDAP entry to the value of
+    # a fabric CA attribute.
+    # For example, the following converts an LDAP 'uid' attribute
+    # whose value begins with 'revoker' to a fabric CA attribute
+    # named "hf.Revoker" with a value of "true" (because the boolean expression
+    # evaluates to true).
+    #    converters:
+    #       - name: hf.Revoker
+    #         value: attr("uid") =~ "revoker*"
+    converters:
+      - name: hf.EnrollmentID
+        value: attr("uid")
+      - name: hf.Type
+        value: attr("hfType")
+      - name: hf.Affiliation
+        value: 'attr("hfAffiliation") == "empty" ?
"" : attr("hfAffiliation")' + - name: hf.Registrar.Roles + value: attr("hfRegistrarRoles") + - name: hf.Registrar.DelegateRoles + value: attr("hfRegistrarDelegateRoles") + - name: hf.Revoker + value: attr("hfRevoker") == "TRUE" + - name: hf.GenCRL + value: attr("hfGenCRL") == "TRUE" + - name: hf.AffiliationMgr + value: attr("hfAffiliationMgr") == "TRUE" + - name: hf.IntermediateCA + value: attr("hfIntermediateCA") == "TRUE" + - name: hf.CustomField1 + value: attr("hfCustomField1") + - name: hf.CustomField2 + value: attr("hfCustomField2") + # The 'maps' section contains named maps which may be referenced by the 'map' + # function in the 'converters' section to map LDAP responses to arbitrary values. + # For example, assume a user has an LDAP attribute named 'member' which has multiple + # values which are each a distinguished name (i.e. a DN). For simplicity, assume the + # values of the 'member' attribute are 'dn1', 'dn2', and 'dn3'. + # Further assume the following configuration. + # converters: + # - name: hf.Registrar.Roles + # value: map(attr("member"),"groups") + # maps: + # groups: + # - name: dn1 + # value: peer + # - name: dn2 + # value: client + # The value of the user's 'hf.Registrar.Roles' attribute is then computed to be + # "peer,client,dn3". This is because the value of 'attr("member")' is + # "dn1,dn2,dn3", and the call to 'map' with a 2nd argument of + # "group" replaces "dn1" with "peer" and "dn2" with "client". + maps: + groups: + - name: + value: + +############################################################################# +# Affiliations section. Fabric CA server can be bootstrapped with the +# affiliations specified in this section. Affiliations are specified as maps. +# For example: +# businessunit1: +# department1: +# - team1 +# businessunit2: +# - department2 +# - department3 +# +# Affiliations are hierarchical in nature. In the above example, +# department1 (used as businessunit1.department1) is the child of businessunit1. 
+# team1 (used as businessunit1.department1.team1) is the child of department1. +# department2 (used as businessunit2.department2) and department3 (businessunit2.department3) +# are children of businessunit2. +# Note: Affiliations are case sensitive except for the non-leaf affiliations +# (like businessunit1, department1, businessunit2) that are specified in the configuration file, +# which are always stored in lower case. +############################################################################# +affiliations: + org2: + - department1 + - department2 + org3: + - department1 + +############################################################################# +# Signing section +# +# The "default" subsection is used to sign enrollment certificates; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +# +# The "ca" profile subsection is used to sign intermediate CA certificates; +# the default expiration ("expiry" field) is "43800h" which is 5 years in hours. +# Note that "isca" is true, meaning that it issues a CA certificate. +# A maxpathlen of 0 means that the intermediate CA cannot issue other +# intermediate CA certificates, though it can still issue end entity certificates. +# (See RFC 5280, section 4.2.1.9) +# +# The "tls" profile subsection is used to sign TLS certificate requests; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +############################################################################# +signing: + default: + usage: + - digital signature + expiry: 8760h + profiles: + ca: + usage: + - cert sign + - crl sign + expiry: 43800h + caconstraint: + isca: true + maxpathlen: 0 + tls: + usage: + - signing + - key encipherment + - server auth + - client auth + - key agreement + expiry: 8760h + +########################################################################### +# Certificate Signing Request (CSR) section. +# This controls the creation of the root CA certificate. 
+# The expiration for the root CA certificate is configured with the +# "ca.expiry" field below, whose default value is "131400h" which is +# 15 years in hours. +# The pathlength field is used to limit CA certificate hierarchy as described +# in section 4.2.1.9 of RFC 5280. +# Examples: +# 1) No pathlength value means no limit is requested. +# 2) pathlength == 1 means a limit of 1 is requested, which is the default for +# a root CA. This means the root CA can issue intermediate CA certificates, +# but these intermediate CAs may not in turn issue other CA certificates +# though they can still issue end entity certificates. +# 3) pathlength == 0 means a limit of 0 is requested; +# this is the default for an intermediate CA, which means it cannot issue +# CA certificates though it can still issue end entity certificates. +# The "hosts" field will be used to specify Subject Alternative Names +# if the server creates a self-signed TLS certificate. +########################################################################### +csr: + cn: fabric-ca-server + keyrequest: + algo: ecdsa + size: 256 + # names: + # - C: US + # ST: "North Carolina" + # L: + # O: Hyperledger + # OU: Fabric + hosts: + - a4b94aae84ff + - localhost + ca: + expiry: 131400h + pathlength: 1 + +########################################################################### +# Each CA can issue both X509 enrollment certificates as well as Idemix +# credentials. This section specifies configuration for the issuer component +# that is responsible for issuing Idemix credentials. +########################################################################### +idemix: + # Specifies pool size for revocation handles. A revocation handle is a unique identifier of an + # Idemix credential. The issuer will create a pool of revocation handles of this specified size. When + # a credential is requested, the issuer will get a handle from the pool and assign it to the credential.
+ # The issuer will repopulate the pool with new handles when the last handle in the pool is used. + # A revocation handle and credential revocation information (CRI) are used to create a non-revocation proof + # by the prover to prove to the verifier that her credential is not revoked. + rhpoolsize: 1000 + + # The Idemix credential issuance is a two-step process. The first step is to get a nonce from the issuer, + # and the second step is to send a credential request, constructed using the nonce, to the issuer to + # request a credential. This configuration property specifies the expiration for the nonces. By default, + # nonces expire after 15 seconds. The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration). + nonceexpiration: 15s + + # Specifies the interval at which expired nonces are removed from the datastore. Default value is 15 minutes. + # The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration) + noncesweepinterval: 15m + + # Specifies the Elliptic Curve used by Identity Mixer. + # It can be any of: {"amcl.Fp256bn", "gurvy.Bn254", "amcl.Fp256Miraclbn"}. + # If unspecified, it defaults to 'amcl.Fp256bn'. + curve: amcl.Fp256bn + +############################################################################# +# BCCSP (BlockChain Crypto Service Provider) section is used to select which +# crypto library implementation to use +############################################################################# +bccsp: + default: SW + sw: + hash: SHA2 + security: 256 + filekeystore: + # The directory used for the software file-based keystore + keystore: msp/keystore + +############################################################################# +# Multi CA section +# +# Each Fabric CA server contains one CA by default. This section is used +# to configure multiple CAs in a single server. +# +# 1) --cacount <number> +# Automatically generate <number> non-default CAs. The names of these +# additional CAs are "ca1", "ca2", ...
"caN", where "N" is +# This is particularly useful in a development environment to quickly set up +# multiple CAs. Note that, this config option is not applicable to intermediate CA server +# i.e., Fabric CA server that is started with intermediate.parentserver.url config +# option (-u command line option) +# +# 2) --cafiles +# For each CA config file in the list, generate a separate signing CA. Each CA +# config file in this list MAY contain all of the same elements as are found in +# the server config file except port, debug, and tls sections. +# +# Examples: +# fabric-ca-server start -b admin:adminpw --cacount 2 +# +# fabric-ca-server start -b admin:adminpw --cafiles ca/ca1/fabric-ca-server-config.yaml +# --cafiles ca/ca2/fabric-ca-server-config.yaml +# +############################################################################# + +cacount: + +cafiles: + +############################################################################# +# Intermediate CA section +# +# The relationship between servers and CAs is as follows: +# 1) A single server process may contain or function as one or more CAs. +# This is configured by the "Multi CA section" above. +# 2) Each CA is either a root CA or an intermediate CA. +# 3) Each intermediate CA has a parent CA which is either a root CA or another intermediate CA. +# +# This section pertains to configuration of #2 and #3. +# If the "intermediate.parentserver.url" property is set, +# then this is an intermediate CA with the specified parent +# CA. 
+# +# parentserver section +# url - The URL of the parent server +# caname - Name of the CA to enroll within the server +# +# enrollment section used to enroll intermediate CA with parent CA +# profile - Name of the signing profile to use in issuing the certificate +# label - Label to use in HSM operations +# +# tls section for secure socket connection +# certfiles - PEM-encoded list of trusted root certificate files +# client: +# certfile - PEM-encoded certificate file for when client authentication +# is enabled on server +# keyfile - PEM-encoded key file for when client authentication +# is enabled on server +############################################################################# +intermediate: + parentserver: + url: + caname: + + enrollment: + hosts: + profile: + label: + + tls: + certfiles: + client: + certfile: + keyfile: + +############################################################################# +# CA configuration section +# +# Configures the number of incorrect password attempts that are allowed for +# identities. By default, the value of 'passwordattempts' is 10, which +# means that 10 incorrect password attempts can be made before an identity gets +# locked out.
+############################################################################# +cfg: + identities: + passwordattempts: 10 + +############################################################################### +# +# Operations section +# +############################################################################### +operations: + # host and port for the operations server + listenAddress: 127.0.0.1:9443 + + # TLS configuration for the operations endpoint + tls: + # TLS enabled + enabled: false + + # path to PEM encoded server certificate for the operations server + cert: + file: + + # path to PEM encoded server key for the operations server + key: + file: + + # require client certificate authentication to access all resources + clientAuthRequired: false + + # paths to PEM encoded ca certificates to trust for client authentication + clientRootCAs: + files: [] + +############################################################################### +# +# Metrics section +# +############################################################################### +metrics: + # statsd, prometheus, or disabled + provider: disabled + + # statsd configuration + statsd: + # network type: tcp or udp + network: udp + + # statsd server address + address: 127.0.0.1:8125 + + # the interval at which locally cached counters and gauges are pushed + # to statsd; timings are pushed immediately + writeInterval: 10s + + # prefix is prepended to all emitted statsd metrics + prefix: server diff --git a/topologies/t4/config/fabric-ca/org3/identities/fabric-ca-server-config.yaml b/topologies/t4/config/fabric-ca/org3/identities/fabric-ca-server-config.yaml new file mode 100644 index 0000000..20c4336 --- /dev/null +++ b/topologies/t4/config/fabric-ca/org3/identities/fabric-ca-server-config.yaml @@ -0,0 +1,544 @@ +############################################################################# +# This is a configuration file for the fabric-ca-server command. 
+# +# COMMAND LINE ARGUMENTS AND ENVIRONMENT VARIABLES +# ------------------------------------------------ +# Each configuration element can be overridden via command line +# arguments or environment variables. The precedence for determining +# the value of each element is as follows: +# 1) command line argument +# Examples: +# a) --port 443 +# To set the listening port +# b) --ca.keyfile ../mykey.pem +# To set the "keyfile" element in the "ca" section below; +# note the '.' separator character. +# 2) environment variable +# Examples: +# a) FABRIC_CA_SERVER_PORT=443 +# To set the listening port +# b) FABRIC_CA_SERVER_CA_KEYFILE="../mykey.pem" +# To set the "keyfile" element in the "ca" section below; +# note the '_' separator character. +# 3) configuration file +# 4) default value (if there is one) +# All default values are shown beside each element below. +# +# FILE NAME ELEMENTS +# ------------------ +# The value of all fields whose name ends with "file" or "files" are +# name or names of other files. +# For example, see "tls.certfile" and "tls.clientauth.certfiles". +# The value of each of these fields can be a simple filename, a +# relative path, or an absolute path. If the value is not an +# absolute path, it is interpreted as being relative to the location +# of this configuration file. 
+# +############################################################################# + +# Version of config file +version: 1.5.5 + +# Server's listening port (default: 7054) +port: 7054 + +# Cross-Origin Resource Sharing (CORS) +cors: + enabled: false + origins: + - "*" + +# Enables debug logging (default: false) +debug: false + +# Size limit of an acceptable CRL in bytes (default: 512000) +crlsizelimit: 512000 + +############################################################################# +# TLS section for the server's listening port +# +# The following types are supported for client authentication: NoClientCert, +# RequestClientCert, RequireAnyClientCert, VerifyClientCertIfGiven, +# and RequireAndVerifyClientCert. +# +# Certfiles is a list of root certificate authorities that the server uses +# when verifying client certificates. +############################################################################# +tls: + # Enable TLS (default: false) + enabled: false + # TLS for the server's listening port + certfile: + keyfile: + clientauth: + type: noclientcert + certfiles: + +############################################################################# +# The CA section contains information related to the Certificate Authority +# including the name of the CA, which should be unique for all members +# of a blockchain network. It also includes the key and certificate files +# used when issuing enrollment certificates (ECerts). +# The chainfile (if it exists) contains the certificate chain which +# should be trusted for this CA, where the 1st in the chain is always the +# root CA certificate. 
+############################################################################# +ca: + # Name of this CA + name: + # Key file (is only used to import a private key into BCCSP) + keyfile: + # Certificate file (default: ca-cert.pem) + certfile: + # Chain file + chainfile: + # Ignore Certificate Expiration in the case of re-enroll + reenrollIgnoreCertExpiry: false + +############################################################################# +# The gencrl REST endpoint is used to generate a CRL that contains revoked +# certificates. This section contains configuration options that are used +# during gencrl request processing. +############################################################################# +crl: + # Specifies expiration for the generated CRL. The number of hours + # specified by this property is added to the UTC time, the resulting time + # is used to set the 'Next Update' date of the CRL. + expiry: 24h + +############################################################################# +# The registry section controls how the fabric-ca-server does two things: +# 1) authenticates enrollment requests which contain a username and password +# (also known as an enrollment ID and secret). +# 2) once authenticated, retrieves the identity's attribute names and values. +# These attributes are useful for making access control decisions in +# chaincode. +# There are two main configuration options: +# 1) The fabric-ca-server is the registry. +# This is true if "ldap.enabled" in the ldap section below is false. +# 2) An LDAP server is the registry, in which case the fabric-ca-server +# calls the LDAP server to perform these tasks. +# This is true if "ldap.enabled" in the ldap section below is true, +# which means this "registry" section is ignored. 
+############################################################################# +registry: + # Maximum number of times a password/secret can be reused for enrollment + # (default: -1, which means there is no limit) + maxenrollments: -1 + + # Contains identity information which is used when LDAP is disabled + identities: + - name: org3-ca-tls-admin + pass: org3-ca-tls-adminpw + type: client + affiliation: "" + attrs: + hf.Registrar.Roles: "*" + hf.Registrar.DelegateRoles: "*" + hf.Revoker: true + hf.IntermediateCA: true + hf.GenCRL: true + hf.Registrar.Attributes: "*" + hf.AffiliationMgr: true + +############################################################################# +# Database section +# Supported types are: "sqlite3", "postgres", and "mysql". +# The datasource value depends on the type. +# If the type is "sqlite3", the datasource value is a file name to use +# as the database store. Since "sqlite3" is an embedded database, it +# may not be used if you want to run the fabric-ca-server in a cluster. +# To run the fabric-ca-server in a cluster, you must choose "postgres" +# or "mysql". +############################################################################# +db: + type: sqlite3 + datasource: fabric-ca-server.db + tls: + enabled: false + certfiles: + client: + certfile: + keyfile: +############################################################################# +# LDAP section +# If LDAP is enabled, the fabric-ca-server calls LDAP to: +# 1) authenticate enrollment ID and secret (i.e. username and password) +# for enrollment requests; +# 2) To retrieve identity attributes +############################################################################# +ldap: + # Enables or disables the LDAP client (default: false) + # If this is set to true, the "registry" section is ignored. 
+ enabled: true + # The URL of the LDAP server + url: ldap://uid=org3-ca-identities-admin,ou=ldap,ou=identities,ou=hlfabric,dc=org3,dc=org:org3-ca-identities-adminpw@org3-ca-openldap:1389/dc=org3,dc=org + userfilter: (uid=%s) + # TLS configuration for the client connection to the LDAP server + tls: + certfiles: + client: + certfile: + keyfile: + # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes + attribute: + # 'names' is an array of strings containing the LDAP attribute names which are + # requested from the LDAP server for an LDAP identity's entry + names: + [ + "uid", + "cn", + "hfType", + "hfRegistrarRoles", + "hfRegistrarDelegateRoles", + "hfAffiliation", + "hf.Revoker", + "hfGenCRL", + "hfAffiliationMgr", + "hfIntermediateCA", + "hfCustomField1", + "hfCustomField2", + ] + # The 'converters' section is used to convert an LDAP entry to the value of + # a fabric CA attribute. + # For example, the following converts an LDAP 'uid' attribute + # whose value begins with 'revoker' to a fabric CA attribute + # named "hf.Revoker" with a value of "true" (because the boolean expression + # evaluates to true). + # converters: + # - name: hf.Revoker + # value: attr("uid") =~ "revoker*" + converters: + - name: hf.EnrollmentID + value: attr("uid") + - name: hf.Type + value: attr("hfType") + - name: hf.Affiliation + value: 'attr("hfAffiliation") == "empty" ? 
"" : attr("hfAffiliation")' + - name: hf.Registrar.Roles + value: attr("hfRegistrarRoles") + - name: hf.Registrar.DelegateRoles + value: attr("hfRegistrarDelegateRoles") + - name: hf.Revoker + value: attr("hfRevoker") == "TRUE" + - name: hf.GenCRL + value: attr("hfGenCRL") == "TRUE" + - name: hf.AffiliationMgr + value: attr("hfAffiliationMgr") == "TRUE" + - name: hf.IntermediateCA + value: attr("hfIntermediateCA") == "TRUE" + - name: hf.CustomField1 + value: attr("hfCustomField1") + - name: hf.CustomField2 + value: attr("hfCustomField2") + # The 'maps' section contains named maps which may be referenced by the 'map' + # function in the 'converters' section to map LDAP responses to arbitrary values. + # For example, assume a user has an LDAP attribute named 'member' which has multiple + # values which are each a distinguished name (i.e. a DN). For simplicity, assume the + # values of the 'member' attribute are 'dn1', 'dn2', and 'dn3'. + # Further assume the following configuration. + # converters: + # - name: hf.Registrar.Roles + # value: map(attr("member"),"groups") + # maps: + # groups: + # - name: dn1 + # value: peer + # - name: dn2 + # value: client + # The value of the user's 'hf.Registrar.Roles' attribute is then computed to be + # "peer,client,dn3". This is because the value of 'attr("member")' is + # "dn1,dn2,dn3", and the call to 'map' with a 2nd argument of + # "group" replaces "dn1" with "peer" and "dn2" with "client". + maps: + groups: + - name: + value: + +############################################################################# +# Affiliations section. Fabric CA server can be bootstrapped with the +# affiliations specified in this section. Affiliations are specified as maps. +# For example: +# businessunit1: +# department1: +# - team1 +# businessunit2: +# - department2 +# - department3 +# +# Affiliations are hierarchical in nature. In the above example, +# department1 (used as businessunit1.department1) is the child of businessunit1. 
+# team1 (used as businessunit1.department1.team1) is the child of department1. +# department2 (used as businessunit2.department2) and department3 (businessunit2.department3) +# are children of businessunit2. +# Note: Affiliations are case sensitive except for the non-leaf affiliations +# (like businessunit1, department1, businessunit2) that are specified in the configuration file, +# which are always stored in lower case. +############################################################################# +affiliations: + org2: + - department1 + - department2 + org3: + - department1 + +############################################################################# +# Signing section +# +# The "default" subsection is used to sign enrollment certificates; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +# +# The "ca" profile subsection is used to sign intermediate CA certificates; +# the default expiration ("expiry" field) is "43800h" which is 5 years in hours. +# Note that "isca" is true, meaning that it issues a CA certificate. +# A maxpathlen of 0 means that the intermediate CA cannot issue other +# intermediate CA certificates, though it can still issue end entity certificates. +# (See RFC 5280, section 4.2.1.9) +# +# The "tls" profile subsection is used to sign TLS certificate requests; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +############################################################################# +signing: + default: + usage: + - digital signature + expiry: 8760h + profiles: + ca: + usage: + - cert sign + - crl sign + expiry: 43800h + caconstraint: + isca: true + maxpathlen: 0 + tls: + usage: + - signing + - key encipherment + - server auth + - client auth + - key agreement + expiry: 8760h + +########################################################################### +# Certificate Signing Request (CSR) section. +# This controls the creation of the root CA certificate. 
+# The expiration for the root CA certificate is configured with the +# "ca.expiry" field below, whose default value is "131400h" which is +# 15 years in hours. +# The pathlength field is used to limit CA certificate hierarchy as described +# in section 4.2.1.9 of RFC 5280. +# Examples: +# 1) No pathlength value means no limit is requested. +# 2) pathlength == 1 means a limit of 1 is requested, which is the default for +# a root CA. This means the root CA can issue intermediate CA certificates, +# but these intermediate CAs may not in turn issue other CA certificates +# though they can still issue end entity certificates. +# 3) pathlength == 0 means a limit of 0 is requested; +# this is the default for an intermediate CA, which means it cannot issue +# CA certificates though it can still issue end entity certificates. +# The "hosts" field will be used to specify Subject Alternative Names +# if the server creates a self-signed TLS certificate. +########################################################################### +csr: + cn: fabric-ca-server + keyrequest: + algo: ecdsa + size: 256 + # names: + # - C: US + # ST: "North Carolina" + # L: + # O: Hyperledger + # OU: Fabric + hosts: + - a4b94aae84ff + - localhost + ca: + expiry: 131400h + pathlength: 1 + +########################################################################### +# Each CA can issue both X509 enrollment certificates as well as Idemix +# credentials. This section specifies configuration for the issuer component +# that is responsible for issuing Idemix credentials. +########################################################################### +idemix: + # Specifies pool size for revocation handles. A revocation handle is a unique identifier of an + # Idemix credential. The issuer will create a pool of revocation handles of this specified size. When + # a credential is requested, the issuer will get a handle from the pool and assign it to the credential.
+ # The issuer will repopulate the pool with new handles when the last handle in the pool is used. + # A revocation handle and credential revocation information (CRI) are used to create a non-revocation proof + # by the prover to prove to the verifier that her credential is not revoked. + rhpoolsize: 1000 + + # The Idemix credential issuance is a two-step process. The first step is to get a nonce from the issuer, + # and the second step is to send a credential request, constructed using the nonce, to the issuer to + # request a credential. This configuration property specifies the expiration for the nonces. By default, + # nonces expire after 15 seconds. The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration). + nonceexpiration: 15s + + # Specifies the interval at which expired nonces are removed from the datastore. Default value is 15 minutes. + # The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration) + noncesweepinterval: 15m + + # Specifies the Elliptic Curve used by Identity Mixer. + # It can be any of: {"amcl.Fp256bn", "gurvy.Bn254", "amcl.Fp256Miraclbn"}. + # If unspecified, it defaults to 'amcl.Fp256bn'. + curve: amcl.Fp256bn + +############################################################################# +# BCCSP (BlockChain Crypto Service Provider) section is used to select which +# crypto library implementation to use +############################################################################# +bccsp: + default: SW + sw: + hash: SHA2 + security: 256 + filekeystore: + # The directory used for the software file-based keystore + keystore: msp/keystore + +############################################################################# +# Multi CA section +# +# Each Fabric CA server contains one CA by default. This section is used +# to configure multiple CAs in a single server. +# +# 1) --cacount <number> +# Automatically generate <number> non-default CAs. The names of these +# additional CAs are "ca1", "ca2", ...
"caN", where "N" is +# This is particularly useful in a development environment to quickly set up +# multiple CAs. Note that, this config option is not applicable to intermediate CA server +# i.e., Fabric CA server that is started with intermediate.parentserver.url config +# option (-u command line option) +# +# 2) --cafiles +# For each CA config file in the list, generate a separate signing CA. Each CA +# config file in this list MAY contain all of the same elements as are found in +# the server config file except port, debug, and tls sections. +# +# Examples: +# fabric-ca-server start -b admin:adminpw --cacount 2 +# +# fabric-ca-server start -b admin:adminpw --cafiles ca/ca1/fabric-ca-server-config.yaml +# --cafiles ca/ca2/fabric-ca-server-config.yaml +# +############################################################################# + +cacount: + +cafiles: + +############################################################################# +# Intermediate CA section +# +# The relationship between servers and CAs is as follows: +# 1) A single server process may contain or function as one or more CAs. +# This is configured by the "Multi CA section" above. +# 2) Each CA is either a root CA or an intermediate CA. +# 3) Each intermediate CA has a parent CA which is either a root CA or another intermediate CA. +# +# This section pertains to configuration of #2 and #3. +# If the "intermediate.parentserver.url" property is set, +# then this is an intermediate CA with the specified parent +# CA. 
+# +# parentserver section +# url - The URL of the parent server +# caname - Name of the CA to enroll within the server +# +# enrollment section used to enroll intermediate CA with parent CA +# profile - Name of the signing profile to use in issuing the certificate +# label - Label to use in HSM operations +# +# tls section for secure socket connection +# certfiles - PEM-encoded list of trusted root certificate files +# client: +# certfile - PEM-encoded certificate file for when client authentication +# is enabled on server +# keyfile - PEM-encoded key file for when client authentication +# is enabled on server +############################################################################# +intermediate: + parentserver: + url: + caname: + + enrollment: + hosts: + profile: + label: + + tls: + certfiles: + client: + certfile: + keyfile: + +############################################################################# +# CA configuration section +# +# Configures the number of incorrect password attempts that are allowed for +# identities. By default, the value of 'passwordattempts' is 10, which +# means that 10 incorrect password attempts can be made before an identity gets +# locked out.
+############################################################################# +cfg: + identities: + passwordattempts: 10 + +############################################################################### +# +# Operations section +# +############################################################################### +operations: + # host and port for the operations server + listenAddress: 127.0.0.1:9443 + + # TLS configuration for the operations endpoint + tls: + # TLS enabled + enabled: false + + # path to PEM encoded server certificate for the operations server + cert: + file: + + # path to PEM encoded server key for the operations server + key: + file: + + # require client certificate authentication to access all resources + clientAuthRequired: false + + # paths to PEM encoded ca certificates to trust for client authentication + clientRootCAs: + files: [] + +############################################################################### +# +# Metrics section +# +############################################################################### +metrics: + # statsd, prometheus, or disabled + provider: disabled + + # statsd configuration + statsd: + # network type: tcp or udp + network: udp + + # statsd server address + address: 127.0.0.1:8125 + + # the interval at which locally cached counters and gauges are pushed + # to statsd; timings are pushed immediately + writeInterval: 10s + + # prefix is prepended to all emitted statsd metrics + prefix: server diff --git a/topologies/t4/config/fabric-ca/org3/tls/fabric-ca-server-config.yaml b/topologies/t4/config/fabric-ca/org3/tls/fabric-ca-server-config.yaml new file mode 100644 index 0000000..69cad70 --- /dev/null +++ b/topologies/t4/config/fabric-ca/org3/tls/fabric-ca-server-config.yaml @@ -0,0 +1,544 @@ +############################################################################# +# This is a configuration file for the fabric-ca-server command. 
+# +# COMMAND LINE ARGUMENTS AND ENVIRONMENT VARIABLES +# ------------------------------------------------ +# Each configuration element can be overridden via command line +# arguments or environment variables. The precedence for determining +# the value of each element is as follows: +# 1) command line argument +# Examples: +# a) --port 443 +# To set the listening port +# b) --ca.keyfile ../mykey.pem +# To set the "keyfile" element in the "ca" section below; +# note the '.' separator character. +# 2) environment variable +# Examples: +# a) FABRIC_CA_SERVER_PORT=443 +# To set the listening port +# b) FABRIC_CA_SERVER_CA_KEYFILE="../mykey.pem" +# To set the "keyfile" element in the "ca" section below; +# note the '_' separator character. +# 3) configuration file +# 4) default value (if there is one) +# All default values are shown beside each element below. +# +# FILE NAME ELEMENTS +# ------------------ +# The value of any field whose name ends with "file" or "files" is the +# name or names of other files. +# For example, see "tls.certfile" and "tls.clientauth.certfiles". +# The value of each of these fields can be a simple filename, a +# relative path, or an absolute path. If the value is not an +# absolute path, it is interpreted as being relative to the location +# of this configuration file. 
+# +############################################################################# + +# Version of config file +version: 1.5.5 + +# Server's listening port (default: 7054) +port: 7054 + +# Cross-Origin Resource Sharing (CORS) +cors: + enabled: false + origins: + - "*" + +# Enables debug logging (default: false) +debug: false + +# Size limit of an acceptable CRL in bytes (default: 512000) +crlsizelimit: 512000 + +############################################################################# +# TLS section for the server's listening port +# +# The following types are supported for client authentication: NoClientCert, +# RequestClientCert, RequireAnyClientCert, VerifyClientCertIfGiven, +# and RequireAndVerifyClientCert. +# +# Certfiles is a list of root certificate authorities that the server uses +# when verifying client certificates. +############################################################################# +tls: + # Enable TLS (default: false) + enabled: false + # TLS for the server's listening port + certfile: + keyfile: + clientauth: + type: noclientcert + certfiles: + +############################################################################# +# The CA section contains information related to the Certificate Authority +# including the name of the CA, which should be unique for all members +# of a blockchain network. It also includes the key and certificate files +# used when issuing enrollment certificates (ECerts). +# The chainfile (if it exists) contains the certificate chain which +# should be trusted for this CA, where the 1st in the chain is always the +# root CA certificate. 
+############################################################################# +ca: + # Name of this CA + name: + # Key file (is only used to import a private key into BCCSP) + keyfile: + # Certificate file (default: ca-cert.pem) + certfile: + # Chain file + chainfile: + # Ignore Certificate Expiration in the case of re-enroll + reenrollIgnoreCertExpiry: false + +############################################################################# +# The gencrl REST endpoint is used to generate a CRL that contains revoked +# certificates. This section contains configuration options that are used +# during gencrl request processing. +############################################################################# +crl: + # Specifies expiration for the generated CRL. The number of hours + # specified by this property is added to the UTC time, the resulting time + # is used to set the 'Next Update' date of the CRL. + expiry: 24h + +############################################################################# +# The registry section controls how the fabric-ca-server does two things: +# 1) authenticates enrollment requests which contain a username and password +# (also known as an enrollment ID and secret). +# 2) once authenticated, retrieves the identity's attribute names and values. +# These attributes are useful for making access control decisions in +# chaincode. +# There are two main configuration options: +# 1) The fabric-ca-server is the registry. +# This is true if "ldap.enabled" in the ldap section below is false. +# 2) An LDAP server is the registry, in which case the fabric-ca-server +# calls the LDAP server to perform these tasks. +# This is true if "ldap.enabled" in the ldap section below is true, +# which means this "registry" section is ignored. 
+############################################################################# +registry: + # Maximum number of times a password/secret can be reused for enrollment + # (default: -1, which means there is no limit) + maxenrollments: -1 + + # Contains identity information which is used when LDAP is disabled + identities: + - name: org3-ca-tls-admin + pass: org3-ca-tls-adminpw + type: client + affiliation: "" + attrs: + hf.Registrar.Roles: "*" + hf.Registrar.DelegateRoles: "*" + hf.Revoker: true + hf.IntermediateCA: true + hf.GenCRL: true + hf.Registrar.Attributes: "*" + hf.AffiliationMgr: true + +############################################################################# +# Database section +# Supported types are: "sqlite3", "postgres", and "mysql". +# The datasource value depends on the type. +# If the type is "sqlite3", the datasource value is a file name to use +# as the database store. Since "sqlite3" is an embedded database, it +# may not be used if you want to run the fabric-ca-server in a cluster. +# To run the fabric-ca-server in a cluster, you must choose "postgres" +# or "mysql". +############################################################################# +db: + type: sqlite3 + datasource: fabric-ca-server.db + tls: + enabled: false + certfiles: + client: + certfile: + keyfile: +############################################################################# +# LDAP section +# If LDAP is enabled, the fabric-ca-server calls LDAP to: +# 1) authenticate enrollment ID and secret (i.e. username and password) +# for enrollment requests; +# 2) To retrieve identity attributes +############################################################################# +ldap: + # Enables or disables the LDAP client (default: false) + # If this is set to true, the "registry" section is ignored. 
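+   # Note (annotation, not upstream config): the "url" value below embeds the
+   # LDAP bind DN and password directly in the URL; its general form is
+   # scheme://bindDN:bindPassword@host:port/base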
+ enabled: true + # The URL of the LDAP server + url: ldap://uid=org3-ca-tls-admin,ou=ldap,ou=tls,ou=hlfabric,dc=org3,dc=org:org3-ca-tls-adminpw@org3-ca-openldap:1389/dc=org3,dc=org + userfilter: (uid=%s) + # TLS configuration for the client connection to the LDAP server + tls: + certfiles: + client: + certfile: + keyfile: + # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes + attribute: + # 'names' is an array of strings containing the LDAP attribute names which are + # requested from the LDAP server for an LDAP identity's entry + names: + [ + "uid", + "cn", + "hfType", + "hfRegistrarRoles", + "hfRegistrarDelegateRoles", + "hfAffiliation", + "hfRevoker", + "hfGenCRL", + "hfAffiliationMgr", + "hfIntermediateCA", + "hfCustomField1", + "hfCustomField2", + ] + # The 'converters' section is used to convert an LDAP entry to the value of + # a fabric CA attribute. + # For example, the following converts an LDAP 'uid' attribute + # whose value begins with 'revoker' to a fabric CA attribute + # named "hf.Revoker" with a value of "true" (because the boolean expression + # evaluates to true). + # converters: + # - name: hf.Revoker + # value: attr("uid") =~ "revoker*" + converters: + - name: hf.EnrollmentID + value: attr("uid") + - name: hf.Type + value: attr("hfType") + - name: hf.Affiliation + value: 'attr("hfAffiliation") == "empty" ? "" : attr("hfAffiliation")' + - name: hf.Registrar.Roles + value: attr("hfRegistrarRoles") + - name: hf.Registrar.DelegateRoles + value: attr("hfRegistrarDelegateRoles") + - name: hf.Revoker + value: attr("hfRevoker") == "TRUE" + - name: hf.GenCRL + value: attr("hfGenCRL") == "TRUE" + - name: hf.AffiliationMgr + value: attr("hfAffiliationMgr") == "TRUE" + - name: hf.IntermediateCA + value: attr("hfIntermediateCA") == "TRUE" + - name: hf.CustomField1 + value: attr("hfCustomField1") + - name: hf.CustomField2 + value: attr("hfCustomField2") + # The 'maps' section contains named maps which may be referenced by the 'map' + # function in the 'converters' section to map LDAP responses to arbitrary values. + # For example, assume a user has an LDAP attribute named 'member' which has multiple + # values which are each a distinguished name (i.e. a DN). For simplicity, assume the + # values of the 'member' attribute are 'dn1', 'dn2', and 'dn3'. + # Further assume the following configuration. + # converters: + # - name: hf.Registrar.Roles + # value: map(attr("member"),"groups") + # maps: + # groups: + # - name: dn1 + # value: peer + # - name: dn2 + # value: client + # The value of the user's 'hf.Registrar.Roles' attribute is then computed to be + # "peer,client,dn3". This is because the value of 'attr("member")' is + # "dn1,dn2,dn3", and the call to 'map' with a 2nd argument of + # "groups" replaces "dn1" with "peer" and "dn2" with "client". + maps: + groups: + - name: + value: + +############################################################################# +# Affiliations section. Fabric CA server can be bootstrapped with the +# affiliations specified in this section. Affiliations are specified as maps. +# For example: +# businessunit1: +# department1: +# - team1 +# businessunit2: +# - department2 +# - department3 +# +# Affiliations are hierarchical in nature. In the above example, +# department1 (used as businessunit1.department1) is the child of businessunit1. 
+# team1 (used as businessunit1.department1.team1) is the child of department1. +# department2 (used as businessunit2.department2) and department3 (businessunit2.department3) +# are children of businessunit2. +# Note: Affiliations are case sensitive except for the non-leaf affiliations +# (like businessunit1, department1, businessunit2) that are specified in the configuration file, +# which are always stored in lower case. +############################################################################# +affiliations: + org2: + - department1 + - department2 + org3: + - department1 + +############################################################################# +# Signing section +# +# The "default" subsection is used to sign enrollment certificates; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +# +# The "ca" profile subsection is used to sign intermediate CA certificates; +# the default expiration ("expiry" field) is "43800h" which is 5 years in hours. +# Note that "isca" is true, meaning that it issues a CA certificate. +# A maxpathlen of 0 means that the intermediate CA cannot issue other +# intermediate CA certificates, though it can still issue end entity certificates. +# (See RFC 5280, section 4.2.1.9) +# +# The "tls" profile subsection is used to sign TLS certificate requests; +# the default expiration ("expiry" field) is "8760h", which is 1 year in hours. +############################################################################# +signing: + default: + usage: + - digital signature + expiry: 8760h + profiles: + ca: + usage: + - cert sign + - crl sign + expiry: 43800h + caconstraint: + isca: true + maxpathlen: 0 + tls: + usage: + - signing + - key encipherment + - server auth + - client auth + - key agreement + expiry: 8760h + +########################################################################### +# Certificate Signing Request (CSR) section. +# This controls the creation of the root CA certificate. 
+# The expiration for the root CA certificate is configured with the +# "ca.expiry" field below, whose default value is "131400h" which is +# 15 years in hours. +# The pathlength field is used to limit CA certificate hierarchy as described +# in section 4.2.1.9 of RFC 5280. +# Examples: +# 1) No pathlength value means no limit is requested. +# 2) pathlength == 1 means a limit of 1 is requested which is the default for +# a root CA. This means the root CA can issue intermediate CA certificates, +# but these intermediate CAs may not in turn issue other CA certificates +# though they can still issue end entity certificates. +# 3) pathlength == 0 means a limit of 0 is requested; +# this is the default for an intermediate CA, which means it cannot issue +# CA certificates though it can still issue end entity certificates. +# The "hosts" field will be used to specify Subject Alternative Names +# if the server creates a self-signed TLS certificate. +########################################################################### +csr: + cn: fabric-ca-server + keyrequest: + algo: ecdsa + size: 256 + # names: + # - C: US + # ST: "North Carolina" + # L: + # O: Hyperledger + # OU: Fabric + hosts: + - a4b94aae84ff + - localhost + ca: + expiry: 131400h + pathlength: 1 + +########################################################################### +# Each CA can issue both X509 enrollment certificates as well as Idemix +# credentials. This section specifies configuration for the issuer component +# that is responsible for issuing Idemix credentials. +########################################################################### +idemix: + # Specifies pool size for revocation handles. A revocation handle is a unique identifier of an + # Idemix credential. The issuer will create a pool of revocation handles of the specified size. When + # a credential is requested, the issuer will get a handle from the pool and assign it to the credential. 
+ # The issuer will repopulate the pool with new handles when the last handle in the pool is used. + # A revocation handle and credential revocation information (CRI) are used to create a non-revocation proof + # by the prover to prove to the verifier that her credential is not revoked. + rhpoolsize: 1000 + + # The Idemix credential issuance is a two-step process. The first step is to get a nonce from the issuer, + # and the second step is to send a credential request, constructed using the nonce, to the issuer to + # request a credential. This configuration property specifies the expiration for the nonces. By default, + # nonces expire after 15 seconds. The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration). + nonceexpiration: 15s + + # Specifies the interval at which expired nonces are removed from the datastore. Default value is 15 minutes. + # The value is expressed in the time.Duration format (see https://golang.org/pkg/time/#ParseDuration) + noncesweepinterval: 15m + + # Specifies the Elliptic Curve used by Identity Mixer. + # It can be any of: {"amcl.Fp256bn", "gurvy.Bn254", "amcl.Fp256Miraclbn"}. + # If unspecified, it defaults to 'amcl.Fp256bn'. + curve: amcl.Fp256bn + +############################################################################# +# BCCSP (BlockChain Crypto Service Provider) section is used to select which +# crypto library implementation to use +############################################################################# +bccsp: + default: SW + sw: + hash: SHA2 + security: 256 + filekeystore: + # The directory used for the software file-based keystore + keystore: msp/keystore + +############################################################################# +# Multi CA section +# +# Each Fabric CA server contains one CA by default. This section is used +# to configure multiple CAs in a single server. +# +# 1) --cacount +# Automatically generate the specified number of non-default CAs. The names of these +# additional CAs are "ca1", "ca2", ... "caN", where "N" is the value of --cacount. +# This is particularly useful in a development environment to quickly set up +# multiple CAs. Note that this config option is not applicable to an intermediate CA server, +# i.e., a Fabric CA server that is started with the intermediate.parentserver.url config +# option (-u command line option). +# +# 2) --cafiles +# For each CA config file in the list, generate a separate signing CA. Each CA +# config file in this list MAY contain all of the same elements as are found in +# the server config file except port, debug, and tls sections. +# +# Examples: +# fabric-ca-server start -b admin:adminpw --cacount 2 +# +# fabric-ca-server start -b admin:adminpw --cafiles ca/ca1/fabric-ca-server-config.yaml +# --cafiles ca/ca2/fabric-ca-server-config.yaml +# +############################################################################# + +cacount: + +cafiles: + +############################################################################# +# Intermediate CA section +# +# The relationship between servers and CAs is as follows: +# 1) A single server process may contain or function as one or more CAs. +# This is configured by the "Multi CA section" above. +# 2) Each CA is either a root CA or an intermediate CA. +# 3) Each intermediate CA has a parent CA which is either a root CA or another intermediate CA. +# +# This section pertains to configuration of #2 and #3. +# If the "intermediate.parentserver.url" property is set, +# then this is an intermediate CA with the specified parent +# CA. 
+# +# parentserver section +# url - The URL of the parent server +# caname - Name of the CA to enroll within the server +# +# enrollment section used to enroll intermediate CA with parent CA +# profile - Name of the signing profile to use in issuing the certificate +# label - Label to use in HSM operations +# +# tls section for secure socket connection +# certfiles - PEM-encoded list of trusted root certificate files +# client: +# certfile - PEM-encoded certificate file for when client authentication +# is enabled on server +# keyfile - PEM-encoded key file for when client authentication +# is enabled on server +############################################################################# +intermediate: + parentserver: + url: + caname: + + enrollment: + hosts: + profile: + label: + + tls: + certfiles: + client: + certfile: + keyfile: + +############################################################################# +# CA configuration section +# +# Configure the number of incorrect password attempts that are allowed for +# identities. By default, the value of 'passwordattempts' is 10, which +# means that 10 incorrect password attempts can be made before an identity gets +# locked out. 
+############################################################################# +cfg: + identities: + passwordattempts: 10 + +############################################################################### +# +# Operations section +# +############################################################################### +operations: + # host and port for the operations server + listenAddress: 127.0.0.1:9443 + + # TLS configuration for the operations endpoint + tls: + # TLS enabled + enabled: false + + # path to PEM encoded server certificate for the operations server + cert: + file: + + # path to PEM encoded server key for the operations server + key: + file: + + # require client certificate authentication to access all resources + clientAuthRequired: false + + # paths to PEM encoded ca certificates to trust for client authentication + clientRootCAs: + files: [] + +############################################################################### +# +# Metrics section +# +############################################################################### +metrics: + # statsd, prometheus, or disabled + provider: disabled + + # statsd configuration + statsd: + # network type: tcp or udp + network: udp + + # statsd server address + address: 127.0.0.1:8125 + + # the interval at which locally cached counters and gauges are pushed + # to statsd; timings are pushed immediately + writeInterval: 10s + + # prefix is prepended to all emitted statsd metrics + prefix: server diff --git a/topologies/t4/config/openldap/org1/ldif/1-org1-ous.ldif b/topologies/t4/config/openldap/org1/ldif/1-org1-ous.ldif new file mode 100644 index 0000000..557213d --- /dev/null +++ b/topologies/t4/config/openldap/org1/ldif/1-org1-ous.ldif @@ -0,0 +1,69 @@ +dn: dc=org1,dc=org +objectClass: dcObject +objectClass: organization +dc: org1 +o: Org 1 + +dn: ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hlfabric + +# tls ous + +dn: ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: 
tls + +dn: ou=ldap,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: ldap + +dn: ou=hlclient,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hlclient + +dn: ou=hlorderer,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hlorderer + +dn: ou=hlpeer,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hlpeer + +dn: ou=hladmin,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hladmin + +dn: ou=user,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: user + +# identities ous + +dn: ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: identities + +dn: ou=ldap,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: ldap + +dn: ou=hlclient,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hlclient + +dn: ou=hlorderer,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hlorderer + +dn: ou=hlpeer,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hlpeer + +dn: ou=hladmin,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: hladmin + +dn: ou=user,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: organizationalUnit +ou: user \ No newline at end of file diff --git a/topologies/t4/config/openldap/org1/ldif/2-org1-accounts.ldif b/topologies/t4/config/openldap/org1/ldif/2-org1-accounts.ldif new file mode 100644 index 0000000..9772639 --- /dev/null +++ b/topologies/t4/config/openldap/org1/ldif/2-org1-accounts.ldif @@ -0,0 +1,109 @@ +# tls accounts + +dn: uid=org1-ca-tls-admin,ou=ldap,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-ca-tls-admin +cn: org1-ca-tls-admin +userPassword: org1-ca-tls-adminpw +hfRegistrarAttributes: * +hfAffiliation: empty +hfRegistrarRoles: * +hfRegistrarDelegateRoles: * 
+hfAffiliationMgr: TRUE +hfRevoker: TRUE +hfIntermediateCA: TRUE +hfGenCRL: TRUE +hfCustomField1: abc:ecert +hfCustomField2: xyz + +dn: uid=org1-tls-orderer1,ou=hlorderer,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-tls-orderer1 +cn: org1-tls-orderer1 +userPassword: orderer1PW +hfType: orderer +hfAffiliation: empty + +dn: uid=org1-tls-orderer2,ou=hlorderer,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-tls-orderer2 +cn: org1-tls-orderer2 +userPassword: orderer2PW +hfType: orderer +hfAffiliation: empty + +dn: uid=org1-tls-orderer3,ou=hlorderer,ou=tls,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-tls-orderer3 +cn: org1-tls-orderer3 +userPassword: orderer3PW +hfType: orderer +hfAffiliation: empty + +# identities accounts + +dn: uid=org1-ca-identities-admin,ou=ldap,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-ca-identities-admin +cn: org1-ca-identities-admin +userPassword: org1-ca-identities-adminpw +hfRegistrarAttributes: * +hfAffiliation: empty +hfRegistrarRoles: * +hfRegistrarDelegateRoles: * +hfAffiliationMgr: TRUE +hfRevoker: TRUE +hfIntermediateCA: TRUE +hfGenCRL: TRUE +hfCustomField1: abc:ecert +hfCustomField2: xyz + +dn: uid=org1-identities-orderer1,ou=hlorderer,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-identities-orderer1 +cn: org1-identities-orderer1 +userPassword: orderer1PW +hfType: orderer +hfAffiliation: empty + +dn: uid=org1-identities-orderer2,ou=hlorderer,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-identities-orderer2 +cn: 
org1-identities-orderer2 +userPassword: orderer2PW +hfType: orderer +hfAffiliation: empty + +dn: uid=org1-identities-orderer3,ou=hlorderer,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org1-identities-orderer3 +cn: org1-identities-orderer3 +userPassword: orderer3PW +hfType: orderer +hfAffiliation: empty + +dn: uid=admin-org1,ou=hladmin,ou=identities,ou=hlfabric,dc=org1,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: admin-org1 +cn: admin-org1 +userPassword: org1AdminPW +hfType: admin +hfAffiliation: empty diff --git a/topologies/t4/config/openldap/org2/ldif/1-org2-ous.ldif b/topologies/t4/config/openldap/org2/ldif/1-org2-ous.ldif new file mode 100644 index 0000000..1088587 --- /dev/null +++ b/topologies/t4/config/openldap/org2/ldif/1-org2-ous.ldif @@ -0,0 +1,69 @@ +dn: dc=org2,dc=org +objectClass: dcObject +objectClass: organization +dc: org2 +o: Org 2 + +dn: ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hlfabric + +# tls ous + +dn: ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: tls + +dn: ou=ldap,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: ldap + +dn: ou=hlclient,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hlclient + +dn: ou=hlorderer,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hlorderer + +dn: ou=hlpeer,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hlpeer + +dn: ou=hladmin,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hladmin + +dn: ou=user,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: user + +# identities ous + +dn: ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: identities + +dn: ou=ldap,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: ldap + +dn: 
ou=hlclient,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hlclient + +dn: ou=hlorderer,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hlorderer + +dn: ou=hlpeer,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hlpeer + +dn: ou=hladmin,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: hladmin + +dn: ou=user,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: organizationalUnit +ou: user \ No newline at end of file diff --git a/topologies/t4/config/openldap/org2/ldif/2-org2-accounts.ldif b/topologies/t4/config/openldap/org2/ldif/2-org2-accounts.ldif new file mode 100644 index 0000000..d5342c0 --- /dev/null +++ b/topologies/t4/config/openldap/org2/ldif/2-org2-accounts.ldif @@ -0,0 +1,99 @@ +# tls accounts + +dn: uid=org2-ca-tls-admin,ou=ldap,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org2-ca-tls-admin +cn: org2-ca-tls-admin +userPassword: org2-ca-tls-adminpw +hfRegistrarAttributes: * +hfAffiliation: empty +hfRegistrarRoles: * +hfRegistrarDelegateRoles: * +hfAffiliationMgr: TRUE +hfRevoker: TRUE +hfIntermediateCA: TRUE +hfGenCRL: TRUE +hfCustomField1: abc:ecert +hfCustomField2: xyz + +dn: uid=org2-tls-peer1,ou=hlpeer,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org2-tls-peer1 +cn: org2-tls-peer1 +userPassword: peer1PW +hfType: peer +hfAffiliation: empty + +dn: uid=org2-tls-peer2,ou=hlpeer,ou=tls,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org2-tls-peer2 +cn: org2-tls-peer2 +userPassword: peer2PW +hfType: peer +hfAffiliation: empty + +# identities accounts + +dn: uid=org2-ca-identities-admin,ou=ldap,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject 
+objectClass: hyperledgerfabric +uid: org2-ca-identities-admin +cn: org2-ca-identities-admin +userPassword: org2-ca-identities-adminpw +hfRegistrarAttributes: * +hfAffiliation: empty +hfRegistrarRoles: * +hfRegistrarDelegateRoles: * +hfAffiliationMgr: TRUE +hfRevoker: TRUE +hfIntermediateCA: TRUE +hfGenCRL: TRUE +hfCustomField1: abc:ecert +hfCustomField2: xyz + +dn: uid=org2-identities-peer1,ou=hlpeer,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org2-identities-peer1 +cn: org2-identities-peer1 +userPassword: peer1PW +hfType: peer +hfAffiliation: empty + +dn: uid=org2-identities-peer2,ou=hlpeer,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org2-identities-peer2 +cn: org2-identities-peer2 +userPassword: peer2PW +hfType: peer +hfAffiliation: empty + +dn: uid=admin-org2,ou=hladmin,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: admin-org2 +cn: admin-org2 +userPassword: org2AdminPW +hfType: admin +hfAffiliation: empty + +dn: uid=user-org2,ou=user,ou=identities,ou=hlfabric,dc=org2,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: user-org2 +cn: user-org2 +userPassword: org2UserPW +hfType: user +hfAffiliation: empty \ No newline at end of file diff --git a/topologies/t4/config/openldap/org3/ldif/1-org3-ous.ldif b/topologies/t4/config/openldap/org3/ldif/1-org3-ous.ldif new file mode 100644 index 0000000..6951de1 --- /dev/null +++ b/topologies/t4/config/openldap/org3/ldif/1-org3-ous.ldif @@ -0,0 +1,69 @@ +dn: dc=org3,dc=org +objectClass: dcObject +objectClass: organization +dc: org3 +o: Org 3 + +dn: ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hlfabric + +# tls ous + +dn: ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: tls + +dn: 
ou=ldap,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: ldap + +dn: ou=hlclient,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hlclient + +dn: ou=hlorderer,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hlorderer + +dn: ou=hlpeer,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hlpeer + +dn: ou=hladmin,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hladmin + +dn: ou=user,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: user + +# identities ous + +dn: ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: identities + +dn: ou=ldap,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: ldap + +dn: ou=hlclient,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hlclient + +dn: ou=hlorderer,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hlorderer + +dn: ou=hlpeer,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hlpeer + +dn: ou=hladmin,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: hladmin + +dn: ou=user,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: organizationalUnit +ou: user \ No newline at end of file diff --git a/topologies/t4/config/openldap/org3/ldif/2-org3-accounts.ldif b/topologies/t4/config/openldap/org3/ldif/2-org3-accounts.ldif new file mode 100644 index 0000000..9f3640f --- /dev/null +++ b/topologies/t4/config/openldap/org3/ldif/2-org3-accounts.ldif @@ -0,0 +1,99 @@ +# tls accounts + +dn: uid=org3-ca-tls-admin,ou=ldap,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org3-ca-tls-admin +cn: org3-ca-tls-admin +userPassword: org3-ca-tls-adminpw +hfRegistrarAttributes: * +hfAffiliation: empty +hfRegistrarRoles: * +hfRegistrarDelegateRoles: * +hfAffiliationMgr: TRUE 
+hfRevoker: TRUE +hfIntermediateCA: TRUE +hfGenCRL: TRUE +hfCustomField1: abc:ecert +hfCustomField2: xyz + +dn: uid=org3-tls-peer1,ou=hlpeer,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org3-tls-peer1 +cn: org3-tls-peer1 +userPassword: peer1PW +hfType: peer +hfAffiliation: empty + +dn: uid=org3-tls-peer2,ou=hlpeer,ou=tls,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org3-tls-peer2 +cn: org3-tls-peer2 +userPassword: peer2PW +hfType: peer +hfAffiliation: empty + +# identities accounts + +dn: uid=org3-ca-identities-admin,ou=ldap,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org3-ca-identities-admin +cn: org3-ca-identities-admin +userPassword: org3-ca-identities-adminpw +hfRegistrarAttributes: * +hfAffiliation: empty +hfRegistrarRoles: * +hfRegistrarDelegateRoles: * +hfAffiliationMgr: TRUE +hfRevoker: TRUE +hfIntermediateCA: TRUE +hfGenCRL: TRUE +hfCustomField1: abc:ecert +hfCustomField2: xyz + +dn: uid=org3-identities-peer1,ou=hlpeer,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org3-identities-peer1 +cn: org3-identities-peer1 +userPassword: peer1PW +hfType: peer +hfAffiliation: empty + +dn: uid=org3-identities-peer2,ou=hlpeer,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: org3-identities-peer2 +cn: org3-identities-peer2 +userPassword: peer2PW +hfType: peer +hfAffiliation: empty + +dn: uid=admin-org3,ou=hladmin,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: admin-org3 +cn: admin-org3 +userPassword: org3AdminPW +hfType: admin +hfAffiliation: empty + +dn: 
uid=user-org3,ou=user,ou=identities,ou=hlfabric,dc=org3,dc=org +objectClass: account +objectClass: simpleSecurityObject +objectClass: hyperledgerfabric +uid: user-org3 +cn: user-org3 +userPassword: org3UserPW +hfType: user +hfAffiliation: empty \ No newline at end of file diff --git a/topologies/t4/config/openldap/schema/hyperledgerfabric-schema.ldif b/topologies/t4/config/openldap/schema/hyperledgerfabric-schema.ldif new file mode 100644 index 0000000..3e23497 --- /dev/null +++ b/topologies/t4/config/openldap/schema/hyperledgerfabric-schema.ldif @@ -0,0 +1,58 @@ +dn: cn=hyperledgerfabric,cn=schema,cn=config +objectClass: olcSchemaConfig +cn: hyperledgerfabric +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.1 + NAME 'hfType' + EQUALITY caseIgnoreMatch + SUBSTR caseIgnoreSubstringsMatch + SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.2 + NAME 'hfAffiliation' + EQUALITY caseIgnoreMatch + SUBSTR caseIgnoreSubstringsMatch + SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.3 + NAME 'hfRegistrarRoles' + EQUALITY caseIgnoreMatch + SUBSTR caseIgnoreSubstringsMatch + SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.4 + NAME 'hfRegistrarAttributes' + EQUALITY caseIgnoreMatch + SUBSTR caseIgnoreSubstringsMatch + SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.5 + NAME 'hfRegistrarDelegateRoles' + EQUALITY caseIgnoreMatch + SUBSTR caseIgnoreSubstringsMatch + SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.6 + NAME 'hfRevoker' + SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.7 + NAME 'hfGenCRL' + SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.8 + NAME 'hfAffiliationMgr' + SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.9 + NAME 
'hfIntermediateCA' + SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.10 + NAME 'hfCustomField1' + EQUALITY caseIgnoreMatch + SUBSTR caseIgnoreSubstringsMatch + SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +olcAttributeTypes: ( 1.3.6.1.4.1.7747757.1.1.2.1.11 + NAME 'hfCustomField2' + EQUALITY caseIgnoreMatch + SUBSTR caseIgnoreSubstringsMatch + SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +olcObjectClasses: ( 1.3.6.1.4.1.7747757.1.1.2.2.1 + NAME 'hyperledgerFabric' + DESC 'RFC1274: To be used for accounts that need Hyperledger Fabric attributes' + SUP top AUXILIARY + MUST ( cn ) + MAY ( hfType $ hfAffiliation $ hfRegistrarRoles $ hfRegistrarAttributes $ hfRegistrarDelegateRoles $ hfRevoker $ hfGenCRL $ hfAffiliationMgr $ hfIntermediateCA $ hfCustomField1 $ hfCustomField2 ) ) + + diff --git a/topologies/t4/containers/cas/org1-cas/docker-compose-org1-cas-openldap.yml b/topologies/t4/containers/cas/org1-cas/docker-compose-org1-cas-openldap.yml new file mode 100644 index 0000000..08cb884 --- /dev/null +++ b/topologies/t4/containers/cas/org1-cas/docker-compose-org1-cas-openldap.yml @@ -0,0 +1,15 @@ +services: + org1-ca-openldap: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-openldap + environment: + - LDAP_BIND_DN=cn=admin,dc=org1,dc=org + - LDAP_BIND_PASSWORD=adminpassword + - LDAP_ROOT=dc=org1,dc=org + - LDAP_ALLOW_ANON_BINDING=no + - LDAP_CUSTOM_LDIF_DIR=/opt/bitnami/openldap/etc/ldif + - LDAP_LOGLEVEL=8 + - BITNAMI_DEBUG=true + - LDAP_EXTRA_SCHEMAS=cosine,inetorgperson,nis,hyperledgerfabric-schema + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/openldap/org1/ldif:/opt/bitnami/openldap/etc/ldif" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/openldap/schema/hyperledgerfabric-schema.ldif:/opt/bitnami/openldap/etc/schema/hyperledgerfabric-schema.ldif" diff --git a/topologies/t4/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t4/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 
0000000..4e829ae --- /dev/null +++ b/topologies/t4/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,31 @@ +version: "3.9" +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/fabric-ca/org1/tls/fabric-ca-server-config.yaml:/tmp/hyperledger/fabric-ca/fabric-ca-server-config.yaml" + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/fabric-ca/org1/identities/fabric-ca-server-config.yaml:/tmp/hyperledger/fabric-ca/fabric-ca-server-config.yaml" diff --git a/topologies/t4/containers/cas/org2-cas/docker-compose-org2-cas-openldap.yml b/topologies/t4/containers/cas/org2-cas/docker-compose-org2-cas-openldap.yml new file mode 100644 index 0000000..1fa8784 --- /dev/null +++ b/topologies/t4/containers/cas/org2-cas/docker-compose-org2-cas-openldap.yml @@ -0,0 +1,15 @@ +services: + 
org2-ca-openldap: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-openldap + environment: + - LDAP_BIND_DN=cn=admin,dc=org2,dc=org + - LDAP_BIND_PASSWORD=adminpassword + - LDAP_ROOT=dc=org2,dc=org + - LDAP_ALLOW_ANON_BINDING=no + - LDAP_CUSTOM_LDIF_DIR=/opt/bitnami/openldap/etc/ldif + - LDAP_LOGLEVEL=8 + - BITNAMI_DEBUG=true + - LDAP_EXTRA_SCHEMAS=cosine,inetorgperson,nis,hyperledgerfabric-schema + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/openldap/org2/ldif:/opt/bitnami/openldap/etc/ldif" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/openldap/schema/hyperledgerfabric-schema.ldif:/opt/bitnami/openldap/etc/schema/hyperledgerfabric-schema.ldif" diff --git a/topologies/t4/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t4/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..e6a173e --- /dev/null +++ b/topologies/t4/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/fabric-ca/org2/tls/fabric-ca-server-config.yaml:/tmp/hyperledger/fabric-ca/fabric-ca-server-config.yaml" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - 
FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/fabric-ca/org2/identities/fabric-ca-server-config.yaml:/tmp/hyperledger/fabric-ca/fabric-ca-server-config.yaml" diff --git a/topologies/t4/containers/cas/org3-cas/docker-compose-org3-cas-openldap.yml b/topologies/t4/containers/cas/org3-cas/docker-compose-org3-cas-openldap.yml new file mode 100644 index 0000000..2ac1761 --- /dev/null +++ b/topologies/t4/containers/cas/org3-cas/docker-compose-org3-cas-openldap.yml @@ -0,0 +1,15 @@ +services: + org3-ca-openldap: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-openldap + environment: + - LDAP_BIND_DN=cn=admin,dc=org3,dc=org + - LDAP_BIND_PASSWORD=adminpassword + - LDAP_ROOT=dc=org3,dc=org + - LDAP_ALLOW_ANON_BINDING=no + - LDAP_CUSTOM_LDIF_DIR=/opt/bitnami/openldap/etc/ldif + - LDAP_LOGLEVEL=8 + - BITNAMI_DEBUG=true + - LDAP_EXTRA_SCHEMAS=cosine,inetorgperson,nis,hyperledgerfabric-schema + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/openldap/org3/ldif:/opt/bitnami/openldap/etc/ldif" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/openldap/schema/hyperledgerfabric-schema.ldif:/opt/bitnami/openldap/etc/schema/hyperledgerfabric-schema.ldif" diff --git a/topologies/t4/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t4/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..876e80f --- /dev/null +++ b/topologies/t4/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + 
environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/fabric-ca/org3/tls/fabric-ca-server-config.yaml:/tmp/hyperledger/fabric-ca/fabric-ca-server-config.yaml" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + command: sh -c 'fabric-ca-server start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + - "${HL_TOPOLOGIES_BASE_FOLDER}/config/fabric-ca/org3/identities/fabric-ca-server-config.yaml:/tmp/hyperledger/fabric-ca/fabric-ca-server-config.yaml" diff --git a/topologies/t4/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t4/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..4ceb8a0 --- /dev/null +++ b/topologies/t4/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,21 @@ +services: + org2-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - 
CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t4/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t4/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..68d2ec0 --- /dev/null +++ b/topologies/t4/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,21 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t4/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t4/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..0045893 --- /dev/null +++ 
b/topologies/t4/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,82 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - 
ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - 
ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t4/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t4/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..f115b6d --- /dev/null +++ b/topologies/t4/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,50 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - 
CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git 
a/topologies/t4/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t4/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..c58de7b --- /dev/null +++ b/topologies/t4/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,50 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug 
+ - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t4/crypto-material/.gitkeep b/topologies/t4/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t4/docker-compose.yml b/topologies/t4/docker-compose.yml new file mode 100644 index 0000000..a5f7e16 --- /dev/null +++ b/topologies/t4/docker-compose.yml @@ -0,0 +1,109 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + 
image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-ca-openldap: + image: bitnami/openldap:latest + # ports: + # - :1389 + networks: + - hl-fabric + org2-ca-openldap: + image: bitnami/openldap:latest + # ports: + # - :1389 + networks: + - hl-fabric + org3-ca-openldap: + image: bitnami/openldap:latest + # ports: + # - :1389 + networks: + - hl-fabric diff --git a/topologies/t4/homefolders/.gitkeep b/topologies/t4/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t4/scripts/all-org-peers-commit-chaincode.sh b/topologies/t4/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..7810f33 --- /dev/null +++ b/topologies/t4/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export 
CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t4/scripts/all-org-peers-execute-chaincode.sh b/topologies/t4/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..d8be52e --- /dev/null +++ b/topologies/t4/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,18 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles 
/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git a/topologies/t4/scripts/channels-setup.sh b/topologies/t4/scripts/channels-setup.sh new file mode 100755 index 0000000..230aac8 --- /dev/null +++ b/topologies/t4/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t4/scripts/delete-state-data.sh b/topologies/t4/scripts/delete-state-data.sh new file mode 100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t4/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t4/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t4/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..db81c82 --- /dev/null +++ b/topologies/t4/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-identities-orderer1:orderer1PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* 
/tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-identities-orderer2:orderer2PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-identities-orderer3:orderer3PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* 
/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t4/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t4/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..d06ad97 --- /dev/null +++ b/topologies/t4/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-tls-orderer1:orderer1PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" --enrollment.attrs "hf.Type,hf.EnrollmentID,hf.Affiliation" --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u 
https://org1-tls-orderer2:orderer2PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" --enrollment.attrs "hf.Type,hf.EnrollmentID,hf.Affiliation" --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-tls-orderer3:orderer3PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" --enrollment.attrs "hf.Type,hf.EnrollmentID,hf.Affiliation" --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t4/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t4/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t4/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs "hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t4/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t4/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..ab095a5 --- /dev/null +++ b/topologies/t4/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t4/scripts/org2-approve-chaincode.sh b/topologies/t4/scripts/org2-approve-chaincode.sh new file mode 100755 index 
0000000..fb8ac61 --- /dev/null +++ b/topologies/t4/scripts/org2-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t4/scripts/org2-create-and-join-channels.sh b/topologies/t4/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..ed6a189 --- /dev/null +++ b/topologies/t4/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,12 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of 
file diff --git a/topologies/t4/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t4/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..d7e0797 --- /dev/null +++ b/topologies/t4/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org2-identities-peer1:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org2-identities-peer2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u 
https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t4/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t4/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..191412b --- /dev/null +++ b/topologies/t4/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem 
+fabric-ca-client enroll -d -u https://org2-tls-peer1:peer1PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" --enrollment.attrs "hf.Type,hf.EnrollmentID,hf.Affiliation" --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org2-tls-peer2:peer2PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" --enrollment.attrs "hf.Type,hf.EnrollmentID,hf.Affiliation" --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t4/scripts/org2-install-chaincode.sh b/topologies/t4/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t4/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install 
mycc.tar.gz \ No newline at end of file diff --git a/topologies/t4/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t4/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t4/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t4/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t4/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..3a99816 --- /dev/null +++ b/topologies/t4/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type 
peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t4/scripts/org3-approve-chaincode.sh b/topologies/t4/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..094c94f --- /dev/null +++ b/topologies/t4/scripts/org3-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t4/scripts/org3-create-and-join-channels.sh b/topologies/t4/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..ebf8b48 --- /dev/null +++ b/topologies/t4/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git 
a/topologies/t4/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t4/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..b8f9b29 --- /dev/null +++ b/topologies/t4/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org3-identities-peer1:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org3-identities-peer2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u 
https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t4/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t4/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..7e2244f --- /dev/null +++ b/topologies/t4/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem 
+fabric-ca-client enroll -d -u https://org3-tls-peer1:peer1PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" --enrollment.attrs "hf.Type,hf.EnrollmentID,hf.Affiliation" --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org3-tls-peer2:peer2PW@0.0.0.0:7054 --csr.names "C=US,ST=Washington" --enrollment.attrs "hf.Type,hf.EnrollmentID,hf.Affiliation" --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t4/scripts/org3-install-chaincode.sh b/topologies/t4/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..f6b8789 --- /dev/null +++ b/topologies/t4/scripts/org3-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install 
mycc.tar.gz diff --git a/topologies/t4/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t4/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1c56144 --- /dev/null +++ b/topologies/t4/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t4/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t4/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..a5ccd7f --- /dev/null +++ b/topologies/t4/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 
\ No newline at end of file diff --git a/topologies/t4/scripts/patch-configtx.sh b/topologies/t4/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t4/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t4/setup-network.sh b/topologies/t4/setup-network.sh new file mode 100755 index 0000000..fd45270 --- /dev/null +++ b/topologies/t4/setup-network.sh @@ -0,0 +1,178 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t4 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." +./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." 
+ +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# ----------------------------------------------------------------------------- +# -----setup the OpenLDAP servers for all orgs ----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas-openldap.yml up -d org1-ca-openldap +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas-openldap.yml up -d org2-ca-openldap +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas-openldap.yml up -d org3-ca-openldap +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs----- +# -----registration of accounts not needed since accounts are set up inside OpenLDAP +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +# docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +# docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +# docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +# docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +# docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +# docker exec 
${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +sleep 2 + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ 
+ -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# ----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----need to wait until raft leader selection is completed for the orderers +sleep 4 + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-and-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# 
----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t4/teardown-network.sh b/topologies/t4/teardown-network.sh new file mode 100755 index 0000000..c86a94b --- /dev/null +++ b/topologies/t4/teardown-network.sh @@ -0,0 +1,33 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t1 + +# ----------------------------------------------------------------------------- +# -----remove current topology containers +# ----------------------------------------------------------------------------- +# -----the chaincode containers report an error on deletion even though they are in fact deleted; ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec will throw an error if no running container is found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." 
+fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t5/.env b/topologies/t5/.env new file mode 100644 index 0000000..d9e228b --- /dev/null +++ b/topologies/t5/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t5 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.4.6 +FABRIC_TOOLS_VERSION=2.4.6 +PEER_ORDERER_VERSION=2.4.6 \ No newline at end of file diff --git a/topologies/t5/.gitignore b/topologies/t5/.gitignore new file mode 100644 index 0000000..ee0881a --- /dev/null +++ b/topologies/t5/.gitignore @@ -0,0 +1,2 @@ +crypto-material/*/** +homefolders/*/** \ No newline at end of file diff --git a/topologies/t5/README.md b/topologies/t5/README.md new file mode 100644 index 0000000..a8713d9 --- /dev/null +++ b/topologies/t5/README.md @@ -0,0 +1,41 @@ +# T5: Postgres for CA Clustering +## Description +--- +The T1 network plus a Postgres DB that stores the identities and certs issued by the Fabric CAs. Multiple Fabric CA nodes work as a cluster with Postgres as the shared backend. This applies to Org 1 only. 
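The clustered CA setup described above hinges on every CA node in the cluster pointing at the same database. A minimal sketch of the `db` section of a `fabric-ca-server-config.yaml` for a Postgres backend follows; the host, credentials, and `dbname` values are illustrative assumptions, not values taken from this repo:

```yaml
# fabric-ca-server-config.yaml (fragment)
# Both CA nodes in the cluster use the same datasource, so identities
# and certificates issued by one node are visible to the other.
db:
  type: postgres
  datasource: host=org1-postgres port=5432 user=postgres password=postgres dbname=fabric_ca sslmode=disable
```

With TLS to the database enabled, `sslmode` would typically be `verify-full` together with the `db.tls` certificate settings described in the Fabric CA users guide linked below.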
+ +## Diagram +--- +![Diagram of components](../image_store/T5.png) + +## Relevant Documentation + +- https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database +- https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#setting-up-a-cluster + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA - 2 nodes + * Identities CA - 2 nodes + * Postgres +* Org 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA + +## Characteristics + +- World State Database Instance (LevelDB) embedded (in peer containers) +- Chaincode installed directly on peers +- Communication between all components done via TLS \ No newline at end of file diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ 
b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + 
getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 
string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) 
(shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 
error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + 
getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + 
result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + 
fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := 
fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + 
fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake 
*ChaincodeStub) GetArgsCallCount() int {
+	fake.getArgsMutex.RLock()
+	defer fake.getArgsMutex.RUnlock()
+	return len(fake.getArgsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	fake.getArgsReturns = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	if fake.getArgsReturnsOnCall == nil {
+		fake.getArgsReturnsOnCall = make(map[int]struct {
+			result1 [][]byte
+		})
+	}
+	fake.getArgsReturnsOnCall[i] = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) {
+	fake.getArgsSliceMutex.Lock()
+	ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)]
+	fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetArgsSlice", []interface{}{})
+	fake.getArgsSliceMutex.Unlock()
+	if fake.GetArgsSliceStub != nil {
+		return fake.GetArgsSliceStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getArgsSliceReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCallCount() int {
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	return len(fake.getArgsSliceArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	fake.getArgsSliceReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	if fake.getArgsSliceReturnsOnCall == nil {
+		fake.getArgsSliceReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getArgsSliceReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBinding() ([]byte, error) {
+	fake.getBindingMutex.Lock()
+	ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)]
+	fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetBinding", []interface{}{})
+	fake.getBindingMutex.Unlock()
+	if fake.GetBindingStub != nil {
+		return fake.GetBindingStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getBindingReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetBindingCallCount() int {
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	return len(fake.getBindingArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = stub
+}
+
+func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	fake.getBindingReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	if fake.getBindingReturnsOnCall == nil {
+		fake.getBindingReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getBindingReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetChannelID() string {
+	fake.getChannelIDMutex.Lock()
+	ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)]
+	fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetChannelID", []interface{}{})
+	fake.getChannelIDMutex.Unlock()
+	if fake.GetChannelIDStub != nil {
+		return fake.GetChannelIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getChannelIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetChannelIDCallCount() int {
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	return len(fake.getChannelIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = stub
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	fake.getChannelIDReturns = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	if fake.getChannelIDReturnsOnCall == nil {
+		fake.getChannelIDReturnsOnCall = make(map[int]struct {
+			result1 string
+		})
+	}
+	fake.getChannelIDReturnsOnCall[i] = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetCreator() ([]byte, error) {
+	fake.getCreatorMutex.Lock()
+	ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)]
+	fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetCreator", []interface{}{})
+	fake.getCreatorMutex.Unlock()
+	if fake.GetCreatorStub != nil {
+		return fake.GetCreatorStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getCreatorReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetCreatorCallCount() int {
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	return len(fake.getCreatorArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = stub
+}
+
+func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	fake.getCreatorReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	if fake.getCreatorReturnsOnCall == nil {
+		fake.getCreatorReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getCreatorReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetDecorations() map[string][]byte {
+	fake.getDecorationsMutex.Lock()
+	ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)]
+	fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetDecorations", []interface{}{})
+	fake.getDecorationsMutex.Unlock()
+	if fake.GetDecorationsStub != nil {
+		return fake.GetDecorationsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getDecorationsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetDecorationsCallCount() int {
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	return len(fake.getDecorationsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = stub
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	fake.getDecorationsReturns = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	if fake.getDecorationsReturnsOnCall == nil {
+		fake.getDecorationsReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+		})
+	}
+	fake.getDecorationsReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)]
+	fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetFunctionAndParameters", []interface{}{})
+	fake.getFunctionAndParametersMutex.Unlock()
+	if fake.GetFunctionAndParametersStub != nil {
+		return fake.GetFunctionAndParametersStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getFunctionAndParametersReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int {
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	return len(fake.getFunctionAndParametersArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = stub
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	fake.getFunctionAndParametersReturns = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	if fake.getFunctionAndParametersReturnsOnCall == nil {
+		fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+		})
+	}
+	fake.getFunctionAndParametersReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) {
+	fake.getHistoryForKeyMutex.Lock()
+	ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)]
+	fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetHistoryForKey", []interface{}{arg1})
+	fake.getHistoryForKeyMutex.Unlock()
+	if fake.GetHistoryForKeyStub != nil {
+		return fake.GetHistoryForKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getHistoryForKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	return len(fake.getHistoryForKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	argsForCall := fake.getHistoryForKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	fake.getHistoryForKeyReturns = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	if fake.getHistoryForKeyReturnsOnCall == nil {
+		fake.getHistoryForKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.HistoryQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getHistoryForKeyReturnsOnCall[i] = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)]
+	fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2})
+	fake.getPrivateDataMutex.Unlock()
+	if fake.GetPrivateDataStub != nil {
+		return fake.GetPrivateDataStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCallCount() int {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	return len(fake.getPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	argsForCall := fake.getPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	fake.getPrivateDataReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	if fake.getPrivateDataReturnsOnCall == nil {
+		fake.getPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg3Copy []string
+	if arg3 != nil {
+		arg3Copy = make([]string, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)]
+	fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []string
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy})
+	fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	if fake.GetPrivateDataByPartialCompositeKeyStub != nil {
+		return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	fake.getPrivateDataByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)]
+	fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3})
+	fake.getPrivateDataByRangeMutex.Unlock()
+	if fake.GetPrivateDataByRangeStub != nil {
+		return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	return len(fake.getPrivateDataByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, string, string) {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	fake.getPrivateDataByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	if fake.getPrivateDataByRangeReturnsOnCall == nil {
+		fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataHashMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)]
+	fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2})
+	fake.getPrivateDataHashMutex.Unlock()
+	if fake.GetPrivateDataHashStub != nil {
+		return fake.GetPrivateDataHashStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataHashReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCallCount() int {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	return len(fake.getPrivateDataHashArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	argsForCall := fake.getPrivateDataHashArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	fake.getPrivateDataHashReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	if fake.getPrivateDataHashReturnsOnCall == nil {
+		fake.getPrivateDataHashReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataHashReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)]
+	fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2})
+	fake.getPrivateDataQueryResultMutex.Unlock()
+	if fake.GetPrivateDataQueryResultStub != nil {
+		return fake.GetPrivateDataQueryResultStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	return len(fake.getPrivateDataQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	argsForCall := fake.getPrivateDataQueryResultArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	fake.getPrivateDataQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	if fake.getPrivateDataQueryResultReturnsOnCall == nil {
+		fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)]
+	fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2})
+	fake.getPrivateDataValidationParameterMutex.Unlock()
+	if fake.GetPrivateDataValidationParameterStub != nil {
+		return fake.GetPrivateDataValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.getPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	fake.getPrivateDataValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	if fake.getPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getQueryResultMutex.Lock()
+	ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)]
+	fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetQueryResult", []interface{}{arg1})
+	fake.getQueryResultMutex.Unlock()
+	if fake.GetQueryResultStub != nil {
+		return fake.GetQueryResultStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetQueryResultCallCount() int {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	return len(fake.getQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	argsForCall := fake.getQueryResultArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	fake.getQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	if fake.getQueryResultReturnsOnCall == nil {
+		fake.getQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)]
+	fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 int32
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3})
+	fake.getQueryResultWithPaginationMutex.Unlock()
+	if fake.GetQueryResultWithPaginationStub != nil {
+		return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getQueryResultWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	return len(fake.getQueryResultWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	argsForCall := fake.getQueryResultWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	fake.getQueryResultWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	if fake.getQueryResultWithPaginationReturnsOnCall == nil {
+		fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getQueryResultWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) {
+	fake.getSignedProposalMutex.Lock()
+	ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)]
+	fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetSignedProposal", []interface{}{})
+	fake.getSignedProposalMutex.Unlock()
+	if fake.GetSignedProposalStub != nil {
+		return fake.GetSignedProposalStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getSignedProposalReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCallCount() int {
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	return len(fake.getSignedProposalArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = stub
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	fake.getSignedProposalReturns = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	if fake.getSignedProposalReturnsOnCall == nil {
+		fake.getSignedProposalReturnsOnCall = make(map[int]struct {
+			result1 *peer.SignedProposal
+			result2 error
+		})
+	}
+	fake.getSignedProposalReturnsOnCall[i] = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) {
+	fake.getStateMutex.Lock()
+	ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)]
+	fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetState", []interface{}{arg1})
+	fake.getStateMutex.Unlock()
+	if fake.GetStateStub != nil {
+		return fake.GetStateStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateCallCount() int {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	return len(fake.getStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateArgsForCall(i int) string {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	argsForCall := fake.getStateArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	fake.getStateReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	if fake.getStateReturnsOnCall == nil {
+		fake.getStateReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)]
+	fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.getStateByPartialCompositeKeyMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyStub != nil {
+		return fake.GetStateByPartialCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) {
+
fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + fake.getStateByPartialCompositeKeyReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + if fake.getStateByPartialCompositeKeyReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)] + fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct { + arg1 string + arg2 []string + arg3 int32 + arg4 
string + }{arg1, arg2Copy, arg3, arg4}) + fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4}) + fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil { + return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer 
fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + fake.getStateByPartialCompositeKeyWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getStateByRangeMutex.Lock() + ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)] + fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2}) + fake.getStateByRangeMutex.Unlock() + if fake.GetStateByRangeStub != nil { + return fake.GetStateByRangeStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByRangeReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByRangeCallCount() int { + 
fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + return len(fake.getStateByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + argsForCall := fake.getStateByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + fake.getStateByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + if fake.getStateByRangeReturnsOnCall == nil { + fake.getStateByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getStateByRangeWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)] + fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct { + arg1 
string + arg2 string + arg3 int32 + arg4 string + }{arg1, arg2, arg3, arg4}) + fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4}) + fake.getStateByRangeWithPaginationMutex.Unlock() + if fake.GetStateByRangeWithPaginationStub != nil { + return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByRangeWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + return len(fake.getStateByRangeWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + fake.getStateByRangeWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, 
result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + if fake.getStateByRangeWithPaginationReturnsOnCall == nil { + fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) { + fake.getStateValidationParameterMutex.Lock() + ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)] + fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1}) + fake.getStateValidationParameterMutex.Unlock() + if fake.GetStateValidationParameterStub != nil { + return fake.GetStateValidationParameterStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateValidationParameterReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + return len(fake.getStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) { + fake.getStateValidationParameterMutex.Lock() + defer 
fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + argsForCall := fake.getStateValidationParameterArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + fake.getStateValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + if fake.getStateValidationParameterReturnsOnCall == nil { + fake.getStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStringArgs() []string { + fake.getStringArgsMutex.Lock() + ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)] + fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetStringArgs", []interface{}{}) + fake.getStringArgsMutex.Unlock() + if fake.GetStringArgsStub != nil { + return fake.GetStringArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStringArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetStringArgsCallCount() int { + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + return 
len(fake.getStringArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = stub +} + +func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + fake.getStringArgsReturns = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + if fake.getStringArgsReturnsOnCall == nil { + fake.getStringArgsReturnsOnCall = make(map[int]struct { + result1 []string + }) + } + fake.getStringArgsReturnsOnCall[i] = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) { + fake.getTransientMutex.Lock() + ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)] + fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct { + }{}) + fake.recordInvocation("GetTransient", []interface{}{}) + fake.getTransientMutex.Unlock() + if fake.GetTransientStub != nil { + return fake.GetTransientStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTransientReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTransientCallCount() int { + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + return len(fake.getTransientArgsForCall) +} + +func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = stub +} + +func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer 
fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + fake.getTransientReturns = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + if fake.getTransientReturnsOnCall == nil { + fake.getTransientReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + result2 error + }) + } + fake.getTransientReturnsOnCall[i] = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxID() string { + fake.getTxIDMutex.Lock() + ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)] + fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxID", []interface{}{}) + fake.getTxIDMutex.Unlock() + if fake.GetTxIDStub != nil { + return fake.GetTxIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getTxIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetTxIDCallCount() int { + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + return len(fake.getTxIDArgsForCall) +} + +func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + 
result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy 
[][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + fake.invokeChaincodeReturns = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + if fake.invokeChaincodeReturnsOnCall == nil { + fake.invokeChaincodeReturnsOnCall = make(map[int]struct { 
+ result1 peer.Response + }) + } + fake.invokeChaincodeReturnsOnCall[i] = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.putPrivateDataMutex.Lock() + ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)] + fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy}) + fake.putPrivateDataMutex.Unlock() + if fake.PutPrivateDataStub != nil { + return fake.PutPrivateDataStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutPrivateDataCallCount() int { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + return len(fake.putPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = stub +} + +func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + argsForCall := fake.putPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + fake.putPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) { + fake.putPrivateDataMutex.Lock() + defer 
fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + if fake.putPrivateDataReturnsOnCall == nil { + fake.putPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.putStateMutex.Lock() + ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)] + fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy}) + fake.putStateMutex.Unlock() + if fake.PutStateStub != nil { + return fake.PutStateStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutStateCallCount() int { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + return len(fake.putStateArgsForCall) +} + +func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = stub +} + +func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + argsForCall := fake.putStateArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) PutStateReturns(result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + fake.putStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + if fake.putStateReturnsOnCall == nil { + 
fake.putStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setEventMutex.Lock() + ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)] + fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy}) + fake.setEventMutex.Unlock() + if fake.SetEventStub != nil { + return fake.SetEventStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setEventReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetEventCallCount() int { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + return len(fake.setEventArgsForCall) +} + +func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = stub +} + +func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + argsForCall := fake.setEventArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetEventReturns(result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + fake.setEventReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + if fake.setEventReturnsOnCall == nil { + fake.setEventReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setEventReturnsOnCall[i] = struct { + result1 error + }{result1} +} 
+ +func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.setPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)] + fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy}) + fake.setPrivateDataValidationParameterMutex.Unlock() + if fake.SetPrivateDataValidationParameterStub != nil { + return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setPrivateDataValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + return len(fake.setPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) 
SetPrivateDataValidationParameterReturns(result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + fake.setPrivateDataValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + if fake.setPrivateDataValidationParameterReturnsOnCall == nil { + fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setStateValidationParameterMutex.Lock() + ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)] + fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy}) + fake.setStateValidationParameterMutex.Unlock() + if fake.SetStateValidationParameterStub != nil { + return fake.SetStateValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setStateValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + return len(fake.setStateValidationParameterArgsForCall) +} + 
+func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + argsForCall := fake.setStateValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + fake.setStateValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + if fake.setStateValidationParameterReturnsOnCall == nil { + fake.setStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setStateValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) { + fake.splitCompositeKeyMutex.Lock() + ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)] + fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("SplitCompositeKey", []interface{}{arg1}) + fake.splitCompositeKeyMutex.Unlock() + if fake.SplitCompositeKeyStub != nil { + return fake.SplitCompositeKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := 
fake.splitCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + return len(fake.splitCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + argsForCall := fake.splitCompositeKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + fake.splitCompositeKeyReturns = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + if fake.splitCompositeKeyReturnsOnCall == nil { + fake.splitCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + result3 error + }) + } + fake.splitCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + 
fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + fake.getStateByRangeWithPaginationMutex.RLock() + defer 
fake.getStateByRangeWithPaginationMutex.RUnlock() + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go new file mode 100644 index 0000000..27e3034 --- /dev/null +++ b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go @@ -0,0 +1,232 @@ 
+// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" +) + +type StateQueryIterator struct { + CloseStub func() error + closeMutex sync.RWMutex + closeArgsForCall []struct { + } + closeReturns struct { + result1 error + } + closeReturnsOnCall map[int]struct { + result1 error + } + HasNextStub func() bool + hasNextMutex sync.RWMutex + hasNextArgsForCall []struct { + } + hasNextReturns struct { + result1 bool + } + hasNextReturnsOnCall map[int]struct { + result1 bool + } + NextStub func() (*queryresult.KV, error) + nextMutex sync.RWMutex + nextArgsForCall []struct { + } + nextReturns struct { + result1 *queryresult.KV + result2 error + } + nextReturnsOnCall map[int]struct { + result1 *queryresult.KV + result2 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *StateQueryIterator) Close() error { + fake.closeMutex.Lock() + ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)] + fake.closeArgsForCall = append(fake.closeArgsForCall, struct { + }{}) + fake.recordInvocation("Close", []interface{}{}) + fake.closeMutex.Unlock() + if fake.CloseStub != nil { + return fake.CloseStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.closeReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) CloseCallCount() int { + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + return len(fake.closeArgsForCall) +} + +func (fake *StateQueryIterator) CloseCalls(stub func() error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = stub +} + +func (fake *StateQueryIterator) CloseReturns(result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + fake.closeReturns = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) { + fake.closeMutex.Lock() + 
defer fake.closeMutex.Unlock() + fake.CloseStub = nil + if fake.closeReturnsOnCall == nil { + fake.closeReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.closeReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) HasNext() bool { + fake.hasNextMutex.Lock() + ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)] + fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct { + }{}) + fake.recordInvocation("HasNext", []interface{}{}) + fake.hasNextMutex.Unlock() + if fake.HasNextStub != nil { + return fake.HasNextStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.hasNextReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) HasNextCallCount() int { + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + return len(fake.hasNextArgsForCall) +} + +func (fake *StateQueryIterator) HasNextCalls(stub func() bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = stub +} + +func (fake *StateQueryIterator) HasNextReturns(result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + fake.hasNextReturns = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + if fake.hasNextReturnsOnCall == nil { + fake.hasNextReturnsOnCall = make(map[int]struct { + result1 bool + }) + } + fake.hasNextReturnsOnCall[i] = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) Next() (*queryresult.KV, error) { + fake.nextMutex.Lock() + ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)] + fake.nextArgsForCall = append(fake.nextArgsForCall, struct { + }{}) + fake.recordInvocation("Next", []interface{}{}) + fake.nextMutex.Unlock() + if fake.NextStub != nil { + return fake.NextStub() + 
} + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.nextReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *StateQueryIterator) NextCallCount() int { + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + return len(fake.nextArgsForCall) +} + +func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = stub +} + +func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + fake.nextReturns = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + if fake.nextReturnsOnCall == nil { + fake.nextReturnsOnCall = make(map[int]struct { + result1 *queryresult.KV + result2 error + }) + } + fake.nextReturnsOnCall[i] = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + 
fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go new file mode 100644 index 0000000..eea37db --- /dev/null +++ b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go @@ -0,0 +1,164 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-chaincode-go/pkg/cid" + "github.com/hyperledger/fabric-chaincode-go/shim" +) + +type TransactionContext struct { + GetClientIdentityStub func() cid.ClientIdentity + getClientIdentityMutex sync.RWMutex + getClientIdentityArgsForCall []struct { + } + getClientIdentityReturns struct { + result1 cid.ClientIdentity + } + getClientIdentityReturnsOnCall map[int]struct { + result1 cid.ClientIdentity + } + GetStubStub func() shim.ChaincodeStubInterface + getStubMutex sync.RWMutex + getStubArgsForCall []struct { + } + getStubReturns struct { + result1 shim.ChaincodeStubInterface + } + getStubReturnsOnCall map[int]struct { + result1 shim.ChaincodeStubInterface + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity { + fake.getClientIdentityMutex.Lock() + ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)] + fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct { + }{}) + fake.recordInvocation("GetClientIdentity", []interface{}{}) + fake.getClientIdentityMutex.Unlock() + if fake.GetClientIdentityStub != nil { + return fake.GetClientIdentityStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getClientIdentityReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetClientIdentityCallCount() int { 
+ fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + return len(fake.getClientIdentityArgsForCall) +} + +func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = stub +} + +func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + fake.getClientIdentityReturns = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + if fake.getClientIdentityReturnsOnCall == nil { + fake.getClientIdentityReturnsOnCall = make(map[int]struct { + result1 cid.ClientIdentity + }) + } + fake.getClientIdentityReturnsOnCall[i] = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface { + fake.getStubMutex.Lock() + ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)] + fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct { + }{}) + fake.recordInvocation("GetStub", []interface{}{}) + fake.getStubMutex.Unlock() + if fake.GetStubStub != nil { + return fake.GetStubStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStubReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetStubCallCount() int { + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + return len(fake.getStubArgsForCall) +} + +func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = stub +} + +func (fake 
*TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + fake.getStubReturns = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + if fake.getStubReturnsOnCall == nil { + fake.getStubReturnsOnCall = make(map[int]struct { + result1 shim.ChaincodeStubInterface + }) + } + fake.getStubReturnsOnCall[i] = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *TransactionContext) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go new file mode 100644 index 0000000..71e8dd8 --- /dev/null +++ b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go @@ -0,0 +1,185 @@ +package chaincode + +import ( + "encoding/json" + "fmt" + + 
"github.com/hyperledger/fabric-contract-api-go/contractapi" +) + +// SmartContract provides functions for managing an Asset +type SmartContract struct { + contractapi.Contract +} + +// Asset describes basic details of what makes up a simple asset +type Asset struct { + ID string `json:"ID"` + Color string `json:"color"` + Size int `json:"size"` + Owner string `json:"owner"` + AppraisedValue int `json:"appraisedValue"` +} + +// InitLedger adds a base set of assets to the ledger +func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error { + assets := []Asset{ + {ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300}, + {ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400}, + {ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500}, + {ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600}, + {ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700}, + {ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800}, + } + + for _, asset := range assets { + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + err = ctx.GetStub().PutState(asset.ID, assetJSON) + if err != nil { + return fmt.Errorf("failed to put to world state. %v", err) + } + } + + return nil +} + +// CreateAsset issues a new asset to the world state with given details. 
+func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if exists { + return fmt.Errorf("the asset %s already exists", id) + } + + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// ReadAsset returns the asset stored in the world state with given id. +func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return nil, fmt.Errorf("failed to read from world state: %v", err) + } + if assetJSON == nil { + return nil, fmt.Errorf("the asset %s does not exist", id) + } + + var asset Asset + err = json.Unmarshal(assetJSON, &asset) + if err != nil { + return nil, err + } + + return &asset, nil +} + +// UpdateAsset updates an existing asset in the world state with provided parameters. +func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + // overwriting original asset with new asset + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// DeleteAsset deletes a given asset from the world state. 
+func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + return ctx.GetStub().DelState(id) +} + +// AssetExists returns true when asset with given ID exists in world state +func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return false, fmt.Errorf("failed to read from world state: %v", err) + } + + return assetJSON != nil, nil +} + +// TransferAsset updates the owner field of asset with given id in world state. +func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error { + asset, err := s.ReadAsset(ctx, id) + if err != nil { + return err + } + + asset.Owner = newOwner + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// GetAllAssets returns all assets found in world state +func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) { + // range query with empty string for startKey and endKey does an + // open-ended query of all assets in the chaincode namespace. 
+ resultsIterator, err := ctx.GetStub().GetStateByRange("", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . 
stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, 
err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, 
fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + chaincodeStub.GetStateByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = 
assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t5/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod 
h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= +github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= 
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= 
+github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty 
v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod 
h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto 
v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= +google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 
v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t5/config/cas-proxy/org1/identities/nginx/nginx.conf b/topologies/t5/config/cas-proxy/org1/identities/nginx/nginx.conf new file mode 100644 index 0000000..db03ff4 --- /dev/null +++ b/topologies/t5/config/cas-proxy/org1/identities/nginx/nginx.conf @@ -0,0 +1,29 @@ +worker_processes auto; +pid /run/nginx.pid; +include /etc/nginx/modules-enabled/*.conf; + +events { + worker_connections 768; + # multi_accept on; +} + +stream { + upstream backend_servers { + server t5-org1-ca-identities-n0:9000 max_fails=3 fail_timeout=10s; + server t5-org1-ca-identities-n1:9000 max_fails=3 fail_timeout=10s; + } + + log_format basic '$remote_addr [$time_local] ' + '$protocol $status $bytes_sent $bytes_received ' + '$session_time "$upstream_addr" ' + '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"'; + + access_log /var/log/nginx/access.log basic; + error_log /var/log/nginx/error.log; + + server { + listen 9001; + proxy_pass backend_servers; + proxy_next_upstream on; + } +} \ No newline at end of file diff --git a/topologies/t5/config/cas-proxy/org1/tls/nginx/nginx.conf b/topologies/t5/config/cas-proxy/org1/tls/nginx/nginx.conf new file mode 100644 index 0000000..1d6f5fc --- /dev/null +++ b/topologies/t5/config/cas-proxy/org1/tls/nginx/nginx.conf @@ -0,0 +1,31 @@ +worker_processes auto; +pid /run/nginx.pid; +include /etc/nginx/modules-enabled/*.conf; + +events { 
+ worker_connections 768; + # multi_accept on; +} + + + +stream { + upstream backend_servers { + server t5-org1-ca-tls-n0:9000 max_fails=3 fail_timeout=10s; + server t5-org1-ca-tls-n1:9000 max_fails=3 fail_timeout=10s; + } + + log_format basic '$remote_addr [$time_local] ' + '$protocol $status $bytes_sent $bytes_received ' + '$session_time "$upstream_addr" ' + '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"'; + + access_log /var/log/nginx/access.log basic; + error_log /var/log/nginx/error.log debug; + + server { + listen 0.0.0.0:9000; + proxy_pass backend_servers; + proxy_next_upstream on; + } +} \ No newline at end of file diff --git a/topologies/t5/config/config.yaml b/topologies/t5/config/config.yaml new file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t5/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t5/config/configtx.yaml b/topologies/t5/config/configtx.yaml new file mode 100644 index 0000000..1264040 --- /dev/null +++ b/topologies/t5/config/configtx.yaml @@ -0,0 +1,428 @@ +# Copyright IBM Corp. All Rights Reserved. +# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. 
It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication.
Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded.
For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. 
+ # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's "GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: /Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + 
#---Miscellaneous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/<PolicyName> + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities +################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer:
&OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. + AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. 
+ PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. +# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/ + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# +################################################################################ +Profiles: + OrgsOrdererGenesis: + <<: 
*ChannelDefaults + Orderer: + <<: *OrdererDefaults + Organizations: + - *org1 + Capabilities: + <<: *OrdererCapabilities + Consortiums: + MainConsortium: + Organizations: + - *org2 + - *org3 + + OrgsChannel: + Consortium: MainConsortium + <<: *ChannelDefaults + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 + Capabilities: + <<: *ApplicationCapabilities + diff --git a/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas-db.yml b/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas-db.yml new file mode 100644 index 0000000..9c8d949 --- /dev/null +++ b/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas-db.yml @@ -0,0 +1,10 @@ +services: + org1-ca-db: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-db + environment: + - PGDATA=/tmp/postgres/data + - POSTGRES_USER=postgres + - POSTGRES_PASSWORD=adminpsql + - POSTGRES_DB=postgres + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/org1/ca-db/data:/tmp/postgres/data" \ No newline at end of file diff --git a/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas-proxy.yml b/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas-proxy.yml new file mode 100644 index 0000000..2f35559 --- /dev/null +++ b/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas-proxy.yml @@ -0,0 +1,12 @@ +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + environment: + - name=value12 + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/cas-proxy/org1/tls/nginx/nginx.conf:/etc/nginx/nginx.conf + # command: [nginx-debug, '-g', 'daemon off;'] + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/cas-proxy/org1/identities/nginx/nginx.conf:/etc/nginx/nginx.conf \ No newline at end of file diff --git a/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 
0000000..7b04a9b --- /dev/null +++ b/topologies/t5/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,61 @@ +services: + org1-ca-tls-n0: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls-n0 + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 9000' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_DB_TYPE=postgres + - FABRIC_CA_SERVER_DB_DATASOURCE=host=${CURRENT_HL_TOPOLOGY}-org1-ca-db port=5432 user=postgres password=adminpsql dbname=fabric_ca_tls sslmode=disable + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-tls-n1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls-n1 + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 9000' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_DB_TYPE=postgres + - FABRIC_CA_SERVER_DB_DATASOURCE=host=${CURRENT_HL_TOPOLOGY}-org1-ca-db port=5432 user=postgres password=adminpsql dbname=fabric_ca_tls sslmode=disable + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities-n0: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities-n0 + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 9000' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - 
FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_DB_TYPE=postgres + - FABRIC_CA_SERVER_DB_DATASOURCE=host=${CURRENT_HL_TOPOLOGY}-org1-ca-db port=5432 user=postgres password=adminpsql dbname=fabric_ca_identities sslmode=disable + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities-n1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities-n1 + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 9000' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_DB_TYPE=postgres + - FABRIC_CA_SERVER_DB_DATASOURCE=host=${CURRENT_HL_TOPOLOGY}-org1-ca-db port=5432 user=postgres password=adminpsql dbname=fabric_ca_identities sslmode=disable + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t5/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t5/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..e22813e --- /dev/null +++ b/topologies/t5/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - 
FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t5/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t5/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..b18ea4d --- /dev/null +++ b/topologies/t5/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + 
command: sh -c 'fabric-ca-server start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t5/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t5/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..4ceb8a0 --- /dev/null +++ b/topologies/t5/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,21 @@ +services: + org2-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t5/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t5/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..68d2ec0 --- /dev/null +++ 
b/topologies/t5/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,21 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t5/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t5/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..0045893 --- /dev/null +++ b/topologies/t5/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,82 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - 
ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - 
ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - 
ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t5/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t5/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..f115b6d --- /dev/null +++ b/topologies/t5/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,50 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + 
- CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t5/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t5/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..c58de7b --- /dev/null +++ b/topologies/t5/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,50 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - 
CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - 
${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t5/crypto-material/.gitkeep b/topologies/t5/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t5/docker-compose.yml b/topologies/t5/docker-compose.yml new file mode 100644 index 0000000..0c968e9 --- /dev/null +++ b/topologies/t5/docker-compose.yml @@ -0,0 +1,121 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: nginx-hl-fabric + #ports: + # - 9000:9000 + networks: + - hl-fabric + org1-ca-identities: + image: nginx-hl-fabric + #ports: + # - 9001:9001 + networks: + - hl-fabric + org2-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: 
hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-ca-db: + image: postgres + #ports: + # - :5432 + networks: + - hl-fabric + org1-ca-tls-n0: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + #ports: + # - :9000 + networks: + - hl-fabric + org1-ca-tls-n1: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + #Ports: + # - :9000 + networks: + - hl-fabric + org1-ca-identities-n0: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + #Ports: + # - :9000 + networks: + - hl-fabric + org1-ca-identities-n1: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + #ports: + # - :9000 + networks: + - hl-fabric diff --git a/topologies/t5/homefolders/.gitkeep b/topologies/t5/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t5/images/nginx/Dockerfile b/topologies/t5/images/nginx/Dockerfile new file mode 100644 index 0000000..bd4aa06 --- /dev/null +++ b/topologies/t5/images/nginx/Dockerfile @@ -0,0 +1,4 @@ +FROM nginx +RUN apt update +RUN apt install -y nginx-common +RUN apt install -y libnginx-mod-stream \ No newline at end of file diff --git a/topologies/t5/scripts/all-org-peers-commit-chaincode.sh b/topologies/t5/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..7810f33 --- /dev/null +++ b/topologies/t5/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The 
Package ID for the installed chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t5/scripts/all-org-peers-execute-chaincode.sh b/topologies/t5/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..d8be52e --- /dev/null +++ b/topologies/t5/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,18 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git 
a/topologies/t5/scripts/channels-setup.sh b/topologies/t5/scripts/channels-setup.sh new file mode 100755 index 0000000..230aac8 --- /dev/null +++ b/topologies/t5/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t5/scripts/delete-state-data.sh b/topologies/t5/scripts/delete-state-data.sh new file mode 100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t5/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t5/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t5/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..ca41d1c --- /dev/null +++ b/topologies/t5/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:9000 +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:9000 +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:9000 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:9000 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp 
/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t5/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t5/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..1ce96dc --- /dev/null +++ b/topologies/t5/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:9000 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:9000 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:9000 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +mv 
/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t5/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t5/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..fc6c878 --- /dev/null +++ b/topologies/t5/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:9000 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:9000 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:9000 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:9000 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs 
"hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:9000 \ No newline at end of file diff --git a/topologies/t5/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t5/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..a146aeb --- /dev/null +++ b/topologies/t5/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:9000 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:9000 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:9000 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:9000 \ No newline at end of file diff --git a/topologies/t5/scripts/org2-approve-chaincode.sh b/topologies/t5/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..fb8ac61 --- /dev/null +++ b/topologies/t5/scripts/org2-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses 
${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t5/scripts/org2-create-and-join-channels.sh b/topologies/t5/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..ed6a189 --- /dev/null +++ b/topologies/t5/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,12 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t5/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t5/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..ad0ea07 --- /dev/null +++ b/topologies/t5/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp 
/tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p
/tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t5/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t5/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..ea37b7e --- /dev/null +++ b/topologies/t5/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv 
/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t5/scripts/org2-install-chaincode.sh b/topologies/t5/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t5/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz \ No newline at end of file diff --git a/topologies/t5/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t5/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t5/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 
--id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t5/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t5/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..3a99816 --- /dev/null +++ b/topologies/t5/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t5/scripts/org3-approve-chaincode.sh b/topologies/t5/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..094c94f --- /dev/null +++ b/topologies/t5/scripts/org3-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses 
${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t5/scripts/org3-create-and-join-channels.sh b/topologies/t5/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..ebf8b48 --- /dev/null +++ b/topologies/t5/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t5/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t5/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..36f317d --- /dev/null +++ b/topologies/t5/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem 
+fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p
/tmp/crypto-material/orgs/org3/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t5/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t5/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..2bbb3b3 --- /dev/null +++ b/topologies/t5/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff 
--git a/topologies/t5/scripts/org3-install-chaincode.sh b/topologies/t5/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..f6b8789 --- /dev/null +++ b/topologies/t5/scripts/org3-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz diff --git a/topologies/t5/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t5/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1c56144 --- /dev/null +++ b/topologies/t5/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git 
a/topologies/t5/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t5/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..a5ccd7f --- /dev/null +++ b/topologies/t5/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t5/scripts/patch-configtx.sh b/topologies/t5/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t5/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t5/scripts/setup-docker-images.sh b/topologies/t5/scripts/setup-docker-images.sh new file mode 100755 index 0000000..0b4131f --- /dev/null +++ b/topologies/t5/scripts/setup-docker-images.sh @@ -0,0 +1,6 @@ +#!/bin/bash +set -e +set -x + +cd $1/images/nginx +docker build -t nginx-hl-fabric . 
\ No newline at end of file diff --git a/topologies/t5/setup-network.sh b/topologies/t5/setup-network.sh new file mode 100755 index 0000000..ea855a4 --- /dev/null +++ b/topologies/t5/setup-network.sh @@ -0,0 +1,189 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t5 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any existing docker networks and clear any state data----- +echo "Deleting the old network..." +./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." 
+ +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# ----Setup Docker Images ---- +./scripts/setup-docker-images.sh ${HL_TOPOLOGIES_BASE_FOLDER} + +# -----Setup CA DBs ----- +mkdir -p ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/org1/ca-db/data +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas-db.yml up -d org1-ca-db +sleep 33 + +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs and register with these the TLS-CA and Identities-CA users, such as admins, clients, etc...----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls-n0 +sleep 10 +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls-n1 +sleep 2 +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas-proxy.yml up -d org1-ca-tls +sleep 2 +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities-n0 +sleep 10 +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities-n1 +sleep 2 +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas-proxy.yml up -d org1-ca-identities +sleep 7 + +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls-n0 /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities-n0 /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each org's TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f
${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# ----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls-n0 /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities-n0 /bin/sh -c "/bin/sh 
/tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----need to wait until Raft leader election has completed for the orderers +sleep 4 + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-and-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec 
${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t5/teardown-network.sh b/topologies/t5/teardown-network.sh new file mode 100755 index 0000000..886fd23 --- /dev/null +++ b/topologies/t5/teardown-network.sh @@ -0,0 +1,33 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t5 + +# ----------------------------------------------------------------------------- +# -----remove the current topology's containers +# ----------------------------------------------------------------------------- +# -----the chaincode containers report an error on removal even though they are in fact deleted; 
ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec will throw error if no running container found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." +fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t6/.env b/topologies/t6/.env new file mode 100644 index 0000000..0c2e27e --- /dev/null +++ b/topologies/t6/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t6 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.2.3 +FABRIC_TOOLS_VERSION=2.2.3 +PEER_ORDERER_VERSION=2.2.3 \ No newline at end of file diff --git a/topologies/t6/.gitignore b/topologies/t6/.gitignore new file mode 100644 index 0000000..ee0881a --- /dev/null +++ b/topologies/t6/.gitignore @@ -0,0 +1,2 @@ +crypto-material/*/** +homefolders/*/** \ No newline at end of file diff --git a/topologies/t6/README.md b/topologies/t6/README.md new file mode 100644 index 0000000..0a3e447 --- /dev/null +++ 
b/topologies/t6/README.md @@ -0,0 +1,39 @@ +# T6: Mutual TLS +## Description +--- +T1 network + communications between all components performed using TLS & Client authentication + +## Diagram +--- +![Diagram of components](../image_store/T6.png) + +## Relevant Documentation + +- https://hyperledger-fabric.readthedocs.io/en/latest/enable_tls.html + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA + * Identities CA +* Org 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA + +## Characteristics + +- World State Database Instance (LevelDB) embedded (in peer containers) +- Chaincode installed directly on peers +- Communication between all components done via Mutual TLS \ No newline at end of file diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null 
+++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + 
getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 
string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) 
(shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 
error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + 
getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + 
result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + 
fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := 
fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + 
fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake 
*ChaincodeStub) GetArgsCallCount() int {
+	fake.getArgsMutex.RLock()
+	defer fake.getArgsMutex.RUnlock()
+	return len(fake.getArgsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	fake.getArgsReturns = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	if fake.getArgsReturnsOnCall == nil {
+		fake.getArgsReturnsOnCall = make(map[int]struct {
+			result1 [][]byte
+		})
+	}
+	fake.getArgsReturnsOnCall[i] = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) {
+	fake.getArgsSliceMutex.Lock()
+	ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)]
+	fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetArgsSlice", []interface{}{})
+	fake.getArgsSliceMutex.Unlock()
+	if fake.GetArgsSliceStub != nil {
+		return fake.GetArgsSliceStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getArgsSliceReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCallCount() int {
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	return len(fake.getArgsSliceArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	fake.getArgsSliceReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	if fake.getArgsSliceReturnsOnCall == nil {
+		fake.getArgsSliceReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getArgsSliceReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBinding() ([]byte, error) {
+	fake.getBindingMutex.Lock()
+	ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)]
+	fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetBinding", []interface{}{})
+	fake.getBindingMutex.Unlock()
+	if fake.GetBindingStub != nil {
+		return fake.GetBindingStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getBindingReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetBindingCallCount() int {
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	return len(fake.getBindingArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = stub
+}
+
+func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	fake.getBindingReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	if fake.getBindingReturnsOnCall == nil {
+		fake.getBindingReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getBindingReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetChannelID() string {
+	fake.getChannelIDMutex.Lock()
+	ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)]
+	fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetChannelID", []interface{}{})
+	fake.getChannelIDMutex.Unlock()
+	if fake.GetChannelIDStub != nil {
+		return fake.GetChannelIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getChannelIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetChannelIDCallCount() int {
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	return len(fake.getChannelIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = stub
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	fake.getChannelIDReturns = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	if fake.getChannelIDReturnsOnCall == nil {
+		fake.getChannelIDReturnsOnCall = make(map[int]struct {
+			result1 string
+		})
+	}
+	fake.getChannelIDReturnsOnCall[i] = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetCreator() ([]byte, error) {
+	fake.getCreatorMutex.Lock()
+	ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)]
+	fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetCreator", []interface{}{})
+	fake.getCreatorMutex.Unlock()
+	if fake.GetCreatorStub != nil {
+		return fake.GetCreatorStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getCreatorReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetCreatorCallCount() int {
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	return len(fake.getCreatorArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = stub
+}
+
+func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	fake.getCreatorReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	if fake.getCreatorReturnsOnCall == nil {
+		fake.getCreatorReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getCreatorReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetDecorations() map[string][]byte {
+	fake.getDecorationsMutex.Lock()
+	ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)]
+	fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetDecorations", []interface{}{})
+	fake.getDecorationsMutex.Unlock()
+	if fake.GetDecorationsStub != nil {
+		return fake.GetDecorationsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getDecorationsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetDecorationsCallCount() int {
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	return len(fake.getDecorationsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = stub
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	fake.getDecorationsReturns = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	if fake.getDecorationsReturnsOnCall == nil {
+		fake.getDecorationsReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+		})
+	}
+	fake.getDecorationsReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)]
+	fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetFunctionAndParameters", []interface{}{})
+	fake.getFunctionAndParametersMutex.Unlock()
+	if fake.GetFunctionAndParametersStub != nil {
+		return fake.GetFunctionAndParametersStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getFunctionAndParametersReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int {
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	return len(fake.getFunctionAndParametersArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = stub
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	fake.getFunctionAndParametersReturns = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	if fake.getFunctionAndParametersReturnsOnCall == nil {
+		fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+		})
+	}
+	fake.getFunctionAndParametersReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) {
+	fake.getHistoryForKeyMutex.Lock()
+	ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)]
+	fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetHistoryForKey", []interface{}{arg1})
+	fake.getHistoryForKeyMutex.Unlock()
+	if fake.GetHistoryForKeyStub != nil {
+		return fake.GetHistoryForKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getHistoryForKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	return len(fake.getHistoryForKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	argsForCall := fake.getHistoryForKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	fake.getHistoryForKeyReturns = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	if fake.getHistoryForKeyReturnsOnCall == nil {
+		fake.getHistoryForKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.HistoryQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getHistoryForKeyReturnsOnCall[i] = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)]
+	fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2})
+	fake.getPrivateDataMutex.Unlock()
+	if fake.GetPrivateDataStub != nil {
+		return fake.GetPrivateDataStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCallCount() int {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	return len(fake.getPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	argsForCall := fake.getPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	fake.getPrivateDataReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	if fake.getPrivateDataReturnsOnCall == nil {
+		fake.getPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg3Copy []string
+	if arg3 != nil {
+		arg3Copy = make([]string, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)]
+	fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []string
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy})
+	fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	if fake.GetPrivateDataByPartialCompositeKeyStub != nil {
+		return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	fake.getPrivateDataByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)]
+	fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3})
+	fake.getPrivateDataByRangeMutex.Unlock()
+	if fake.GetPrivateDataByRangeStub != nil {
+		return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	return len(fake.getPrivateDataByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, string, string) {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	fake.getPrivateDataByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	if fake.getPrivateDataByRangeReturnsOnCall == nil {
+		fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataHashMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)]
+	fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2})
+	fake.getPrivateDataHashMutex.Unlock()
+	if fake.GetPrivateDataHashStub != nil {
+		return fake.GetPrivateDataHashStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataHashReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCallCount() int {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	return len(fake.getPrivateDataHashArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	argsForCall := fake.getPrivateDataHashArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	fake.getPrivateDataHashReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	if fake.getPrivateDataHashReturnsOnCall == nil {
+		fake.getPrivateDataHashReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataHashReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)]
+	fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2})
+	fake.getPrivateDataQueryResultMutex.Unlock()
+	if fake.GetPrivateDataQueryResultStub != nil {
+		return fake.GetPrivateDataQueryResultStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	return len(fake.getPrivateDataQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	argsForCall := fake.getPrivateDataQueryResultArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	fake.getPrivateDataQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	if fake.getPrivateDataQueryResultReturnsOnCall == nil {
+		fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)]
+	fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2})
+	fake.getPrivateDataValidationParameterMutex.Unlock()
+	if fake.GetPrivateDataValidationParameterStub != nil {
+		return fake.GetPrivateDataValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.getPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	fake.getPrivateDataValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	if fake.getPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getQueryResultMutex.Lock()
+	ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)]
+	fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetQueryResult", []interface{}{arg1})
+	fake.getQueryResultMutex.Unlock()
+	if fake.GetQueryResultStub != nil {
+		return fake.GetQueryResultStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetQueryResultCallCount() int {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	return len(fake.getQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	argsForCall := fake.getQueryResultArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	fake.getQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	if fake.getQueryResultReturnsOnCall == nil {
+		fake.getQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)]
+	fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 int32
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3})
+	fake.getQueryResultWithPaginationMutex.Unlock()
+	if fake.GetQueryResultWithPaginationStub != nil {
+		return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getQueryResultWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	return len(fake.getQueryResultWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	argsForCall := fake.getQueryResultWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	fake.getQueryResultWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	if fake.getQueryResultWithPaginationReturnsOnCall == nil {
+		fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getQueryResultWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) {
+	fake.getSignedProposalMutex.Lock()
+	ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)]
+	fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetSignedProposal", []interface{}{})
+	fake.getSignedProposalMutex.Unlock()
+	if fake.GetSignedProposalStub != nil {
+		return fake.GetSignedProposalStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getSignedProposalReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCallCount() int {
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	return len(fake.getSignedProposalArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = stub
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	fake.getSignedProposalReturns = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	if fake.getSignedProposalReturnsOnCall == nil {
+		fake.getSignedProposalReturnsOnCall = make(map[int]struct {
+			result1 *peer.SignedProposal
+			result2 error
+		})
+	}
+	fake.getSignedProposalReturnsOnCall[i] = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) {
+	fake.getStateMutex.Lock()
+	ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)]
+	fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetState", []interface{}{arg1})
+	fake.getStateMutex.Unlock()
+	if fake.GetStateStub != nil {
+		return fake.GetStateStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateCallCount() int {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	return len(fake.getStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateArgsForCall(i int) string {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	argsForCall := fake.getStateArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	fake.getStateReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	if fake.getStateReturnsOnCall == nil {
+		fake.getStateReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)]
+	fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.getStateByPartialCompositeKeyMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyStub != nil {
+		return fake.GetStateByPartialCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	fake.getStateByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	if fake.getStateByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)]
+	fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 []string
+		arg3 int32
+		arg4
string + }{arg1, arg2Copy, arg3, arg4}) + fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4}) + fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil { + return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer 
fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + fake.getStateByPartialCompositeKeyWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getStateByRangeMutex.Lock() + ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)] + fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2}) + fake.getStateByRangeMutex.Unlock() + if fake.GetStateByRangeStub != nil { + return fake.GetStateByRangeStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByRangeReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByRangeCallCount() int { + 
fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + return len(fake.getStateByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + argsForCall := fake.getStateByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + fake.getStateByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + if fake.getStateByRangeReturnsOnCall == nil { + fake.getStateByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getStateByRangeWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)] + fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct { + arg1 
string + arg2 string + arg3 int32 + arg4 string + }{arg1, arg2, arg3, arg4}) + fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4}) + fake.getStateByRangeWithPaginationMutex.Unlock() + if fake.GetStateByRangeWithPaginationStub != nil { + return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByRangeWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + return len(fake.getStateByRangeWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + fake.getStateByRangeWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, 
result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + if fake.getStateByRangeWithPaginationReturnsOnCall == nil { + fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) { + fake.getStateValidationParameterMutex.Lock() + ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)] + fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1}) + fake.getStateValidationParameterMutex.Unlock() + if fake.GetStateValidationParameterStub != nil { + return fake.GetStateValidationParameterStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateValidationParameterReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + return len(fake.getStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) { + fake.getStateValidationParameterMutex.Lock() + defer 
fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + argsForCall := fake.getStateValidationParameterArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + fake.getStateValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + if fake.getStateValidationParameterReturnsOnCall == nil { + fake.getStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStringArgs() []string { + fake.getStringArgsMutex.Lock() + ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)] + fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetStringArgs", []interface{}{}) + fake.getStringArgsMutex.Unlock() + if fake.GetStringArgsStub != nil { + return fake.GetStringArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStringArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetStringArgsCallCount() int { + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + return 
len(fake.getStringArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = stub +} + +func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + fake.getStringArgsReturns = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + if fake.getStringArgsReturnsOnCall == nil { + fake.getStringArgsReturnsOnCall = make(map[int]struct { + result1 []string + }) + } + fake.getStringArgsReturnsOnCall[i] = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) { + fake.getTransientMutex.Lock() + ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)] + fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct { + }{}) + fake.recordInvocation("GetTransient", []interface{}{}) + fake.getTransientMutex.Unlock() + if fake.GetTransientStub != nil { + return fake.GetTransientStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTransientReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTransientCallCount() int { + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + return len(fake.getTransientArgsForCall) +} + +func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = stub +} + +func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer 
fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + fake.getTransientReturns = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + if fake.getTransientReturnsOnCall == nil { + fake.getTransientReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + result2 error + }) + } + fake.getTransientReturnsOnCall[i] = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxID() string { + fake.getTxIDMutex.Lock() + ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)] + fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxID", []interface{}{}) + fake.getTxIDMutex.Unlock() + if fake.GetTxIDStub != nil { + return fake.GetTxIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getTxIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetTxIDCallCount() int { + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + return len(fake.getTxIDArgsForCall) +} + +func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + 
result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy 
[][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + fake.invokeChaincodeReturns = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + if fake.invokeChaincodeReturnsOnCall == nil { + fake.invokeChaincodeReturnsOnCall = make(map[int]struct { 
+ result1 peer.Response + }) + } + fake.invokeChaincodeReturnsOnCall[i] = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.putPrivateDataMutex.Lock() + ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)] + fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy}) + fake.putPrivateDataMutex.Unlock() + if fake.PutPrivateDataStub != nil { + return fake.PutPrivateDataStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutPrivateDataCallCount() int { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + return len(fake.putPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = stub +} + +func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + argsForCall := fake.putPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + fake.putPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) { + fake.putPrivateDataMutex.Lock() + defer 
fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + if fake.putPrivateDataReturnsOnCall == nil { + fake.putPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.putStateMutex.Lock() + ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)] + fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy}) + fake.putStateMutex.Unlock() + if fake.PutStateStub != nil { + return fake.PutStateStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutStateCallCount() int { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + return len(fake.putStateArgsForCall) +} + +func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = stub +} + +func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + argsForCall := fake.putStateArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) PutStateReturns(result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + fake.putStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + if fake.putStateReturnsOnCall == nil { + 
fake.putStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setEventMutex.Lock() + ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)] + fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy}) + fake.setEventMutex.Unlock() + if fake.SetEventStub != nil { + return fake.SetEventStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setEventReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetEventCallCount() int { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + return len(fake.setEventArgsForCall) +} + +func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = stub +} + +func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + argsForCall := fake.setEventArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetEventReturns(result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + fake.setEventReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + if fake.setEventReturnsOnCall == nil { + fake.setEventReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setEventReturnsOnCall[i] = struct { + result1 error + }{result1} +} 
+ +func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.setPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)] + fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy}) + fake.setPrivateDataValidationParameterMutex.Unlock() + if fake.SetPrivateDataValidationParameterStub != nil { + return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setPrivateDataValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + return len(fake.setPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) 
SetPrivateDataValidationParameterReturns(result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + fake.setPrivateDataValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + if fake.setPrivateDataValidationParameterReturnsOnCall == nil { + fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setStateValidationParameterMutex.Lock() + ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)] + fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy}) + fake.setStateValidationParameterMutex.Unlock() + if fake.SetStateValidationParameterStub != nil { + return fake.SetStateValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setStateValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + return len(fake.setStateValidationParameterArgsForCall) +} + 
+func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + argsForCall := fake.setStateValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + fake.setStateValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + if fake.setStateValidationParameterReturnsOnCall == nil { + fake.setStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setStateValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) { + fake.splitCompositeKeyMutex.Lock() + ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)] + fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("SplitCompositeKey", []interface{}{arg1}) + fake.splitCompositeKeyMutex.Unlock() + if fake.SplitCompositeKeyStub != nil { + return fake.SplitCompositeKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := 
fake.splitCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + return len(fake.splitCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + argsForCall := fake.splitCompositeKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + fake.splitCompositeKeyReturns = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + if fake.splitCompositeKeyReturnsOnCall == nil { + fake.splitCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + result3 error + }) + } + fake.splitCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + 
fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + fake.getStateByRangeWithPaginationMutex.RLock() + defer 
fake.getStateByRangeWithPaginationMutex.RUnlock() + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go new file mode 100644 index 0000000..27e3034 --- /dev/null +++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go @@ -0,0 +1,232 @@ 
+// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" +) + +type StateQueryIterator struct { + CloseStub func() error + closeMutex sync.RWMutex + closeArgsForCall []struct { + } + closeReturns struct { + result1 error + } + closeReturnsOnCall map[int]struct { + result1 error + } + HasNextStub func() bool + hasNextMutex sync.RWMutex + hasNextArgsForCall []struct { + } + hasNextReturns struct { + result1 bool + } + hasNextReturnsOnCall map[int]struct { + result1 bool + } + NextStub func() (*queryresult.KV, error) + nextMutex sync.RWMutex + nextArgsForCall []struct { + } + nextReturns struct { + result1 *queryresult.KV + result2 error + } + nextReturnsOnCall map[int]struct { + result1 *queryresult.KV + result2 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *StateQueryIterator) Close() error { + fake.closeMutex.Lock() + ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)] + fake.closeArgsForCall = append(fake.closeArgsForCall, struct { + }{}) + fake.recordInvocation("Close", []interface{}{}) + fake.closeMutex.Unlock() + if fake.CloseStub != nil { + return fake.CloseStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.closeReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) CloseCallCount() int { + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + return len(fake.closeArgsForCall) +} + +func (fake *StateQueryIterator) CloseCalls(stub func() error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = stub +} + +func (fake *StateQueryIterator) CloseReturns(result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + fake.closeReturns = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) { + fake.closeMutex.Lock() + 
defer fake.closeMutex.Unlock() + fake.CloseStub = nil + if fake.closeReturnsOnCall == nil { + fake.closeReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.closeReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) HasNext() bool { + fake.hasNextMutex.Lock() + ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)] + fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct { + }{}) + fake.recordInvocation("HasNext", []interface{}{}) + fake.hasNextMutex.Unlock() + if fake.HasNextStub != nil { + return fake.HasNextStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.hasNextReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) HasNextCallCount() int { + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + return len(fake.hasNextArgsForCall) +} + +func (fake *StateQueryIterator) HasNextCalls(stub func() bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = stub +} + +func (fake *StateQueryIterator) HasNextReturns(result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + fake.hasNextReturns = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + if fake.hasNextReturnsOnCall == nil { + fake.hasNextReturnsOnCall = make(map[int]struct { + result1 bool + }) + } + fake.hasNextReturnsOnCall[i] = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) Next() (*queryresult.KV, error) { + fake.nextMutex.Lock() + ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)] + fake.nextArgsForCall = append(fake.nextArgsForCall, struct { + }{}) + fake.recordInvocation("Next", []interface{}{}) + fake.nextMutex.Unlock() + if fake.NextStub != nil { + return fake.NextStub() + 
} + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.nextReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *StateQueryIterator) NextCallCount() int { + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + return len(fake.nextArgsForCall) +} + +func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = stub +} + +func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + fake.nextReturns = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + if fake.nextReturnsOnCall == nil { + fake.nextReturnsOnCall = make(map[int]struct { + result1 *queryresult.KV + result2 error + }) + } + fake.nextReturnsOnCall[i] = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + 
fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go new file mode 100644 index 0000000..eea37db --- /dev/null +++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go @@ -0,0 +1,164 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-chaincode-go/pkg/cid" + "github.com/hyperledger/fabric-chaincode-go/shim" +) + +type TransactionContext struct { + GetClientIdentityStub func() cid.ClientIdentity + getClientIdentityMutex sync.RWMutex + getClientIdentityArgsForCall []struct { + } + getClientIdentityReturns struct { + result1 cid.ClientIdentity + } + getClientIdentityReturnsOnCall map[int]struct { + result1 cid.ClientIdentity + } + GetStubStub func() shim.ChaincodeStubInterface + getStubMutex sync.RWMutex + getStubArgsForCall []struct { + } + getStubReturns struct { + result1 shim.ChaincodeStubInterface + } + getStubReturnsOnCall map[int]struct { + result1 shim.ChaincodeStubInterface + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity { + fake.getClientIdentityMutex.Lock() + ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)] + fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct { + }{}) + fake.recordInvocation("GetClientIdentity", []interface{}{}) + fake.getClientIdentityMutex.Unlock() + if fake.GetClientIdentityStub != nil { + return fake.GetClientIdentityStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getClientIdentityReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetClientIdentityCallCount() int { 
+ fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + return len(fake.getClientIdentityArgsForCall) +} + +func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = stub +} + +func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + fake.getClientIdentityReturns = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + if fake.getClientIdentityReturnsOnCall == nil { + fake.getClientIdentityReturnsOnCall = make(map[int]struct { + result1 cid.ClientIdentity + }) + } + fake.getClientIdentityReturnsOnCall[i] = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface { + fake.getStubMutex.Lock() + ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)] + fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct { + }{}) + fake.recordInvocation("GetStub", []interface{}{}) + fake.getStubMutex.Unlock() + if fake.GetStubStub != nil { + return fake.GetStubStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStubReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetStubCallCount() int { + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + return len(fake.getStubArgsForCall) +} + +func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = stub +} + +func (fake 
*TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + fake.getStubReturns = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + if fake.getStubReturnsOnCall == nil { + fake.getStubReturnsOnCall = make(map[int]struct { + result1 shim.ChaincodeStubInterface + }) + } + fake.getStubReturnsOnCall[i] = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *TransactionContext) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go new file mode 100644 index 0000000..71e8dd8 --- /dev/null +++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go @@ -0,0 +1,185 @@ +package chaincode + +import ( + "encoding/json" + "fmt" + + 
"github.com/hyperledger/fabric-contract-api-go/contractapi" +) + +// SmartContract provides functions for managing an Asset +type SmartContract struct { + contractapi.Contract +} + +// Asset describes basic details of what makes up a simple asset +type Asset struct { + ID string `json:"ID"` + Color string `json:"color"` + Size int `json:"size"` + Owner string `json:"owner"` + AppraisedValue int `json:"appraisedValue"` +} + +// InitLedger adds a base set of assets to the ledger +func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error { + assets := []Asset{ + {ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300}, + {ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400}, + {ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500}, + {ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600}, + {ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700}, + {ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800}, + } + + for _, asset := range assets { + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + err = ctx.GetStub().PutState(asset.ID, assetJSON) + if err != nil { + return fmt.Errorf("failed to put to world state. %v", err) + } + } + + return nil +} + +// CreateAsset issues a new asset to the world state with given details. 
+func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if exists {
+		return fmt.Errorf("the asset %s already exists", id)
+	}
+
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// ReadAsset returns the asset stored in the world state with given id.
+func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) {
+	assetJSON, err := ctx.GetStub().GetState(id)
+	if err != nil {
+		return nil, fmt.Errorf("failed to read from world state: %v", err)
+	}
+	if assetJSON == nil {
+		return nil, fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	var asset Asset
+	err = json.Unmarshal(assetJSON, &asset)
+	if err != nil {
+		return nil, err
+	}
+
+	return &asset, nil
+}
+
+// UpdateAsset updates an existing asset in the world state with provided parameters.
+func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if !exists {
+		return fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	// overwriting original asset with new asset
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// DeleteAsset deletes a given asset from the world state.
+func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + return ctx.GetStub().DelState(id) +} + +// AssetExists returns true when asset with given ID exists in world state +func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return false, fmt.Errorf("failed to read from world state: %v", err) + } + + return assetJSON != nil, nil +} + +// TransferAsset updates the owner field of asset with given id in world state. +func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error { + asset, err := s.ReadAsset(ctx, id) + if err != nil { + return err + } + + asset.Owner = newOwner + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// GetAllAssets returns all assets found in world state +func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) { + // range query with empty string for startKey and endKey does an + // open-ended query of all assets in the chaincode namespace. 
+ resultsIterator, err := ctx.GetStub().GetStateByRange("", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . 
stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, 
err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, 
fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + chaincodeStub.GetStateByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = 
assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t6/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod 
h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= +github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= 
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= 
+github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty 
v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod 
h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto 
v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= +google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 
v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t6/config/config.yaml b/topologies/t6/config/config.yaml new file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t6/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t6/config/configtx.yaml b/topologies/t6/config/configtx.yaml new file mode 100644 index 0000000..1264040 --- /dev/null +++ b/topologies/t6/config/configtx.yaml @@ -0,0 +1,428 @@ +# Copyright IBM Corp. All Rights Reserved. +# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. 
It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. 
Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. 
For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. 
+ # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's "GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: /Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + 
#---Miscellaneous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities +################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer: 
&OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. + AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. 
+ PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. +# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/ + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# +################################################################################ +Profiles: + OrgsOrdererGenesis: + <<: 
*ChannelDefaults + Orderer: + <<: *OrdererDefaults + Organizations: + - *org1 + Capabilities: + <<: *OrdererCapabilities + Consortiums: + MainConsortium: + Organizations: + - *org2 + - *org3 + + OrgsChannel: + Consortium: MainConsortium + <<: *ChannelDefaults + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 + Capabilities: + <<: *ApplicationCapabilities + diff --git a/topologies/t6/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t6/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 0000000..6dd1b43 --- /dev/null +++ b/topologies/t6/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_TYPE=RequireAndVerifyClientCert + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_CERTFILES=/tmp/hyperledger/fabric-ca/ca-cert.pem + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - 
"${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t6/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t6/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..89a99dc --- /dev/null +++ b/topologies/t6/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_TYPE=RequireAndVerifyClientCert + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_CERTFILES=/tmp/hyperledger/fabric-ca/ca-cert.pem + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t6/containers/cas/org3-cas/docker-compose-org3-cas.yml 
b/topologies/t6/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..446acd5 --- /dev/null +++ b/topologies/t6/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_TYPE=RequireAndVerifyClientCert + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_CERTFILES=/tmp/hyperledger/fabric-ca/ca-cert.pem + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + command: sh -c 'fabric-ca-server start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t6/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t6/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..50580aa --- /dev/null +++ b/topologies/t6/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,24 @@ +services: + org2-cli-peer1: + container_name: 
${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTCERT_FILE=/tmp/crypto-material/orgs/org2/admins/admin-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_CLIENTKEY_FILE=/tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/key.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" diff --git a/topologies/t6/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t6/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..dbaedf1 --- /dev/null +++ b/topologies/t6/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,24 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - 
CORE_PEER_TLS_CLIENTCERT_FILE=/tmp/crypto-material/orgs/org3/admins/admin-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_CLIENTKEY_FILE=/tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/key.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t6/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t6/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..fed3321 --- /dev/null +++ b/topologies/t6/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,88 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - 
ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_GENERAL_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true + - 
ORDERER_GENERAL_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_GENERAL_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - 
ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t6/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t6/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..ab557f0 --- /dev/null +++ b/topologies/t6/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,54 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - 
${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t6/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t6/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..ea90fe3 --- /dev/null +++ b/topologies/t6/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,54 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 
+ - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - 
CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t6/crypto-material/.gitkeep b/topologies/t6/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t6/docker-compose.yml b/topologies/t6/docker-compose.yml new file mode 100644 index 0000000..ac46843 --- /dev/null +++ b/topologies/t6/docker-compose.yml @@ -0,0 +1,91 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + 
networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric \ No newline at end of file diff --git a/topologies/t6/homefolders/.gitkeep b/topologies/t6/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t6/images/Dockerfile b/topologies/t6/images/Dockerfile new file mode 100644 index 0000000..141c832 --- /dev/null +++ b/topologies/t6/images/Dockerfile @@ -0,0 +1,4 @@ +FROM hyperledger/fabric-ca:1.5 +RUN apk upgrade --update-cache --available && \ + apk add openssl && \ + rm -rf /var/cache/apk/* \ No newline at end of file diff --git a/topologies/t6/scripts/all-org-peers-commit-chaincode.sh b/topologies/t6/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..43c34db --- /dev/null +++ b/topologies/t6/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,28 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed 
chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/signcerts/cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t6/scripts/all-org-peers-execute-chaincode.sh b/topologies/t6/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..70eff11 --- /dev/null +++ b/topologies/t6/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,20 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles 
/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/signcerts/cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git a/topologies/t6/scripts/channels-setup.sh b/topologies/t6/scripts/channels-setup.sh new file mode 100755 index 0000000..230aac8 --- /dev/null +++ b/topologies/t6/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t6/scripts/delete-state-data.sh b/topologies/t6/scripts/delete-state-data.sh new file mode 100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t6/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t6/scripts/find-ca-private-key.sh b/topologies/t6/scripts/find-ca-private-key.sh new file mode 100755 index 0000000..29b3f95 --- /dev/null +++ b/topologies/t6/scripts/find-ca-private-key.sh @@ -0,0 +1,21 @@ +#!/bin/bash + +set -e +set -x + +find_private_key_path() { + CA_HOME=$1 + CA_CERTFILE=$CA_HOME/tls-cert.pem + CA_HASH=`openssl x509 -noout -pubkey -in $CA_CERTFILE | openssl md5` + + for x in $CA_HOME/msp/keystore/*_sk; do + CA_KEYFILE_HASH=`openssl pkey -pubout -in "$x" | openssl md5` + if [[ "${CA_KEYFILE_HASH}" == "${CA_HASH}" ]] + then + echo "$x" + return 0 + fi + done + + return 1 +} \ No newline at end of file diff --git
a/topologies/t6/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t6/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..7ab78d9 --- /dev/null +++ b/topologies/t6/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,48 @@ +#!/bin/bash +set -e +set -x + +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin 
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t6/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t6/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..2d985cc --- /dev/null +++ b/topologies/t6/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,42 @@ +#!/bin/bash +set -e +set -x + +export 
FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem +# enroll orderer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +# enroll org1 admin-tls +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem +export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://admin-org1:org1AdminPW@0.0.0.0:7054 + +mv /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/* /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/key.pem + + + +mv 
/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t6/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t6/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t6/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs 
"hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t6/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t6/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..d8cb558 --- /dev/null +++ b/topologies/t6/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x +source /tmp/scripts/find-ca-private-key.sh +# get the CA private key file as 2 of them are created: one for the CA cert (ca-cert.pem) and another one for CA TLS cert (tls-cert.pem) +# and we need the keyfile for the CA TLS cert +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org1/ca-tls` +echo $CA_PRIV_KEYFILE +# initial enroll of bootstrap admin will be done with the key and cert of the CA server +# after this, the admin's key and cert will be used for registrations. + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +# copy the CA server's key file to make it easier to use it +cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org1/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls/tls-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +sleep 1 +mv /tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/* /tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW 
--id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1AdminPW --id.type admin -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t6/scripts/org2-approve-chaincode.sh b/topologies/t6/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..7c4d6a8 --- /dev/null +++ b/topologies/t6/scripts/org2-approve-chaincode.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/signcerts/cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t6/scripts/org2-create-and-join-channels.sh b/topologies/t6/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..943f229 --- 
/dev/null +++ b/topologies/t6/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,17 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel \ + -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 \ + --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block \ + --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/signcerts/cert.pem + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t6/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t6/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..20f8c8d --- /dev/null +++ b/topologies/t6/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,53 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset 
FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user (with the user's own registered identity, not the admin's) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client (with the client's own registered identity, not the admin's) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts
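The enrollment steps above all rely on the same rename idiom: `fabric-ca-client` writes the generated private key (and CA cert) under a randomized filename, and the scripts immediately `mv` it to a stable name such as `key.pem` or `ca-cert.pem` so that later environment variables can point at a fixed path. A minimal standalone sketch of the idiom — the directory and the stand-in key filename here are invented for illustration and are not part of the topology scripts:

```sh
#!/bin/bash
# Standalone illustration only -- not part of the topology scripts.
# fabric-ca-client names the generated key after its SKI (e.g. "3f8a1c9d..._sk");
# renaming it gives env vars like FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE a
# predictable path to reference.
demo=$(mktemp -d)
mkdir -p "$demo/msp/keystore"
touch "$demo/msp/keystore/3f8a1c9d_sk"            # stand-in for the random key name
mv "$demo"/msp/keystore/* "$demo/msp/keystore/key.pem"
ls "$demo/msp/keystore"                           # -> key.pem
rm -rf "$demo"
```

The glob-based `mv dir/* dir/key.pem` only works because a fresh keystore contains exactly one file; with more than one file `mv` would fail, which is the desired behavior since it signals an unexpected keystore state.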
+cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t6/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t6/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..6698877 --- /dev/null +++ b/topologies/t6/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,34 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +# enroll org2 admin-tls 
+export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +#export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 + +mv /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/* /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/key.pem + + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t6/scripts/org2-install-chaincode.sh b/topologies/t6/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t6/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz \ No 
newline at end of file diff --git a/topologies/t6/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t6/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t6/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t6/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t6/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..fe612eb --- /dev/null +++ b/topologies/t6/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set -e +set -x +source /tmp/scripts/find-ca-private-key.sh +# get the CA private key file as 2 of them are created: one for the CA cert (ca-cert.pem) and another one for CA TLS cert (tls-cert.pem) +# and we need the keyfile for the CA TLS cert +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org2/ca-tls` +echo $CA_PRIV_KEYFILE +# initial enroll of bootstrap admin will be done with the key and cert of the CA server +# after this, the admin's key and cert 
will be used for registrations. +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +# copy the CA server's key file to make it easier to use it +cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org2/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls/tls-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +sleep 1 +mv /tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/* /tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t6/scripts/org3-approve-chaincode.sh b/topologies/t6/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..d72f268 --- /dev/null +++ b/topologies/t6/scripts/org3-approve-chaincode.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: 
$PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/signcerts/cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t6/scripts/org3-create-and-join-channels.sh b/topologies/t6/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..ebf8b48 --- /dev/null +++ b/topologies/t6/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t6/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t6/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..7e3f436 --- /dev/null +++ b/topologies/t6/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,52 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem + +# enroll peer1 node +export
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user (with the user's own registered identity, not the admin's) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client (with the client's own registered identity, not the admin's) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t6/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t6/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..30caa0c --- /dev/null +++ b/topologies/t6/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,31 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll
peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 +# enroll org3 admin-tls +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem +export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/* /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t6/scripts/org3-install-chaincode.sh b/topologies/t6/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..f6b8789 --- /dev/null +++ b/topologies/t6/scripts/org3-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export 
CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz diff --git a/topologies/t6/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t6/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1c56144 --- /dev/null +++ b/topologies/t6/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t6/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t6/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..8b2b0f4 --- /dev/null +++ b/topologies/t6/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set 
-e +set -x +source /tmp/scripts/find-ca-private-key.sh +# get the CA private key file as 2 of them are created: one for the CA cert (ca-cert.pem) and another one for CA TLS cert (tls-cert.pem) +# and we need the keyfile for the CA TLS cert +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org3/ca-tls` +echo $CA_PRIV_KEYFILE +# initial enroll of bootstrap admin will be done with the key and cert of the CA server +# after this, the admin's key and cert will be used for registrations. +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +# copy the CA server's key file to make it easier to use it +cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org3/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls/tls-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +sleep 1 +mv /tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/* /tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t6/scripts/patch-configtx.sh b/topologies/t6/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ 
b/topologies/t6/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +sed -r "s;<>;${TOPOLOGY};g" /tmp/config/configtx.yaml > /tmp/crypto-material/config/configtx.yaml \ No newline at end of file diff --git a/topologies/t6/scripts/setup-docker-images.sh b/topologies/t6/scripts/setup-docker-images.sh new file mode 100755 index 0000000..e15aa03 --- /dev/null +++ b/topologies/t6/scripts/setup-docker-images.sh @@ -0,0 +1,5 @@ +#!/bin/bash +set -e +set -x +cd "$1"/images +docker build -t fabric-ca-openssl . \ No newline at end of file diff --git a/topologies/t6/setup-network.sh b/topologies/t6/setup-network.sh new file mode 100755 index 0000000..77e53d3 --- /dev/null +++ b/topologies/t6/setup-network.sh @@ -0,0 +1,166 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t6 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." +./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..."
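patch-configtx.sh above patches the topology prefix into configtx.yaml with `sed` before configtxgen reads it. A standalone sketch of that substitution pattern, using a made-up `__TOPOLOGY__` placeholder token and a made-up sample line (the real template's token is not shown here):

```sh
#!/bin/bash
# Standalone illustration only -- the __TOPOLOGY__ token and sample content
# are invented; the real configtx.yaml template defines its own placeholder.
TOPOLOGY=t6
tmpl=$(mktemp)
printf '__TOPOLOGY__-org1-orderer1:7050\n' > "$tmpl"
# ';' is used as the sed delimiter so the substitution value may contain '/'
sed -r "s;__TOPOLOGY__;${TOPOLOGY};g" "$tmpl"   # -> t6-org1-orderer1:7050
rm "$tmpl"
```

Substituting at setup time like this lets one configtx.yaml template serve every topology, since each generated hostname is prefixed with the topology name (`t6-org1-orderer1`, etc.).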
+ +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# Setup docker images for openssl +./scripts/setup-docker-images.sh ${HL_TOPOLOGIES_BASE_FOLDER} + +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs and register with these the TLS-CA and Identities-CA users, such as admins, clients, etc...----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# 
----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----need to wait until Raft leader election has completed for the orderers +sleep 4 + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c 
"/tmp/scripts/org3-create-and-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t6/teardown-network.sh b/topologies/t6/teardown-network.sh new file mode 100755 index 0000000..23c6885 --- /dev/null +++ b/topologies/t6/teardown-network.sh @@ -0,0 +1,33 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t6 + +# ----------------------------------------------------------------------------- +# -----remove current topology containers +# ----------------------------------------------------------------------------- +# -----the chaincode containers throw an error on deletion even though they do get deleted; 
ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec will throw error if no running container found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." +fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t7/.env b/topologies/t7/.env new file mode 100644 index 0000000..29822d4 --- /dev/null +++ b/topologies/t7/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t7 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.2.3 +FABRIC_TOOLS_VERSION=2.2.3 +PEER_ORDERER_VERSION=2.2.3 \ No newline at end of file diff --git a/topologies/t7/.gitignore b/topologies/t7/.gitignore new file mode 100644 index 0000000..ee0881a --- /dev/null +++ b/topologies/t7/.gitignore @@ -0,0 +1,2 @@ +crypto-material/*/** +homefolders/*/** \ No newline at end of file diff --git a/topologies/t7/README.md b/topologies/t7/README.md new file mode 100644 index 0000000..2e65b0e --- /dev/null +++ 
b/topologies/t7/README.md @@ -0,0 +1,39 @@ +# T7: Private Data Collection +## Description +--- +T1 network + use of a Private Data Collection (PDC) by the test chaincode. The test chaincode is adapted to write to the PDC. + +## Diagram +--- +![Diagram of components](../image_store/T7.png) + +## Relevant Documentation + +- https://hyperledger-fabric.readthedocs.io/en/latest/private-data/private-data.html + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA + * Identities CA +* Org 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA + +## Characteristics + +- World State Database Instance (LevelDB) embedded (in peer containers) +- Chaincode installed directly on peers with a Private Data Collection +- Communication between all components done via TLS \ No newline at end of file diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go 
b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns 
struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) 
(shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 
error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 
shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex 
+ getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex 
+ putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = 
append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake 
*ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + 
fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + 
+		return ret.result1
+	}
+	fakeReturns := fake.getArgsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetArgsCallCount() int {
+	fake.getArgsMutex.RLock()
+	defer fake.getArgsMutex.RUnlock()
+	return len(fake.getArgsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	fake.getArgsReturns = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) {
+	fake.getArgsMutex.Lock()
+	defer fake.getArgsMutex.Unlock()
+	fake.GetArgsStub = nil
+	if fake.getArgsReturnsOnCall == nil {
+		fake.getArgsReturnsOnCall = make(map[int]struct {
+			result1 [][]byte
+		})
+	}
+	fake.getArgsReturnsOnCall[i] = struct {
+		result1 [][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) {
+	fake.getArgsSliceMutex.Lock()
+	ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)]
+	fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetArgsSlice", []interface{}{})
+	fake.getArgsSliceMutex.Unlock()
+	if fake.GetArgsSliceStub != nil {
+		return fake.GetArgsSliceStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getArgsSliceReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCallCount() int {
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	return len(fake.getArgsSliceArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = stub
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	fake.getArgsSliceReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getArgsSliceMutex.Lock()
+	defer fake.getArgsSliceMutex.Unlock()
+	fake.GetArgsSliceStub = nil
+	if fake.getArgsSliceReturnsOnCall == nil {
+		fake.getArgsSliceReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getArgsSliceReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBinding() ([]byte, error) {
+	fake.getBindingMutex.Lock()
+	ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)]
+	fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetBinding", []interface{}{})
+	fake.getBindingMutex.Unlock()
+	if fake.GetBindingStub != nil {
+		return fake.GetBindingStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getBindingReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetBindingCallCount() int {
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	return len(fake.getBindingArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = stub
+}
+
+func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	fake.getBindingReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getBindingMutex.Lock()
+	defer fake.getBindingMutex.Unlock()
+	fake.GetBindingStub = nil
+	if fake.getBindingReturnsOnCall == nil {
+		fake.getBindingReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getBindingReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetChannelID() string {
+	fake.getChannelIDMutex.Lock()
+	ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)]
+	fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetChannelID", []interface{}{})
+	fake.getChannelIDMutex.Unlock()
+	if fake.GetChannelIDStub != nil {
+		return fake.GetChannelIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getChannelIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetChannelIDCallCount() int {
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	return len(fake.getChannelIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = stub
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	fake.getChannelIDReturns = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) {
+	fake.getChannelIDMutex.Lock()
+	defer fake.getChannelIDMutex.Unlock()
+	fake.GetChannelIDStub = nil
+	if fake.getChannelIDReturnsOnCall == nil {
+		fake.getChannelIDReturnsOnCall = make(map[int]struct {
+			result1 string
+		})
+	}
+	fake.getChannelIDReturnsOnCall[i] = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetCreator() ([]byte, error) {
+	fake.getCreatorMutex.Lock()
+	ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)]
+	fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetCreator", []interface{}{})
+	fake.getCreatorMutex.Unlock()
+	if fake.GetCreatorStub != nil {
+		return fake.GetCreatorStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getCreatorReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetCreatorCallCount() int {
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	return len(fake.getCreatorArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = stub
+}
+
+func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	fake.getCreatorReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getCreatorMutex.Lock()
+	defer fake.getCreatorMutex.Unlock()
+	fake.GetCreatorStub = nil
+	if fake.getCreatorReturnsOnCall == nil {
+		fake.getCreatorReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getCreatorReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetDecorations() map[string][]byte {
+	fake.getDecorationsMutex.Lock()
+	ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)]
+	fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetDecorations", []interface{}{})
+	fake.getDecorationsMutex.Unlock()
+	if fake.GetDecorationsStub != nil {
+		return fake.GetDecorationsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getDecorationsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetDecorationsCallCount() int {
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	return len(fake.getDecorationsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = stub
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	fake.getDecorationsReturns = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) {
+	fake.getDecorationsMutex.Lock()
+	defer fake.getDecorationsMutex.Unlock()
+	fake.GetDecorationsStub = nil
+	if fake.getDecorationsReturnsOnCall == nil {
+		fake.getDecorationsReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+		})
+	}
+	fake.getDecorationsReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)]
+	fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetFunctionAndParameters", []interface{}{})
+	fake.getFunctionAndParametersMutex.Unlock()
+	if fake.GetFunctionAndParametersStub != nil {
+		return fake.GetFunctionAndParametersStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getFunctionAndParametersReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int {
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	return len(fake.getFunctionAndParametersArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = stub
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	fake.getFunctionAndParametersReturns = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	if fake.getFunctionAndParametersReturnsOnCall == nil {
+		fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+		})
+	}
+	fake.getFunctionAndParametersReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) {
+	fake.getHistoryForKeyMutex.Lock()
+	ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)]
+	fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetHistoryForKey", []interface{}{arg1})
+	fake.getHistoryForKeyMutex.Unlock()
+	if fake.GetHistoryForKeyStub != nil {
+		return fake.GetHistoryForKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getHistoryForKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	return len(fake.getHistoryForKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	argsForCall := fake.getHistoryForKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	fake.getHistoryForKeyReturns = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	if fake.getHistoryForKeyReturnsOnCall == nil {
+		fake.getHistoryForKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.HistoryQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getHistoryForKeyReturnsOnCall[i] = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)]
+	fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2})
+	fake.getPrivateDataMutex.Unlock()
+	if fake.GetPrivateDataStub != nil {
+		return fake.GetPrivateDataStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCallCount() int {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	return len(fake.getPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	argsForCall := fake.getPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	fake.getPrivateDataReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	if fake.getPrivateDataReturnsOnCall == nil {
+		fake.getPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg3Copy []string
+	if arg3 != nil {
+		arg3Copy = make([]string, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)]
+	fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []string
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy})
+	fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	if fake.GetPrivateDataByPartialCompositeKeyStub != nil {
+		return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	fake.getPrivateDataByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)]
+	fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3})
+	fake.getPrivateDataByRangeMutex.Unlock()
+	if fake.GetPrivateDataByRangeStub != nil {
+		return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	return len(fake.getPrivateDataByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, string, string) {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	fake.getPrivateDataByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	if fake.getPrivateDataByRangeReturnsOnCall == nil {
+		fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataHashMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)]
+	fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2})
+	fake.getPrivateDataHashMutex.Unlock()
+	if fake.GetPrivateDataHashStub != nil {
+		return fake.GetPrivateDataHashStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataHashReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCallCount() int {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	return len(fake.getPrivateDataHashArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	argsForCall := fake.getPrivateDataHashArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	fake.getPrivateDataHashReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	if fake.getPrivateDataHashReturnsOnCall == nil {
+		fake.getPrivateDataHashReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataHashReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)]
+	fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2})
+	fake.getPrivateDataQueryResultMutex.Unlock()
+	if fake.GetPrivateDataQueryResultStub != nil {
+		return fake.GetPrivateDataQueryResultStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	return len(fake.getPrivateDataQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	argsForCall := fake.getPrivateDataQueryResultArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	fake.getPrivateDataQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	if fake.getPrivateDataQueryResultReturnsOnCall == nil {
+		fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)]
+	fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2})
+	fake.getPrivateDataValidationParameterMutex.Unlock()
+	if fake.GetPrivateDataValidationParameterStub != nil {
+		return fake.GetPrivateDataValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.getPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	fake.getPrivateDataValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	if fake.getPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getQueryResultMutex.Lock()
+	ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)]
+	fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetQueryResult", []interface{}{arg1})
+	fake.getQueryResultMutex.Unlock()
+	if fake.GetQueryResultStub != nil {
+		return fake.GetQueryResultStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetQueryResultCallCount() int {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	return len(fake.getQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	argsForCall := fake.getQueryResultArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	fake.getQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	if fake.getQueryResultReturnsOnCall == nil {
+		fake.getQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)]
+	fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 int32
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3})
+	fake.getQueryResultWithPaginationMutex.Unlock()
+	if fake.GetQueryResultWithPaginationStub != nil {
+		return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getQueryResultWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	return len(fake.getQueryResultWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	argsForCall := fake.getQueryResultWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	fake.getQueryResultWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	if fake.getQueryResultWithPaginationReturnsOnCall == nil {
+		fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getQueryResultWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) {
+	fake.getSignedProposalMutex.Lock()
+	ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)]
+	fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetSignedProposal", []interface{}{})
+	fake.getSignedProposalMutex.Unlock()
+	if fake.GetSignedProposalStub != nil {
+		return fake.GetSignedProposalStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getSignedProposalReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCallCount() int {
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	return len(fake.getSignedProposalArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = stub
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	fake.getSignedProposalReturns = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	if fake.getSignedProposalReturnsOnCall == nil {
+		fake.getSignedProposalReturnsOnCall = make(map[int]struct {
+			result1 *peer.SignedProposal
+			result2 error
+		})
+	}
+	fake.getSignedProposalReturnsOnCall[i] = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) {
+	fake.getStateMutex.Lock()
+	ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)]
+	fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetState", []interface{}{arg1})
+	fake.getStateMutex.Unlock()
+	if fake.GetStateStub != nil {
+		return fake.GetStateStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateCallCount() int {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	return len(fake.getStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateArgsForCall(i int) string {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	argsForCall := fake.getStateArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	fake.getStateReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	if fake.getStateReturnsOnCall == nil {
+		fake.getStateReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)]
+	fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.getStateByPartialCompositeKeyMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyStub != nil {
+		return fake.GetStateByPartialCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = stub
+}
+ +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) { + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + fake.getStateByPartialCompositeKeyReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + if fake.getStateByPartialCompositeKeyReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)] + fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = 
append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + }{arg1, arg2Copy, arg3, arg4}) + fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4}) + fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil { + return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 
*peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + fake.getStateByPartialCompositeKeyWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock() + fake.GetStateByPartialCompositeKeyWithPaginationStub = nil + if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getStateByRangeMutex.Lock() + ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)] + fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2}) + fake.getStateByRangeMutex.Unlock() + if fake.GetStateByRangeStub != nil { + return fake.GetStateByRangeStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByRangeReturns + return 
fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByRangeCallCount() int { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + return len(fake.getStateByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) { + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + argsForCall := fake.getStateByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + fake.getStateByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByRangeMutex.Lock() + defer fake.getStateByRangeMutex.Unlock() + fake.GetStateByRangeStub = nil + if fake.getStateByRangeReturnsOnCall == nil { + fake.getStateByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getStateByRangeWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)] + 
fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + }{arg1, arg2, arg3, arg4}) + fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4}) + fake.getStateByRangeWithPaginationMutex.Unlock() + if fake.GetStateByRangeWithPaginationStub != nil { + return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getStateByRangeWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + return len(fake.getStateByRangeWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) { + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4 +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + fake.getStateByRangeWithPaginationReturns = struct 
{ + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getStateByRangeWithPaginationMutex.Lock() + defer fake.getStateByRangeWithPaginationMutex.Unlock() + fake.GetStateByRangeWithPaginationStub = nil + if fake.getStateByRangeWithPaginationReturnsOnCall == nil { + fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) { + fake.getStateValidationParameterMutex.Lock() + ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)] + fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1}) + fake.getStateValidationParameterMutex.Unlock() + if fake.GetStateValidationParameterStub != nil { + return fake.GetStateValidationParameterStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateValidationParameterReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + return len(fake.getStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub 
func(string) ([]byte, error)) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + argsForCall := fake.getStateValidationParameterArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + fake.getStateValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + if fake.getStateValidationParameterReturnsOnCall == nil { + fake.getStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStringArgs() []string { + fake.getStringArgsMutex.Lock() + ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)] + fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetStringArgs", []interface{}{}) + fake.getStringArgsMutex.Unlock() + if fake.GetStringArgsStub != nil { + return fake.GetStringArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStringArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetStringArgsCallCount() int { + 
fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + return len(fake.getStringArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = stub +} + +func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + fake.getStringArgsReturns = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + if fake.getStringArgsReturnsOnCall == nil { + fake.getStringArgsReturnsOnCall = make(map[int]struct { + result1 []string + }) + } + fake.getStringArgsReturnsOnCall[i] = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) { + fake.getTransientMutex.Lock() + ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)] + fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct { + }{}) + fake.recordInvocation("GetTransient", []interface{}{}) + fake.getTransientMutex.Unlock() + if fake.GetTransientStub != nil { + return fake.GetTransientStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTransientReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTransientCallCount() int { + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + return len(fake.getTransientArgsForCall) +} + +func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = stub +} + +func (fake *ChaincodeStub) GetTransientReturns(result1 
map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + fake.getTransientReturns = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + if fake.getTransientReturnsOnCall == nil { + fake.getTransientReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + result2 error + }) + } + fake.getTransientReturnsOnCall[i] = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxID() string { + fake.getTxIDMutex.Lock() + ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)] + fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxID", []interface{}{}) + fake.getTxIDMutex.Unlock() + if fake.GetTxIDStub != nil { + return fake.GetTxIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getTxIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetTxIDCallCount() int { + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + return len(fake.getTxIDArgsForCall) +} + +func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = 
make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) 
InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy [][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + fake.invokeChaincodeReturns = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + if 
fake.invokeChaincodeReturnsOnCall == nil { + fake.invokeChaincodeReturnsOnCall = make(map[int]struct { + result1 peer.Response + }) + } + fake.invokeChaincodeReturnsOnCall[i] = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.putPrivateDataMutex.Lock() + ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)] + fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy}) + fake.putPrivateDataMutex.Unlock() + if fake.PutPrivateDataStub != nil { + return fake.PutPrivateDataStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutPrivateDataCallCount() int { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + return len(fake.putPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = stub +} + +func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + argsForCall := fake.putPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + fake.putPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) 
PutPrivateDataReturnsOnCall(i int, result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + if fake.putPrivateDataReturnsOnCall == nil { + fake.putPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.putStateMutex.Lock() + ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)] + fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy}) + fake.putStateMutex.Unlock() + if fake.PutStateStub != nil { + return fake.PutStateStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutStateCallCount() int { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + return len(fake.putStateArgsForCall) +} + +func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = stub +} + +func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + argsForCall := fake.putStateArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) PutStateReturns(result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + fake.putStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() 
+ fake.PutStateStub = nil + if fake.putStateReturnsOnCall == nil { + fake.putStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setEventMutex.Lock() + ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)] + fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy}) + fake.setEventMutex.Unlock() + if fake.SetEventStub != nil { + return fake.SetEventStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setEventReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetEventCallCount() int { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + return len(fake.setEventArgsForCall) +} + +func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = stub +} + +func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + argsForCall := fake.setEventArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetEventReturns(result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + fake.setEventReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + if fake.setEventReturnsOnCall == nil { + fake.setEventReturnsOnCall = make(map[int]struct { + result1 error + }) + } + 
fake.setEventReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.setPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)] + fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy}) + fake.setPrivateDataValidationParameterMutex.Unlock() + if fake.SetPrivateDataValidationParameterStub != nil { + return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setPrivateDataValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + return len(fake.setPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + 
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + fake.setPrivateDataValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + if fake.setPrivateDataValidationParameterReturnsOnCall == nil { + fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setStateValidationParameterMutex.Lock() + ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)] + fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy}) + fake.setStateValidationParameterMutex.Unlock() + if fake.SetStateValidationParameterStub != nil { + return fake.SetStateValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setStateValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + return 
len(fake.setStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + argsForCall := fake.setStateValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + fake.setStateValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + if fake.setStateValidationParameterReturnsOnCall == nil { + fake.setStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setStateValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) { + fake.splitCompositeKeyMutex.Lock() + ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)] + fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("SplitCompositeKey", []interface{}{arg1}) + fake.splitCompositeKeyMutex.Unlock() + if fake.SplitCompositeKeyStub != nil { + return fake.SplitCompositeKeyStub(arg1) + } + if specificReturn { + return ret.result1, 
ret.result2, ret.result3 + } + fakeReturns := fake.splitCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + return len(fake.splitCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + argsForCall := fake.splitCompositeKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + fake.splitCompositeKeyReturns = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + if fake.splitCompositeKeyReturnsOnCall == nil { + fake.splitCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + result3 error + }) + } + fake.splitCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + fake.delPrivateDataMutex.RLock() + defer 
fake.delPrivateDataMutex.RUnlock() + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + 
fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go new file mode 100644 index 0000000..27e3034 --- /dev/null +++ 
b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go @@ -0,0 +1,232 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" +) + +type StateQueryIterator struct { + CloseStub func() error + closeMutex sync.RWMutex + closeArgsForCall []struct { + } + closeReturns struct { + result1 error + } + closeReturnsOnCall map[int]struct { + result1 error + } + HasNextStub func() bool + hasNextMutex sync.RWMutex + hasNextArgsForCall []struct { + } + hasNextReturns struct { + result1 bool + } + hasNextReturnsOnCall map[int]struct { + result1 bool + } + NextStub func() (*queryresult.KV, error) + nextMutex sync.RWMutex + nextArgsForCall []struct { + } + nextReturns struct { + result1 *queryresult.KV + result2 error + } + nextReturnsOnCall map[int]struct { + result1 *queryresult.KV + result2 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *StateQueryIterator) Close() error { + fake.closeMutex.Lock() + ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)] + fake.closeArgsForCall = append(fake.closeArgsForCall, struct { + }{}) + fake.recordInvocation("Close", []interface{}{}) + fake.closeMutex.Unlock() + if fake.CloseStub != nil { + return fake.CloseStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.closeReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) CloseCallCount() int { + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + return len(fake.closeArgsForCall) +} + +func (fake *StateQueryIterator) CloseCalls(stub func() error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = stub +} + +func (fake *StateQueryIterator) CloseReturns(result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + fake.closeReturns = struct { + result1 
error + }{result1} +} + +func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + if fake.closeReturnsOnCall == nil { + fake.closeReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.closeReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) HasNext() bool { + fake.hasNextMutex.Lock() + ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)] + fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct { + }{}) + fake.recordInvocation("HasNext", []interface{}{}) + fake.hasNextMutex.Unlock() + if fake.HasNextStub != nil { + return fake.HasNextStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.hasNextReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) HasNextCallCount() int { + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + return len(fake.hasNextArgsForCall) +} + +func (fake *StateQueryIterator) HasNextCalls(stub func() bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = stub +} + +func (fake *StateQueryIterator) HasNextReturns(result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + fake.hasNextReturns = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + if fake.hasNextReturnsOnCall == nil { + fake.hasNextReturnsOnCall = make(map[int]struct { + result1 bool + }) + } + fake.hasNextReturnsOnCall[i] = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) Next() (*queryresult.KV, error) { + fake.nextMutex.Lock() + ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)] + fake.nextArgsForCall = append(fake.nextArgsForCall, struct { + }{}) + 
fake.recordInvocation("Next", []interface{}{}) + fake.nextMutex.Unlock() + if fake.NextStub != nil { + return fake.NextStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.nextReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *StateQueryIterator) NextCallCount() int { + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + return len(fake.nextArgsForCall) +} + +func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = stub +} + +func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + fake.nextReturns = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + if fake.nextReturnsOnCall == nil { + fake.nextReturnsOnCall = make(map[int]struct { + result1 *queryresult.KV + result2 error + }) + } + fake.nextReturnsOnCall[i] = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = 
map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go new file mode 100644 index 0000000..eea37db --- /dev/null +++ b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go @@ -0,0 +1,164 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-chaincode-go/pkg/cid" + "github.com/hyperledger/fabric-chaincode-go/shim" +) + +type TransactionContext struct { + GetClientIdentityStub func() cid.ClientIdentity + getClientIdentityMutex sync.RWMutex + getClientIdentityArgsForCall []struct { + } + getClientIdentityReturns struct { + result1 cid.ClientIdentity + } + getClientIdentityReturnsOnCall map[int]struct { + result1 cid.ClientIdentity + } + GetStubStub func() shim.ChaincodeStubInterface + getStubMutex sync.RWMutex + getStubArgsForCall []struct { + } + getStubReturns struct { + result1 shim.ChaincodeStubInterface + } + getStubReturnsOnCall map[int]struct { + result1 shim.ChaincodeStubInterface + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity { + fake.getClientIdentityMutex.Lock() + ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)] + fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct { + }{}) + fake.recordInvocation("GetClientIdentity", []interface{}{}) + fake.getClientIdentityMutex.Unlock() + if fake.GetClientIdentityStub != nil { + return fake.GetClientIdentityStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := 
fake.getClientIdentityReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetClientIdentityCallCount() int { + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + return len(fake.getClientIdentityArgsForCall) +} + +func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = stub +} + +func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + fake.getClientIdentityReturns = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + if fake.getClientIdentityReturnsOnCall == nil { + fake.getClientIdentityReturnsOnCall = make(map[int]struct { + result1 cid.ClientIdentity + }) + } + fake.getClientIdentityReturnsOnCall[i] = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface { + fake.getStubMutex.Lock() + ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)] + fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct { + }{}) + fake.recordInvocation("GetStub", []interface{}{}) + fake.getStubMutex.Unlock() + if fake.GetStubStub != nil { + return fake.GetStubStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStubReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetStubCallCount() int { + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + return len(fake.getStubArgsForCall) +} + +func (fake *TransactionContext) GetStubCalls(stub func() 
shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = stub +} + +func (fake *TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + fake.getStubReturns = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + if fake.getStubReturnsOnCall == nil { + fake.getStubReturnsOnCall = make(map[int]struct { + result1 shim.ChaincodeStubInterface + }) + } + fake.getStubReturnsOnCall[i] = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *TransactionContext) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go new file mode 100644 index 0000000..70e7f2a --- /dev/null +++ 
b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go @@ -0,0 +1,185 @@ +package chaincode + +import ( + "encoding/json" + "fmt" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" +) + +// SmartContract provides functions for managing an Asset +type SmartContract struct { + contractapi.Contract +} + +// Asset describes basic details of what makes up a simple asset +type Asset struct { + ID string `json:"ID"` + Color string `json:"color"` + Size int `json:"size"` + Owner string `json:"owner"` + AppraisedValue int `json:"appraisedValue"` +} + +// InitLedger adds a base set of assets to the ledger +func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error { + assets := []Asset{ + {ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300}, + {ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400}, + {ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500}, + {ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600}, + {ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700}, + {ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800}, + } + + for _, asset := range assets { + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + err = ctx.GetStub().PutPrivateData("org2PrivateData", asset.ID, assetJSON) + if err != nil { + return fmt.Errorf("failed to put to world state. %v", err) + } + } + + return nil +} + +// CreateAsset issues a new asset to the world state with given details. 
+func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if exists { + return fmt.Errorf("the asset %s already exists", id) + } + + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// ReadAsset returns the asset stored in the world state with given id. +func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return nil, fmt.Errorf("failed to read from world state: %v", err) + } + if assetJSON == nil { + return nil, fmt.Errorf("the asset %s does not exist", id) + } + + var asset Asset + err = json.Unmarshal(assetJSON, &asset) + if err != nil { + return nil, err + } + + return &asset, nil +} + +// UpdateAsset updates an existing asset in the world state with provided parameters. +func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + // overwriting original asset with new asset + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// DeleteAsset deletes a given asset from the world state. 
+func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + return ctx.GetStub().DelState(id) +} + +// AssetExists returns true when asset with given ID exists in world state +func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return false, fmt.Errorf("failed to read from world state: %v", err) + } + + return assetJSON != nil, nil +} + +// TransferAsset updates the owner field of asset with given id in world state. +func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error { + asset, err := s.ReadAsset(ctx, id) + if err != nil { + return err + } + + asset.Owner = newOwner + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// GetAllAssets returns all assets found in the org2PrivateData private data collection +func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) { + // range query with empty string for startKey and endKey does an + // open-ended query of all assets in the private data collection. 
+ resultsIterator, err := ctx.GetStub().GetPrivateDataByRange("org2PrivateData", "", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . 
stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + // InitLedger writes via PutPrivateData, so stub that method (not PutState) + chaincodeStub.PutPrivateDataReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, 
err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, 
fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + // GetAllAssets reads via GetPrivateDataByRange, so stub that method (not GetStateByRange) + chaincodeStub.GetPrivateDataByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetPrivateDataByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = 
assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t7/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod 
h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= +github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= 
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= 
+github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty 
v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod 
h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto 
v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= +google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 
v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t7/config/collections_config.json b/topologies/t7/config/collections_config.json new file mode 100644 index 0000000..093b1c6 --- /dev/null +++ b/topologies/t7/config/collections_config.json @@ -0,0 +1,11 @@ +[ + { + "name": "org2PrivateData", + "policy": "OR('org2MSP.member')", + "requiredPeerCount": 0, + "maxPeerCount": 3, + "blockToLive": 3, + "memberOnlyRead": true, + "memberOnlyWrite": true + } +] \ No newline at end of file diff --git a/topologies/t7/config/config.yaml b/topologies/t7/config/config.yaml new file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t7/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t7/config/configtx.yaml b/topologies/t7/config/configtx.yaml new file mode 100644 index 0000000..1264040 --- /dev/null +++ b/topologies/t7/config/configtx.yaml @@ -0,0 +1,428 @@ +# Copyright IBM Corp. All Rights Reserved. 
+# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: 
"OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. 
Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. 
Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. + # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's "GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: 
/Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + #---Miscellaneous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities +################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config 
transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer: &OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. + AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. 
A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. + PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. +# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/ + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# 
+################################################################################ +Profiles: + OrgsOrdererGenesis: + <<: *ChannelDefaults + Orderer: + <<: *OrdererDefaults + Organizations: + - *org1 + Capabilities: + <<: *OrdererCapabilities + Consortiums: + MainConsortium: + Organizations: + - *org2 + - *org3 + + OrgsChannel: + Consortium: MainConsortium + <<: *ChannelDefaults + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 + Capabilities: + <<: *ApplicationCapabilities + diff --git a/topologies/t7/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t7/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 0000000..1d941c5 --- /dev/null +++ b/topologies/t7/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,29 @@ +version: "3.9" +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - 
"${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t7/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t7/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..e22813e --- /dev/null +++ b/topologies/t7/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t7/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t7/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..b18ea4d --- /dev/null +++ 
b/topologies/t7/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + command: sh -c 'fabric-ca-server start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t7/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t7/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..4ceb8a0 --- /dev/null +++ b/topologies/t7/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,21 @@ +services: + org2-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - 
CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t7/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t7/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..68d2ec0 --- /dev/null +++ b/topologies/t7/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,21 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t7/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t7/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..0045893 --- /dev/null +++ 
b/topologies/t7/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,82 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - 
ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - 
ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t7/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t7/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..f115b6d --- /dev/null +++ b/topologies/t7/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,50 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - 
CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git 
a/topologies/t7/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t7/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..c58de7b --- /dev/null +++ b/topologies/t7/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,50 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug 
+ - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t7/crypto-material/.gitkeep b/topologies/t7/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t7/docker-compose.yml b/topologies/t7/docker-compose.yml new file mode 100644 index 0000000..3743e47 --- /dev/null +++ b/topologies/t7/docker-compose.yml @@ -0,0 +1,91 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + 
image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric \ No newline at end of file diff --git a/topologies/t7/homefolders/.gitkeep b/topologies/t7/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t7/scripts/all-org-peers-commit-chaincode.sh b/topologies/t7/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..ff53cc9 --- /dev/null +++ b/topologies/t7/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + 
+peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --collections-config /tmp/config/collections_config.json +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t7/scripts/all-org-peers-execute-chaincode.sh b/topologies/t7/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..d8be52e --- /dev/null +++ b/topologies/t7/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,18 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git 
a/topologies/t7/scripts/channels-setup.sh b/topologies/t7/scripts/channels-setup.sh new file mode 100755 index 0000000..230aac8 --- /dev/null +++ b/topologies/t7/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t7/scripts/delete-state-data.sh b/topologies/t7/scripts/delete-state-data.sh new file mode 100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t7/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t7/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t7/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..5c5dec7 --- /dev/null +++ b/topologies/t7/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp 
/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t7/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t7/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..67e742b --- /dev/null +++ b/topologies/t7/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +mv 
/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t7/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t7/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t7/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs 
"hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t7/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t7/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..ab095a5 --- /dev/null +++ b/topologies/t7/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t7/scripts/org2-approve-chaincode.sh b/topologies/t7/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..4e0d7aa --- /dev/null +++ b/topologies/t7/scripts/org2-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses 
${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --collections-config /tmp/config/collections_config.json +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t7/scripts/org2-create-and-join-channels.sh b/topologies/t7/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..ed6a189 --- /dev/null +++ b/topologies/t7/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,12 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t7/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t7/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..ad0ea07 --- /dev/null +++ b/topologies/t7/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* 
/tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user (note: enroll the registered user-org2 identity, not the admin) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client (note: enroll the registered client-org2 identity, not the admin) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml
/tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t7/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t7/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..ea37b7e --- /dev/null +++ b/topologies/t7/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* 
/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t7/scripts/org2-install-chaincode.sh b/topologies/t7/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t7/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz \ No newline at end of file diff --git a/topologies/t7/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t7/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t7/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u 
https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t7/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t7/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..3a99816 --- /dev/null +++ b/topologies/t7/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t7/scripts/org3-approve-chaincode.sh b/topologies/t7/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..44ee2f7 --- /dev/null +++ b/topologies/t7/scripts/org3-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id 
$PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --collections-config /tmp/config/collections_config.json +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t7/scripts/org3-create-and-join-channels.sh b/topologies/t7/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..ebf8b48 --- /dev/null +++ b/topologies/t7/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t7/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t7/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..36f317d --- /dev/null +++ b/topologies/t7/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export 
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user (note: enroll the registered user-org3 identity, not the admin) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client (note: enroll the registered client-org3 identity, not the admin) +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p
/tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t7/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t7/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..2bbb3b3 --- /dev/null +++ b/topologies/t7/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv 
/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t7/scripts/org3-install-chaincode.sh b/topologies/t7/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..f6b8789 --- /dev/null +++ b/topologies/t7/scripts/org3-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz diff --git a/topologies/t7/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t7/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1c56144 --- /dev/null +++ b/topologies/t7/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 
+fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t7/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t7/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..a5ccd7f --- /dev/null +++ b/topologies/t7/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t7/scripts/patch-configtx.sh b/topologies/t7/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t7/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t7/setup-network.sh b/topologies/t7/setup-network.sh new file mode 100755 index 0000000..9e4fd0a --- /dev/null +++ b/topologies/t7/setup-network.sh @@ -0,0 +1,162 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t7 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# 
-----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." +./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." + +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs and register with these the TLS-CA and Identities-CA users, such as admins, clients, etc...----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec 
${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# ----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----need to wait until raft leader selection is completed for the orderers +sleep 4 + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-and-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t7/teardown-network.sh b/topologies/t7/teardown-network.sh new file mode 100755 index 0000000..caa6389 --- /dev/null +++ b/topologies/t7/teardown-network.sh @@ -0,0 +1,33 @@ +#!/bin/bash +# -----stop script execution on error 
and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t7 + +# ----------------------------------------------------------------------------- +# -----remove current topology containers +# ----------------------------------------------------------------------------- +# -----the chaincode containers report an error when being deleted, although they do get deleted; ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec throws an error if no running container is found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running."
+fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t8/.env b/topologies/t8/.env new file mode 100644 index 0000000..4176bee --- /dev/null +++ b/topologies/t8/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t8 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.4.6 +FABRIC_TOOLS_VERSION=2.4.6 +PEER_ORDERER_VERSION=2.4.6 \ No newline at end of file diff --git a/topologies/t8/.gitignore b/topologies/t8/.gitignore new file mode 100644 index 0000000..ee0881a --- /dev/null +++ b/topologies/t8/.gitignore @@ -0,0 +1,2 @@ +crypto-material/*/** +homefolders/*/** \ No newline at end of file diff --git a/topologies/t8/README.md b/topologies/t8/README.md new file mode 100644 index 0000000..d592562 --- /dev/null +++ b/topologies/t8/README.md @@ -0,0 +1,45 @@ +# T8: Clustered CouchDB for World State +## Description +--- +T1 network + 2 node CouchDB cluster for world state DB for Org 2 Peer 1. Please note that the docker-compose.yml at the top of this topology folder has ports enabled for the 2 CouchDB nodes to be exposed to the host.
This allows one to access the CouchDB admin console at these 2 URLs, once this network topology is up and running: + +- http://localhost:5986/_utils/# +- http://localhost:5987/_utils/# + +The login credentials can be found in the topologies/t8/containers/peers/org2-peers/docker-compose-org2-couchdb.yml + +## Diagram +--- +![Diagram of components](../image_store/T8.png) + +## Relevant Documentation + +- https://hyperledger-fabric.readthedocs.io/en/latest/couchdb_tutorial.html +- https://docs.couchdb.org/en/3.2.2-docs/setup/cluster.html#the-cluster-setup-api + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA + * Identities CA +* Org 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA + +## Characteristics + +- World State Database Instance (LevelDB) embedded (in peer containers) for all peers except Org 2 Peer 1 +- Chaincode installed directly on peers with a Private Data Collection +- Communication between all components done via TLS \ No newline at end of file diff --git a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} 
diff --git a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() 
([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 
[]byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string 
+ arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 
[]string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub 
func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 
error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := 
fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + 
fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + 
fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + 
fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetArgsCallCount() int { + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + return len(fake.getArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = stub +} + +func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + fake.getArgsReturns = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + if fake.getArgsReturnsOnCall == nil { + fake.getArgsReturnsOnCall = make(map[int]struct { + result1 [][]byte + }) + } + fake.getArgsReturnsOnCall[i] = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) { + fake.getArgsSliceMutex.Lock() + ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)] + fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgsSlice", []interface{}{}) + fake.getArgsSliceMutex.Unlock() + if fake.GetArgsSliceStub != nil { + return fake.GetArgsSliceStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getArgsSliceReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetArgsSliceCallCount() int { + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + return len(fake.getArgsSliceArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, 
error)) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = stub +} + +func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + fake.getArgsSliceReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + if fake.getArgsSliceReturnsOnCall == nil { + fake.getArgsSliceReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getArgsSliceReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBinding() ([]byte, error) { + fake.getBindingMutex.Lock() + ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)] + fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct { + }{}) + fake.recordInvocation("GetBinding", []interface{}{}) + fake.getBindingMutex.Unlock() + if fake.GetBindingStub != nil { + return fake.GetBindingStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getBindingReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetBindingCallCount() int { + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + return len(fake.getBindingArgsForCall) +} + +func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = stub +} + +func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + fake.getBindingReturns = struct { + result1 []byte + 
result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + if fake.getBindingReturnsOnCall == nil { + fake.getBindingReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getBindingReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetChannelID() string { + fake.getChannelIDMutex.Lock() + ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)] + fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetChannelID", []interface{}{}) + fake.getChannelIDMutex.Unlock() + if fake.GetChannelIDStub != nil { + return fake.GetChannelIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getChannelIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetChannelIDCallCount() int { + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + return len(fake.getChannelIDArgsForCall) +} + +func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = stub +} + +func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + fake.getChannelIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + if fake.getChannelIDReturnsOnCall == nil { + fake.getChannelIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getChannelIDReturnsOnCall[i] = struct { + result1 string + 
}{result1} +} + +func (fake *ChaincodeStub) GetCreator() ([]byte, error) { + fake.getCreatorMutex.Lock() + ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)] + fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct { + }{}) + fake.recordInvocation("GetCreator", []interface{}{}) + fake.getCreatorMutex.Unlock() + if fake.GetCreatorStub != nil { + return fake.GetCreatorStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getCreatorReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetCreatorCallCount() int { + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + return len(fake.getCreatorArgsForCall) +} + +func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = stub +} + +func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + fake.getCreatorReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + if fake.getCreatorReturnsOnCall == nil { + fake.getCreatorReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getCreatorReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetDecorations() map[string][]byte { + fake.getDecorationsMutex.Lock() + ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)] + fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct { + }{}) + fake.recordInvocation("GetDecorations", []interface{}{}) + 
fake.getDecorationsMutex.Unlock() + if fake.GetDecorationsStub != nil { + return fake.GetDecorationsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getDecorationsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetDecorationsCallCount() int { + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + return len(fake.getDecorationsArgsForCall) +} + +func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = stub +} + +func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + fake.getDecorationsReturns = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + if fake.getDecorationsReturnsOnCall == nil { + fake.getDecorationsReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + }) + } + fake.getDecorationsReturnsOnCall[i] = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) { + fake.getFunctionAndParametersMutex.Lock() + ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)] + fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct { + }{}) + fake.recordInvocation("GetFunctionAndParameters", []interface{}{}) + fake.getFunctionAndParametersMutex.Unlock() + if fake.GetFunctionAndParametersStub != nil { + return fake.GetFunctionAndParametersStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := 
fake.getFunctionAndParametersReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int {
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	return len(fake.getFunctionAndParametersArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = stub
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	fake.getFunctionAndParametersReturns = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) {
+	fake.getFunctionAndParametersMutex.Lock()
+	defer fake.getFunctionAndParametersMutex.Unlock()
+	fake.GetFunctionAndParametersStub = nil
+	if fake.getFunctionAndParametersReturnsOnCall == nil {
+		fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+		})
+	}
+	fake.getFunctionAndParametersReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) {
+	fake.getHistoryForKeyMutex.Lock()
+	ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)]
+	fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetHistoryForKey", []interface{}{arg1})
+	fake.getHistoryForKeyMutex.Unlock()
+	if fake.GetHistoryForKeyStub != nil {
+		return fake.GetHistoryForKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getHistoryForKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	return len(fake.getHistoryForKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string {
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	argsForCall := fake.getHistoryForKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	fake.getHistoryForKeyReturns = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) {
+	fake.getHistoryForKeyMutex.Lock()
+	defer fake.getHistoryForKeyMutex.Unlock()
+	fake.GetHistoryForKeyStub = nil
+	if fake.getHistoryForKeyReturnsOnCall == nil {
+		fake.getHistoryForKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.HistoryQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getHistoryForKeyReturnsOnCall[i] = struct {
+		result1 shim.HistoryQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)]
+	fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2})
+	fake.getPrivateDataMutex.Unlock()
+	if fake.GetPrivateDataStub != nil {
+		return fake.GetPrivateDataStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCallCount() int {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	return len(fake.getPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) {
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	argsForCall := fake.getPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	fake.getPrivateDataReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataMutex.Lock()
+	defer fake.getPrivateDataMutex.Unlock()
+	fake.GetPrivateDataStub = nil
+	if fake.getPrivateDataReturnsOnCall == nil {
+		fake.getPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg3Copy []string
+	if arg3 != nil {
+		arg3Copy = make([]string, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)]
+	fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []string
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy})
+	fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	if fake.GetPrivateDataByPartialCompositeKeyStub != nil {
+		return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	fake.getPrivateDataByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByPartialCompositeKeyMutex.Lock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock()
+	fake.GetPrivateDataByPartialCompositeKeyStub = nil
+	if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)]
+	fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3})
+	fake.getPrivateDataByRangeMutex.Unlock()
+	if fake.GetPrivateDataByRangeStub != nil {
+		return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	return len(fake.getPrivateDataByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, string, string) {
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	argsForCall := fake.getPrivateDataByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	fake.getPrivateDataByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataByRangeMutex.Lock()
+	defer fake.getPrivateDataByRangeMutex.Unlock()
+	fake.GetPrivateDataByRangeStub = nil
+	if fake.getPrivateDataByRangeReturnsOnCall == nil {
+		fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataHashMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)]
+	fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2})
+	fake.getPrivateDataHashMutex.Unlock()
+	if fake.GetPrivateDataHashStub != nil {
+		return fake.GetPrivateDataHashStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataHashReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCallCount() int {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	return len(fake.getPrivateDataHashArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) {
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	argsForCall := fake.getPrivateDataHashArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	fake.getPrivateDataHashReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataHashMutex.Lock()
+	defer fake.getPrivateDataHashMutex.Unlock()
+	fake.GetPrivateDataHashStub = nil
+	if fake.getPrivateDataHashReturnsOnCall == nil {
+		fake.getPrivateDataHashReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataHashReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)]
+	fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2})
+	fake.getPrivateDataQueryResultMutex.Unlock()
+	if fake.GetPrivateDataQueryResultStub != nil {
+		return fake.GetPrivateDataQueryResultStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	return len(fake.getPrivateDataQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) {
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	argsForCall := fake.getPrivateDataQueryResultArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	fake.getPrivateDataQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getPrivateDataQueryResultMutex.Lock()
+	defer fake.getPrivateDataQueryResultMutex.Unlock()
+	fake.GetPrivateDataQueryResultStub = nil
+	if fake.getPrivateDataQueryResultReturnsOnCall == nil {
+		fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getPrivateDataQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)]
+	fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2})
+	fake.getPrivateDataValidationParameterMutex.Unlock()
+	if fake.GetPrivateDataValidationParameterStub != nil {
+		return fake.GetPrivateDataValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getPrivateDataValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.getPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) {
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	fake.getPrivateDataValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getPrivateDataValidationParameterMutex.Lock()
+	defer fake.getPrivateDataValidationParameterMutex.Unlock()
+	fake.GetPrivateDataValidationParameterStub = nil
+	if fake.getPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getQueryResultMutex.Lock()
+	ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)]
+	fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetQueryResult", []interface{}{arg1})
+	fake.getQueryResultMutex.Unlock()
+	if fake.GetQueryResultStub != nil {
+		return fake.GetQueryResultStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getQueryResultReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetQueryResultCallCount() int {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	return len(fake.getQueryResultArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string {
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	argsForCall := fake.getQueryResultArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	fake.getQueryResultReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getQueryResultMutex.Lock()
+	defer fake.getQueryResultMutex.Unlock()
+	fake.GetQueryResultStub = nil
+	if fake.getQueryResultReturnsOnCall == nil {
+		fake.getQueryResultReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getQueryResultReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)]
+	fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 int32
+		arg3 string
+	}{arg1, arg2, arg3})
+	fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3})
+	fake.getQueryResultWithPaginationMutex.Unlock()
+	if fake.GetQueryResultWithPaginationStub != nil {
+		return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getQueryResultWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	return len(fake.getQueryResultWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) {
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	argsForCall := fake.getQueryResultWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	fake.getQueryResultWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getQueryResultWithPaginationMutex.Lock()
+	defer fake.getQueryResultWithPaginationMutex.Unlock()
+	fake.GetQueryResultWithPaginationStub = nil
+	if fake.getQueryResultWithPaginationReturnsOnCall == nil {
+		fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getQueryResultWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) {
+	fake.getSignedProposalMutex.Lock()
+	ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)]
+	fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetSignedProposal", []interface{}{})
+	fake.getSignedProposalMutex.Unlock()
+	if fake.GetSignedProposalStub != nil {
+		return fake.GetSignedProposalStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getSignedProposalReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCallCount() int {
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	return len(fake.getSignedProposalArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = stub
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	fake.getSignedProposalReturns = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) {
+	fake.getSignedProposalMutex.Lock()
+	defer fake.getSignedProposalMutex.Unlock()
+	fake.GetSignedProposalStub = nil
+	if fake.getSignedProposalReturnsOnCall == nil {
+		fake.getSignedProposalReturnsOnCall = make(map[int]struct {
+			result1 *peer.SignedProposal
+			result2 error
+		})
+	}
+	fake.getSignedProposalReturnsOnCall[i] = struct {
+		result1 *peer.SignedProposal
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) {
+	fake.getStateMutex.Lock()
+	ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)]
+	fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetState", []interface{}{arg1})
+	fake.getStateMutex.Unlock()
+	if fake.GetStateStub != nil {
+		return fake.GetStateStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateCallCount() int {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	return len(fake.getStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateArgsForCall(i int) string {
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	argsForCall := fake.getStateArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	fake.getStateReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateMutex.Lock()
+	defer fake.getStateMutex.Unlock()
+	fake.GetStateStub = nil
+	if fake.getStateReturnsOnCall == nil {
+		fake.getStateReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)]
+	fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct {
+		arg1 string
+		arg2 []string
+	}{arg1, arg2Copy})
+	fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy})
+	fake.getStateByPartialCompositeKeyMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyStub != nil {
+		return fake.GetStateByPartialCompositeKeyStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) {
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	fake.getStateByPartialCompositeKeyReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByPartialCompositeKeyMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyStub = nil
+	if fake.getStateByPartialCompositeKeyReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	var arg2Copy []string
+	if arg2 != nil {
+		arg2Copy = make([]string, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)]
+	fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 []string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2Copy, arg3, arg4})
+	fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4})
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil {
+		return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	fake.getStateByPartialCompositeKeyWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getStateByRangeMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)]
+	fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2})
+	fake.getStateByRangeMutex.Unlock()
+	if fake.GetStateByRangeStub != nil {
+		return fake.GetStateByRangeStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCallCount() int {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	return len(fake.getStateByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	argsForCall := fake.getStateByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	fake.getStateByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	if fake.getStateByRangeReturnsOnCall == nil {
+		fake.getStateByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)]
+	fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2, arg3, arg4})
+	fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4})
+	fake.getStateByRangeWithPaginationMutex.Unlock()
+	if fake.GetStateByRangeWithPaginationStub != nil {
+		return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByRangeWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	return len(fake.getStateByRangeWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	fake.getStateByRangeWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	if fake.getStateByRangeWithPaginationReturnsOnCall == nil {
+		fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) {
+	fake.getStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)]
+	fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1})
+	fake.getStateValidationParameterMutex.Unlock()
+	if fake.GetStateValidationParameterStub != nil {
+		return fake.GetStateValidationParameterStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateValidationParameterReturns
+	return
fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + return len(fake.getStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string { + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + argsForCall := fake.getStateValidationParameterArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + fake.getStateValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateValidationParameterMutex.Lock() + defer fake.getStateValidationParameterMutex.Unlock() + fake.GetStateValidationParameterStub = nil + if fake.getStateValidationParameterReturnsOnCall == nil { + fake.getStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getStateValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStringArgs() []string { + fake.getStringArgsMutex.Lock() + ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)] + fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct { + }{}) + 
fake.recordInvocation("GetStringArgs", []interface{}{}) + fake.getStringArgsMutex.Unlock() + if fake.GetStringArgsStub != nil { + return fake.GetStringArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getStringArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetStringArgsCallCount() int { + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + return len(fake.getStringArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = stub +} + +func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + fake.getStringArgsReturns = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) { + fake.getStringArgsMutex.Lock() + defer fake.getStringArgsMutex.Unlock() + fake.GetStringArgsStub = nil + if fake.getStringArgsReturnsOnCall == nil { + fake.getStringArgsReturnsOnCall = make(map[int]struct { + result1 []string + }) + } + fake.getStringArgsReturnsOnCall[i] = struct { + result1 []string + }{result1} +} + +func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) { + fake.getTransientMutex.Lock() + ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)] + fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct { + }{}) + fake.recordInvocation("GetTransient", []interface{}{}) + fake.getTransientMutex.Unlock() + if fake.GetTransientStub != nil { + return fake.GetTransientStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTransientReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTransientCallCount() int { + 
fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + return len(fake.getTransientArgsForCall) +} + +func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = stub +} + +func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + fake.getTransientReturns = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) { + fake.getTransientMutex.Lock() + defer fake.getTransientMutex.Unlock() + fake.GetTransientStub = nil + if fake.getTransientReturnsOnCall == nil { + fake.getTransientReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + result2 error + }) + } + fake.getTransientReturnsOnCall[i] = struct { + result1 map[string][]byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxID() string { + fake.getTxIDMutex.Lock() + ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)] + fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxID", []interface{}{}) + fake.getTxIDMutex.Unlock() + if fake.GetTxIDStub != nil { + return fake.GetTxIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getTxIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetTxIDCallCount() int { + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + return len(fake.getTxIDArgsForCall) +} + +func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = stub +} + +func (fake *ChaincodeStub) GetTxIDReturns(result1 string) { + fake.getTxIDMutex.Lock() + 
defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + fake.getTxIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) { + fake.getTxIDMutex.Lock() + defer fake.getTxIDMutex.Unlock() + fake.GetTxIDStub = nil + if fake.getTxIDReturnsOnCall == nil { + fake.getTxIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getTxIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) { + fake.getTxTimestampMutex.Lock() + ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)] + fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct { + }{}) + fake.recordInvocation("GetTxTimestamp", []interface{}{}) + fake.getTxTimestampMutex.Unlock() + if fake.GetTxTimestampStub != nil { + return fake.GetTxTimestampStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getTxTimestampReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetTxTimestampCallCount() int { + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + return len(fake.getTxTimestampArgsForCall) +} + +func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = stub +} + +func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + fake.getTxTimestampReturns = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) { + fake.getTxTimestampMutex.Lock() + defer 
fake.getTxTimestampMutex.Unlock() + fake.GetTxTimestampStub = nil + if fake.getTxTimestampReturnsOnCall == nil { + fake.getTxTimestampReturnsOnCall = make(map[int]struct { + result1 *timestamp.Timestamp + result2 error + }) + } + fake.getTxTimestampReturnsOnCall[i] = struct { + result1 *timestamp.Timestamp + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response { + var arg2Copy [][]byte + if arg2 != nil { + arg2Copy = make([][]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.invokeChaincodeMutex.Lock() + ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)] + fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct { + arg1 string + arg2 [][]byte + arg3 string + }{arg1, arg2Copy, arg3}) + fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3}) + fake.invokeChaincodeMutex.Unlock() + if fake.InvokeChaincodeStub != nil { + return fake.InvokeChaincodeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.invokeChaincodeReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) InvokeChaincodeCallCount() int { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + return len(fake.invokeChaincodeArgsForCall) +} + +func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = stub +} + +func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) { + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + argsForCall := fake.invokeChaincodeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() 
+ defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + fake.invokeChaincodeReturns = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) { + fake.invokeChaincodeMutex.Lock() + defer fake.invokeChaincodeMutex.Unlock() + fake.InvokeChaincodeStub = nil + if fake.invokeChaincodeReturnsOnCall == nil { + fake.invokeChaincodeReturnsOnCall = make(map[int]struct { + result1 peer.Response + }) + } + fake.invokeChaincodeReturnsOnCall[i] = struct { + result1 peer.Response + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.putPrivateDataMutex.Lock() + ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)] + fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy}) + fake.putPrivateDataMutex.Unlock() + if fake.PutPrivateDataStub != nil { + return fake.PutPrivateDataStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutPrivateDataCallCount() int { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + return len(fake.putPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = stub +} + +func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) { + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + argsForCall := 
fake.putPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + fake.putPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) { + fake.putPrivateDataMutex.Lock() + defer fake.putPrivateDataMutex.Unlock() + fake.PutPrivateDataStub = nil + if fake.putPrivateDataReturnsOnCall == nil { + fake.putPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.putStateMutex.Lock() + ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)] + fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy}) + fake.putStateMutex.Unlock() + if fake.PutStateStub != nil { + return fake.PutStateStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.putStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) PutStateCallCount() int { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + return len(fake.putStateArgsForCall) +} + +func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = stub +} + +func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) { + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + argsForCall := fake.putStateArgsForCall[i] + return argsForCall.arg1, 
argsForCall.arg2 +} + +func (fake *ChaincodeStub) PutStateReturns(result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + fake.putStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) { + fake.putStateMutex.Lock() + defer fake.putStateMutex.Unlock() + fake.PutStateStub = nil + if fake.putStateReturnsOnCall == nil { + fake.putStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.putStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setEventMutex.Lock() + ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)] + fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy}) + fake.setEventMutex.Unlock() + if fake.SetEventStub != nil { + return fake.SetEventStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setEventReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetEventCallCount() int { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + return len(fake.setEventArgsForCall) +} + +func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = stub +} + +func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) { + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + argsForCall := fake.setEventArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetEventReturns(result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + 
fake.SetEventStub = nil + fake.setEventReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) { + fake.setEventMutex.Lock() + defer fake.setEventMutex.Unlock() + fake.SetEventStub = nil + if fake.setEventReturnsOnCall == nil { + fake.setEventReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setEventReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error { + var arg3Copy []byte + if arg3 != nil { + arg3Copy = make([]byte, len(arg3)) + copy(arg3Copy, arg3) + } + fake.setPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)] + fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + arg3 []byte + }{arg1, arg2, arg3Copy}) + fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy}) + fake.setPrivateDataValidationParameterMutex.Unlock() + if fake.SetPrivateDataValidationParameterStub != nil { + return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setPrivateDataValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + return len(fake.setPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + 
fake.SetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) { + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + fake.setPrivateDataValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) { + fake.setPrivateDataValidationParameterMutex.Lock() + defer fake.setPrivateDataValidationParameterMutex.Unlock() + fake.SetPrivateDataValidationParameterStub = nil + if fake.setPrivateDataValidationParameterReturnsOnCall == nil { + fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error { + var arg2Copy []byte + if arg2 != nil { + arg2Copy = make([]byte, len(arg2)) + copy(arg2Copy, arg2) + } + fake.setStateValidationParameterMutex.Lock() + ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)] + fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct { + arg1 string + arg2 []byte + }{arg1, arg2Copy}) + fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy}) + fake.setStateValidationParameterMutex.Unlock() + if fake.SetStateValidationParameterStub != nil 
{ + return fake.SetStateValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.setStateValidationParameterReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + return len(fake.setStateValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = stub +} + +func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) { + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + argsForCall := fake.setStateValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + fake.setStateValidationParameterReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) { + fake.setStateValidationParameterMutex.Lock() + defer fake.setStateValidationParameterMutex.Unlock() + fake.SetStateValidationParameterStub = nil + if fake.setStateValidationParameterReturnsOnCall == nil { + fake.setStateValidationParameterReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.setStateValidationParameterReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) { + fake.splitCompositeKeyMutex.Lock() + ret, specificReturn := 
fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)] + fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("SplitCompositeKey", []interface{}{arg1}) + fake.splitCompositeKeyMutex.Unlock() + if fake.SplitCompositeKeyStub != nil { + return fake.SplitCompositeKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.splitCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + return len(fake.splitCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string { + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + argsForCall := fake.splitCompositeKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + fake.splitCompositeKeyReturns = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) { + fake.splitCompositeKeyMutex.Lock() + defer fake.splitCompositeKeyMutex.Unlock() + fake.SplitCompositeKeyStub = nil + if fake.splitCompositeKeyReturnsOnCall == nil { + fake.splitCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + result3 error + }) + } 
+ fake.splitCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 []string + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + 
fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock() + defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock() + fake.getStateByRangeMutex.RLock() + defer fake.getStateByRangeMutex.RUnlock() + fake.getStateByRangeWithPaginationMutex.RLock() + defer fake.getStateByRangeWithPaginationMutex.RUnlock() + fake.getStateValidationParameterMutex.RLock() + defer fake.getStateValidationParameterMutex.RUnlock() + fake.getStringArgsMutex.RLock() + defer fake.getStringArgsMutex.RUnlock() + fake.getTransientMutex.RLock() + defer fake.getTransientMutex.RUnlock() + fake.getTxIDMutex.RLock() + defer fake.getTxIDMutex.RUnlock() + fake.getTxTimestampMutex.RLock() + defer fake.getTxTimestampMutex.RUnlock() + fake.invokeChaincodeMutex.RLock() + defer fake.invokeChaincodeMutex.RUnlock() + fake.putPrivateDataMutex.RLock() + defer fake.putPrivateDataMutex.RUnlock() + fake.putStateMutex.RLock() + defer fake.putStateMutex.RUnlock() + fake.setEventMutex.RLock() + defer fake.setEventMutex.RUnlock() + fake.setPrivateDataValidationParameterMutex.RLock() + defer fake.setPrivateDataValidationParameterMutex.RUnlock() + fake.setStateValidationParameterMutex.RLock() + defer fake.setStateValidationParameterMutex.RUnlock() + fake.splitCompositeKeyMutex.RLock() + defer fake.splitCompositeKeyMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = 
append(fake.invocations[key], args) +} diff --git a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go new file mode 100644 index 0000000..27e3034 --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go @@ -0,0 +1,232 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" +) + +type StateQueryIterator struct { + CloseStub func() error + closeMutex sync.RWMutex + closeArgsForCall []struct { + } + closeReturns struct { + result1 error + } + closeReturnsOnCall map[int]struct { + result1 error + } + HasNextStub func() bool + hasNextMutex sync.RWMutex + hasNextArgsForCall []struct { + } + hasNextReturns struct { + result1 bool + } + hasNextReturnsOnCall map[int]struct { + result1 bool + } + NextStub func() (*queryresult.KV, error) + nextMutex sync.RWMutex + nextArgsForCall []struct { + } + nextReturns struct { + result1 *queryresult.KV + result2 error + } + nextReturnsOnCall map[int]struct { + result1 *queryresult.KV + result2 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *StateQueryIterator) Close() error { + fake.closeMutex.Lock() + ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)] + fake.closeArgsForCall = append(fake.closeArgsForCall, struct { + }{}) + fake.recordInvocation("Close", []interface{}{}) + fake.closeMutex.Unlock() + if fake.CloseStub != nil { + return fake.CloseStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.closeReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) CloseCallCount() int { + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + return len(fake.closeArgsForCall) +} + +func (fake 
*StateQueryIterator) CloseCalls(stub func() error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = stub +} + +func (fake *StateQueryIterator) CloseReturns(result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + fake.closeReturns = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) { + fake.closeMutex.Lock() + defer fake.closeMutex.Unlock() + fake.CloseStub = nil + if fake.closeReturnsOnCall == nil { + fake.closeReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.closeReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *StateQueryIterator) HasNext() bool { + fake.hasNextMutex.Lock() + ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)] + fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct { + }{}) + fake.recordInvocation("HasNext", []interface{}{}) + fake.hasNextMutex.Unlock() + if fake.HasNextStub != nil { + return fake.HasNextStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.hasNextReturns + return fakeReturns.result1 +} + +func (fake *StateQueryIterator) HasNextCallCount() int { + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + return len(fake.hasNextArgsForCall) +} + +func (fake *StateQueryIterator) HasNextCalls(stub func() bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = stub +} + +func (fake *StateQueryIterator) HasNextReturns(result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + fake.hasNextReturns = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) { + fake.hasNextMutex.Lock() + defer fake.hasNextMutex.Unlock() + fake.HasNextStub = nil + if fake.hasNextReturnsOnCall == nil { + fake.hasNextReturnsOnCall = make(map[int]struct { + result1 
bool + }) + } + fake.hasNextReturnsOnCall[i] = struct { + result1 bool + }{result1} +} + +func (fake *StateQueryIterator) Next() (*queryresult.KV, error) { + fake.nextMutex.Lock() + ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)] + fake.nextArgsForCall = append(fake.nextArgsForCall, struct { + }{}) + fake.recordInvocation("Next", []interface{}{}) + fake.nextMutex.Unlock() + if fake.NextStub != nil { + return fake.NextStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.nextReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *StateQueryIterator) NextCallCount() int { + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + return len(fake.nextArgsForCall) +} + +func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = stub +} + +func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + fake.nextReturns = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) { + fake.nextMutex.Lock() + defer fake.nextMutex.Unlock() + fake.NextStub = nil + if fake.nextReturnsOnCall == nil { + fake.nextReturnsOnCall = make(map[int]struct { + result1 *queryresult.KV + result2 error + }) + } + fake.nextReturnsOnCall[i] = struct { + result1 *queryresult.KV + result2 error + }{result1, result2} +} + +func (fake *StateQueryIterator) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.closeMutex.RLock() + defer fake.closeMutex.RUnlock() + fake.hasNextMutex.RLock() + defer fake.hasNextMutex.RUnlock() + fake.nextMutex.RLock() + defer fake.nextMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + 
for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go new file mode 100644 index 0000000..eea37db --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go @@ -0,0 +1,164 @@ +// Code generated by counterfeiter. DO NOT EDIT. +package mocks + +import ( + "sync" + + "github.com/hyperledger/fabric-chaincode-go/pkg/cid" + "github.com/hyperledger/fabric-chaincode-go/shim" +) + +type TransactionContext struct { + GetClientIdentityStub func() cid.ClientIdentity + getClientIdentityMutex sync.RWMutex + getClientIdentityArgsForCall []struct { + } + getClientIdentityReturns struct { + result1 cid.ClientIdentity + } + getClientIdentityReturnsOnCall map[int]struct { + result1 cid.ClientIdentity + } + GetStubStub func() shim.ChaincodeStubInterface + getStubMutex sync.RWMutex + getStubArgsForCall []struct { + } + getStubReturns struct { + result1 shim.ChaincodeStubInterface + } + getStubReturnsOnCall map[int]struct { + result1 shim.ChaincodeStubInterface + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity { + fake.getClientIdentityMutex.Lock() + ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)] + fake.getClientIdentityArgsForCall = 
append(fake.getClientIdentityArgsForCall, struct { + }{}) + fake.recordInvocation("GetClientIdentity", []interface{}{}) + fake.getClientIdentityMutex.Unlock() + if fake.GetClientIdentityStub != nil { + return fake.GetClientIdentityStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getClientIdentityReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetClientIdentityCallCount() int { + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + return len(fake.getClientIdentityArgsForCall) +} + +func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = stub +} + +func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + fake.getClientIdentityReturns = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) { + fake.getClientIdentityMutex.Lock() + defer fake.getClientIdentityMutex.Unlock() + fake.GetClientIdentityStub = nil + if fake.getClientIdentityReturnsOnCall == nil { + fake.getClientIdentityReturnsOnCall = make(map[int]struct { + result1 cid.ClientIdentity + }) + } + fake.getClientIdentityReturnsOnCall[i] = struct { + result1 cid.ClientIdentity + }{result1} +} + +func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface { + fake.getStubMutex.Lock() + ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)] + fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct { + }{}) + fake.recordInvocation("GetStub", []interface{}{}) + fake.getStubMutex.Unlock() + if fake.GetStubStub != nil { + return fake.GetStubStub() + } + if specificReturn { + return ret.result1 + } + 
fakeReturns := fake.getStubReturns + return fakeReturns.result1 +} + +func (fake *TransactionContext) GetStubCallCount() int { + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + return len(fake.getStubArgsForCall) +} + +func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = stub +} + +func (fake *TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + fake.getStubReturns = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) { + fake.getStubMutex.Lock() + defer fake.getStubMutex.Unlock() + fake.GetStubStub = nil + if fake.getStubReturnsOnCall == nil { + fake.getStubReturnsOnCall = make(map[int]struct { + result1 shim.ChaincodeStubInterface + }) + } + fake.getStubReturnsOnCall[i] = struct { + result1 shim.ChaincodeStubInterface + }{result1} +} + +func (fake *TransactionContext) Invocations() map[string][][]interface{} { + fake.invocationsMutex.RLock() + defer fake.invocationsMutex.RUnlock() + fake.getClientIdentityMutex.RLock() + defer fake.getClientIdentityMutex.RUnlock() + fake.getStubMutex.RLock() + defer fake.getStubMutex.RUnlock() + copiedInvocations := map[string][][]interface{}{} + for key, value := range fake.invocations { + copiedInvocations[key] = value + } + return copiedInvocations +} + +func (fake *TransactionContext) recordInvocation(key string, args []interface{}) { + fake.invocationsMutex.Lock() + defer fake.invocationsMutex.Unlock() + if fake.invocations == nil { + fake.invocations = map[string][][]interface{}{} + } + if fake.invocations[key] == nil { + fake.invocations[key] = [][]interface{}{} + } + fake.invocations[key] = append(fake.invocations[key], args) +} diff --git 
a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go new file mode 100644 index 0000000..71e8dd8 --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go @@ -0,0 +1,185 @@ +package chaincode + +import ( + "encoding/json" + "fmt" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" +) + +// SmartContract provides functions for managing an Asset +type SmartContract struct { + contractapi.Contract +} + +// Asset describes basic details of what makes up a simple asset +type Asset struct { + ID string `json:"ID"` + Color string `json:"color"` + Size int `json:"size"` + Owner string `json:"owner"` + AppraisedValue int `json:"appraisedValue"` +} + +// InitLedger adds a base set of assets to the ledger +func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error { + assets := []Asset{ + {ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300}, + {ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400}, + {ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500}, + {ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600}, + {ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700}, + {ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800}, + } + + for _, asset := range assets { + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + err = ctx.GetStub().PutState(asset.ID, assetJSON) + if err != nil { + return fmt.Errorf("failed to put to world state. %v", err) + } + } + + return nil +} + +// CreateAsset issues a new asset to the world state with given details. 
+func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if exists { + return fmt.Errorf("the asset %s already exists", id) + } + + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// ReadAsset returns the asset stored in the world state with given id. +func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return nil, fmt.Errorf("failed to read from world state: %v", err) + } + if assetJSON == nil { + return nil, fmt.Errorf("the asset %s does not exist", id) + } + + var asset Asset + err = json.Unmarshal(assetJSON, &asset) + if err != nil { + return nil, err + } + + return &asset, nil +} + +// UpdateAsset updates an existing asset in the world state with provided parameters. +func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + // overwriting original asset with new asset + asset := Asset{ + ID: id, + Color: color, + Size: size, + Owner: owner, + AppraisedValue: appraisedValue, + } + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// DeleteAsset deletes a given asset from the world state. 
+func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error { + exists, err := s.AssetExists(ctx, id) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("the asset %s does not exist", id) + } + + return ctx.GetStub().DelState(id) +} + +// AssetExists returns true when asset with given ID exists in world state +func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) { + assetJSON, err := ctx.GetStub().GetState(id) + if err != nil { + return false, fmt.Errorf("failed to read from world state: %v", err) + } + + return assetJSON != nil, nil +} + +// TransferAsset updates the owner field of asset with given id in world state. +func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error { + asset, err := s.ReadAsset(ctx, id) + if err != nil { + return err + } + + asset.Owner = newOwner + assetJSON, err := json.Marshal(asset) + if err != nil { + return err + } + + return ctx.GetStub().PutState(id, assetJSON) +} + +// GetAllAssets returns all assets found in world state +func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) { + // range query with empty string for startKey and endKey does an + // open-ended query of all assets in the chaincode namespace. 
+ resultsIterator, err := ctx.GetStub().GetStateByRange("", "") + if err != nil { + return nil, err + } + defer resultsIterator.Close() + + var assets []*Asset + for resultsIterator.HasNext() { + queryResponse, err := resultsIterator.Next() + if err != nil { + return nil, err + } + + var asset Asset + err = json.Unmarshal(queryResponse.Value, &asset) + if err != nil { + return nil, err + } + assets = append(assets, &asset) + } + + return assets, nil +} diff --git a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go new file mode 100644 index 0000000..cb001de --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go @@ -0,0 +1,184 @@ +package chaincode_test + +import ( + "encoding/json" + "fmt" + "testing" + + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-protos-go/ledger/queryresult" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks" + "github.com/stretchr/testify/require" +) + +//go:generate counterfeiter -o mocks/transaction.go -fake-name TransactionContext . transactionContext +type transactionContext interface { + contractapi.TransactionContextInterface +} + +//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub +type chaincodeStub interface { + shim.ChaincodeStubInterface +} + +//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . 
stateQueryIterator +type stateQueryIterator interface { + shim.StateQueryIteratorInterface +} + +func TestInitLedger(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.InitLedger(transactionContext) + require.NoError(t, err) + + chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key")) + err = assetTransfer.InitLedger(transactionContext) + require.EqualError(t, err, "failed to put to world state. failed inserting key") +} + +func TestCreateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + assetTransfer := chaincode.SmartContract{} + err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns([]byte{}, nil) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 already exists") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestReadAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + asset, err := assetTransfer.ReadAsset(transactionContext, "") + require.NoError(t, err) + require.Equal(t, expectedAsset, asset) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + _, 
err = assetTransfer.ReadAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") + + chaincodeStub.GetStateReturns(nil, nil) + asset, err = assetTransfer.ReadAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + require.Nil(t, asset) +} + +func TestUpdateAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + expectedAsset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(expectedAsset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0) + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestDeleteAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + chaincodeStub.DelStateReturns(nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.DeleteAsset(transactionContext, "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, nil) + err = assetTransfer.DeleteAsset(transactionContext, "asset1") + require.EqualError(t, err, "the asset asset1 does not exist") + + chaincodeStub.GetStateReturns(nil, 
fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.DeleteAsset(transactionContext, "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestTransferAsset(t *testing.T) { + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + chaincodeStub.GetStateReturns(bytes, nil) + assetTransfer := chaincode.SmartContract{} + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.NoError(t, err) + + chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset")) + err = assetTransfer.TransferAsset(transactionContext, "", "") + require.EqualError(t, err, "failed to read from world state: unable to retrieve asset") +} + +func TestGetAllAssets(t *testing.T) { + asset := &chaincode.Asset{ID: "asset1"} + bytes, err := json.Marshal(asset) + require.NoError(t, err) + + iterator := &mocks.StateQueryIterator{} + iterator.HasNextReturnsOnCall(0, true) + iterator.HasNextReturnsOnCall(1, false) + iterator.NextReturns(&queryresult.KV{Value: bytes}, nil) + + chaincodeStub := &mocks.ChaincodeStub{} + transactionContext := &mocks.TransactionContext{} + transactionContext.GetStubReturns(chaincodeStub) + + chaincodeStub.GetStateByRangeReturns(iterator, nil) + assetTransfer := &chaincode.SmartContract{} + assets, err := assetTransfer.GetAllAssets(transactionContext) + require.NoError(t, err) + require.Equal(t, []*chaincode.Asset{asset}, assets) + + iterator.HasNextReturns(true) + iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item")) + assets, err = assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving next item") + require.Nil(t, assets) + + chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets")) + assets, err = 
assetTransfer.GetAllAssets(transactionContext) + require.EqualError(t, err, "failed retrieving all assets") + require.Nil(t, assets) +} diff --git a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod new file mode 100644 index 0000000..630a157 --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod @@ -0,0 +1,11 @@ +module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go + +go 1.14 + +require ( + github.com/golang/protobuf v1.3.2 + github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 + github.com/hyperledger/fabric-contract-api-go v1.1.1 + github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 + github.com/stretchr/testify v1.5.1 +) diff --git a/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum new file mode 100644 index 0000000..577c18b --- /dev/null +++ b/topologies/t8/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum @@ -0,0 +1,154 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0= +github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= +github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= +github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/client9/misspell v0.3.4/go.mod 
h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= +github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= +github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= +github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= +github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= +github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= +github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= +github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= +github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU= 
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= +github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs= +github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4= +github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q= +github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg= +github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk= +github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs= +github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc= +github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4= 
+github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E= +github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0= +github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo= +github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E= +github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY= +github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM= +github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/karrick/godirwalk v1.10.12/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= +github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs= +github.com/kr/pretty 
v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= +github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk= +github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cobra v0.0.5/go.mod 
h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +golang.org/x/crypto 
v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= +golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= +google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 
v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t8/config/config.yaml b/topologies/t8/config/config.yaml new file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t8/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t8/config/configtx.yaml b/topologies/t8/config/configtx.yaml new file mode 100644 index 0000000..1264040 --- /dev/null +++ b/topologies/t8/config/configtx.yaml @@ -0,0 +1,428 @@ +# Copyright IBM Corp. All Rights Reserved. +# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. 
It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. 
Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> + Policies: + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. This could lead to different versions of the fabric binaries +# having different world states. Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. 
For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. + Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. 
+ # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's "GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: /Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + 
#---Miscellaneous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/<PolicyName> + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities +################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer: 
&OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. + AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. 
+ PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/<PolicyName> + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. +# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/<PolicyName> + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# +################################################################################ +Profiles: + OrgsOrdererGenesis: + <<: 
*ChannelDefaults + Orderer: + <<: *OrdererDefaults + Organizations: + - *org1 + Capabilities: + <<: *OrdererCapabilities + Consortiums: + MainConsortium: + Organizations: + - *org2 + - *org3 + + OrgsChannel: + Consortium: MainConsortium + <<: *ChannelDefaults + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 + Capabilities: + <<: *ApplicationCapabilities + diff --git a/topologies/t8/config/couchdb/nginx-proxy/nginx.conf b/topologies/t8/config/couchdb/nginx-proxy/nginx.conf new file mode 100644 index 0000000..95c81b7 --- /dev/null +++ b/topologies/t8/config/couchdb/nginx-proxy/nginx.conf @@ -0,0 +1,29 @@ +worker_processes auto; +pid /run/nginx.pid; +include /etc/nginx/modules-enabled/*.conf; + +events { + worker_connections 768; + # multi_accept on; +} + +stream { + upstream backend_servers { + server t8-org2-peer1-couchdb1.local:5984 max_fails=3 fail_timeout=10s; + server t8-org2-peer1-couchdb2.local:5984 max_fails=3 fail_timeout=10s; + } + + log_format basic '$remote_addr [$time_local] ' + '$protocol $status $bytes_sent $bytes_received ' + '$session_time "$upstream_addr" ' + '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"'; + + access_log /var/log/nginx/access.log basic; + error_log /var/log/nginx/error.log debug; + + server { + listen 0.0.0.0:9000; + proxy_pass backend_servers; + proxy_next_upstream on; + } +} \ No newline at end of file diff --git a/topologies/t8/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t8/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 0000000..1d941c5 --- /dev/null +++ b/topologies/t8/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,29 @@ +version: "3.9" +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - 
FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t8/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t8/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..e22813e --- /dev/null +++ b/topologies/t8/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + 
command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t8/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t8/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..b18ea4d --- /dev/null +++ b/topologies/t8/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,28 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + command: sh -c 'fabric-ca-server start -d -b org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - 
"${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t8/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t8/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..4ceb8a0 --- /dev/null +++ b/topologies/t8/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,21 @@ +services: + org2-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t8/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t8/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..68d2ec0 --- /dev/null +++ b/topologies/t8/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,21 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - 
CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t8/containers/docker-compose-shell-cmd-utils.yml b/topologies/t8/containers/docker-compose-shell-cmd-utils.yml new file mode 100644 index 0000000..4b62994 --- /dev/null +++ b/topologies/t8/containers/docker-compose-shell-cmd-utils.yml @@ -0,0 +1,14 @@ +services: + org-shell-cmd: + container_name: ${CURRENT_HL_TOPOLOGY}-shell-cmd + image: shell-cmd-utils-hl-fabric + tty: true + stdin_open: true + command: sh + environment: + - HL_TOPOLOGIES_BASE_FOLDER=${HL_TOPOLOGIES_BASE_FOLDER} + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + networks: + - hl-fabric diff --git a/topologies/t8/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t8/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..0045893 --- /dev/null +++ b/topologies/t8/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,82 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - 
ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - 
ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_GENESISMETHOD=file + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - 
ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=data/logs + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t8/containers/peers/org2-peers/docker-compose-org2-couchdb.yml b/topologies/t8/containers/peers/org2-peers/docker-compose-org2-couchdb.yml new file mode 100644 index 0000000..2970ac3 --- /dev/null +++ b/topologies/t8/containers/peers/org2-peers/docker-compose-org2-couchdb.yml @@ -0,0 +1,29 @@ +services: + org2-peer1-couchdb1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb1 + environment: + - COUCHDB_USER=admin + - COUCHDB_PASSWORD=hltopos + - NODENAME=${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb1.local + - NODE_NETBIOS_NAME=${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb1.local + hostname: ${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb1.local + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/peer1/couchdb1/logs:/opt/couchdb/var/log + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/peer1/couchdb1/data:/opt/couchdb/data + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/peer1/couchdb1/config:/opt/couchdb/etc/local.d + org2-peer1-couchdb2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb2 + environment: + - COUCHDB_USER=admin + - COUCHDB_PASSWORD=hltopos + - NODENAME=${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb2.local + - NODE_NETBIOS_NAME=${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb2.local + hostname: ${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb2.local + volumes: + - 
${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/peer1/couchdb2/logs:/opt/couchdb/var/log + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/peer1/couchdb2/data:/opt/couchdb/data + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/peer1/couchdb2/config:/opt/couchdb/etc/local.d + org2-peer1-couchdb-proxy: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb-proxy + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/config/couchdb/nginx-proxy/nginx.conf:/etc/nginx/nginx.conf diff --git a/topologies/t8/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t8/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..53b9fe8 --- /dev/null +++ b/topologies/t8/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,53 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - CORE_LEDGER_STATE_STATEDATABASE=CouchDB + - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1-couchdb-proxy:9000 + - 
CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin + - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=hltopos + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t8/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t8/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..c58de7b --- /dev/null +++ b/topologies/t8/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,50 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 
+ - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - 
CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t8/crypto-material/.gitkeep b/topologies/t8/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t8/docker-compose.yml b/topologies/t8/docker-compose.yml new file mode 100644 index 0000000..a9fcef3 --- /dev/null +++ b/topologies/t8/docker-compose.yml @@ -0,0 +1,109 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: hyperledger/fabric-ca:${FABRIC_CA_VERSION} + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer1-couchdb1: + image: registry.hub.docker.com/library/couchdb:3.2.2 + ports: + - 5987:5984 + networks: + - hl-fabric + org2-peer1-couchdb2: + image: registry.hub.docker.com/library/couchdb:3.2.2 + ports: + - 5988:5984 + networks: + - hl-fabric + org2-peer1-couchdb-proxy: + image: nginx-hl-fabric + 
#ports: + # - 9000:9000 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric diff --git a/topologies/t8/images/nginx/Dockerfile b/topologies/t8/images/nginx/Dockerfile new file mode 100644 index 0000000..bd4aa06 --- /dev/null +++ b/topologies/t8/images/nginx/Dockerfile @@ -0,0 +1,4 @@ +FROM nginx +RUN apt update +RUN apt install -y nginx-common +RUN apt install -y libnginx-mod-stream \ No newline at end of file diff --git a/topologies/t8/images/shell-cmd-utils/Dockerfile b/topologies/t8/images/shell-cmd-utils/Dockerfile new file mode 100644 index 0000000..e269eae --- /dev/null +++ b/topologies/t8/images/shell-cmd-utils/Dockerfile @@ -0,0 +1,2 @@ +FROM alpine +RUN apk add curl \ No newline at end of file diff --git a/topologies/t8/scripts/all-org-peers-commit-chaincode.sh b/topologies/t8/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..7810f33 --- /dev/null +++ b/topologies/t8/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v 
"1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t8/scripts/all-org-peers-execute-chaincode.sh b/topologies/t8/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..d8be52e --- /dev/null +++ b/topologies/t8/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,18 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles 
/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git a/topologies/t8/scripts/channels-setup.sh b/topologies/t8/scripts/channels-setup.sh new file mode 100755 index 0000000..230aac8 --- /dev/null +++ b/topologies/t8/scripts/channels-setup.sh @@ -0,0 +1,7 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile OrgsOrdererGenesis -outputBlock /tmp/crypto-material/artifacts/channels/genesis.block -channelID syschannel +configtxgen -profile OrgsChannel -outputCreateChannelTx /tmp/crypto-material/artifacts/channels/channel.tx -channelID mychannel \ No newline at end of file diff --git a/topologies/t8/scripts/delete-state-data.sh b/topologies/t8/scripts/delete-state-data.sh new file mode 100755 index 0000000..2911401 --- /dev/null +++ b/topologies/t8/scripts/delete-state-data.sh @@ -0,0 +1,21 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +mkdir -p /tmp/homefolders/peer1/couchdb1/logs +touch /tmp/homefolders/peer1/couchdb1/logs/.gitkeep +mkdir -p /tmp/homefolders/peer1/couchdb1/data +touch /tmp/homefolders/peer1/couchdb1/data/.gitkeep +mkdir -p /tmp/homefolders/peer1/couchdb1/config +touch /tmp/homefolders/peer1/couchdb1/config/.gitkeep +mkdir -p /tmp/homefolders/peer1/couchdb2/logs +touch /tmp/homefolders/peer1/couchdb2/logs/.gitkeep +mkdir -p /tmp/homefolders/peer1/couchdb2/data +touch /tmp/homefolders/peer1/couchdb2/data/.gitkeep +mkdir -p /tmp/homefolders/peer1/couchdb2/config +touch /tmp/homefolders/peer1/couchdb2/config/.gitkeep \ No newline at end of file diff --git 
a/topologies/t8/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t8/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..5c5dec7 --- /dev/null +++ b/topologies/t8/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git a/topologies/t8/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t8/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..67e742b --- /dev/null +++ b/topologies/t8/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x + +# enroll orderer1 node-tls +export 
FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t8/scripts/org1-register-identities-with-ca-identities.sh 
b/topologies/t8/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t8/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs "hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t8/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t8/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..ab095a5 --- /dev/null +++ b/topologies/t8/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d 
--id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t8/scripts/org2-approve-chaincode.sh b/topologies/t8/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..fb8ac61 --- /dev/null +++ b/topologies/t8/scripts/org2-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t8/scripts/org2-create-and-join-channels.sh b/topologies/t8/scripts/org2-create-and-join-channels.sh new file mode 100755 index 0000000..ed6a189 --- /dev/null +++ b/topologies/t8/scripts/org2-create-and-join-channels.sh @@ -0,0 +1,12 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer channel create -c mychannel -f /tmp/crypto-material/artifacts/channels/channel.tx -o ${TOPOLOGY}-org1-orderer1:7050 --outputBlock /tmp/crypto-material/artifacts/channels/mychannel.block --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +export 
CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t8/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t8/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..ad0ea07 --- /dev/null +++ b/topologies/t8/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml 
/tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t8/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t8/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..ea37b7e --- /dev/null +++ 
b/topologies/t8/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t8/scripts/org2-install-chaincode.sh b/topologies/t8/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t8/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install 
mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz \ No newline at end of file diff --git a/topologies/t8/scripts/org2-peer1-setup-couchdb-cluster.sh b/topologies/t8/scripts/org2-peer1-setup-couchdb-cluster.sh new file mode 100755 index 0000000..3bcc3fa --- /dev/null +++ b/topologies/t8/scripts/org2-peer1-setup-couchdb-cluster.sh @@ -0,0 +1,46 @@ +#!/bin/bash +set -e +set -x + +echo "Setup configs for couchdb1 ..." +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/log/file" -d '"/opt/couchdb/var/log/log.txt"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/log/writer" -d '"file"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/log/level" -d '"debug"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/couch_httpd_auth/timeout" -d '"60000"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/chttpd_auth/secret" -d '"e7d0e9c49271253dbd3bfdeb19ba9db5"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/couchdb/uuid" -d '"38567436ddaf5b11fb11181fdede1791"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/couchdb/os_process_timeout" -d '"60000"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/replicator/connection_timeout" -d '"60000"' +curl -u admin:hltopos -X PUT 
"http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/replicator/worker_processes" -d '"4"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb1.local/_config/query_server_config/os_process_limit" -d '"4"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_users" +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_replicator" +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_global_changes" + +echo "Setup configs for couchdb2 ..." +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/log/file" -d '"/opt/couchdb/var/log/log.txt"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/log/writer" -d '"file"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/log/level" -d '"debug"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/couch_httpd_auth/timeout" -d '"60000"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/chttpd_auth/secret" -d '"e7d0e9c49271253dbd3bfdeb19ba9db5"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/couchdb/uuid" -d '"38567436ddaf5b11fb11181fdede1791"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/couchdb/os_process_timeout" -d '"60000"' +curl -u admin:hltopos -X PUT 
"http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/replicator/connection_timeout" -d '"60000"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/replicator/worker_processes" -d '"4"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_node/couchdb@${TOPOLOGY}-org2-peer1-couchdb2.local/_config/query_server_config/os_process_limit" -d '"4"' +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_users" +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_replicator" +curl -u admin:hltopos -X PUT "http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_global_changes" + +echo "Setup cluster for couchdb1 node ..." +curl -u admin:hltopos -X POST -H "Content-Type: application/json" http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"hltopos", "port": 5984, "node_count": "2", "remote_node": "t8-org2-peer1-couchdb2.local", "remote_current_user": "admin", "remote_current_password": "password"}' +curl -u admin:hltopos -X POST -H "Content-Type: application/json" http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_cluster_setup -d '{"action": "add_node", "host":"t8-org2-peer1-couchdb2.local", "port": 5984, "username": "admin", "password":"hltopos"}' + +echo "Setup cluster for couchdb2 node ..." 
+curl -u admin:hltopos -X POST -H "Content-Type: application/json" http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"hltopos", "port": 5984, "node_count": "2", "remote_node": "t8-org2-peer1-couchdb1.local", "remote_current_user": "admin", "remote_current_password": "password"}' +curl -u admin:hltopos -X POST -H "Content-Type: application/json" http://${TOPOLOGY}-org2-peer1-couchdb2.local:5984/_cluster_setup -d '{"action": "add_node", "host":"t8-org2-peer1-couchdb1.local", "port": 5984, "username": "admin", "password":"hltopos"}' + +echo "Finish cluster ..." +curl -u admin:hltopos -X POST -H "Content-Type: application/json" http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_cluster_setup -d '{"action": "finish_cluster"}' + +# curl -u admin:hltopos "http://${TOPOLOGY}-org2-peer1-couchdb1.local:5984/_membership" \ No newline at end of file diff --git a/topologies/t8/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t8/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t8/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u 
https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t8/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t8/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..3a99816 --- /dev/null +++ b/topologies/t8/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t8/scripts/org3-approve-chaincode.sh b/topologies/t8/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..094c94f --- /dev/null +++ b/topologies/t8/scripts/org3-approve-chaincode.sh @@ -0,0 +1,22 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles 
/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t8/scripts/org3-create-and-join-channels.sh b/topologies/t8/scripts/org3-create-and-join-channels.sh new file mode 100755 index 0000000..ebf8b48 --- /dev/null +++ b/topologies/t8/scripts/org3-create-and-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.block \ No newline at end of file diff --git a/topologies/t8/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t8/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..36f317d --- /dev/null +++ b/topologies/t8/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,49 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv 
/tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp 
/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t8/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t8/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..2bbb3b3 --- /dev/null +++ b/topologies/t8/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,19 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git 
a/topologies/t8/scripts/org3-install-chaincode.sh b/topologies/t8/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..f6b8789 --- /dev/null +++ b/topologies/t8/scripts/org3-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz diff --git a/topologies/t8/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t8/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1c56144 --- /dev/null +++ b/topologies/t8/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git 
a/topologies/t8/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t8/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..a5ccd7f --- /dev/null +++ b/topologies/t8/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t8/scripts/patch-configtx.sh b/topologies/t8/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t8/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml /tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t8/scripts/setup-docker-images.sh b/topologies/t8/scripts/setup-docker-images.sh new file mode 100755 index 0000000..ec4705a --- /dev/null +++ b/topologies/t8/scripts/setup-docker-images.sh @@ -0,0 +1,4 @@ +cd $1/images/nginx +docker build -t nginx-hl-fabric . +cd $1/images/shell-cmd-utils +docker build -t shell-cmd-utils-hl-fabric . 
\ No newline at end of file diff --git a/topologies/t8/setup-network.sh b/topologies/t8/setup-network.sh new file mode 100755 index 0000000..6a4c686 --- /dev/null +++ b/topologies/t8/setup-network.sh @@ -0,0 +1,186 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t8 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." +./teardown-network.sh + +# ----Setup Docker Images ---- +./scripts/setup-docker-images.sh ${HL_TOPOLOGIES_BASE_FOLDER} + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/docker-compose-shell-cmd-utils.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." 
+ +# -----org2 peer1 couchdb +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-couchdb.yml up -d org2-peer1-couchdb1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-couchdb.yml up -d org2-peer1-couchdb2 + +sleep 3 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-couchdb.yml up -d org2-peer1-couchdb-proxy + +# Configure Couch DB cluster for org2 peer1 +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/org2-peer1-setup-couchdb-cluster.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs and register with these the TLS-CA and Identities-CA users, such as admins, clients, etc...----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec 
${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + + + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# ----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the genesis.block and mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +# -----need to wait until raft leader selection is completed for the orderers +sleep 4 + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-create-and-join-channels.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-create-and-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# 
----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t8/teardown-network.sh b/topologies/t8/teardown-network.sh new file mode 100755 index 0000000..8481f6d --- /dev/null +++ b/topologies/t8/teardown-network.sh @@ -0,0 +1,33 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t8 + +# ----------------------------------------------------------------------------- +# -----remove the current topology's containers +# ----------------------------------------------------------------------------- +# -----deleting the chaincode containers reports an error even though they are in fact deleted; ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec will throw an error if no running container is found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." 
+fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm diff --git a/topologies/t9/.env b/topologies/t9/.env new file mode 100644 index 0000000..93ad75c --- /dev/null +++ b/topologies/t9/.env @@ -0,0 +1,5 @@ +COMPOSE_PROJECT_NAME=hl-fabric-topology-t9 +FABRIC_CA_VERSION=1.5 +FABRIC_PEER_VERSION=2.4.6 +FABRIC_TOOLS_VERSION=2.4.6 +PEER_ORDERER_VERSION=2.4.6 \ No newline at end of file diff --git a/topologies/t9/.gitignore b/topologies/t9/.gitignore new file mode 100644 index 0000000..ee0881a --- /dev/null +++ b/topologies/t9/.gitignore @@ -0,0 +1,2 @@ +crypto-material/*/** +homefolders/*/** \ No newline at end of file diff --git a/topologies/t9/README.md b/topologies/t9/README.md new file mode 100644 index 0000000..1d48953 --- /dev/null +++ b/topologies/t9/README.md @@ -0,0 +1,39 @@ +# T9: Channel Participation API +## Description +--- +T1 network + Channel Participation API (No System Channel) + +## Diagram +--- +![Diagram of components](../image_store/T9.png) + +## Relevant Documentation + +- https://hyperledger-fabric.readthedocs.io/en/latest/create_channel/create_channel_participation.html + +## Components List +--- +* Org 1 + * Orderer 1 + * Orderer 2 + * Orderer 3 + * TLS CA + * Identities CA +* Org 2 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA +* Org 3 + * Peer 1 + * Peer 1 CLI + * Peer 2 + * TLS CA + * Identities CA + +## Characteristics + +- World State Database Instance (LevelDB) embedded (in peer containers) +- Chaincode installed directly on peers +- Communication between all components done via TLS \ No 
newline at end of file diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go new file mode 100644 index 0000000..9c619d5 --- /dev/null +++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/assetTransfer.go @@ -0,0 +1,23 @@ +/* +SPDX-License-Identifier: Apache-2.0 +*/ + +package main + +import ( + "log" + + "github.com/hyperledger/fabric-contract-api-go/contractapi" + "github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode" +) + +func main() { + assetChaincode, err := contractapi.NewChaincode(&chaincode.SmartContract{}) + if err != nil { + log.Panicf("Error creating asset-transfer-basic chaincode: %v", err) + } + + if err := assetChaincode.Start(); err != nil { + log.Panicf("Error starting asset-transfer-basic chaincode: %v", err) + } +} diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go new file mode 100644 index 0000000..91348fe --- /dev/null +++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/chaincodestub.go @@ -0,0 +1,2878 @@ +// Code generated by counterfeiter. DO NOT EDIT. 
+package mocks + +import ( + "sync" + + "github.com/golang/protobuf/ptypes/timestamp" + "github.com/hyperledger/fabric-chaincode-go/shim" + "github.com/hyperledger/fabric-protos-go/peer" +) + +type ChaincodeStub struct { + CreateCompositeKeyStub func(string, []string) (string, error) + createCompositeKeyMutex sync.RWMutex + createCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + createCompositeKeyReturns struct { + result1 string + result2 error + } + createCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 error + } + DelPrivateDataStub func(string, string) error + delPrivateDataMutex sync.RWMutex + delPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + delPrivateDataReturns struct { + result1 error + } + delPrivateDataReturnsOnCall map[int]struct { + result1 error + } + DelStateStub func(string) error + delStateMutex sync.RWMutex + delStateArgsForCall []struct { + arg1 string + } + delStateReturns struct { + result1 error + } + delStateReturnsOnCall map[int]struct { + result1 error + } + GetArgsStub func() [][]byte + getArgsMutex sync.RWMutex + getArgsArgsForCall []struct { + } + getArgsReturns struct { + result1 [][]byte + } + getArgsReturnsOnCall map[int]struct { + result1 [][]byte + } + GetArgsSliceStub func() ([]byte, error) + getArgsSliceMutex sync.RWMutex + getArgsSliceArgsForCall []struct { + } + getArgsSliceReturns struct { + result1 []byte + result2 error + } + getArgsSliceReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetBindingStub func() ([]byte, error) + getBindingMutex sync.RWMutex + getBindingArgsForCall []struct { + } + getBindingReturns struct { + result1 []byte + result2 error + } + getBindingReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetChannelIDStub func() string + getChannelIDMutex sync.RWMutex + getChannelIDArgsForCall []struct { + } + getChannelIDReturns struct { + result1 string + } + getChannelIDReturnsOnCall map[int]struct { + 
result1 string + } + GetCreatorStub func() ([]byte, error) + getCreatorMutex sync.RWMutex + getCreatorArgsForCall []struct { + } + getCreatorReturns struct { + result1 []byte + result2 error + } + getCreatorReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetDecorationsStub func() map[string][]byte + getDecorationsMutex sync.RWMutex + getDecorationsArgsForCall []struct { + } + getDecorationsReturns struct { + result1 map[string][]byte + } + getDecorationsReturnsOnCall map[int]struct { + result1 map[string][]byte + } + GetFunctionAndParametersStub func() (string, []string) + getFunctionAndParametersMutex sync.RWMutex + getFunctionAndParametersArgsForCall []struct { + } + getFunctionAndParametersReturns struct { + result1 string + result2 []string + } + getFunctionAndParametersReturnsOnCall map[int]struct { + result1 string + result2 []string + } + GetHistoryForKeyStub func(string) (shim.HistoryQueryIteratorInterface, error) + getHistoryForKeyMutex sync.RWMutex + getHistoryForKeyArgsForCall []struct { + arg1 string + } + getHistoryForKeyReturns struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + getHistoryForKeyReturnsOnCall map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + } + GetPrivateDataStub func(string, string) ([]byte, error) + getPrivateDataMutex sync.RWMutex + getPrivateDataArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataReturns struct { + result1 []byte + result2 error + } + getPrivateDataReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataByPartialCompositeKeyStub func(string, string, []string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByPartialCompositeKeyMutex sync.RWMutex + getPrivateDataByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 string + arg3 []string + } + getPrivateDataByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + 
getPrivateDataByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataByRangeStub func(string, string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataByRangeMutex sync.RWMutex + getPrivateDataByRangeArgsForCall []struct { + arg1 string + arg2 string + arg3 string + } + getPrivateDataByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataHashStub func(string, string) ([]byte, error) + getPrivateDataHashMutex sync.RWMutex + getPrivateDataHashArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataHashReturns struct { + result1 []byte + result2 error + } + getPrivateDataHashReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetPrivateDataQueryResultStub func(string, string) (shim.StateQueryIteratorInterface, error) + getPrivateDataQueryResultMutex sync.RWMutex + getPrivateDataQueryResultArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataQueryResultReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getPrivateDataQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetPrivateDataValidationParameterStub func(string, string) ([]byte, error) + getPrivateDataValidationParameterMutex sync.RWMutex + getPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + } + getPrivateDataValidationParameterReturns struct { + result1 []byte + result2 error + } + getPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetQueryResultStub func(string) (shim.StateQueryIteratorInterface, error) + getQueryResultMutex sync.RWMutex + getQueryResultArgsForCall []struct { + arg1 string + } + getQueryResultReturns struct { + result1 
shim.StateQueryIteratorInterface + result2 error + } + getQueryResultReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetQueryResultWithPaginationStub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getQueryResultWithPaginationMutex sync.RWMutex + getQueryResultWithPaginationArgsForCall []struct { + arg1 string + arg2 int32 + arg3 string + } + getQueryResultWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getQueryResultWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetSignedProposalStub func() (*peer.SignedProposal, error) + getSignedProposalMutex sync.RWMutex + getSignedProposalArgsForCall []struct { + } + getSignedProposalReturns struct { + result1 *peer.SignedProposal + result2 error + } + getSignedProposalReturnsOnCall map[int]struct { + result1 *peer.SignedProposal + result2 error + } + GetStateStub func(string) ([]byte, error) + getStateMutex sync.RWMutex + getStateArgsForCall []struct { + arg1 string + } + getStateReturns struct { + result1 []byte + result2 error + } + getStateReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStateByPartialCompositeKeyStub func(string, []string) (shim.StateQueryIteratorInterface, error) + getStateByPartialCompositeKeyMutex sync.RWMutex + getStateByPartialCompositeKeyArgsForCall []struct { + arg1 string + arg2 []string + } + getStateByPartialCompositeKeyReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByPartialCompositeKeyReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByPartialCompositeKeyWithPaginationStub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + 
getStateByPartialCompositeKeyWithPaginationMutex sync.RWMutex + getStateByPartialCompositeKeyWithPaginationArgsForCall []struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + } + getStateByPartialCompositeKeyWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByPartialCompositeKeyWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateByRangeStub func(string, string) (shim.StateQueryIteratorInterface, error) + getStateByRangeMutex sync.RWMutex + getStateByRangeArgsForCall []struct { + arg1 string + arg2 string + } + getStateByRangeReturns struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + getStateByRangeReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + } + GetStateByRangeWithPaginationStub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) + getStateByRangeWithPaginationMutex sync.RWMutex + getStateByRangeWithPaginationArgsForCall []struct { + arg1 string + arg2 string + arg3 int32 + arg4 string + } + getStateByRangeWithPaginationReturns struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + getStateByRangeWithPaginationReturnsOnCall map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + } + GetStateValidationParameterStub func(string) ([]byte, error) + getStateValidationParameterMutex sync.RWMutex + getStateValidationParameterArgsForCall []struct { + arg1 string + } + getStateValidationParameterReturns struct { + result1 []byte + result2 error + } + getStateValidationParameterReturnsOnCall map[int]struct { + result1 []byte + result2 error + } + GetStringArgsStub func() []string + getStringArgsMutex sync.RWMutex + 
getStringArgsArgsForCall []struct { + } + getStringArgsReturns struct { + result1 []string + } + getStringArgsReturnsOnCall map[int]struct { + result1 []string + } + GetTransientStub func() (map[string][]byte, error) + getTransientMutex sync.RWMutex + getTransientArgsForCall []struct { + } + getTransientReturns struct { + result1 map[string][]byte + result2 error + } + getTransientReturnsOnCall map[int]struct { + result1 map[string][]byte + result2 error + } + GetTxIDStub func() string + getTxIDMutex sync.RWMutex + getTxIDArgsForCall []struct { + } + getTxIDReturns struct { + result1 string + } + getTxIDReturnsOnCall map[int]struct { + result1 string + } + GetTxTimestampStub func() (*timestamp.Timestamp, error) + getTxTimestampMutex sync.RWMutex + getTxTimestampArgsForCall []struct { + } + getTxTimestampReturns struct { + result1 *timestamp.Timestamp + result2 error + } + getTxTimestampReturnsOnCall map[int]struct { + result1 *timestamp.Timestamp + result2 error + } + InvokeChaincodeStub func(string, [][]byte, string) peer.Response + invokeChaincodeMutex sync.RWMutex + invokeChaincodeArgsForCall []struct { + arg1 string + arg2 [][]byte + arg3 string + } + invokeChaincodeReturns struct { + result1 peer.Response + } + invokeChaincodeReturnsOnCall map[int]struct { + result1 peer.Response + } + PutPrivateDataStub func(string, string, []byte) error + putPrivateDataMutex sync.RWMutex + putPrivateDataArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + putPrivateDataReturns struct { + result1 error + } + putPrivateDataReturnsOnCall map[int]struct { + result1 error + } + PutStateStub func(string, []byte) error + putStateMutex sync.RWMutex + putStateArgsForCall []struct { + arg1 string + arg2 []byte + } + putStateReturns struct { + result1 error + } + putStateReturnsOnCall map[int]struct { + result1 error + } + SetEventStub func(string, []byte) error + setEventMutex sync.RWMutex + setEventArgsForCall []struct { + arg1 string + arg2 []byte + } + 
setEventReturns struct { + result1 error + } + setEventReturnsOnCall map[int]struct { + result1 error + } + SetPrivateDataValidationParameterStub func(string, string, []byte) error + setPrivateDataValidationParameterMutex sync.RWMutex + setPrivateDataValidationParameterArgsForCall []struct { + arg1 string + arg2 string + arg3 []byte + } + setPrivateDataValidationParameterReturns struct { + result1 error + } + setPrivateDataValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SetStateValidationParameterStub func(string, []byte) error + setStateValidationParameterMutex sync.RWMutex + setStateValidationParameterArgsForCall []struct { + arg1 string + arg2 []byte + } + setStateValidationParameterReturns struct { + result1 error + } + setStateValidationParameterReturnsOnCall map[int]struct { + result1 error + } + SplitCompositeKeyStub func(string) (string, []string, error) + splitCompositeKeyMutex sync.RWMutex + splitCompositeKeyArgsForCall []struct { + arg1 string + } + splitCompositeKeyReturns struct { + result1 string + result2 []string + result3 error + } + splitCompositeKeyReturnsOnCall map[int]struct { + result1 string + result2 []string + result3 error + } + invocations map[string][][]interface{} + invocationsMutex sync.RWMutex +} + +func (fake *ChaincodeStub) CreateCompositeKey(arg1 string, arg2 []string) (string, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.createCompositeKeyMutex.Lock() + ret, specificReturn := fake.createCompositeKeyReturnsOnCall[len(fake.createCompositeKeyArgsForCall)] + fake.createCompositeKeyArgsForCall = append(fake.createCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("CreateCompositeKey", []interface{}{arg1, arg2Copy}) + fake.createCompositeKeyMutex.Unlock() + if fake.CreateCompositeKeyStub != nil { + return fake.CreateCompositeKeyStub(arg1, arg2) + } + if specificReturn { + 
return ret.result1, ret.result2 + } + fakeReturns := fake.createCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyCallCount() int { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + return len(fake.createCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) CreateCompositeKeyCalls(stub func(string, []string) (string, error)) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) CreateCompositeKeyArgsForCall(i int) (string, []string) { + fake.createCompositeKeyMutex.RLock() + defer fake.createCompositeKeyMutex.RUnlock() + argsForCall := fake.createCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturns(result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + fake.createCompositeKeyReturns = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) CreateCompositeKeyReturnsOnCall(i int, result1 string, result2 error) { + fake.createCompositeKeyMutex.Lock() + defer fake.createCompositeKeyMutex.Unlock() + fake.CreateCompositeKeyStub = nil + if fake.createCompositeKeyReturnsOnCall == nil { + fake.createCompositeKeyReturnsOnCall = make(map[int]struct { + result1 string + result2 error + }) + } + fake.createCompositeKeyReturnsOnCall[i] = struct { + result1 string + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) DelPrivateData(arg1 string, arg2 string) error { + fake.delPrivateDataMutex.Lock() + ret, specificReturn := fake.delPrivateDataReturnsOnCall[len(fake.delPrivateDataArgsForCall)] + fake.delPrivateDataArgsForCall = append(fake.delPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + 
fake.recordInvocation("DelPrivateData", []interface{}{arg1, arg2}) + fake.delPrivateDataMutex.Unlock() + if fake.DelPrivateDataStub != nil { + return fake.DelPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delPrivateDataReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelPrivateDataCallCount() int { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + return len(fake.delPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) DelPrivateDataCalls(stub func(string, string) error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = stub +} + +func (fake *ChaincodeStub) DelPrivateDataArgsForCall(i int) (string, string) { + fake.delPrivateDataMutex.RLock() + defer fake.delPrivateDataMutex.RUnlock() + argsForCall := fake.delPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) DelPrivateDataReturns(result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + fake.delPrivateDataReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelPrivateDataReturnsOnCall(i int, result1 error) { + fake.delPrivateDataMutex.Lock() + defer fake.delPrivateDataMutex.Unlock() + fake.DelPrivateDataStub = nil + if fake.delPrivateDataReturnsOnCall == nil { + fake.delPrivateDataReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delPrivateDataReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelState(arg1 string) error { + fake.delStateMutex.Lock() + ret, specificReturn := fake.delStateReturnsOnCall[len(fake.delStateArgsForCall)] + fake.delStateArgsForCall = append(fake.delStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("DelState", []interface{}{arg1}) + fake.delStateMutex.Unlock() + if fake.DelStateStub != nil { + 
return fake.DelStateStub(arg1) + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.delStateReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) DelStateCallCount() int { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + return len(fake.delStateArgsForCall) +} + +func (fake *ChaincodeStub) DelStateCalls(stub func(string) error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = stub +} + +func (fake *ChaincodeStub) DelStateArgsForCall(i int) string { + fake.delStateMutex.RLock() + defer fake.delStateMutex.RUnlock() + argsForCall := fake.delStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) DelStateReturns(result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + fake.delStateReturns = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) DelStateReturnsOnCall(i int, result1 error) { + fake.delStateMutex.Lock() + defer fake.delStateMutex.Unlock() + fake.DelStateStub = nil + if fake.delStateReturnsOnCall == nil { + fake.delStateReturnsOnCall = make(map[int]struct { + result1 error + }) + } + fake.delStateReturnsOnCall[i] = struct { + result1 error + }{result1} +} + +func (fake *ChaincodeStub) GetArgs() [][]byte { + fake.getArgsMutex.Lock() + ret, specificReturn := fake.getArgsReturnsOnCall[len(fake.getArgsArgsForCall)] + fake.getArgsArgsForCall = append(fake.getArgsArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgs", []interface{}{}) + fake.getArgsMutex.Unlock() + if fake.GetArgsStub != nil { + return fake.GetArgsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getArgsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetArgsCallCount() int { + fake.getArgsMutex.RLock() + defer fake.getArgsMutex.RUnlock() + return len(fake.getArgsArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsCalls(stub func() [][]byte) { + 
fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = stub +} + +func (fake *ChaincodeStub) GetArgsReturns(result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + fake.getArgsReturns = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsReturnsOnCall(i int, result1 [][]byte) { + fake.getArgsMutex.Lock() + defer fake.getArgsMutex.Unlock() + fake.GetArgsStub = nil + if fake.getArgsReturnsOnCall == nil { + fake.getArgsReturnsOnCall = make(map[int]struct { + result1 [][]byte + }) + } + fake.getArgsReturnsOnCall[i] = struct { + result1 [][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetArgsSlice() ([]byte, error) { + fake.getArgsSliceMutex.Lock() + ret, specificReturn := fake.getArgsSliceReturnsOnCall[len(fake.getArgsSliceArgsForCall)] + fake.getArgsSliceArgsForCall = append(fake.getArgsSliceArgsForCall, struct { + }{}) + fake.recordInvocation("GetArgsSlice", []interface{}{}) + fake.getArgsSliceMutex.Unlock() + if fake.GetArgsSliceStub != nil { + return fake.GetArgsSliceStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getArgsSliceReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetArgsSliceCallCount() int { + fake.getArgsSliceMutex.RLock() + defer fake.getArgsSliceMutex.RUnlock() + return len(fake.getArgsSliceArgsForCall) +} + +func (fake *ChaincodeStub) GetArgsSliceCalls(stub func() ([]byte, error)) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = stub +} + +func (fake *ChaincodeStub) GetArgsSliceReturns(result1 []byte, result2 error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + fake.getArgsSliceReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetArgsSliceReturnsOnCall(i int, result1 []byte, result2 
error) { + fake.getArgsSliceMutex.Lock() + defer fake.getArgsSliceMutex.Unlock() + fake.GetArgsSliceStub = nil + if fake.getArgsSliceReturnsOnCall == nil { + fake.getArgsSliceReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getArgsSliceReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBinding() ([]byte, error) { + fake.getBindingMutex.Lock() + ret, specificReturn := fake.getBindingReturnsOnCall[len(fake.getBindingArgsForCall)] + fake.getBindingArgsForCall = append(fake.getBindingArgsForCall, struct { + }{}) + fake.recordInvocation("GetBinding", []interface{}{}) + fake.getBindingMutex.Unlock() + if fake.GetBindingStub != nil { + return fake.GetBindingStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getBindingReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetBindingCallCount() int { + fake.getBindingMutex.RLock() + defer fake.getBindingMutex.RUnlock() + return len(fake.getBindingArgsForCall) +} + +func (fake *ChaincodeStub) GetBindingCalls(stub func() ([]byte, error)) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = stub +} + +func (fake *ChaincodeStub) GetBindingReturns(result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + fake.getBindingReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetBindingReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getBindingMutex.Lock() + defer fake.getBindingMutex.Unlock() + fake.GetBindingStub = nil + if fake.getBindingReturnsOnCall == nil { + fake.getBindingReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getBindingReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake 
*ChaincodeStub) GetChannelID() string { + fake.getChannelIDMutex.Lock() + ret, specificReturn := fake.getChannelIDReturnsOnCall[len(fake.getChannelIDArgsForCall)] + fake.getChannelIDArgsForCall = append(fake.getChannelIDArgsForCall, struct { + }{}) + fake.recordInvocation("GetChannelID", []interface{}{}) + fake.getChannelIDMutex.Unlock() + if fake.GetChannelIDStub != nil { + return fake.GetChannelIDStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getChannelIDReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetChannelIDCallCount() int { + fake.getChannelIDMutex.RLock() + defer fake.getChannelIDMutex.RUnlock() + return len(fake.getChannelIDArgsForCall) +} + +func (fake *ChaincodeStub) GetChannelIDCalls(stub func() string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = stub +} + +func (fake *ChaincodeStub) GetChannelIDReturns(result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + fake.getChannelIDReturns = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetChannelIDReturnsOnCall(i int, result1 string) { + fake.getChannelIDMutex.Lock() + defer fake.getChannelIDMutex.Unlock() + fake.GetChannelIDStub = nil + if fake.getChannelIDReturnsOnCall == nil { + fake.getChannelIDReturnsOnCall = make(map[int]struct { + result1 string + }) + } + fake.getChannelIDReturnsOnCall[i] = struct { + result1 string + }{result1} +} + +func (fake *ChaincodeStub) GetCreator() ([]byte, error) { + fake.getCreatorMutex.Lock() + ret, specificReturn := fake.getCreatorReturnsOnCall[len(fake.getCreatorArgsForCall)] + fake.getCreatorArgsForCall = append(fake.getCreatorArgsForCall, struct { + }{}) + fake.recordInvocation("GetCreator", []interface{}{}) + fake.getCreatorMutex.Unlock() + if fake.GetCreatorStub != nil { + return fake.GetCreatorStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + 
fakeReturns := fake.getCreatorReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetCreatorCallCount() int { + fake.getCreatorMutex.RLock() + defer fake.getCreatorMutex.RUnlock() + return len(fake.getCreatorArgsForCall) +} + +func (fake *ChaincodeStub) GetCreatorCalls(stub func() ([]byte, error)) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = stub +} + +func (fake *ChaincodeStub) GetCreatorReturns(result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + fake.getCreatorReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetCreatorReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getCreatorMutex.Lock() + defer fake.getCreatorMutex.Unlock() + fake.GetCreatorStub = nil + if fake.getCreatorReturnsOnCall == nil { + fake.getCreatorReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getCreatorReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetDecorations() map[string][]byte { + fake.getDecorationsMutex.Lock() + ret, specificReturn := fake.getDecorationsReturnsOnCall[len(fake.getDecorationsArgsForCall)] + fake.getDecorationsArgsForCall = append(fake.getDecorationsArgsForCall, struct { + }{}) + fake.recordInvocation("GetDecorations", []interface{}{}) + fake.getDecorationsMutex.Unlock() + if fake.GetDecorationsStub != nil { + return fake.GetDecorationsStub() + } + if specificReturn { + return ret.result1 + } + fakeReturns := fake.getDecorationsReturns + return fakeReturns.result1 +} + +func (fake *ChaincodeStub) GetDecorationsCallCount() int { + fake.getDecorationsMutex.RLock() + defer fake.getDecorationsMutex.RUnlock() + return len(fake.getDecorationsArgsForCall) +} + +func (fake *ChaincodeStub) GetDecorationsCalls(stub func() map[string][]byte) { + 
fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = stub +} + +func (fake *ChaincodeStub) GetDecorationsReturns(result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + fake.getDecorationsReturns = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetDecorationsReturnsOnCall(i int, result1 map[string][]byte) { + fake.getDecorationsMutex.Lock() + defer fake.getDecorationsMutex.Unlock() + fake.GetDecorationsStub = nil + if fake.getDecorationsReturnsOnCall == nil { + fake.getDecorationsReturnsOnCall = make(map[int]struct { + result1 map[string][]byte + }) + } + fake.getDecorationsReturnsOnCall[i] = struct { + result1 map[string][]byte + }{result1} +} + +func (fake *ChaincodeStub) GetFunctionAndParameters() (string, []string) { + fake.getFunctionAndParametersMutex.Lock() + ret, specificReturn := fake.getFunctionAndParametersReturnsOnCall[len(fake.getFunctionAndParametersArgsForCall)] + fake.getFunctionAndParametersArgsForCall = append(fake.getFunctionAndParametersArgsForCall, struct { + }{}) + fake.recordInvocation("GetFunctionAndParameters", []interface{}{}) + fake.getFunctionAndParametersMutex.Unlock() + if fake.GetFunctionAndParametersStub != nil { + return fake.GetFunctionAndParametersStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getFunctionAndParametersReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetFunctionAndParametersCallCount() int { + fake.getFunctionAndParametersMutex.RLock() + defer fake.getFunctionAndParametersMutex.RUnlock() + return len(fake.getFunctionAndParametersArgsForCall) +} + +func (fake *ChaincodeStub) GetFunctionAndParametersCalls(stub func() (string, []string)) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + 
fake.GetFunctionAndParametersStub = stub +} + +func (fake *ChaincodeStub) GetFunctionAndParametersReturns(result1 string, result2 []string) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + fake.GetFunctionAndParametersStub = nil + fake.getFunctionAndParametersReturns = struct { + result1 string + result2 []string + }{result1, result2} +} + +func (fake *ChaincodeStub) GetFunctionAndParametersReturnsOnCall(i int, result1 string, result2 []string) { + fake.getFunctionAndParametersMutex.Lock() + defer fake.getFunctionAndParametersMutex.Unlock() + fake.GetFunctionAndParametersStub = nil + if fake.getFunctionAndParametersReturnsOnCall == nil { + fake.getFunctionAndParametersReturnsOnCall = make(map[int]struct { + result1 string + result2 []string + }) + } + fake.getFunctionAndParametersReturnsOnCall[i] = struct { + result1 string + result2 []string + }{result1, result2} +} + +func (fake *ChaincodeStub) GetHistoryForKey(arg1 string) (shim.HistoryQueryIteratorInterface, error) { + fake.getHistoryForKeyMutex.Lock() + ret, specificReturn := fake.getHistoryForKeyReturnsOnCall[len(fake.getHistoryForKeyArgsForCall)] + fake.getHistoryForKeyArgsForCall = append(fake.getHistoryForKeyArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetHistoryForKey", []interface{}{arg1}) + fake.getHistoryForKeyMutex.Unlock() + if fake.GetHistoryForKeyStub != nil { + return fake.GetHistoryForKeyStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getHistoryForKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetHistoryForKeyCallCount() int { + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + return len(fake.getHistoryForKeyArgsForCall) +} + +func (fake *ChaincodeStub) GetHistoryForKeyCalls(stub func(string) (shim.HistoryQueryIteratorInterface, error)) { + fake.getHistoryForKeyMutex.Lock() + defer 
fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = stub +} + +func (fake *ChaincodeStub) GetHistoryForKeyArgsForCall(i int) string { + fake.getHistoryForKeyMutex.RLock() + defer fake.getHistoryForKeyMutex.RUnlock() + argsForCall := fake.getHistoryForKeyArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetHistoryForKeyReturns(result1 shim.HistoryQueryIteratorInterface, result2 error) { + fake.getHistoryForKeyMutex.Lock() + defer fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = nil + fake.getHistoryForKeyReturns = struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetHistoryForKeyReturnsOnCall(i int, result1 shim.HistoryQueryIteratorInterface, result2 error) { + fake.getHistoryForKeyMutex.Lock() + defer fake.getHistoryForKeyMutex.Unlock() + fake.GetHistoryForKeyStub = nil + if fake.getHistoryForKeyReturnsOnCall == nil { + fake.getHistoryForKeyReturnsOnCall = make(map[int]struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }) + } + fake.getHistoryForKeyReturnsOnCall[i] = struct { + result1 shim.HistoryQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateData(arg1 string, arg2 string) ([]byte, error) { + fake.getPrivateDataMutex.Lock() + ret, specificReturn := fake.getPrivateDataReturnsOnCall[len(fake.getPrivateDataArgsForCall)] + fake.getPrivateDataArgsForCall = append(fake.getPrivateDataArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateData", []interface{}{arg1, arg2}) + fake.getPrivateDataMutex.Unlock() + if fake.GetPrivateDataStub != nil { + return fake.GetPrivateDataStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataCallCount() int { + 
fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + return len(fake.getPrivateDataArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataCalls(stub func(string, string) ([]byte, error)) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataArgsForCall(i int) (string, string) { + fake.getPrivateDataMutex.RLock() + defer fake.getPrivateDataMutex.RUnlock() + argsForCall := fake.getPrivateDataArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetPrivateDataReturns(result1 []byte, result2 error) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = nil + fake.getPrivateDataReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getPrivateDataMutex.Lock() + defer fake.getPrivateDataMutex.Unlock() + fake.GetPrivateDataStub = nil + if fake.getPrivateDataReturnsOnCall == nil { + fake.getPrivateDataReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getPrivateDataReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKey(arg1 string, arg2 string, arg3 []string) (shim.StateQueryIteratorInterface, error) { + var arg3Copy []string + if arg3 != nil { + arg3Copy = make([]string, len(arg3)) + copy(arg3Copy, arg3) + } + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + ret, specificReturn := fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[len(fake.getPrivateDataByPartialCompositeKeyArgsForCall)] + fake.getPrivateDataByPartialCompositeKeyArgsForCall = append(fake.getPrivateDataByPartialCompositeKeyArgsForCall, struct { + arg1 string + arg2 string + arg3 []string + }{arg1, arg2, arg3Copy}) + 
fake.recordInvocation("GetPrivateDataByPartialCompositeKey", []interface{}{arg1, arg2, arg3Copy}) + fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + if fake.GetPrivateDataByPartialCompositeKeyStub != nil { + return fake.GetPrivateDataByPartialCompositeKeyStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataByPartialCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCallCount() int { + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + return len(fake.getPrivateDataByPartialCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyCalls(stub func(string, string, []string) (shim.StateQueryIteratorInterface, error)) { + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + fake.GetPrivateDataByPartialCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyArgsForCall(i int) (string, string, []string) { + fake.getPrivateDataByPartialCompositeKeyMutex.RLock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock() + argsForCall := fake.getPrivateDataByPartialCompositeKeyArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + fake.GetPrivateDataByPartialCompositeKeyStub = nil + fake.getPrivateDataByPartialCompositeKeyReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByPartialCompositeKeyReturnsOnCall(i int, result1 
shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByPartialCompositeKeyMutex.Lock() + defer fake.getPrivateDataByPartialCompositeKeyMutex.Unlock() + fake.GetPrivateDataByPartialCompositeKeyStub = nil + if fake.getPrivateDataByPartialCompositeKeyReturnsOnCall == nil { + fake.getPrivateDataByPartialCompositeKeyReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getPrivateDataByPartialCompositeKeyReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByRange(arg1 string, arg2 string, arg3 string) (shim.StateQueryIteratorInterface, error) { + fake.getPrivateDataByRangeMutex.Lock() + ret, specificReturn := fake.getPrivateDataByRangeReturnsOnCall[len(fake.getPrivateDataByRangeArgsForCall)] + fake.getPrivateDataByRangeArgsForCall = append(fake.getPrivateDataByRangeArgsForCall, struct { + arg1 string + arg2 string + arg3 string + }{arg1, arg2, arg3}) + fake.recordInvocation("GetPrivateDataByRange", []interface{}{arg1, arg2, arg3}) + fake.getPrivateDataByRangeMutex.Unlock() + if fake.GetPrivateDataByRangeStub != nil { + return fake.GetPrivateDataByRangeStub(arg1, arg2, arg3) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataByRangeReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeCallCount() int { + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + return len(fake.getPrivateDataByRangeArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeCalls(stub func(string, string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getPrivateDataByRangeMutex.Lock() + defer fake.getPrivateDataByRangeMutex.Unlock() + fake.GetPrivateDataByRangeStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeArgsForCall(i int) (string, 
string, string) { + fake.getPrivateDataByRangeMutex.RLock() + defer fake.getPrivateDataByRangeMutex.RUnlock() + argsForCall := fake.getPrivateDataByRangeArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByRangeMutex.Lock() + defer fake.getPrivateDataByRangeMutex.Unlock() + fake.GetPrivateDataByRangeStub = nil + fake.getPrivateDataByRangeReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataByRangeMutex.Lock() + defer fake.getPrivateDataByRangeMutex.Unlock() + fake.GetPrivateDataByRangeStub = nil + if fake.getPrivateDataByRangeReturnsOnCall == nil { + fake.getPrivateDataByRangeReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getPrivateDataByRangeReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataHash(arg1 string, arg2 string) ([]byte, error) { + fake.getPrivateDataHashMutex.Lock() + ret, specificReturn := fake.getPrivateDataHashReturnsOnCall[len(fake.getPrivateDataHashArgsForCall)] + fake.getPrivateDataHashArgsForCall = append(fake.getPrivateDataHashArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateDataHash", []interface{}{arg1, arg2}) + fake.getPrivateDataHashMutex.Unlock() + if fake.GetPrivateDataHashStub != nil { + return fake.GetPrivateDataHashStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataHashReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataHashCallCount() 
int { + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + return len(fake.getPrivateDataHashArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataHashCalls(stub func(string, string) ([]byte, error)) { + fake.getPrivateDataHashMutex.Lock() + defer fake.getPrivateDataHashMutex.Unlock() + fake.GetPrivateDataHashStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataHashArgsForCall(i int) (string, string) { + fake.getPrivateDataHashMutex.RLock() + defer fake.getPrivateDataHashMutex.RUnlock() + argsForCall := fake.getPrivateDataHashArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetPrivateDataHashReturns(result1 []byte, result2 error) { + fake.getPrivateDataHashMutex.Lock() + defer fake.getPrivateDataHashMutex.Unlock() + fake.GetPrivateDataHashStub = nil + fake.getPrivateDataHashReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataHashReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getPrivateDataHashMutex.Lock() + defer fake.getPrivateDataHashMutex.Unlock() + fake.GetPrivateDataHashStub = nil + if fake.getPrivateDataHashReturnsOnCall == nil { + fake.getPrivateDataHashReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getPrivateDataHashReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResult(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) { + fake.getPrivateDataQueryResultMutex.Lock() + ret, specificReturn := fake.getPrivateDataQueryResultReturnsOnCall[len(fake.getPrivateDataQueryResultArgsForCall)] + fake.getPrivateDataQueryResultArgsForCall = append(fake.getPrivateDataQueryResultArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateDataQueryResult", []interface{}{arg1, arg2}) + 
fake.getPrivateDataQueryResultMutex.Unlock() + if fake.GetPrivateDataQueryResultStub != nil { + return fake.GetPrivateDataQueryResultStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataQueryResultReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultCallCount() int { + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + return len(fake.getPrivateDataQueryResultArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) { + fake.getPrivateDataQueryResultMutex.Lock() + defer fake.getPrivateDataQueryResultMutex.Unlock() + fake.GetPrivateDataQueryResultStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultArgsForCall(i int) (string, string) { + fake.getPrivateDataQueryResultMutex.RLock() + defer fake.getPrivateDataQueryResultMutex.RUnlock() + argsForCall := fake.getPrivateDataQueryResultArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataQueryResultMutex.Lock() + defer fake.getPrivateDataQueryResultMutex.Unlock() + fake.GetPrivateDataQueryResultStub = nil + fake.getPrivateDataQueryResultReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getPrivateDataQueryResultMutex.Lock() + defer fake.getPrivateDataQueryResultMutex.Unlock() + fake.GetPrivateDataQueryResultStub = nil + if fake.getPrivateDataQueryResultReturnsOnCall == nil { + fake.getPrivateDataQueryResultReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) 
+ } + fake.getPrivateDataQueryResultReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameter(arg1 string, arg2 string) ([]byte, error) { + fake.getPrivateDataValidationParameterMutex.Lock() + ret, specificReturn := fake.getPrivateDataValidationParameterReturnsOnCall[len(fake.getPrivateDataValidationParameterArgsForCall)] + fake.getPrivateDataValidationParameterArgsForCall = append(fake.getPrivateDataValidationParameterArgsForCall, struct { + arg1 string + arg2 string + }{arg1, arg2}) + fake.recordInvocation("GetPrivateDataValidationParameter", []interface{}{arg1, arg2}) + fake.getPrivateDataValidationParameterMutex.Unlock() + if fake.GetPrivateDataValidationParameterStub != nil { + return fake.GetPrivateDataValidationParameterStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getPrivateDataValidationParameterReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterCallCount() int { + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + return len(fake.getPrivateDataValidationParameterArgsForCall) +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterCalls(stub func(string, string) ([]byte, error)) { + fake.getPrivateDataValidationParameterMutex.Lock() + defer fake.getPrivateDataValidationParameterMutex.Unlock() + fake.GetPrivateDataValidationParameterStub = stub +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterArgsForCall(i int) (string, string) { + fake.getPrivateDataValidationParameterMutex.RLock() + defer fake.getPrivateDataValidationParameterMutex.RUnlock() + argsForCall := fake.getPrivateDataValidationParameterArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2 +} + +func (fake *ChaincodeStub) 
GetPrivateDataValidationParameterReturns(result1 []byte, result2 error) { + fake.getPrivateDataValidationParameterMutex.Lock() + defer fake.getPrivateDataValidationParameterMutex.Unlock() + fake.GetPrivateDataValidationParameterStub = nil + fake.getPrivateDataValidationParameterReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetPrivateDataValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getPrivateDataValidationParameterMutex.Lock() + defer fake.getPrivateDataValidationParameterMutex.Unlock() + fake.GetPrivateDataValidationParameterStub = nil + if fake.getPrivateDataValidationParameterReturnsOnCall == nil { + fake.getPrivateDataValidationParameterReturnsOnCall = make(map[int]struct { + result1 []byte + result2 error + }) + } + fake.getPrivateDataValidationParameterReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetQueryResult(arg1 string) (shim.StateQueryIteratorInterface, error) { + fake.getQueryResultMutex.Lock() + ret, specificReturn := fake.getQueryResultReturnsOnCall[len(fake.getQueryResultArgsForCall)] + fake.getQueryResultArgsForCall = append(fake.getQueryResultArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetQueryResult", []interface{}{arg1}) + fake.getQueryResultMutex.Unlock() + if fake.GetQueryResultStub != nil { + return fake.GetQueryResultStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getQueryResultReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetQueryResultCallCount() int { + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + return len(fake.getQueryResultArgsForCall) +} + +func (fake *ChaincodeStub) GetQueryResultCalls(stub func(string) (shim.StateQueryIteratorInterface, error)) { + fake.getQueryResultMutex.Lock() + defer 
fake.getQueryResultMutex.Unlock() + fake.GetQueryResultStub = stub +} + +func (fake *ChaincodeStub) GetQueryResultArgsForCall(i int) string { + fake.getQueryResultMutex.RLock() + defer fake.getQueryResultMutex.RUnlock() + argsForCall := fake.getQueryResultArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetQueryResultReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getQueryResultMutex.Lock() + defer fake.getQueryResultMutex.Unlock() + fake.GetQueryResultStub = nil + fake.getQueryResultReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetQueryResultReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getQueryResultMutex.Lock() + defer fake.getQueryResultMutex.Unlock() + fake.GetQueryResultStub = nil + if fake.getQueryResultReturnsOnCall == nil { + fake.getQueryResultReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getQueryResultReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetQueryResultWithPagination(arg1 string, arg2 int32, arg3 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + fake.getQueryResultWithPaginationMutex.Lock() + ret, specificReturn := fake.getQueryResultWithPaginationReturnsOnCall[len(fake.getQueryResultWithPaginationArgsForCall)] + fake.getQueryResultWithPaginationArgsForCall = append(fake.getQueryResultWithPaginationArgsForCall, struct { + arg1 string + arg2 int32 + arg3 string + }{arg1, arg2, arg3}) + fake.recordInvocation("GetQueryResultWithPagination", []interface{}{arg1, arg2, arg3}) + fake.getQueryResultWithPaginationMutex.Unlock() + if fake.GetQueryResultWithPaginationStub != nil { + return fake.GetQueryResultWithPaginationStub(arg1, arg2, arg3) + } + if specificReturn { + return 
ret.result1, ret.result2, ret.result3 + } + fakeReturns := fake.getQueryResultWithPaginationReturns + return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3 +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationCallCount() int { + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + return len(fake.getQueryResultWithPaginationArgsForCall) +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationCalls(stub func(string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) { + fake.getQueryResultWithPaginationMutex.Lock() + defer fake.getQueryResultWithPaginationMutex.Unlock() + fake.GetQueryResultWithPaginationStub = stub +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationArgsForCall(i int) (string, int32, string) { + fake.getQueryResultWithPaginationMutex.RLock() + defer fake.getQueryResultWithPaginationMutex.RUnlock() + argsForCall := fake.getQueryResultWithPaginationArgsForCall[i] + return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3 +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getQueryResultWithPaginationMutex.Lock() + defer fake.getQueryResultWithPaginationMutex.Unlock() + fake.GetQueryResultWithPaginationStub = nil + fake.getQueryResultWithPaginationReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetQueryResultWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) { + fake.getQueryResultWithPaginationMutex.Lock() + defer fake.getQueryResultWithPaginationMutex.Unlock() + fake.GetQueryResultWithPaginationStub = nil + if fake.getQueryResultWithPaginationReturnsOnCall == nil { + 
fake.getQueryResultWithPaginationReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }) + } + fake.getQueryResultWithPaginationReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 *peer.QueryResponseMetadata + result3 error + }{result1, result2, result3} +} + +func (fake *ChaincodeStub) GetSignedProposal() (*peer.SignedProposal, error) { + fake.getSignedProposalMutex.Lock() + ret, specificReturn := fake.getSignedProposalReturnsOnCall[len(fake.getSignedProposalArgsForCall)] + fake.getSignedProposalArgsForCall = append(fake.getSignedProposalArgsForCall, struct { + }{}) + fake.recordInvocation("GetSignedProposal", []interface{}{}) + fake.getSignedProposalMutex.Unlock() + if fake.GetSignedProposalStub != nil { + return fake.GetSignedProposalStub() + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getSignedProposalReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetSignedProposalCallCount() int { + fake.getSignedProposalMutex.RLock() + defer fake.getSignedProposalMutex.RUnlock() + return len(fake.getSignedProposalArgsForCall) +} + +func (fake *ChaincodeStub) GetSignedProposalCalls(stub func() (*peer.SignedProposal, error)) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + fake.GetSignedProposalStub = stub +} + +func (fake *ChaincodeStub) GetSignedProposalReturns(result1 *peer.SignedProposal, result2 error) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + fake.GetSignedProposalStub = nil + fake.getSignedProposalReturns = struct { + result1 *peer.SignedProposal + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetSignedProposalReturnsOnCall(i int, result1 *peer.SignedProposal, result2 error) { + fake.getSignedProposalMutex.Lock() + defer fake.getSignedProposalMutex.Unlock() + 
fake.GetSignedProposalStub = nil + if fake.getSignedProposalReturnsOnCall == nil { + fake.getSignedProposalReturnsOnCall = make(map[int]struct { + result1 *peer.SignedProposal + result2 error + }) + } + fake.getSignedProposalReturnsOnCall[i] = struct { + result1 *peer.SignedProposal + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetState(arg1 string) ([]byte, error) { + fake.getStateMutex.Lock() + ret, specificReturn := fake.getStateReturnsOnCall[len(fake.getStateArgsForCall)] + fake.getStateArgsForCall = append(fake.getStateArgsForCall, struct { + arg1 string + }{arg1}) + fake.recordInvocation("GetState", []interface{}{arg1}) + fake.getStateMutex.Unlock() + if fake.GetStateStub != nil { + return fake.GetStateStub(arg1) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateCallCount() int { + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + return len(fake.getStateArgsForCall) +} + +func (fake *ChaincodeStub) GetStateCalls(stub func(string) ([]byte, error)) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = stub +} + +func (fake *ChaincodeStub) GetStateArgsForCall(i int) string { + fake.getStateMutex.RLock() + defer fake.getStateMutex.RUnlock() + argsForCall := fake.getStateArgsForCall[i] + return argsForCall.arg1 +} + +func (fake *ChaincodeStub) GetStateReturns(result1 []byte, result2 error) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = nil + fake.getStateReturns = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateReturnsOnCall(i int, result1 []byte, result2 error) { + fake.getStateMutex.Lock() + defer fake.getStateMutex.Unlock() + fake.GetStateStub = nil + if fake.getStateReturnsOnCall == nil { + fake.getStateReturnsOnCall = make(map[int]struct { + 
result1 []byte + result2 error + }) + } + fake.getStateReturnsOnCall[i] = struct { + result1 []byte + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKey(arg1 string, arg2 []string) (shim.StateQueryIteratorInterface, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.getStateByPartialCompositeKeyMutex.Lock() + ret, specificReturn := fake.getStateByPartialCompositeKeyReturnsOnCall[len(fake.getStateByPartialCompositeKeyArgsForCall)] + fake.getStateByPartialCompositeKeyArgsForCall = append(fake.getStateByPartialCompositeKeyArgsForCall, struct { + arg1 string + arg2 []string + }{arg1, arg2Copy}) + fake.recordInvocation("GetStateByPartialCompositeKey", []interface{}{arg1, arg2Copy}) + fake.getStateByPartialCompositeKeyMutex.Unlock() + if fake.GetStateByPartialCompositeKeyStub != nil { + return fake.GetStateByPartialCompositeKeyStub(arg1, arg2) + } + if specificReturn { + return ret.result1, ret.result2 + } + fakeReturns := fake.getStateByPartialCompositeKeyReturns + return fakeReturns.result1, fakeReturns.result2 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCallCount() int { + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + return len(fake.getStateByPartialCompositeKeyArgsForCall) +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyCalls(stub func(string, []string) (shim.StateQueryIteratorInterface, error)) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = stub +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyArgsForCall(i int) (string, []string) { + fake.getStateByPartialCompositeKeyMutex.RLock() + defer fake.getStateByPartialCompositeKeyMutex.RUnlock() + argsForCall := fake.getStateByPartialCompositeKeyArgsForCall[i] + return argsForCall.arg1, 
argsForCall.arg2 +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturns(result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + fake.getStateByPartialCompositeKeyReturns = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) { + fake.getStateByPartialCompositeKeyMutex.Lock() + defer fake.getStateByPartialCompositeKeyMutex.Unlock() + fake.GetStateByPartialCompositeKeyStub = nil + if fake.getStateByPartialCompositeKeyReturnsOnCall == nil { + fake.getStateByPartialCompositeKeyReturnsOnCall = make(map[int]struct { + result1 shim.StateQueryIteratorInterface + result2 error + }) + } + fake.getStateByPartialCompositeKeyReturnsOnCall[i] = struct { + result1 shim.StateQueryIteratorInterface + result2 error + }{result1, result2} +} + +func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPagination(arg1 string, arg2 []string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) { + var arg2Copy []string + if arg2 != nil { + arg2Copy = make([]string, len(arg2)) + copy(arg2Copy, arg2) + } + fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock() + ret, specificReturn := fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)] + fake.getStateByPartialCompositeKeyWithPaginationArgsForCall = append(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall, struct { + arg1 string + arg2 []string + arg3 int32 + arg4 string + }{arg1, arg2Copy, arg3, arg4}) + fake.recordInvocation("GetStateByPartialCompositeKeyWithPagination", []interface{}{arg1, arg2Copy, arg3, arg4}) + 
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	if fake.GetStateByPartialCompositeKeyWithPaginationStub != nil {
+		return fake.GetStateByPartialCompositeKeyWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByPartialCompositeKeyWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCallCount() int {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	return len(fake.getStateByPartialCompositeKeyWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationCalls(stub func(string, []string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationArgsForCall(i int) (string, []string, int32, string) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByPartialCompositeKeyWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	fake.getStateByPartialCompositeKeyWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByPartialCompositeKeyWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.Lock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.Unlock()
+	fake.GetStateByPartialCompositeKeyWithPaginationStub = nil
+	if fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall == nil {
+		fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByPartialCompositeKeyWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRange(arg1 string, arg2 string) (shim.StateQueryIteratorInterface, error) {
+	fake.getStateByRangeMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeReturnsOnCall[len(fake.getStateByRangeArgsForCall)]
+	fake.getStateByRangeArgsForCall = append(fake.getStateByRangeArgsForCall, struct {
+		arg1 string
+		arg2 string
+	}{arg1, arg2})
+	fake.recordInvocation("GetStateByRange", []interface{}{arg1, arg2})
+	fake.getStateByRangeMutex.Unlock()
+	if fake.GetStateByRangeStub != nil {
+		return fake.GetStateByRangeStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateByRangeReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCallCount() int {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	return len(fake.getStateByRangeArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeCalls(stub func(string, string) (shim.StateQueryIteratorInterface, error)) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeArgsForCall(i int) (string, string) {
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	argsForCall := fake.getStateByRangeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturns(result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	fake.getStateByRangeReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 error) {
+	fake.getStateByRangeMutex.Lock()
+	defer fake.getStateByRangeMutex.Unlock()
+	fake.GetStateByRangeStub = nil
+	if fake.getStateByRangeReturnsOnCall == nil {
+		fake.getStateByRangeReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 error
+		})
+	}
+	fake.getStateByRangeReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPagination(arg1 string, arg2 string, arg3 int32, arg4 string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	ret, specificReturn := fake.getStateByRangeWithPaginationReturnsOnCall[len(fake.getStateByRangeWithPaginationArgsForCall)]
+	fake.getStateByRangeWithPaginationArgsForCall = append(fake.getStateByRangeWithPaginationArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 int32
+		arg4 string
+	}{arg1, arg2, arg3, arg4})
+	fake.recordInvocation("GetStateByRangeWithPagination", []interface{}{arg1, arg2, arg3, arg4})
+	fake.getStateByRangeWithPaginationMutex.Unlock()
+	if fake.GetStateByRangeWithPaginationStub != nil {
+		return fake.GetStateByRangeWithPaginationStub(arg1, arg2, arg3, arg4)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.getStateByRangeWithPaginationReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCallCount() int {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	return len(fake.getStateByRangeWithPaginationArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationCalls(stub func(string, string, int32, string) (shim.StateQueryIteratorInterface, *peer.QueryResponseMetadata, error)) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationArgsForCall(i int) (string, string, int32, string) {
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	argsForCall := fake.getStateByRangeWithPaginationArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3, argsForCall.arg4
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturns(result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	fake.getStateByRangeWithPaginationReturns = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateByRangeWithPaginationReturnsOnCall(i int, result1 shim.StateQueryIteratorInterface, result2 *peer.QueryResponseMetadata, result3 error) {
+	fake.getStateByRangeWithPaginationMutex.Lock()
+	defer fake.getStateByRangeWithPaginationMutex.Unlock()
+	fake.GetStateByRangeWithPaginationStub = nil
+	if fake.getStateByRangeWithPaginationReturnsOnCall == nil {
+		fake.getStateByRangeWithPaginationReturnsOnCall = make(map[int]struct {
+			result1 shim.StateQueryIteratorInterface
+			result2 *peer.QueryResponseMetadata
+			result3 error
+		})
+	}
+	fake.getStateByRangeWithPaginationReturnsOnCall[i] = struct {
+		result1 shim.StateQueryIteratorInterface
+		result2 *peer.QueryResponseMetadata
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameter(arg1 string) ([]byte, error) {
+	fake.getStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.getStateValidationParameterReturnsOnCall[len(fake.getStateValidationParameterArgsForCall)]
+	fake.getStateValidationParameterArgsForCall = append(fake.getStateValidationParameterArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("GetStateValidationParameter", []interface{}{arg1})
+	fake.getStateValidationParameterMutex.Unlock()
+	if fake.GetStateValidationParameterStub != nil {
+		return fake.GetStateValidationParameterStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getStateValidationParameterReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterCallCount() int {
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	return len(fake.getStateValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterCalls(stub func(string) ([]byte, error)) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterArgsForCall(i int) string {
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	argsForCall := fake.getStateValidationParameterArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterReturns(result1 []byte, result2 error) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = nil
+	fake.getStateValidationParameterReturns = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStateValidationParameterReturnsOnCall(i int, result1 []byte, result2 error) {
+	fake.getStateValidationParameterMutex.Lock()
+	defer fake.getStateValidationParameterMutex.Unlock()
+	fake.GetStateValidationParameterStub = nil
+	if fake.getStateValidationParameterReturnsOnCall == nil {
+		fake.getStateValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 []byte
+			result2 error
+		})
+	}
+	fake.getStateValidationParameterReturnsOnCall[i] = struct {
+		result1 []byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetStringArgs() []string {
+	fake.getStringArgsMutex.Lock()
+	ret, specificReturn := fake.getStringArgsReturnsOnCall[len(fake.getStringArgsArgsForCall)]
+	fake.getStringArgsArgsForCall = append(fake.getStringArgsArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetStringArgs", []interface{}{})
+	fake.getStringArgsMutex.Unlock()
+	if fake.GetStringArgsStub != nil {
+		return fake.GetStringArgsStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getStringArgsReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetStringArgsCallCount() int {
+	fake.getStringArgsMutex.RLock()
+	defer fake.getStringArgsMutex.RUnlock()
+	return len(fake.getStringArgsArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetStringArgsCalls(stub func() []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = stub
+}
+
+func (fake *ChaincodeStub) GetStringArgsReturns(result1 []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = nil
+	fake.getStringArgsReturns = struct {
+		result1 []string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetStringArgsReturnsOnCall(i int, result1 []string) {
+	fake.getStringArgsMutex.Lock()
+	defer fake.getStringArgsMutex.Unlock()
+	fake.GetStringArgsStub = nil
+	if fake.getStringArgsReturnsOnCall == nil {
+		fake.getStringArgsReturnsOnCall = make(map[int]struct {
+			result1 []string
+		})
+	}
+	fake.getStringArgsReturnsOnCall[i] = struct {
+		result1 []string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetTransient() (map[string][]byte, error) {
+	fake.getTransientMutex.Lock()
+	ret, specificReturn := fake.getTransientReturnsOnCall[len(fake.getTransientArgsForCall)]
+	fake.getTransientArgsForCall = append(fake.getTransientArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTransient", []interface{}{})
+	fake.getTransientMutex.Unlock()
+	if fake.GetTransientStub != nil {
+		return fake.GetTransientStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getTransientReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetTransientCallCount() int {
+	fake.getTransientMutex.RLock()
+	defer fake.getTransientMutex.RUnlock()
+	return len(fake.getTransientArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetTransientCalls(stub func() (map[string][]byte, error)) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = stub
+}
+
+func (fake *ChaincodeStub) GetTransientReturns(result1 map[string][]byte, result2 error) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = nil
+	fake.getTransientReturns = struct {
+		result1 map[string][]byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTransientReturnsOnCall(i int, result1 map[string][]byte, result2 error) {
+	fake.getTransientMutex.Lock()
+	defer fake.getTransientMutex.Unlock()
+	fake.GetTransientStub = nil
+	if fake.getTransientReturnsOnCall == nil {
+		fake.getTransientReturnsOnCall = make(map[int]struct {
+			result1 map[string][]byte
+			result2 error
+		})
+	}
+	fake.getTransientReturnsOnCall[i] = struct {
+		result1 map[string][]byte
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTxID() string {
+	fake.getTxIDMutex.Lock()
+	ret, specificReturn := fake.getTxIDReturnsOnCall[len(fake.getTxIDArgsForCall)]
+	fake.getTxIDArgsForCall = append(fake.getTxIDArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTxID", []interface{}{})
+	fake.getTxIDMutex.Unlock()
+	if fake.GetTxIDStub != nil {
+		return fake.GetTxIDStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getTxIDReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) GetTxIDCallCount() int {
+	fake.getTxIDMutex.RLock()
+	defer fake.getTxIDMutex.RUnlock()
+	return len(fake.getTxIDArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetTxIDCalls(stub func() string) {
+	fake.getTxIDMutex.Lock()
+	defer fake.getTxIDMutex.Unlock()
+	fake.GetTxIDStub = stub
+}
+
+func (fake *ChaincodeStub) GetTxIDReturns(result1 string) {
+	fake.getTxIDMutex.Lock()
+	defer fake.getTxIDMutex.Unlock()
+	fake.GetTxIDStub = nil
+	fake.getTxIDReturns = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetTxIDReturnsOnCall(i int, result1 string) {
+	fake.getTxIDMutex.Lock()
+	defer fake.getTxIDMutex.Unlock()
+	fake.GetTxIDStub = nil
+	if fake.getTxIDReturnsOnCall == nil {
+		fake.getTxIDReturnsOnCall = make(map[int]struct {
+			result1 string
+		})
+	}
+	fake.getTxIDReturnsOnCall[i] = struct {
+		result1 string
+	}{result1}
+}
+
+func (fake *ChaincodeStub) GetTxTimestamp() (*timestamp.Timestamp, error) {
+	fake.getTxTimestampMutex.Lock()
+	ret, specificReturn := fake.getTxTimestampReturnsOnCall[len(fake.getTxTimestampArgsForCall)]
+	fake.getTxTimestampArgsForCall = append(fake.getTxTimestampArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetTxTimestamp", []interface{}{})
+	fake.getTxTimestampMutex.Unlock()
+	if fake.GetTxTimestampStub != nil {
+		return fake.GetTxTimestampStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.getTxTimestampReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *ChaincodeStub) GetTxTimestampCallCount() int {
+	fake.getTxTimestampMutex.RLock()
+	defer fake.getTxTimestampMutex.RUnlock()
+	return len(fake.getTxTimestampArgsForCall)
+}
+
+func (fake *ChaincodeStub) GetTxTimestampCalls(stub func() (*timestamp.Timestamp, error)) {
+	fake.getTxTimestampMutex.Lock()
+	defer fake.getTxTimestampMutex.Unlock()
+	fake.GetTxTimestampStub = stub
+}
+
+func (fake *ChaincodeStub) GetTxTimestampReturns(result1 *timestamp.Timestamp, result2 error) {
+	fake.getTxTimestampMutex.Lock()
+	defer fake.getTxTimestampMutex.Unlock()
+	fake.GetTxTimestampStub = nil
+	fake.getTxTimestampReturns = struct {
+		result1 *timestamp.Timestamp
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) GetTxTimestampReturnsOnCall(i int, result1 *timestamp.Timestamp, result2 error) {
+	fake.getTxTimestampMutex.Lock()
+	defer fake.getTxTimestampMutex.Unlock()
+	fake.GetTxTimestampStub = nil
+	if fake.getTxTimestampReturnsOnCall == nil {
+		fake.getTxTimestampReturnsOnCall = make(map[int]struct {
+			result1 *timestamp.Timestamp
+			result2 error
+		})
+	}
+	fake.getTxTimestampReturnsOnCall[i] = struct {
+		result1 *timestamp.Timestamp
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *ChaincodeStub) InvokeChaincode(arg1 string, arg2 [][]byte, arg3 string) peer.Response {
+	var arg2Copy [][]byte
+	if arg2 != nil {
+		arg2Copy = make([][]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.invokeChaincodeMutex.Lock()
+	ret, specificReturn := fake.invokeChaincodeReturnsOnCall[len(fake.invokeChaincodeArgsForCall)]
+	fake.invokeChaincodeArgsForCall = append(fake.invokeChaincodeArgsForCall, struct {
+		arg1 string
+		arg2 [][]byte
+		arg3 string
+	}{arg1, arg2Copy, arg3})
+	fake.recordInvocation("InvokeChaincode", []interface{}{arg1, arg2Copy, arg3})
+	fake.invokeChaincodeMutex.Unlock()
+	if fake.InvokeChaincodeStub != nil {
+		return fake.InvokeChaincodeStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.invokeChaincodeReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeCallCount() int {
+	fake.invokeChaincodeMutex.RLock()
+	defer fake.invokeChaincodeMutex.RUnlock()
+	return len(fake.invokeChaincodeArgsForCall)
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeCalls(stub func(string, [][]byte, string) peer.Response) {
+	fake.invokeChaincodeMutex.Lock()
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = stub
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeArgsForCall(i int) (string, [][]byte, string) {
+	fake.invokeChaincodeMutex.RLock()
+	defer fake.invokeChaincodeMutex.RUnlock()
+	argsForCall := fake.invokeChaincodeArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeReturns(result1 peer.Response) {
+	fake.invokeChaincodeMutex.Lock()
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = nil
+	fake.invokeChaincodeReturns = struct {
+		result1 peer.Response
+	}{result1}
+}
+
+func (fake *ChaincodeStub) InvokeChaincodeReturnsOnCall(i int, result1 peer.Response) {
+	fake.invokeChaincodeMutex.Lock()
+	defer fake.invokeChaincodeMutex.Unlock()
+	fake.InvokeChaincodeStub = nil
+	if fake.invokeChaincodeReturnsOnCall == nil {
+		fake.invokeChaincodeReturnsOnCall = make(map[int]struct {
+			result1 peer.Response
+		})
+	}
+	fake.invokeChaincodeReturnsOnCall[i] = struct {
+		result1 peer.Response
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutPrivateData(arg1 string, arg2 string, arg3 []byte) error {
+	var arg3Copy []byte
+	if arg3 != nil {
+		arg3Copy = make([]byte, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.putPrivateDataMutex.Lock()
+	ret, specificReturn := fake.putPrivateDataReturnsOnCall[len(fake.putPrivateDataArgsForCall)]
+	fake.putPrivateDataArgsForCall = append(fake.putPrivateDataArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []byte
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("PutPrivateData", []interface{}{arg1, arg2, arg3Copy})
+	fake.putPrivateDataMutex.Unlock()
+	if fake.PutPrivateDataStub != nil {
+		return fake.PutPrivateDataStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.putPrivateDataReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) PutPrivateDataCallCount() int {
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	return len(fake.putPrivateDataArgsForCall)
+}
+
+func (fake *ChaincodeStub) PutPrivateDataCalls(stub func(string, string, []byte) error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = stub
+}
+
+func (fake *ChaincodeStub) PutPrivateDataArgsForCall(i int) (string, string, []byte) {
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	argsForCall := fake.putPrivateDataArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) PutPrivateDataReturns(result1 error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = nil
+	fake.putPrivateDataReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutPrivateDataReturnsOnCall(i int, result1 error) {
+	fake.putPrivateDataMutex.Lock()
+	defer fake.putPrivateDataMutex.Unlock()
+	fake.PutPrivateDataStub = nil
+	if fake.putPrivateDataReturnsOnCall == nil {
+		fake.putPrivateDataReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.putPrivateDataReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutState(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.putStateMutex.Lock()
+	ret, specificReturn := fake.putStateReturnsOnCall[len(fake.putStateArgsForCall)]
+	fake.putStateArgsForCall = append(fake.putStateArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("PutState", []interface{}{arg1, arg2Copy})
+	fake.putStateMutex.Unlock()
+	if fake.PutStateStub != nil {
+		return fake.PutStateStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.putStateReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) PutStateCallCount() int {
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	return len(fake.putStateArgsForCall)
+}
+
+func (fake *ChaincodeStub) PutStateCalls(stub func(string, []byte) error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = stub
+}
+
+func (fake *ChaincodeStub) PutStateArgsForCall(i int) (string, []byte) {
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	argsForCall := fake.putStateArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) PutStateReturns(result1 error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = nil
+	fake.putStateReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) PutStateReturnsOnCall(i int, result1 error) {
+	fake.putStateMutex.Lock()
+	defer fake.putStateMutex.Unlock()
+	fake.PutStateStub = nil
+	if fake.putStateReturnsOnCall == nil {
+		fake.putStateReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.putStateReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetEvent(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.setEventMutex.Lock()
+	ret, specificReturn := fake.setEventReturnsOnCall[len(fake.setEventArgsForCall)]
+	fake.setEventArgsForCall = append(fake.setEventArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("SetEvent", []interface{}{arg1, arg2Copy})
+	fake.setEventMutex.Unlock()
+	if fake.SetEventStub != nil {
+		return fake.SetEventStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setEventReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetEventCallCount() int {
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	return len(fake.setEventArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetEventCalls(stub func(string, []byte) error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = stub
+}
+
+func (fake *ChaincodeStub) SetEventArgsForCall(i int) (string, []byte) {
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	argsForCall := fake.setEventArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) SetEventReturns(result1 error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = nil
+	fake.setEventReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetEventReturnsOnCall(i int, result1 error) {
+	fake.setEventMutex.Lock()
+	defer fake.setEventMutex.Unlock()
+	fake.SetEventStub = nil
+	if fake.setEventReturnsOnCall == nil {
+		fake.setEventReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setEventReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameter(arg1 string, arg2 string, arg3 []byte) error {
+	var arg3Copy []byte
+	if arg3 != nil {
+		arg3Copy = make([]byte, len(arg3))
+		copy(arg3Copy, arg3)
+	}
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	ret, specificReturn := fake.setPrivateDataValidationParameterReturnsOnCall[len(fake.setPrivateDataValidationParameterArgsForCall)]
+	fake.setPrivateDataValidationParameterArgsForCall = append(fake.setPrivateDataValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 string
+		arg3 []byte
+	}{arg1, arg2, arg3Copy})
+	fake.recordInvocation("SetPrivateDataValidationParameter", []interface{}{arg1, arg2, arg3Copy})
+	fake.setPrivateDataValidationParameterMutex.Unlock()
+	if fake.SetPrivateDataValidationParameterStub != nil {
+		return fake.SetPrivateDataValidationParameterStub(arg1, arg2, arg3)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setPrivateDataValidationParameterReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterCallCount() int {
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	return len(fake.setPrivateDataValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterCalls(stub func(string, string, []byte) error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterArgsForCall(i int) (string, string, []byte) {
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	argsForCall := fake.setPrivateDataValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2, argsForCall.arg3
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturns(result1 error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = nil
+	fake.setPrivateDataValidationParameterReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetPrivateDataValidationParameterReturnsOnCall(i int, result1 error) {
+	fake.setPrivateDataValidationParameterMutex.Lock()
+	defer fake.setPrivateDataValidationParameterMutex.Unlock()
+	fake.SetPrivateDataValidationParameterStub = nil
+	if fake.setPrivateDataValidationParameterReturnsOnCall == nil {
+		fake.setPrivateDataValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setPrivateDataValidationParameterReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameter(arg1 string, arg2 []byte) error {
+	var arg2Copy []byte
+	if arg2 != nil {
+		arg2Copy = make([]byte, len(arg2))
+		copy(arg2Copy, arg2)
+	}
+	fake.setStateValidationParameterMutex.Lock()
+	ret, specificReturn := fake.setStateValidationParameterReturnsOnCall[len(fake.setStateValidationParameterArgsForCall)]
+	fake.setStateValidationParameterArgsForCall = append(fake.setStateValidationParameterArgsForCall, struct {
+		arg1 string
+		arg2 []byte
+	}{arg1, arg2Copy})
+	fake.recordInvocation("SetStateValidationParameter", []interface{}{arg1, arg2Copy})
+	fake.setStateValidationParameterMutex.Unlock()
+	if fake.SetStateValidationParameterStub != nil {
+		return fake.SetStateValidationParameterStub(arg1, arg2)
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.setStateValidationParameterReturns
+	return fakeReturns.result1
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCallCount() int {
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	return len(fake.setStateValidationParameterArgsForCall)
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterCalls(stub func(string, []byte) error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = stub
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterArgsForCall(i int) (string, []byte) {
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	argsForCall := fake.setStateValidationParameterArgsForCall[i]
+	return argsForCall.arg1, argsForCall.arg2
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterReturns(result1 error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = nil
+	fake.setStateValidationParameterReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SetStateValidationParameterReturnsOnCall(i int, result1 error) {
+	fake.setStateValidationParameterMutex.Lock()
+	defer fake.setStateValidationParameterMutex.Unlock()
+	fake.SetStateValidationParameterStub = nil
+	if fake.setStateValidationParameterReturnsOnCall == nil {
+		fake.setStateValidationParameterReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.setStateValidationParameterReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *ChaincodeStub) SplitCompositeKey(arg1 string) (string, []string, error) {
+	fake.splitCompositeKeyMutex.Lock()
+	ret, specificReturn := fake.splitCompositeKeyReturnsOnCall[len(fake.splitCompositeKeyArgsForCall)]
+	fake.splitCompositeKeyArgsForCall = append(fake.splitCompositeKeyArgsForCall, struct {
+		arg1 string
+	}{arg1})
+	fake.recordInvocation("SplitCompositeKey", []interface{}{arg1})
+	fake.splitCompositeKeyMutex.Unlock()
+	if fake.SplitCompositeKeyStub != nil {
+		return fake.SplitCompositeKeyStub(arg1)
+	}
+	if specificReturn {
+		return ret.result1, ret.result2, ret.result3
+	}
+	fakeReturns := fake.splitCompositeKeyReturns
+	return fakeReturns.result1, fakeReturns.result2, fakeReturns.result3
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyCallCount() int {
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	return len(fake.splitCompositeKeyArgsForCall)
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyCalls(stub func(string) (string, []string, error)) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = stub
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyArgsForCall(i int) string {
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	argsForCall := fake.splitCompositeKeyArgsForCall[i]
+	return argsForCall.arg1
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyReturns(result1 string, result2 []string, result3 error) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = nil
+	fake.splitCompositeKeyReturns = struct {
+		result1 string
+		result2 []string
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) SplitCompositeKeyReturnsOnCall(i int, result1 string, result2 []string, result3 error) {
+	fake.splitCompositeKeyMutex.Lock()
+	defer fake.splitCompositeKeyMutex.Unlock()
+	fake.SplitCompositeKeyStub = nil
+	if fake.splitCompositeKeyReturnsOnCall == nil {
+		fake.splitCompositeKeyReturnsOnCall = make(map[int]struct {
+			result1 string
+			result2 []string
+			result3 error
+		})
+	}
+	fake.splitCompositeKeyReturnsOnCall[i] = struct {
+		result1 string
+		result2 []string
+		result3 error
+	}{result1, result2, result3}
+}
+
+func (fake *ChaincodeStub) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.createCompositeKeyMutex.RLock()
+	defer fake.createCompositeKeyMutex.RUnlock()
+	fake.delPrivateDataMutex.RLock()
+	defer fake.delPrivateDataMutex.RUnlock()
+	fake.delStateMutex.RLock()
+	defer fake.delStateMutex.RUnlock()
+	fake.getArgsMutex.RLock()
+	defer fake.getArgsMutex.RUnlock()
+	fake.getArgsSliceMutex.RLock()
+	defer fake.getArgsSliceMutex.RUnlock()
+	fake.getBindingMutex.RLock()
+	defer fake.getBindingMutex.RUnlock()
+	fake.getChannelIDMutex.RLock()
+	defer fake.getChannelIDMutex.RUnlock()
+	fake.getCreatorMutex.RLock()
+	defer fake.getCreatorMutex.RUnlock()
+	fake.getDecorationsMutex.RLock()
+	defer fake.getDecorationsMutex.RUnlock()
+	fake.getFunctionAndParametersMutex.RLock()
+	defer fake.getFunctionAndParametersMutex.RUnlock()
+	fake.getHistoryForKeyMutex.RLock()
+	defer fake.getHistoryForKeyMutex.RUnlock()
+	fake.getPrivateDataMutex.RLock()
+	defer fake.getPrivateDataMutex.RUnlock()
+	fake.getPrivateDataByPartialCompositeKeyMutex.RLock()
+	defer fake.getPrivateDataByPartialCompositeKeyMutex.RUnlock()
+	fake.getPrivateDataByRangeMutex.RLock()
+	defer fake.getPrivateDataByRangeMutex.RUnlock()
+	fake.getPrivateDataHashMutex.RLock()
+	defer fake.getPrivateDataHashMutex.RUnlock()
+	fake.getPrivateDataQueryResultMutex.RLock()
+	defer fake.getPrivateDataQueryResultMutex.RUnlock()
+	fake.getPrivateDataValidationParameterMutex.RLock()
+	defer fake.getPrivateDataValidationParameterMutex.RUnlock()
+	fake.getQueryResultMutex.RLock()
+	defer fake.getQueryResultMutex.RUnlock()
+	fake.getQueryResultWithPaginationMutex.RLock()
+	defer fake.getQueryResultWithPaginationMutex.RUnlock()
+	fake.getSignedProposalMutex.RLock()
+	defer fake.getSignedProposalMutex.RUnlock()
+	fake.getStateMutex.RLock()
+	defer fake.getStateMutex.RUnlock()
+	fake.getStateByPartialCompositeKeyMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyMutex.RUnlock()
+	fake.getStateByPartialCompositeKeyWithPaginationMutex.RLock()
+	defer fake.getStateByPartialCompositeKeyWithPaginationMutex.RUnlock()
+	fake.getStateByRangeMutex.RLock()
+	defer fake.getStateByRangeMutex.RUnlock()
+	fake.getStateByRangeWithPaginationMutex.RLock()
+	defer fake.getStateByRangeWithPaginationMutex.RUnlock()
+	fake.getStateValidationParameterMutex.RLock()
+	defer fake.getStateValidationParameterMutex.RUnlock()
+	fake.getStringArgsMutex.RLock()
+	defer fake.getStringArgsMutex.RUnlock()
+	fake.getTransientMutex.RLock()
+	defer fake.getTransientMutex.RUnlock()
+	fake.getTxIDMutex.RLock()
+	defer fake.getTxIDMutex.RUnlock()
+	fake.getTxTimestampMutex.RLock()
+	defer fake.getTxTimestampMutex.RUnlock()
+	fake.invokeChaincodeMutex.RLock()
+	defer fake.invokeChaincodeMutex.RUnlock()
+	fake.putPrivateDataMutex.RLock()
+	defer fake.putPrivateDataMutex.RUnlock()
+	fake.putStateMutex.RLock()
+	defer fake.putStateMutex.RUnlock()
+	fake.setEventMutex.RLock()
+	defer fake.setEventMutex.RUnlock()
+	fake.setPrivateDataValidationParameterMutex.RLock()
+	defer fake.setPrivateDataValidationParameterMutex.RUnlock()
+	fake.setStateValidationParameterMutex.RLock()
+	defer fake.setStateValidationParameterMutex.RUnlock()
+	fake.splitCompositeKeyMutex.RLock()
+	defer fake.splitCompositeKeyMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *ChaincodeStub) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go
new file mode 100644
index 0000000..27e3034
--- /dev/null
+++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/statequeryiterator.go
@@ -0,0 +1,232 @@
+// Code generated by counterfeiter. DO NOT EDIT.
+package mocks
+
+import (
+	"sync"
+
+	"github.com/hyperledger/fabric-protos-go/ledger/queryresult"
+)
+
+type StateQueryIterator struct {
+	CloseStub func() error
+	closeMutex sync.RWMutex
+	closeArgsForCall []struct {
+	}
+	closeReturns struct {
+		result1 error
+	}
+	closeReturnsOnCall map[int]struct {
+		result1 error
+	}
+	HasNextStub func() bool
+	hasNextMutex sync.RWMutex
+	hasNextArgsForCall []struct {
+	}
+	hasNextReturns struct {
+		result1 bool
+	}
+	hasNextReturnsOnCall map[int]struct {
+		result1 bool
+	}
+	NextStub func() (*queryresult.KV, error)
+	nextMutex sync.RWMutex
+	nextArgsForCall []struct {
+	}
+	nextReturns struct {
+		result1 *queryresult.KV
+		result2 error
+	}
+	nextReturnsOnCall map[int]struct {
+		result1 *queryresult.KV
+		result2 error
+	}
+	invocations map[string][][]interface{}
+	invocationsMutex sync.RWMutex
+}
+
+func (fake *StateQueryIterator) Close() error {
+	fake.closeMutex.Lock()
+	ret, specificReturn := fake.closeReturnsOnCall[len(fake.closeArgsForCall)]
+	fake.closeArgsForCall = append(fake.closeArgsForCall, struct {
+	}{})
+	fake.recordInvocation("Close", []interface{}{})
+	fake.closeMutex.Unlock()
+	if fake.CloseStub != nil {
+		return fake.CloseStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.closeReturns
+	return fakeReturns.result1
+}
+
+func (fake *StateQueryIterator) CloseCallCount() int {
+	fake.closeMutex.RLock()
+	defer fake.closeMutex.RUnlock()
+	return len(fake.closeArgsForCall)
+}
+
+func (fake *StateQueryIterator) CloseCalls(stub func() error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = stub
+}
+
+func (fake *StateQueryIterator) CloseReturns(result1 error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = nil
+	fake.closeReturns = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *StateQueryIterator) CloseReturnsOnCall(i int, result1 error) {
+	fake.closeMutex.Lock()
+	defer fake.closeMutex.Unlock()
+	fake.CloseStub = nil
+	if fake.closeReturnsOnCall == nil {
+		fake.closeReturnsOnCall = make(map[int]struct {
+			result1 error
+		})
+	}
+	fake.closeReturnsOnCall[i] = struct {
+		result1 error
+	}{result1}
+}
+
+func (fake *StateQueryIterator) HasNext() bool {
+	fake.hasNextMutex.Lock()
+	ret, specificReturn := fake.hasNextReturnsOnCall[len(fake.hasNextArgsForCall)]
+	fake.hasNextArgsForCall = append(fake.hasNextArgsForCall, struct {
+	}{})
+	fake.recordInvocation("HasNext", []interface{}{})
+	fake.hasNextMutex.Unlock()
+	if fake.HasNextStub != nil {
+		return fake.HasNextStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.hasNextReturns
+	return fakeReturns.result1
+}
+
+func (fake *StateQueryIterator) HasNextCallCount() int {
+	fake.hasNextMutex.RLock()
+	defer fake.hasNextMutex.RUnlock()
+	return len(fake.hasNextArgsForCall)
+}
+
+func (fake *StateQueryIterator) HasNextCalls(stub func() bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = stub
+}
+
+func (fake *StateQueryIterator) HasNextReturns(result1 bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = nil
+	fake.hasNextReturns = struct {
+		result1 bool
+	}{result1}
+}
+
+func (fake *StateQueryIterator) HasNextReturnsOnCall(i int, result1 bool) {
+	fake.hasNextMutex.Lock()
+	defer fake.hasNextMutex.Unlock()
+	fake.HasNextStub = nil
+	if fake.hasNextReturnsOnCall == nil {
+		fake.hasNextReturnsOnCall = make(map[int]struct {
+			result1 bool
+		})
+	}
+	fake.hasNextReturnsOnCall[i] = struct {
+		result1 bool
+	}{result1}
+}
+
+func (fake *StateQueryIterator) Next() (*queryresult.KV, error) {
+	fake.nextMutex.Lock()
+	ret, specificReturn := fake.nextReturnsOnCall[len(fake.nextArgsForCall)]
+	fake.nextArgsForCall = append(fake.nextArgsForCall, struct {
+	}{})
+	fake.recordInvocation("Next", []interface{}{})
+	fake.nextMutex.Unlock()
+	if fake.NextStub != nil {
+		return fake.NextStub()
+	}
+	if specificReturn {
+		return ret.result1, ret.result2
+	}
+	fakeReturns := fake.nextReturns
+	return fakeReturns.result1, fakeReturns.result2
+}
+
+func (fake *StateQueryIterator) NextCallCount() int {
+	fake.nextMutex.RLock()
+	defer fake.nextMutex.RUnlock()
+	return len(fake.nextArgsForCall)
+}
+
+func (fake *StateQueryIterator) NextCalls(stub func() (*queryresult.KV, error)) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = stub
+}
+
+func (fake *StateQueryIterator) NextReturns(result1 *queryresult.KV, result2 error) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = nil
+	fake.nextReturns = struct {
+		result1 *queryresult.KV
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *StateQueryIterator) NextReturnsOnCall(i int, result1 *queryresult.KV, result2 error) {
+	fake.nextMutex.Lock()
+	defer fake.nextMutex.Unlock()
+	fake.NextStub = nil
+	if fake.nextReturnsOnCall == nil {
+		fake.nextReturnsOnCall = make(map[int]struct {
+			result1 *queryresult.KV
+			result2 error
+		})
+	}
+	fake.nextReturnsOnCall[i] = struct {
+		result1 *queryresult.KV
+		result2 error
+	}{result1, result2}
+}
+
+func (fake *StateQueryIterator) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.closeMutex.RLock()
+	defer fake.closeMutex.RUnlock()
+	fake.hasNextMutex.RLock()
+	defer fake.hasNextMutex.RUnlock()
+	fake.nextMutex.RLock()
+	defer fake.nextMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *StateQueryIterator) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go
new file mode 100644
index 0000000..eea37db
--- /dev/null
+++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/mocks/transaction.go
@@ -0,0 +1,164 @@
+// Code generated by counterfeiter. DO NOT EDIT.
+package mocks
+
+import (
+	"sync"
+
+	"github.com/hyperledger/fabric-chaincode-go/pkg/cid"
+	"github.com/hyperledger/fabric-chaincode-go/shim"
+)
+
+type TransactionContext struct {
+	GetClientIdentityStub func() cid.ClientIdentity
+	getClientIdentityMutex sync.RWMutex
+	getClientIdentityArgsForCall []struct {
+	}
+	getClientIdentityReturns struct {
+		result1 cid.ClientIdentity
+	}
+	getClientIdentityReturnsOnCall map[int]struct {
+		result1 cid.ClientIdentity
+	}
+	GetStubStub func() shim.ChaincodeStubInterface
+	getStubMutex sync.RWMutex
+	getStubArgsForCall []struct {
+	}
+	getStubReturns struct {
+		result1 shim.ChaincodeStubInterface
+	}
+	getStubReturnsOnCall map[int]struct {
+		result1 shim.ChaincodeStubInterface
+	}
+	invocations map[string][][]interface{}
+	invocationsMutex sync.RWMutex
+}
+
+func (fake *TransactionContext) GetClientIdentity() cid.ClientIdentity {
+	fake.getClientIdentityMutex.Lock()
+	ret, specificReturn := fake.getClientIdentityReturnsOnCall[len(fake.getClientIdentityArgsForCall)]
+	fake.getClientIdentityArgsForCall = append(fake.getClientIdentityArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetClientIdentity", []interface{}{})
+	fake.getClientIdentityMutex.Unlock()
+	if fake.GetClientIdentityStub != nil {
+		return fake.GetClientIdentityStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getClientIdentityReturns
+	return fakeReturns.result1
+}
+
+func (fake *TransactionContext) GetClientIdentityCallCount() int {
+	fake.getClientIdentityMutex.RLock()
+	defer fake.getClientIdentityMutex.RUnlock()
+	return len(fake.getClientIdentityArgsForCall)
+}
+
+func (fake *TransactionContext) GetClientIdentityCalls(stub func() cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = stub
+}
+
+func (fake *TransactionContext) GetClientIdentityReturns(result1 cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = nil
+	fake.getClientIdentityReturns = struct {
+		result1 cid.ClientIdentity
+	}{result1}
+}
+
+func (fake *TransactionContext) GetClientIdentityReturnsOnCall(i int, result1 cid.ClientIdentity) {
+	fake.getClientIdentityMutex.Lock()
+	defer fake.getClientIdentityMutex.Unlock()
+	fake.GetClientIdentityStub = nil
+	if fake.getClientIdentityReturnsOnCall == nil {
+		fake.getClientIdentityReturnsOnCall = make(map[int]struct {
+			result1 cid.ClientIdentity
+		})
+	}
+	fake.getClientIdentityReturnsOnCall[i] = struct {
+		result1 cid.ClientIdentity
+	}{result1}
+}
+
+func (fake *TransactionContext) GetStub() shim.ChaincodeStubInterface {
+	fake.getStubMutex.Lock()
+	ret, specificReturn := fake.getStubReturnsOnCall[len(fake.getStubArgsForCall)]
+	fake.getStubArgsForCall = append(fake.getStubArgsForCall, struct {
+	}{})
+	fake.recordInvocation("GetStub", []interface{}{})
+	fake.getStubMutex.Unlock()
+	if fake.GetStubStub != nil {
+		return fake.GetStubStub()
+	}
+	if specificReturn {
+		return ret.result1
+	}
+	fakeReturns := fake.getStubReturns
+	return fakeReturns.result1
+}
+
+func (fake *TransactionContext) GetStubCallCount() int {
+	fake.getStubMutex.RLock()
+	defer fake.getStubMutex.RUnlock()
+	return len(fake.getStubArgsForCall)
+}
+
+func (fake *TransactionContext) GetStubCalls(stub func() shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = stub
+}
+
+func (fake *TransactionContext) GetStubReturns(result1 shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = nil
+	fake.getStubReturns = struct {
+		result1 shim.ChaincodeStubInterface
+	}{result1}
+}
+
+func (fake *TransactionContext) GetStubReturnsOnCall(i int, result1 shim.ChaincodeStubInterface) {
+	fake.getStubMutex.Lock()
+	defer fake.getStubMutex.Unlock()
+	fake.GetStubStub = nil
+	if fake.getStubReturnsOnCall == nil {
+		fake.getStubReturnsOnCall = make(map[int]struct {
+			result1 shim.ChaincodeStubInterface
+		})
+	}
+	fake.getStubReturnsOnCall[i] = struct {
+		result1 shim.ChaincodeStubInterface
+	}{result1}
+}
+
+func (fake *TransactionContext) Invocations() map[string][][]interface{} {
+	fake.invocationsMutex.RLock()
+	defer fake.invocationsMutex.RUnlock()
+	fake.getClientIdentityMutex.RLock()
+	defer fake.getClientIdentityMutex.RUnlock()
+	fake.getStubMutex.RLock()
+	defer fake.getStubMutex.RUnlock()
+	copiedInvocations := map[string][][]interface{}{}
+	for key, value := range fake.invocations {
+		copiedInvocations[key] = value
+	}
+	return copiedInvocations
+}
+
+func (fake *TransactionContext) recordInvocation(key string, args []interface{}) {
+	fake.invocationsMutex.Lock()
+	defer fake.invocationsMutex.Unlock()
+	if fake.invocations == nil {
+		fake.invocations = map[string][][]interface{}{}
+	}
+	if fake.invocations[key] == nil {
+		fake.invocations[key] = [][]interface{}{}
+	}
+	fake.invocations[key] = append(fake.invocations[key], args)
+}
diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go
new file mode 100644
index 0000000..71e8dd8
--- /dev/null
+++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract.go
@@ -0,0 +1,185 @@
+package chaincode
+
+import (
+	"encoding/json"
+	"fmt"
+
+	"github.com/hyperledger/fabric-contract-api-go/contractapi"
+)
+
+// SmartContract provides functions for managing an Asset
+type SmartContract struct {
+	contractapi.Contract
+}
+
+// Asset describes basic details of what makes up a simple asset
+type Asset struct {
+	ID             string `json:"ID"`
+	Color          string `json:"color"`
+	Size           int    `json:"size"`
+	Owner          string `json:"owner"`
+	AppraisedValue int    `json:"appraisedValue"`
+}
+
+// InitLedger adds a base set of assets to the ledger
+func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
+	assets := []Asset{
+		{ID: "asset1", Color: "blue", Size: 5, Owner: "Tomoko", AppraisedValue: 300},
+		{ID: "asset2", Color: "red", Size: 5, Owner: "Brad", AppraisedValue: 400},
+		{ID: "asset3", Color: "green", Size: 10, Owner: "Jin Soo", AppraisedValue: 500},
+		{ID: "asset4", Color: "yellow", Size: 10, Owner: "Max", AppraisedValue: 600},
+		{ID: "asset5", Color: "black", Size: 15, Owner: "Adriana", AppraisedValue: 700},
+		{ID: "asset6", Color: "white", Size: 15, Owner: "Michel", AppraisedValue: 800},
+	}
+
+	for _, asset := range assets {
+		assetJSON, err := json.Marshal(asset)
+		if err != nil {
+			return err
+		}
+
+		err = ctx.GetStub().PutState(asset.ID, assetJSON)
+		if err != nil {
+			return fmt.Errorf("failed to put to world state. %v", err)
+		}
+	}
+
+	return nil
+}
+
+// CreateAsset issues a new asset to the world state with given details.
+func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if exists {
+		return fmt.Errorf("the asset %s already exists", id)
+	}
+
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// ReadAsset returns the asset stored in the world state with given id.
+func (s *SmartContract) ReadAsset(ctx contractapi.TransactionContextInterface, id string) (*Asset, error) {
+	assetJSON, err := ctx.GetStub().GetState(id)
+	if err != nil {
+		return nil, fmt.Errorf("failed to read from world state: %v", err)
+	}
+	if assetJSON == nil {
+		return nil, fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	var asset Asset
+	err = json.Unmarshal(assetJSON, &asset)
+	if err != nil {
+		return nil, err
+	}
+
+	return &asset, nil
+}
+
+// UpdateAsset updates an existing asset in the world state with provided parameters.
+func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if !exists {
+		return fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	// overwriting original asset with new asset
+	asset := Asset{
+		ID:             id,
+		Color:          color,
+		Size:           size,
+		Owner:          owner,
+		AppraisedValue: appraisedValue,
+	}
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// DeleteAsset deletes an given asset from the world state.
+func (s *SmartContract) DeleteAsset(ctx contractapi.TransactionContextInterface, id string) error {
+	exists, err := s.AssetExists(ctx, id)
+	if err != nil {
+		return err
+	}
+	if !exists {
+		return fmt.Errorf("the asset %s does not exist", id)
+	}
+
+	return ctx.GetStub().DelState(id)
+}
+
+// AssetExists returns true when asset with given ID exists in world state
+func (s *SmartContract) AssetExists(ctx contractapi.TransactionContextInterface, id string) (bool, error) {
+	assetJSON, err := ctx.GetStub().GetState(id)
+	if err != nil {
+		return false, fmt.Errorf("failed to read from world state: %v", err)
+	}
+
+	return assetJSON != nil, nil
+}
+
+// TransferAsset updates the owner field of asset with given id in world state.
+func (s *SmartContract) TransferAsset(ctx contractapi.TransactionContextInterface, id string, newOwner string) error {
+	asset, err := s.ReadAsset(ctx, id)
+	if err != nil {
+		return err
+	}
+
+	asset.Owner = newOwner
+	assetJSON, err := json.Marshal(asset)
+	if err != nil {
+		return err
+	}
+
+	return ctx.GetStub().PutState(id, assetJSON)
+}
+
+// GetAllAssets returns all assets found in world state
+func (s *SmartContract) GetAllAssets(ctx contractapi.TransactionContextInterface) ([]*Asset, error) {
+	// range query with empty string for startKey and endKey does an
+	// open-ended query of all assets in the chaincode namespace.
+	resultsIterator, err := ctx.GetStub().GetStateByRange("", "")
+	if err != nil {
+		return nil, err
+	}
+	defer resultsIterator.Close()
+
+	var assets []*Asset
+	for resultsIterator.HasNext() {
+		queryResponse, err := resultsIterator.Next()
+		if err != nil {
+			return nil, err
+		}
+
+		var asset Asset
+		err = json.Unmarshal(queryResponse.Value, &asset)
+		if err != nil {
+			return nil, err
+		}
+		assets = append(assets, &asset)
+	}
+
+	return assets, nil
+}
diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go
new file mode 100644
index 0000000..cb001de
--- /dev/null
+++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/chaincode/smartcontract_test.go
@@ -0,0 +1,184 @@
+package chaincode_test
+
+import (
+	"encoding/json"
+	"fmt"
+	"testing"
+
+	"github.com/hyperledger/fabric-chaincode-go/shim"
+	"github.com/hyperledger/fabric-contract-api-go/contractapi"
+	"github.com/hyperledger/fabric-protos-go/ledger/queryresult"
+	"github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode"
+	"github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go/chaincode/mocks"
+	"github.com/stretchr/testify/require"
+)
+
+//go:generate counterfeiter -o mocks/transaction.go -fake-name TransactionContext . transactionContext
+type transactionContext interface {
+	contractapi.TransactionContextInterface
+}
+
+//go:generate counterfeiter -o mocks/chaincodestub.go -fake-name ChaincodeStub . chaincodeStub
+type chaincodeStub interface {
+	shim.ChaincodeStubInterface
+}
+
+//go:generate counterfeiter -o mocks/statequeryiterator.go -fake-name StateQueryIterator . stateQueryIterator
+type stateQueryIterator interface {
+	shim.StateQueryIteratorInterface
+}
+
+func TestInitLedger(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	assetTransfer := chaincode.SmartContract{}
+	err := assetTransfer.InitLedger(transactionContext)
+	require.NoError(t, err)
+
+	chaincodeStub.PutStateReturns(fmt.Errorf("failed inserting key"))
+	err = assetTransfer.InitLedger(transactionContext)
+	require.EqualError(t, err, "failed to put to world state. failed inserting key")
+}
+
+func TestCreateAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	assetTransfer := chaincode.SmartContract{}
+	err := assetTransfer.CreateAsset(transactionContext, "", "", 0, "", 0)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns([]byte{}, nil)
+	err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "the asset asset1 already exists")
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.CreateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestReadAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	expectedAsset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(expectedAsset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	assetTransfer := chaincode.SmartContract{}
+	asset, err := assetTransfer.ReadAsset(transactionContext, "")
+	require.NoError(t, err)
+	require.Equal(t, expectedAsset, asset)
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	_, err = assetTransfer.ReadAsset(transactionContext, "")
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+
+	chaincodeStub.GetStateReturns(nil, nil)
+	asset, err = assetTransfer.ReadAsset(transactionContext, "asset1")
+	require.EqualError(t, err, "the asset asset1 does not exist")
+	require.Nil(t, asset)
+}
+
+func TestUpdateAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	expectedAsset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(expectedAsset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	assetTransfer := chaincode.SmartContract{}
+	err = assetTransfer.UpdateAsset(transactionContext, "", "", 0, "", 0)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(nil, nil)
+	err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "the asset asset1 does not exist")
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.UpdateAsset(transactionContext, "asset1", "", 0, "", 0)
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestDeleteAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	asset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(asset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	chaincodeStub.DelStateReturns(nil)
+	assetTransfer := chaincode.SmartContract{}
+	err = assetTransfer.DeleteAsset(transactionContext, "")
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(nil, nil)
+	err = assetTransfer.DeleteAsset(transactionContext, "asset1")
+	require.EqualError(t, err, "the asset asset1 does not exist")
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.DeleteAsset(transactionContext, "")
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestTransferAsset(t *testing.T) {
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	asset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(asset)
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(bytes, nil)
+	assetTransfer := chaincode.SmartContract{}
+	err = assetTransfer.TransferAsset(transactionContext, "", "")
+	require.NoError(t, err)
+
+	chaincodeStub.GetStateReturns(nil, fmt.Errorf("unable to retrieve asset"))
+	err = assetTransfer.TransferAsset(transactionContext, "", "")
+	require.EqualError(t, err, "failed to read from world state: unable to retrieve asset")
+}
+
+func TestGetAllAssets(t *testing.T) {
+	asset := &chaincode.Asset{ID: "asset1"}
+	bytes, err := json.Marshal(asset)
+	require.NoError(t, err)
+
+	iterator := &mocks.StateQueryIterator{}
+	iterator.HasNextReturnsOnCall(0, true)
+	iterator.HasNextReturnsOnCall(1, false)
+	iterator.NextReturns(&queryresult.KV{Value: bytes}, nil)
+
+	chaincodeStub := &mocks.ChaincodeStub{}
+	transactionContext := &mocks.TransactionContext{}
+	transactionContext.GetStubReturns(chaincodeStub)
+
+	chaincodeStub.GetStateByRangeReturns(iterator, nil)
+	assetTransfer := &chaincode.SmartContract{}
+	assets, err := assetTransfer.GetAllAssets(transactionContext)
+	require.NoError(t, err)
+	require.Equal(t, []*chaincode.Asset{asset}, assets)
+
+	iterator.HasNextReturns(true)
+	iterator.NextReturns(nil, fmt.Errorf("failed retrieving next item"))
+	assets, err = assetTransfer.GetAllAssets(transactionContext)
+	require.EqualError(t, err, "failed retrieving next item")
+	require.Nil(t, assets)
+
+	chaincodeStub.GetStateByRangeReturns(nil, fmt.Errorf("failed retrieving all assets"))
+	assets, err = assetTransfer.GetAllAssets(transactionContext)
+	require.EqualError(t, err, "failed retrieving all assets")
+	require.Nil(t, assets)
+}
diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod
new file mode 100644
index 0000000..630a157
--- /dev/null
+++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/go.mod
@@ -0,0 +1,11 @@
+module github.com/hyperledger/fabric-samples/asset-transfer-basic/chaincode-go
+
+go 1.14
+
+require (
+	github.com/golang/protobuf v1.3.2
+	github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9
+	github.com/hyperledger/fabric-contract-api-go v1.1.1
+	github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354
+	github.com/stretchr/testify v1.5.1
+)
diff --git a/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum
new file mode 100644
index 0000000..577c18b
--- /dev/null
+++ b/topologies/t9/assets/chaincode/assets-transfer-basic/chaincode-go/go.sum
@@ -0,0 +1,154 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/DATA-DOG/go-txdb v0.1.3/go.mod h1:DhAhxMXZpUJVGnT+p9IbzJoRKvlArO2pkHjnGX7o0n0=
+github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
+github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
+github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
+github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
+github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cucumber/godog v0.8.0/go.mod h1:Cp3tEV1LRAyH/RuCThcxHS/+9ORZ+FMzPva2AZ5Ki+A=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
+github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w=
+github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
+github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w=
+github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
+github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo=
+github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
+github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
+github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY=
+github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
+github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
+github.com/gobuffalo/envy v1.7.0 h1:GlXgaiBkmrYMHco6t4j7SacKO4XUjvh5pwXh0f4uxXU=
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI=
+github.com/gobuffalo/logger v1.0.0/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs=
+github.com/gobuffalo/packd v0.3.0 h1:eMwymTkA1uXsqxS0Tpoop3Lc0u3kTfiMBE6nKtQU4g4=
+github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b36chA3Q=
+github.com/gobuffalo/packr v1.30.1 h1:hu1fuVR3fXEZR7rXNW3h8rqSML8EVAf6KNm0NKO/wKg=
+github.com/gobuffalo/packr v1.30.1/go.mod h1:ljMyFO2EcrnzsHsN99cvbq055Y9OhRrIaviy289eRuk=
+github.com/gobuffalo/packr/v2 v2.5.1/go.mod h1:8f9c96ITobJlPzI44jj+4tHnEKNt0xXWSVlXRN9X1Iw=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 h1:1i4lnpV8BDgKOLi1hgElfBqdHXjXieSuj8629mwBZ8o=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9 h1:1cAZHHrBYFrX3bwQGhOZtOB4sCM9QWVppd81O8vsPXs=
+github.com/hyperledger/fabric-chaincode-go v0.0.0-20210718160520-38d29fabecb9/go.mod h1:N7H3sA7Tx4k/YzFq7U0EPdqJtqvM4Kild0JoCc7C0Dc=
+github.com/hyperledger/fabric-contract-api-go v1.1.0 h1:K9uucl/6eX3NF0/b+CGIiO1IPm1VYQxBkpnVGJur2S4=
+github.com/hyperledger/fabric-contract-api-go v1.1.0/go.mod h1:nHWt0B45fK53owcFpLtAe8DH0Q5P068mnzkNXMPSL7E=
+github.com/hyperledger/fabric-contract-api-go v1.1.1 h1:gDhOC18gjgElNZ85kFWsbCQq95hyUP/21n++m0Sv6B0=
+github.com/hyperledger/fabric-contract-api-go v1.1.1/go.mod h1:+39cWxbh5py3NtXpRA63rAH7NzXyED+QJx1EZr0tJPo=
+github.com/hyperledger/fabric-protos-go v0.0.0-20190919234611-2a87503ac7c9/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e h1:9PS5iezHk/j7XriSlNuSQILyCOfcZ9wZ3/PiucmSE8E=
+github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354 h1:6vLLEpvDbSlmUJFjg1hB5YMBpI+WgKguztlONcAFBoY=
+github.com/hyperledger/fabric-protos-go v0.0.0-20201028172056-a3136dde2354/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe h1:1ef+SKRVYiQCAMhZ5W6HZhStEcNNAm27V8YcptzSWnM=
+github.com/hyperledger/fabric-protos-go v0.0.0-20210722212527-1ed7094bb8fe/go.mod h1:xVYTjK4DtZRBxZ2D9aE4y6AbLaPwue2o/criQyQbVD0=
+github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
+github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc=
+github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
+github.com/karrick/godirwalk v1.10.12/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA=
+github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs=
+github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
+github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8=
+github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+github.com/rogpeppe/go-internal v1.3.0 h1:RR9dF3JtopPvtkroDZuVD7qquD0bnHlKSqaQhgwt8yk=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
+github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
+github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
+github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
+github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
+github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
+github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
+github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
+github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
+github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c=
+github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
+github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
+github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
+github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74=
+github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
+github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM=
+golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod 
h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542 h1:6ZQFf1D2YYDDI7eSwW8adlkkavTB9sw5I24FVtEvNUQ= +golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624180213-70d37148ca0c/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b h1:lohp5blsw53GBXtLyLNaTXPXS9pJ1tiTw61ZHUoE9Qw= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= 
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/topologies/t9/config/config.yaml b/topologies/t9/config/config.yaml new file mode 100644 index 0000000..1f4cc49 --- /dev/null +++ b/topologies/t9/config/config.yaml @@ -0,0 +1,14 @@ +NodeOUs: + Enable: true + ClientOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: client + PeerOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: peer + AdminOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: admin + OrdererOUIdentifier: + Certificate: cacerts/ca-cert.pem + OrganizationalUnitIdentifier: orderer \ No newline at end of file diff --git a/topologies/t9/config/configtx.yaml b/topologies/t9/config/configtx.yaml new file mode 100644 index 0000000..4b37762 --- /dev/null +++ b/topologies/t9/config/configtx.yaml @@ -0,0 +1,419 @@ +# Copyright IBM Corp. All Rights Reserved. 
+# +# SPDX-License-Identifier: Apache-2.0 +# + +--- +################################################################################ +# +# Section: Organizations +# +# - This section defines the different organizational identities which will +# be referenced later in the configuration. +# +################################################################################ +Organizations: + + # SampleOrg defines an MSP using the sampleconfig. It should never be used + # in production but may be used as a template for other definitions + - &org1 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org1 + + # ID to load the MSP definition as + ID: org1MSP + + # MSPDir is the filesystem path which contains the MSP configuration + MSPDir: /tmp/crypto-material/orgs/org1/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: &org1Policies + Readers: + Type: Signature + Rule: "OR('org1MSP.member')" + Writers: + Type: Signature + Rule: "OR('org1MSP.member')" + Admins: + Type: Signature + Rule: "OR('org1MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org1MSP.member')" + + OrdererEndpoints: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + - &org2 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org2MSP + + # ID to load the MSP definition as + ID: org2MSP + + MSPDir: /tmp/crypto-material/orgs/org2/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: &org2Policies + Readers: + Type: Signature + Rule: "OR('org2MSP.member')" + Writers: + Type: Signature + Rule: "OR('org2MSP.member')" + Admins: + Type: Signature + Rule: "OR('org2MSP.admin')" + Endorsement: + 
Type: Signature + Rule: "OR('org2MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org2-peer1 + Port: 7051 + + - &org3 + # DefaultOrg defines the organization which is used in the sampleconfig + # of the fabric.git development environment + Name: org3MSP + + # ID to load the MSP definition as + ID: org3MSP + + MSPDir: /tmp/crypto-material/orgs/org3/msp + + # Policies defines the set of policies at this level of the config tree + # For organization policies, their canonical path is usually + # /Channel/// + Policies: &org3Policies + Readers: + Type: Signature + Rule: "OR('org3MSP.member')" + Writers: + Type: Signature + Rule: "OR('org3MSP.member')" + Admins: + Type: Signature + Rule: "OR('org3MSP.admin')" + Endorsement: + Type: Signature + Rule: "OR('org3MSP.peer')" + + # leave this flag set to true. + AnchorPeers: + # AnchorPeers defines the location of peers which can be used + # for cross org gossip communication. Note, this value is only + # encoded in the genesis block in the Application section context + - Host: <>-org3-peer1 + Port: 7051 + +################################################################################ +# +# SECTION: Capabilities +# +# - This section defines the capabilities of fabric network. This is a new +# concept as of v1.1.0 and should not be utilized in mixed networks with +# v1.0.x peers and orderers. Capabilities define features which must be +# present in a fabric binary for that binary to safely participate in the +# fabric network. For instance, if a new MSP type is added, newer binaries +# might recognize and validate the signatures from this type, while older +# binaries without this support would be unable to validate those +# transactions. 
This could lead to different versions of the fabric binaries +# having different world states. Instead, defining a capability for a channel +# informs those binaries without this capability that they must cease +# processing transactions until they have been upgraded. For v1.0.x if any +# capabilities are defined (including a map with all capabilities turned off) +# then the v1.0.x peer will deliberately crash. +# +################################################################################ +Capabilities: + # Channel capabilities apply to both the orderers and the peers and must be + # supported by both. + # Set the value of the capability to true to require it. + Channel: &ChannelCapabilities + # V2_0 capability ensures that orderers and peers behave according + # to v2.0 channel capabilities. Orderers and peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 capability. + # Prior to enabling V2.0 channel capabilities, ensure that all + # orderers and peers on a channel are at v2.0.0 or later. + V2_0: true + + # Orderer capabilities apply only to the orderers, and may be safely + # used with prior release peers. + # Set the value of the capability to true to require it. + Orderer: &OrdererCapabilities + # V2_0 orderer capability ensures that orderers behave according + # to v2.0 orderer capabilities. Orderers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 orderer capability. + # Prior to enabling V2.0 orderer capabilities, ensure that all + # orderers on channel are at v2.0.0 or later. + V2_0: true + + # Application capabilities apply only to the peer network, and may be safely + # used with prior release orderers. + # Set the value of the capability to true to require it. 
+ Application: &ApplicationCapabilities + # V2_0 application capability ensures that peers behave according + # to v2.0 application capabilities. Peers from + # prior releases would behave in an incompatible way, and are therefore + # not able to participate in channels at v2.0 application capability. + # Prior to enabling V2.0 application capabilities, ensure that all + # peers on channel are at v2.0.0 or later. + V2_0: true + +################################################################################ +# +# SECTION: Application +# +# - This section defines the values to encode into a config transaction or +# genesis block for application related parameters +# +################################################################################ +Application: &ApplicationDefaults + ACLs: &ACLsDefault + + # ACL policy for lscc's "getid" function + lscc/ChaincodeExists: /Channel/Application/Readers + + # ACL policy for lscc's "getdepspec" function + lscc/GetDeploymentSpec: /Channel/Application/Readers + + # ACL policy for lscc's "getccdata" function + lscc/GetChaincodeData: /Channel/Application/Readers + + # ACL Policy for lscc's "getchaincodes" function + lscc/GetInstantiatedChaincodes: /Channel/Application/Readers + + + #---Query System Chaincode (qscc) function to policy mapping for access control---# + + # ACL policy for qscc's "GetChainInfo" function + qscc/GetChainInfo: /Channel/Application/Readers + + + # ACL policy for qscc's "GetBlockByNumber" function + qscc/GetBlockByNumber: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByHash" function + qscc/GetBlockByHash: /Channel/Application/Readers + + # ACL policy for qscc's "GetTransactionByID" function + qscc/GetTransactionByID: /Channel/Application/Readers + + # ACL policy for qscc's "GetBlockByTxID" function + qscc/GetBlockByTxID: /Channel/Application/Readers + + #---Configuration System Chaincode (cscc) function to policy mapping for access control---# + + # ACL policy for cscc's 
"GetConfigBlock" function + cscc/GetConfigBlock: /Channel/Application/Readers + + # ACL policy for cscc's "GetConfigTree" function + cscc/GetConfigTree: /Channel/Application/Readers + + # ACL policy for cscc's "SimulateConfigTreeUpdate" function + cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers + + #---Miscellanesous peer function to policy mapping for access control---# + + # ACL policy for invoking chaincodes on peer + peer/Propose: /Channel/Application/Writers + + # ACL policy for chaincode to chaincode invocation + peer/ChaincodeToChaincode: /Channel/Application/Readers + + #---Events resource to policy mapping for access control###---# + + # ACL policy for sending block events + event/Block: /Channel/Application/Readers + + # ACL policy for sending filtered block events + event/FilteredBlock: /Channel/Application/Readers + + # Chaincode Lifecycle Policies introduced in Fabric 2.x + # ACL policy for _lifecycle's "CheckCommitReadiness" function + _lifecycle/CheckCommitReadiness: /Channel/Application/Writers + + # ACL policy for _lifecycle's "CommitChaincodeDefinition" function + _lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers + + # ACL policy for _lifecycle's "QueryChaincodeDefinition" function + _lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers + + + # Organizations is the list of orgs which are defined as participants on + # the application side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Application policies, their canonical path is + # /Channel/Application/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + LifecycleEndorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + Endorsement: + Type: ImplicitMeta + Rule: "MAJORITY Endorsement" + + Capabilities: + <<: *ApplicationCapabilities 
+################################################################################ +# +# SECTION: Orderer +# +# - This section defines the values to encode into a config transaction or +# genesis block for orderer related parameters +# +################################################################################ +Orderer: &OrdererDefaults + + # Orderer Type: The orderer implementation to start + OrdererType: etcdraft + + # Addresses used to be the list of orderer addresses that clients and peers + # could connect to. However, this does not allow clients to associate orderer + # addresses and orderer organizations which can be useful for things such + # as TLS validation. The preferred way to specify orderer addresses is now + # to include the OrdererEndpoints item in your org definition + Addresses: + - <>-org1-orderer1:7050 + - <>-org1-orderer2:7050 + - <>-org1-orderer3:7050 + + EtcdRaft: + Consenters: + - Host: <>-org1-orderer1 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer2 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - Host: <>-org1-orderer3 + Port: 7050 + ClientTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + ServerTLSCert: /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + + # Batch Timeout: The amount of time to wait before creating a batch + BatchTimeout: 2s + + # Batch Size: Controls the number of messages batched into a block + BatchSize: + + # Max Message Count: The maximum number of messages to permit in a batch + MaxMessageCount: 10 + + # Absolute Max Bytes: The absolute maximum number of bytes allowed for + # the serialized messages in a batch. 
+ AbsoluteMaxBytes: 99 MB + + # Preferred Max Bytes: The preferred maximum number of bytes allowed for + # the serialized messages in a batch. A message larger than the preferred + # max bytes will result in a batch larger than preferred max bytes. + PreferredMaxBytes: 512 KB + + # Organizations is the list of orgs which are defined as participants on + # the orderer side of the network + Organizations: + + # Policies defines the set of policies at this level of the config tree + # For Orderer policies, their canonical path is + # /Channel/Orderer/ + Policies: + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + # BlockValidation specifies what signatures must be included in the block + # from the orderer for the peer to validate it. + BlockValidation: + Type: ImplicitMeta + Rule: "ANY Writers" + +################################################################################ +# +# CHANNEL +# +# This section defines the values to encode into a config transaction or +# genesis block for channel related parameters. 
+# +################################################################################ +Channel: &ChannelDefaults + # Policies defines the set of policies at this level of the config tree + # For Channel policies, their canonical path is + # /Channel/ + Policies: + # Who may invoke the 'Deliver' API + Readers: + Type: ImplicitMeta + Rule: "ANY Readers" + # Who may invoke the 'Broadcast' API + Writers: + Type: ImplicitMeta + Rule: "ANY Writers" + # By default, who may modify elements at this config level + Admins: + Type: ImplicitMeta + Rule: "MAJORITY Admins" + + # Capabilities describes the channel level capabilities, see the + # dedicated Capabilities section elsewhere in this file for a full + # description + Capabilities: + <<: *ChannelCapabilities + +################################################################################ +# +# Profile +# +# - Different configuration profiles may be encoded here to be specified +# as parameters to the configtxgen tool +# +################################################################################ +Profiles: + MyChannel: + <<: *ChannelDefaults + Orderer: + <<: *OrdererDefaults + OrdererType: etcdraft + Capabilities: + <<: *OrdererCapabilities + Organizations: + - *org1 + + + Application: + <<: *ApplicationDefaults + Organizations: + - *org2 + - *org3 diff --git a/topologies/t9/containers/cas/org1-cas/docker-compose-org1-cas.yml b/topologies/t9/containers/cas/org1-cas/docker-compose-org1-cas.yml new file mode 100644 index 0000000..6dd1b43 --- /dev/null +++ b/topologies/t9/containers/cas/org1-cas/docker-compose-org1-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org1-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-tls + command: sh -c 'fabric-ca-server start -d -b org1-ca-tls-admin:org1-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-tls + - 
FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_TYPE=RequireAndVerifyClientCert + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_CERTFILES=/tmp/hyperledger/fabric-ca/ca-cert.pem + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org1-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-ca-identities + command: sh -c 'fabric-ca-server start -d -b org1-ca-identities-admin:org1-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org1-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org1/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t9/containers/cas/org2-cas/docker-compose-org2-cas.yml b/topologies/t9/containers/cas/org2-cas/docker-compose-org2-cas.yml new file mode 100644 index 0000000..89a99dc --- /dev/null +++ b/topologies/t9/containers/cas/org2-cas/docker-compose-org2-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org2-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-tls + command: sh -c 'fabric-ca-server start -d -b org2-ca-tls-admin:org2-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_TYPE=RequireAndVerifyClientCert + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_CERTFILES=/tmp/hyperledger/fabric-ca/ca-cert.pem + volumes: + - 
"${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org2-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-ca-identities + command: sh -c 'fabric-ca-server start -d -b org2-ca-identities-admin:org2-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org2-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org2/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t9/containers/cas/org3-cas/docker-compose-org3-cas.yml b/topologies/t9/containers/cas/org3-cas/docker-compose-org3-cas.yml new file mode 100644 index 0000000..446acd5 --- /dev/null +++ b/topologies/t9/containers/cas/org3-cas/docker-compose-org3-cas.yml @@ -0,0 +1,30 @@ +version: "3.9" +services: + org3-ca-tls: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-tls + command: sh -c 'fabric-ca-server start -d -b org3-ca-tls-admin:org3-ca-tls-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-tls + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_TYPE=RequireAndVerifyClientCert + - FABRIC_CA_SERVER_TLS_CLIENTAUTH_CERTFILES=/tmp/hyperledger/fabric-ca/ca-cert.pem + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-tls:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" + org3-ca-identities: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-ca-identities + command: sh -c 'fabric-ca-server start -d -b 
org3-ca-identities-admin:org3-ca-identities-adminpw --port 7054' + environment: + - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/ + - FABRIC_CA_SERVER_TLS_ENABLED=true + - FABRIC_CA_SERVER_CSR_CN=${CURRENT_HL_TOPOLOGY}-org3-ca-identities + - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0 + - FABRIC_CA_SERVER_DEBUG=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material/cas/org3/ca-identities:/tmp/hyperledger/fabric-ca" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" \ No newline at end of file diff --git a/topologies/t9/containers/clis/org2-clis/docker-compose-org2-clis.yml b/topologies/t9/containers/clis/org2-clis/docker-compose-org2-clis.yml new file mode 100644 index 0000000..50580aa --- /dev/null +++ b/topologies/t9/containers/clis/org2-clis/docker-compose-org2-clis.yml @@ -0,0 +1,24 @@ +services: + org2-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTCERT_FILE=/tmp/crypto-material/orgs/org2/admins/admin-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_CLIENTKEY_FILE=/tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/key.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2 + command: sh + volumes: + - "${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode" + - "${HL_TOPOLOGIES_BASE_FOLDER}:/tmp" diff --git 
a/topologies/t9/containers/clis/org3-clis/docker-compose-org3-clis.yml b/topologies/t9/containers/clis/org3-clis/docker-compose-org3-clis.yml new file mode 100644 index 0000000..dbaedf1 --- /dev/null +++ b/topologies/t9/containers/clis/org3-clis/docker-compose-org3-clis.yml @@ -0,0 +1,24 @@ +services: + org3-cli-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + tty: true + stdin_open: true + environment: + - GOPATH=/opt/gopath + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - FABRIC_LOGGING_SPEC=DEBUG + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTCERT_FILE=/tmp/crypto-material/orgs/org3/admins/admin-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_CLIENTKEY_FILE=/tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/key.pem + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3 + command: sh + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp \ No newline at end of file diff --git a/topologies/t9/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml b/topologies/t9/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml new file mode 100644 index 0000000..42c6d2c --- /dev/null +++ b/topologies/t9/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml @@ -0,0 +1,113 @@ +services: + org1-orderer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer1 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - 
ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer1 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_BOOTSTRAPMETHOD=none + - ORDERER_CHANNELPARTICIPATION_ENABLED=true + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + # - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer1/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_GENERAL_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=/tmp/data/broadcast + - ORDERER_DEBUG_DELIVERTRACEDIR=/tmp/data/deliver + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - ORDERER_ADMIN_LISTENADDRESS=0.0.0.0:7057 + - ORDERER_ADMIN_TLS_ENABLED=true + - ORDERER_ADMIN_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/signcerts/cert.pem + - ORDERER_ADMIN_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem + - ORDERER_ADMIN_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_ADMIN_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + volumes: + - 
${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer1:/tmp/hyperledger/orderer + org1-orderer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer2 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer2 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_BOOTSTRAPMETHOD=none + - ORDERER_CHANNELPARTICIPATION_ENABLED=true + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + # - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer2/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_GENERAL_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=/tmp/data/broadcast + - ORDERER_DEBUG_DELIVERTRACEDIR=/tmp/data/deliver + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - ORDERER_ADMIN_LISTENADDRESS=0.0.0.0:7057 + - ORDERER_ADMIN_TLS_ENABLED=true + - 
ORDERER_ADMIN_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/signcerts/cert.pem + - ORDERER_ADMIN_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem + - ORDERER_ADMIN_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_ADMIN_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer2:/tmp/hyperledger/orderer + org1-orderer3: + container_name: ${CURRENT_HL_TOPOLOGY}-org1-orderer3 + environment: + - ORDERER_HOME=/tmp/hyperledger/orderer/home + - ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger/orderer/fileledger + - ORDERER_HOST=${CURRENT_HL_TOPOLOGY}-org1-orderer3 + - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 + - ORDERER_GENERAL_LISTENPORT=7050 + - ORDERER_GENERAL_BOOTSTRAPMETHOD=none + - ORDERER_CHANNELPARTICIPATION_ENABLED=true + # - ORDERER_GENERAL_GENESISFILE=/tmp/crypto-material/artifacts/channels/genesis.block + # - ORDERER_GENERAL_BOOTSTRAPFILE=/tmp/crypto-material/artifacts/channels/genesis.block + - ORDERER_GENERAL_LOCALMSPID=org1MSP + - ORDERER_GENERAL_LOCALMSPDIR=/tmp/crypto-material/orderers/org1/orderer3/node/msp + - ORDERER_GENERAL_TLS_ENABLED=true + - ORDERER_GENERAL_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_GENERAL_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + # - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_ROOTCAS=[/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem,/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem] + - ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_GENERAL_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + - ORDERER_GENERAL_LOGLEVEL=debug + - ORDERER_DEBUG_BROADCASTTRACEDIR=/tmp/data/broadcast + - 
ORDERER_DEBUG_DELIVERTRACEDIR=/tmp/data/deliver + - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:7051 + - ORDERER_METRICS_STATSD_ADDRESS=0.0.0.0:7052 + - ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:7053 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + - ORDERER_ADMIN_LISTENADDRESS=0.0.0.0:7057 + - ORDERER_ADMIN_TLS_ENABLED=true + - ORDERER_ADMIN_TLS_CERTIFICATE=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/signcerts/cert.pem + - ORDERER_ADMIN_TLS_PRIVATEKEY=/tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + - ORDERER_ADMIN_TLS_CLIENTAUTHREQUIRED=true + - ORDERER_ADMIN_TLS_CLIENTROOTCAS=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem + volumes: + - ${HL_TOPOLOGIES_BASE_FOLDER}/crypto-material:/tmp/crypto-material + - ${HL_TOPOLOGIES_BASE_FOLDER}/homefolders/orderers/org1/orderer3:/tmp/hyperledger/orderer diff --git a/topologies/t9/containers/peers/org2-peers/docker-compose-org2-peers.yml b/topologies/t9/containers/peers/org2-peers/docker-compose-org2-peers.yml new file mode 100644 index 0000000..ab557f0 --- /dev/null +++ b/topologies/t9/containers/peers/org2-peers/docker-compose-org2-peers.yml @@ -0,0 +1,54 @@ +services: + org2-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - 
CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org2-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org2-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org2-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_LOCALMSPID=org2MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org2/peer2/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org2-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org2-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org2/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff 
--git a/topologies/t9/containers/peers/org3-peers/docker-compose-org3-peers.yml b/topologies/t9/containers/peers/org3-peers/docker-compose-org3-peers.yml new file mode 100644 index 0000000..ea90fe3 --- /dev/null +++ b/topologies/t9/containers/peers/org3-peers/docker-compose-org3-peers.yml @@ -0,0 +1,54 @@ +services: + org3-peer1: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer1 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer1 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer1/node/msp + - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer1 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp + org3-peer2: + container_name: ${CURRENT_HL_TOPOLOGY}-org3-peer2 + environment: + - CORE_PEER_ID=${CURRENT_HL_TOPOLOGY}-org3-peer2 + - CORE_PEER_ADDRESS=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_LOCALMSPID=org3MSP + - CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/peers/org3/peer2/node/msp 
+ - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock + - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hl-fabric-${CURRENT_HL_TOPOLOGY} + # - FABRIC_LOGGING_SPEC=debug + - CORE_PEER_TLS_ENABLED=true + - CORE_PEER_TLS_CERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/signcerts/cert.pem + - CORE_PEER_TLS_KEY_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_TLS_CLIENTAUTHREQUIRED=true + - CORE_PEER_TLS_CLIENTROOTCAS_FILES=/tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem + - CORE_PEER_GOSSIP_USELEADERELECTION=true + - CORE_PEER_GOSSIP_ORGLEADER=false + - CORE_PEER_GOSSIP_EXTERNALENDPOINT=${CURRENT_HL_TOPOLOGY}-org3-peer2:7051 + - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true + # - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 + - CORE_PEER_GOSSIP_BOOTSTRAP=${CURRENT_HL_TOPOLOGY}-org3-peer1:7051 + - TOPOLOGY=${CURRENT_HL_TOPOLOGY} + working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org3/peer2 + volumes: + - /var/run:/host/var/run + - ${HL_TOPOLOGIES_BASE_FOLDER}:/tmp diff --git a/topologies/t9/crypto-material/.gitkeep b/topologies/t9/crypto-material/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t9/docker-compose.yml b/topologies/t9/docker-compose.yml new file mode 100644 index 0000000..ac46843 --- /dev/null +++ b/topologies/t9/docker-compose.yml @@ -0,0 +1,91 @@ +services: + org-shell-cmd: + image: alpine + networks: + - hl-fabric + org1-ca-tls: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org1-ca-identities: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-tls: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org2-ca-identities: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-tls: + image: 
fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org3-ca-identities: + image: fabric-ca-openssl:latest + # ports: + # - :7054 + networks: + - hl-fabric + org2-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer1: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org3-peer2: + image: hyperledger/fabric-peer:${FABRIC_PEER_VERSION} + # ports: + # - :7051 + networks: + - hl-fabric + org2-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org3-cli-peer1: + image: hyperledger/fabric-tools:${FABRIC_TOOLS_VERSION} + networks: + - hl-fabric + org1-orderer1: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer2: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric + org1-orderer3: + image: hyperledger/fabric-orderer:${PEER_ORDERER_VERSION} + # ports: + # - :7050 + networks: + - hl-fabric \ No newline at end of file diff --git a/topologies/t9/homefolders/.gitkeep b/topologies/t9/homefolders/.gitkeep new file mode 100644 index 0000000..e69de29 diff --git a/topologies/t9/images/Dockerfile b/topologies/t9/images/Dockerfile new file mode 100644 index 0000000..141c832 --- /dev/null +++ b/topologies/t9/images/Dockerfile @@ -0,0 +1,4 @@ +FROM hyperledger/fabric-ca:1.5 +RUN apk upgrade --update-cache --available && \ + apk add openssl && \ + rm -rf /var/cache/apk/* \ No newline at end of file diff --git a/topologies/t9/scripts/all-org-peers-commit-chaincode.sh b/topologies/t9/scripts/all-org-peers-commit-chaincode.sh new file mode 100755 index 0000000..43c34db --- /dev/null +++ b/topologies/t9/scripts/all-org-peers-commit-chaincode.sh @@ -0,0 
+1,28 @@ +#!/bin/bash +set -e +set -x + +peer lifecycle chaincode checkcommitreadiness -C mychannel --name myccv1 -v "1.0" --sequence 1 + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + +peer lifecycle chaincode commit --channelID mychannel --name myccv1 --version "1.0" \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/signcerts/cert.pem + +peer lifecycle chaincode querycommitted --channelID mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t9/scripts/all-org-peers-execute-chaincode.sh b/topologies/t9/scripts/all-org-peers-execute-chaincode.sh new file mode 100755 index 0000000..70eff11 --- /dev/null +++ b/topologies/t9/scripts/all-org-peers-execute-chaincode.sh @@ -0,0 +1,20 @@ +#!/bin/sh + +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/users/user/msp +peer chaincode invoke -C mychannel -n myccv1 -c '{"Args":["InitLedger"]}' --waitForEvent 
--tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/signcerts/cert.pem + +peer chaincode query -C mychannel -n myccv1 -c '{"Args":["GetAllAssets"]}' diff --git a/topologies/t9/scripts/channels-setup.sh b/topologies/t9/scripts/channels-setup.sh new file mode 100755 index 0000000..04a3662 --- /dev/null +++ b/topologies/t9/scripts/channels-setup.sh @@ -0,0 +1,6 @@ +#!/bin/sh +set -e +set -x + +export FABRIC_CFG_PATH=/tmp/crypto-material/config +configtxgen -profile MyChannel -outputBlock /tmp/crypto-material/artifacts/channels/mychannel.pb -channelID mychannel \ No newline at end of file diff --git a/topologies/t9/scripts/delete-state-data.sh b/topologies/t9/scripts/delete-state-data.sh new file mode 100755 index 0000000..9062431 --- /dev/null +++ b/topologies/t9/scripts/delete-state-data.sh @@ -0,0 +1,10 @@ +#!/bin/bash +set -e +set -x + +# remove crypto material files +rm -rf /tmp/crypto-material/* +touch /tmp/crypto-material/.gitkeep + +rm -rf /tmp/homefolders/* +touch /tmp/homefolders/.gitkeep diff --git a/topologies/t9/scripts/find-ca-private-key.sh b/topologies/t9/scripts/find-ca-private-key.sh new file mode 100755 index 0000000..29b3f95 --- /dev/null +++ b/topologies/t9/scripts/find-ca-private-key.sh @@ -0,0 +1,21 @@ +#!/bin/bash + +set -e +set -x + +find_private_key_path() { + CA_HOME=$1 + 
CA_CERTFILE=$CA_HOME/tls-cert.pem + CA_HASH=`openssl x509 -noout -pubkey -in $CA_CERTFILE | openssl md5` + + for x in $CA_HOME/msp/keystore/*_sk; do + CA_KEYFILE_HASH=`openssl pkey -pubout -in "$x" | openssl md5` + if [[ "${CA_KEYFILE_HASH}" == "${CA_HASH}" ]] + then + echo "$x" + return 0 + fi + done + + # return takes 0-255 in bash; "return -1" is rejected as an invalid option + return 1 +} \ No newline at end of file diff --git a/topologies/t9/scripts/org1-enroll-identities-with-ca-identities.sh b/topologies/t9/scripts/org1-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..7ab78d9 --- /dev/null +++ b/topologies/t9/scripts/org1-enroll-identities-with-ca-identities.sh @@ -0,0 +1,48 @@ +#!/bin/bash +set -e +set -x + +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +# enroll orderer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer1/node/msp + +# enroll orderer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer2/node/msp + +# enroll orderer3 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u
https://org1-orderer3:orderer3PW@0.0.0.0:7054 +mv /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/* /tmp/crypto-material/orderers/org1/orderer3/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orderers/org1/orderer3/node/msp + +# enroll org1 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org1:org1adminpw@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org1/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/admins/admin/msp + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +# setup org1 msp +mkdir -p /tmp/crypto-material/orgs/org1/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org1/msp +mkdir -p /tmp/crypto-material/orgs/org1/msp/cacerts +cp /tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org1/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org1/msp/users \ No newline at end of file diff --git 
a/topologies/t9/scripts/org1-enroll-identities-with-ca-tls.sh b/topologies/t9/scripts/org1-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..2d985cc --- /dev/null +++ b/topologies/t9/scripts/org1-enroll-identities-with-ca-tls.sh @@ -0,0 +1,42 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem +# enroll orderer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer1:orderer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer1 + +# enroll orderer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer2:orderer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer2 + +# enroll orderer3 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orderers/org1/orderer3/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://org1-orderer3:orderer3PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org1-orderer3 + +# enroll org1 admin-tls +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org1/admins/admin-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export 
FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem +export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://admin-org1:org1AdminPW@0.0.0.0:7054 + +mv /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/* /tmp/crypto-material/orgs/org1/admins/admin-tls/msp/keystore/key.pem + + + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer2/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/* /tmp/crypto-material/orderers/org1/orderer3/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t9/scripts/org1-join-channels.sh b/topologies/t9/scripts/org1-join-channels.sh new file mode 100755 index 0000000..f79b8d3 --- /dev/null +++ b/topologies/t9/scripts/org1-join-channels.sh @@ -0,0 +1,24 @@ +#!/bin/sh +set -e +set -x + +osnadmin channel join -o ${TOPOLOGY}-org1-orderer1:7057 \ + --ca-file /tmp/crypto-material/cas/org1/ca-tls-admin/msp/cacerts/0-0-0-0-7054.pem \ + --client-cert /tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem \ + --client-key /tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem \ + --channelID mychannel \ + --config-block /tmp/crypto-material/artifacts/channels/mychannel.pb + +osnadmin channel join -o 
${TOPOLOGY}-org1-orderer2:7057 \ + --ca-file /tmp/crypto-material/cas/org1/ca-tls-admin/msp/cacerts/0-0-0-0-7054.pem \ + --client-cert /tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem \ + --client-key /tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem \ + --channelID mychannel \ + --config-block /tmp/crypto-material/artifacts/channels/mychannel.pb + +osnadmin channel join -o ${TOPOLOGY}-org1-orderer3:7057 \ + --ca-file /tmp/crypto-material/cas/org1/ca-tls-admin/msp/cacerts/0-0-0-0-7054.pem \ + --client-cert /tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem \ + --client-key /tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem \ + --channelID mychannel \ + --config-block /tmp/crypto-material/artifacts/channels/mychannel.pb \ No newline at end of file diff --git a/topologies/t9/scripts/org1-register-identities-with-ca-identities.sh b/topologies/t9/scripts/org1-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1e6dcb2 --- /dev/null +++ b/topologies/t9/scripts/org1-register-identities-with-ca-identities.sh @@ -0,0 +1,11 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-identities-admin +fabric-ca-client enroll -d -u https://org1-ca-identities-admin:org1-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1adminpw --id.type admin --id.attrs 
"hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert" -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t9/scripts/org1-register-identities-with-ca-tls.sh b/topologies/t9/scripts/org1-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..d8cb558 --- /dev/null +++ b/topologies/t9/scripts/org1-register-identities-with-ca-tls.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -e +set -x +source /tmp/scripts/find-ca-private-key.sh +# get the CA private key file as 2 of them are created: one for the CA cert (ca-cert.pem) and another one for CA TLS cert (tls-cert.pem) +# and we need the keyfile for the CA TLS cert +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org1/ca-tls` +echo $CA_PRIV_KEYFILE +# initial enroll of bootstrap admin will be done with the key and cert of the CA server +# after this, the admin's key and cert will be used for registrations. + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem +# copy the CA server's key file to make it easier to use it +cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org1/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls/tls-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org1/ca-tls-admin +fabric-ca-client enroll -d -u https://org1-ca-tls-admin:org1-ca-tls-adminpw@0.0.0.0:7054 +sleep 1 +mv /tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/* /tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org1/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client register -d --id.name org1-orderer1 --id.secret orderer1PW 
--id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer2 --id.secret orderer2PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name org1-orderer3 --id.secret orderer3PW --id.type orderer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org1 --id.secret org1AdminPW --id.type admin -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t9/scripts/org2-approve-chaincode.sh b/topologies/t9/scripts/org2-approve-chaincode.sh new file mode 100755 index 0000000..7c4d6a8 --- /dev/null +++ b/topologies/t9/scripts/org2-approve-chaincode.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org2-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org2-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/signcerts/cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t9/scripts/org2-enroll-identities-with-ca-identities.sh b/topologies/t9/scripts/org2-enroll-identities-with-ca-identities.sh new file mode 100755 index 
0000000..20f8c8d --- /dev/null +++ b/topologies/t9/scripts/org2-enroll-identities-with-ca-identities.sh @@ -0,0 +1,53 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org2/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org2/peer2/node/msp + +# enroll org2 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org2/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/admins/admin/msp + +# enroll org2 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 +mv 
/tmp/crypto-material/orgs/org2/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org2/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/users/user/msp + +# enroll org2 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org2:org2UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org2/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/clients/client/msp + +# setup org2 msp +mkdir -p /tmp/crypto-material/orgs/org2/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org2/msp +mkdir -p /tmp/crypto-material/orgs/org2/msp/cacerts +cp /tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/tlscacerts +cp /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org2/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org2/msp/users diff --git a/topologies/t9/scripts/org2-enroll-identities-with-ca-tls.sh b/topologies/t9/scripts/org2-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..6698877 --- /dev/null +++ b/topologies/t9/scripts/org2-enroll-identities-with-ca-tls.sh @@ -0,0 +1,34 @@ +#!/bin/bash +set -e +set -x + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export
FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client enroll -d -u https://peer1-org2:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org2/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client enroll -d -u https://peer2-org2:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org2-peer2 + +# enroll org2 admin-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org2/admins/admin-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +#export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://admin-org2:org2AdminPW@0.0.0.0:7054 + +mv /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/* /tmp/crypto-material/orgs/org2/admins/admin-tls/msp/keystore/key.pem + + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv 
/tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org2/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t9/scripts/org2-install-chaincode.sh b/topologies/t9/scripts/org2-install-chaincode.sh new file mode 100755 index 0000000..b9db570 --- /dev/null +++ b/topologies/t9/scripts/org2-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz \ No newline at end of file diff --git a/topologies/t9/scripts/org2-join-channels.sh b/topologies/t9/scripts/org2-join-channels.sh new file mode 100755 index 0000000..a56a26f --- /dev/null +++ b/topologies/t9/scripts/org2-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org2/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.pb + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org2-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.pb \ No newline at end of file diff --git a/topologies/t9/scripts/org2-register-identities-with-ca-identities.sh b/topologies/t9/scripts/org2-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..beb2cd6 --- /dev/null +++ b/topologies/t9/scripts/org2-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export 
FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-identities-admin +fabric-ca-client enroll -d -u https://org2-ca-identities-admin:org2-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org2 --id.secret org2UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org2 --id.secret org2UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t9/scripts/org2-register-identities-with-ca-tls.sh b/topologies/t9/scripts/org2-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..fe612eb --- /dev/null +++ b/topologies/t9/scripts/org2-register-identities-with-ca-tls.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set -e +set -x +source /tmp/scripts/find-ca-private-key.sh +# get the CA private key file as 2 of them are created: one for the CA cert (ca-cert.pem) and another one for CA TLS cert (tls-cert.pem) +# and we need the keyfile for the CA TLS cert +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org2/ca-tls` +echo $CA_PRIV_KEYFILE +# initial enroll of bootstrap admin will be done with the key and cert of the CA server +# after this, the admin's key and cert will be used for registrations. 
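The register-with-ca-tls scripts above source `/tmp/scripts/find-ca-private-key.sh`, whose implementation is not shown in this diff. A plausible sketch of such a helper is below; the function name matches the call site, but the body (picking the newest `*_sk` file from the CA server's `msp/keystore`) is an assumption about how the real helper works:

```sh
# HYPOTHETICAL sketch of the sourced helper -- the real
# /tmp/scripts/find-ca-private-key.sh is not part of this diff.
# Assumes the fabric-ca-server writes private keys as *_sk files under
# <ca-home>/msp/keystore and that the newest file is the TLS cert's key.
find_private_key_path() {
  ca_home="$1"
  # list keystore files newest-first by mtime; keep the first match
  ls -t "${ca_home}/msp/keystore/"*_sk 2>/dev/null | head -n 1
}
```

Usage mirrors the script: ``CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org2/ca-tls` ``.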
+export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem +# copy the CA server's key file to make it easier to use it +cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org2/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls/tls-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org2/ca-tls-admin +fabric-ca-client enroll -d -u https://org2-ca-tls-admin:org2-ca-tls-adminpw@0.0.0.0:7054 +sleep 1 +mv /tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/* /tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org2/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client register -d --id.name peer1-org2 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org2 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org2 --id.secret org2AdminPW --id.type admin -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t9/scripts/org3-approve-chaincode.sh b/topologies/t9/scripts/org3-approve-chaincode.sh new file mode 100755 index 0000000..d72f268 --- /dev/null +++ b/topologies/t9/scripts/org3-approve-chaincode.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set -e +set -x + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +QUERY_INSTALLED=`FABRIC_LOGGING_SPEC=ERROR peer lifecycle chaincode queryinstalled | grep myccv1` +IFS=' ' read -r -a array <<< $QUERY_INSTALLED +PACKAGE_ID=${array[2]} +PACKAGE_ID=${PACKAGE_ID::-1} +echo "The Package ID for the installed chaincode is: $PACKAGE_ID" + + +peer lifecycle 
chaincode approveformyorg --channelID mychannel --name myccv1 --version "1.0" \ + --package-id $PACKAGE_ID \ + --peerAddresses ${TOPOLOGY}-org3-peer1:7051 \ + --peerAddresses ${TOPOLOGY}-org3-peer2:7051 \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --tlsRootCertFiles /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem \ + --sequence 1 --tls --cafile /tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem --clientauth \ + --keyfile /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/key.pem \ + --certfile /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/signcerts/cert.pem + +peer lifecycle chaincode queryapproved -C mychannel --name myccv1 \ No newline at end of file diff --git a/topologies/t9/scripts/org3-enroll-identities-with-ca-identities.sh b/topologies/t9/scripts/org3-enroll-identities-with-ca-identities.sh new file mode 100755 index 0000000..7e3f436 --- /dev/null +++ b/topologies/t9/scripts/org3-enroll-identities-with-ca-identities.sh @@ -0,0 +1,52 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem + +# enroll peer1 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 +mv /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer1/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer1/node/msp + +# enroll peer2 node +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 +mv
/tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/* /tmp/crypto-material/peers/org3/peer2/node/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/peers/org3/peer2/node/msp + +# enroll org3 admin +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/* /tmp/crypto-material/orgs/org3/admins/admin/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/admins/admin/msp + +# enroll org3 user +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/users/user +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://user-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/* /tmp/crypto-material/orgs/org3/users/user/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/users/user/msp + +# enroll org3 client +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/clients/client +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +fabric-ca-client enroll -d -u https://client-org3:org3UserPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/* /tmp/crypto-material/orgs/org3/clients/client/msp/cacerts/ca-cert.pem +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/clients/client/msp + +# setup org3 msp +mkdir -p /tmp/crypto-material/orgs/org3/msp +cp /tmp/config/config.yaml /tmp/crypto-material/orgs/org3/msp +mkdir -p /tmp/crypto-material/orgs/org3/msp/cacerts +cp /tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/cacerts/ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/tlscacerts +cp
/tmp/crypto-material/cas/org1/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org1-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org2/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org2-tls-ca-cert.pem +cp /tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem /tmp/crypto-material/orgs/org3/msp/tlscacerts/org3-tls-ca-cert.pem +mkdir -p /tmp/crypto-material/orgs/org3/msp/users diff --git a/topologies/t9/scripts/org3-enroll-identities-with-ca-tls.sh b/topologies/t9/scripts/org3-enroll-identities-with-ca-tls.sh new file mode 100755 index 0000000..30caa0c --- /dev/null +++ b/topologies/t9/scripts/org3-enroll-identities-with-ca-tls.sh @@ -0,0 +1,31 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem + +# enroll peer1 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer1/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer1-org3:peer1PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer1 + +# enroll peer2 node-tls +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/peers/org3/peer2/node-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +fabric-ca-client enroll -d -u https://peer2-org3:peer2PW@0.0.0.0:7054 --enrollment.profile tls --csr.hosts ${TOPOLOGY}-org3-peer2 +# enroll org3 admin-tls +unset FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE +unset FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/orgs/org3/admins/admin-tls +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export 
FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem +export FABRIC_CA_CLIENT_MSPDIR=msp +fabric-ca-client enroll -d -u https://admin-org3:org3AdminPW@0.0.0.0:7054 +mv /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/* /tmp/crypto-material/orgs/org3/admins/admin-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/keystore/key.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/keystore/key.pem + +mv /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer1/node-tls/msp/tlscacerts/ca-cert.pem +mv /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/* /tmp/crypto-material/peers/org3/peer2/node-tls/msp/tlscacerts/ca-cert.pem \ No newline at end of file diff --git a/topologies/t9/scripts/org3-install-chaincode.sh b/topologies/t9/scripts/org3-install-chaincode.sh new file mode 100755 index 0000000..f6b8789 --- /dev/null +++ b/topologies/t9/scripts/org3-install-chaincode.sh @@ -0,0 +1,13 @@ +#!/bin/bash +set -e +set -x + +export CHAINCODE_DIR=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/assets-transfer-basic/chaincode-go + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +export CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp +peer lifecycle chaincode package mycc.tar.gz --path $CHAINCODE_DIR --lang golang --label myccv1 +peer lifecycle chaincode install mycc.tar.gz + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer lifecycle chaincode install mycc.tar.gz diff --git a/topologies/t9/scripts/org3-join-channels.sh b/topologies/t9/scripts/org3-join-channels.sh new file mode 100755 index 0000000..9634a4d --- /dev/null +++ b/topologies/t9/scripts/org3-join-channels.sh @@ -0,0 +1,11 @@ +#!/bin/sh +set -e +set -x + +export 
CORE_PEER_MSPCONFIGPATH=/tmp/crypto-material/orgs/org3/admins/admin/msp + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer1:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.pb + +export CORE_PEER_ADDRESS=${TOPOLOGY}-org3-peer2:7051 +peer channel join -b /tmp/crypto-material/artifacts/channels/mychannel.pb \ No newline at end of file diff --git a/topologies/t9/scripts/org3-register-identities-with-ca-identities.sh b/topologies/t9/scripts/org3-register-identities-with-ca-identities.sh new file mode 100755 index 0000000..1c56144 --- /dev/null +++ b/topologies/t9/scripts/org3-register-identities-with-ca-identities.sh @@ -0,0 +1,12 @@ +#!/bin/bash +set -e +set -x + +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-identities/ca-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-identities-admin +fabric-ca-client enroll -d -u https://org3-ca-identities-admin:org3-ca-identities-adminpw@0.0.0.0:7054 +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name user-org3 --id.secret org3UserPW --id.type user -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name client-org3 --id.secret org3UserPW --id.type client -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t9/scripts/org3-register-identities-with-ca-tls.sh b/topologies/t9/scripts/org3-register-identities-with-ca-tls.sh new file mode 100755 index 0000000..8b2b0f4 --- /dev/null +++ b/topologies/t9/scripts/org3-register-identities-with-ca-tls.sh @@ -0,0 +1,24 @@ +#!/bin/bash +set -e +set -x +source /tmp/scripts/find-ca-private-key.sh +# get the CA private key file as 2 of them are created: one for the CA cert 
(ca-cert.pem) and another one for CA TLS cert (tls-cert.pem) +# and we need the keyfile for the CA TLS cert +CA_PRIV_KEYFILE=`find_private_key_path /tmp/crypto-material/cas/org3/ca-tls` +echo $CA_PRIV_KEYFILE +# initial enroll of bootstrap admin will be done with the key and cert of the CA server +# after this, the admin's key and cert will be used for registrations. +export FABRIC_CA_CLIENT_TLS_CERTFILES=/tmp/crypto-material/cas/org3/ca-tls/ca-cert.pem +# copy the CA server's key file to make it easier to use it +cp $CA_PRIV_KEYFILE /tmp/crypto-material/cas/org3/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls/tls-cert.pem +export FABRIC_CA_CLIENT_HOME=/tmp/crypto-material/cas/org3/ca-tls-admin +fabric-ca-client enroll -d -u https://org3-ca-tls-admin:org3-ca-tls-adminpw@0.0.0.0:7054 +sleep 1 +mv /tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/* /tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_KEYFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/keystore/key.pem +export FABRIC_CA_CLIENT_TLS_CLIENT_CERTFILE=/tmp/crypto-material/cas/org3/ca-tls-admin/msp/signcerts/cert.pem +fabric-ca-client register -d --id.name peer1-org3 --id.secret peer1PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name peer2-org3 --id.secret peer2PW --id.type peer -u https://0.0.0.0:7054 +fabric-ca-client register -d --id.name admin-org3 --id.secret org3AdminPW --id.type admin -u https://0.0.0.0:7054 \ No newline at end of file diff --git a/topologies/t9/scripts/patch-configtx.sh b/topologies/t9/scripts/patch-configtx.sh new file mode 100755 index 0000000..a4359d7 --- /dev/null +++ b/topologies/t9/scripts/patch-configtx.sh @@ -0,0 +1,8 @@ +#!/bin/bash +set -e +set -x + +mkdir -p /tmp/crypto-material/config +cp /tmp/config/configtx.yaml 
/tmp/crypto-material/config + +cat /tmp/config/configtx.yaml | sed -r "s;<>;${TOPOLOGY};g" | tee /tmp/crypto-material/config/configtx.yaml > /dev/null \ No newline at end of file diff --git a/topologies/t9/scripts/setup-docker-images.sh b/topologies/t9/scripts/setup-docker-images.sh new file mode 100755 index 0000000..e15aa03 --- /dev/null +++ b/topologies/t9/scripts/setup-docker-images.sh @@ -0,0 +1,2 @@ +cd $1/images +docker build -t fabric-ca-openssl . \ No newline at end of file diff --git a/topologies/t9/setup-network.sh b/topologies/t9/setup-network.sh new file mode 100755 index 0000000..759c13b --- /dev/null +++ b/topologies/t9/setup-network.sh @@ -0,0 +1,168 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +# -----get the folder of where the current script is located +export HL_TOPOLOGIES_BASE_FOLDER=$( cd ${0%/*} && pwd -P ) +rm -rf ./topologies + +export CURRENT_HL_TOPOLOGY=t9 +echo "Topology ${CURRENT_HL_TOPOLOGY} Root Folder Set to: ${HL_TOPOLOGIES_BASE_FOLDER}" + +# -----delete any old or existing docker networks and clear any state data---- +echo "Deleting the old network..." +./teardown-network.sh + +# -----bring up the hyperledger admin shell terminal console, used to bootstrap the network +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-shell-cmd.yml up -d + +# -----begin by modifying the configtx yaml file with any string replacements +echo "Starting the setup of the new network..." 
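The patch-configtx.sh step above pipes the configtx.yaml template through `sed` to stamp the topology name into it before configtxgen runs. The sketch below reproduces that substitution on a throwaway file; note the placeholder is shown as `<>` in this diff (the original token may have contained an angle-bracket name that was stripped during extraction), and the sample endpoint line is illustrative only. `sed -r` is the GNU flag for extended regexes; BSD sed uses `-E`:

```sh
# Sketch of the patch-configtx.sh substitution: replace every "<>"
# placeholder (as printed in this diff) with the topology name.
# The endpoint line below is a made-up example, not the real template.
TOPOLOGY=t9
tmp=$(mktemp -d)
printf 'OrdererEndpoint: <>-org1-orderer1:7050\n' > "$tmp/configtx.yaml"
sed -r "s;<>;${TOPOLOGY};g" "$tmp/configtx.yaml" > "$tmp/configtx.patched.yaml"
cat "$tmp/configtx.patched.yaml"
# -> OrdererEndpoint: t9-org1-orderer1:7050
```

Using `;` as the sed delimiter avoids escaping the `/` characters that paths in the real template would contain.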
+ +docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/patch-configtx.sh" + +# Setup docker images for openssl +./scripts/setup-docker-images.sh ${HL_TOPOLOGIES_BASE_FOLDER} + +# ----------------------------------------------------------------------------- +# -----setup the CAs for all orgs and register with these the TLS-CA and Identities-CA users, such as admins, clients, etc...----- +# ----------------------------------------------------------------------------- +# -----org1 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org1-cas/docker-compose-org1-cas.yml up -d org1-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-register-identities-with-ca-identities.sh" + +# -----org2 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f 
${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org2-cas/docker-compose-org2-cas.yml up -d org2-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-register-identities-with-ca-identities.sh" + +# -----org3 CAs +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-tls +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-tls.sh" + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/cas/org3-cas/docker-compose-org3-cas.yml up -d org3-ca-identities +sleep 2 +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-register-identities-with-ca-identities.sh" + +# ----------------------------------------------------------------------------- +# -----setup the Peers for all orgs----- +# ----------------------------------------------------------------------------- +# -----begin by enrolling each orgs' TLS-CA and Identities-CA users +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org2-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org2-enroll-identities-with-ca-identities.sh" + +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org3-enroll-identities-with-ca-identities.sh" + +# -----org2 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f 
${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org2-peers/docker-compose-org2-peers.yml up -d org2-peer2 + +# -----org3 peers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/peers/org3-peers/docker-compose-org3-peers.yml up -d org3-peer2 + +# ----------------------------------------------------------------------------- +# -----setup CLIs for each org----- +# ----------------------------------------------------------------------------- +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org2-clis/docker-compose-org2-clis.yml up -d org2-cli-peer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/clis/org3-clis/docker-compose-org3-clis.yml up -d org3-cli-peer1 + +# ----------------------------------------------------------------------------- +# -----setup Orderers ----- +# 
----------------------------------------------------------------------------- +# -----enroll orderer users with TLS-CA and Identities-CA for org1, the orderer org +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-tls /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-tls.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org1-ca-identities /bin/sh -c "/bin/sh /tmp/scripts/org1-enroll-identities-with-ca-identities.sh" + +# -----generate the mychannel artifacts +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/channels-setup.sh" + +sleep 1 + +# -----bring up org1 orderers +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer1 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer2 + +docker compose --env-file ${HL_TOPOLOGIES_BASE_FOLDER}/.env -f ${HL_TOPOLOGIES_BASE_FOLDER}/../docker-compose-base.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/docker-compose.yml \ + -f ${HL_TOPOLOGIES_BASE_FOLDER}/containers/orderers/org1-orderers/docker-compose-org1-orderers.yml up -d org1-orderer3 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org1-join-channels.sh" + +# -----need to wait until raft leader selection is completed for the orderers----------- +sleep 4 + +# ----------------------------------------------------------------------------- +# -----setup Channels ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-join-channels.sh" +docker 
exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-join-channels.sh" + +# ----------------------------------------------------------------------------- +# -----setup Chaincode ----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-install-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-install-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/org2-approve-chaincode.sh" +docker exec ${CURRENT_HL_TOPOLOGY}-org3-cli-peer1 /bin/sh -c "/tmp/scripts/org3-approve-chaincode.sh" + +sleep 1 + +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-commit-chaincode.sh" + +sleep 1 + +# ----------------------------------------------------------------------------- +# -----test Chaincode with invoke and query----- +# ----------------------------------------------------------------------------- +docker exec ${CURRENT_HL_TOPOLOGY}-org2-cli-peer1 /bin/sh -c "/tmp/scripts/all-org-peers-execute-chaincode.sh" + +echo "******* NETWORK SETUP COMPLETED *******" diff --git a/topologies/t9/teardown-network.sh b/topologies/t9/teardown-network.sh new file mode 100755 index 0000000..1d70ed2 --- /dev/null +++ b/topologies/t9/teardown-network.sh @@ -0,0 +1,33 @@ +#!/bin/bash +# -----stop script execution on error and log commands +set -e +set -x + +export CURRENT_HL_TOPOLOGY=t9 + +# ----------------------------------------------------------------------------- +# -----remove current topology containers +# ----------------------------------------------------------------------------- +# -----the chaincode containers throw an error on removal even though they do get removed.
ignoring errors here +set +e +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-org --filter status=exited -aq | xargs -r docker rm +set -e + +# ----------------------------------------------------------------------------- +# -----clear any state data written to disk +# ----------------------------------------------------------------------------- +# -----docker exec will throw error if no running container found +if [ "$( docker container inspect -f '{{.State.Status}}' ${CURRENT_HL_TOPOLOGY}-shell-cmd )" == "running" ] +then + docker exec ${CURRENT_HL_TOPOLOGY}-shell-cmd /bin/sh -c "/bin/sh /tmp/scripts/delete-state-data.sh" +else + echo "Shell Cmd Container is not running." +fi + +# ----------------------------------------------------------------------------- +# -----remove the bootstrap shell container +# ----------------------------------------------------------------------------- +# -----remove shell command container +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=running -aq | xargs -r docker stop | xargs -r docker rm +docker ps --filter name=${CURRENT_HL_TOPOLOGY}-shell-cmd --filter status=exited -aq | xargs -r docker rm
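The teardown script above pipes `docker ps -q` output through `xargs -r` so that the stop/rm commands are skipped entirely when the name filter matches no containers (e.g. on an already-clean host). `-r` (`--no-run-if-empty`) is the GNU xargs flag; BSD xargs already behaves this way. A docker-free sketch of the same pattern, with `echo` standing in for the docker commands:

```sh
# Same pipeline shape as the teardown script, with echo in place of
# `docker stop` / `docker rm`.
ids="abc123
def456"
# non-empty input: the command runs once with all IDs as arguments
printf '%s\n' "$ids" | xargs -r echo stopping
# empty input: with -r, the command is not run at all
printf '' | xargs -r echo stopping
```

Without `-r`, GNU xargs would invoke `docker stop` once with no arguments on an empty list, which fails and would abort the `set -e` script.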