diff --git a/.github/workflows/README.md b/.github/workflows/README.md
index 785ed0ebc..6d0a10db9 100644
--- a/.github/workflows/README.md
+++ b/.github/workflows/README.md
@@ -2,7 +2,12 @@
 This page contains some documentation on the workflows for this repository.

 ## build_test
-The workflow build_test is used to build and test the code (see build_test.yaml). We are using a custom docker image for building and testing the code. You can find the image on our [Docker Hub](https://hub.docker.com/repository/docker/threefolddev/tfchain). The dockerfile build_test.Dockerfile was used to build that image. If the image no longer meets the expectations please follow these steps:
+The workflow build_test is used to build and test the code (see build_test.yaml). Notice that the binaries are cached to speed up the build process. Once the binaries are built, the pipeline runs both the unit tests and the integration tests. This can take up to 30 minutes. The pipeline runs on every commit to a PR and also when the PR has been merged into development. PRs should only be merged if the pipeline is green (all tests passed).
+
+For performance reasons we are using a self-hosted runner to run the pipeline. The runner only runs one pipeline at a time, which means that all other runs are queued. As the pipeline runs on every commit, consecutively pushed commits will also queue additional runs. We strongly advise adding `[skip ci]` to the commit message whenever a run can be skipped (e.g. `git commit -m "docs: fix typo [skip ci]"`). A pipeline can also be canceled [here](https://github.com/threefoldtech/tfchain/actions).
+
+### Docker image
+We are using a custom docker image for building and testing the code. You can find the image on our [Docker Hub](https://hub.docker.com/repository/docker/threefolddev/tfchain). The dockerfile build_test.Dockerfile was used to build that image.
+If the image no longer meets expectations, please follow these steps:
 1) Update the dockerfile as required (add what you need)
 2) Build the new image (execute the command with .github/workflows as working directory and make sure to increment the version):
diff --git a/.github/workflows/build_test.Dockerfile b/.github/workflows/build_test.Dockerfile
index 2e3df20ba..a8bf62741 100644
--- a/.github/workflows/build_test.Dockerfile
+++ b/.github/workflows/build_test.Dockerfile
@@ -1,8 +1,7 @@
 FROM ubuntu:20.04

 ENV DEBIAN_FRONTEND=noninteractive
-COPY clean_disk_space.sh clean_disk_space.sh
-RUN apt-get update && \
-    apt-get install -y \
+RUN apt update && \
+    apt install -y \
     build-essential \
     clang \
     cmake \
@@ -12,16 +11,20 @@ RUN apt-get update && \
     libclang-dev \
     lld \
     lldb \
-    python3 \
-    python3-pip \
+    software-properties-common \
     tar \
     zstd && \
+    add-apt-repository ppa:deadsnakes/ppa && \
+    apt install -y python3.10 && \
+    curl https://bootstrap.pypa.io/get-pip.py > get-pip.py && \
+    python3.10 get-pip.py && \
+    rm -rf get-pip.py && \
     curl https://sh.rustup.rs -sSf | sh -s -- -y && \
     $HOME/.cargo/bin/rustup install nightly-2022-05-11 && \
     # cleanup image
     rm -rf /var/lib/apt/lists/* && \
-    apt-get clean && \
-    apt-get autoclean && \
-    apt-get autoremove && \
+    apt -y clean && \
+    apt -y autoclean && \
+    apt -y autoremove && \
     rm -rf /tmp/*

 RUN /bin/bash
\ No newline at end of file
diff --git a/.github/workflows/build_test.yaml b/.github/workflows/build_test.yaml
index 940ddd254..57a3deca1 100644
--- a/.github/workflows/build_test.yaml
+++ b/.github/workflows/build_test.yaml
@@ -8,7 +8,7 @@ jobs:
   build-and-test:
     runs-on: [self-hosted, poc]
     container:
-      image: threefolddev/tfchain:0
+      image: threefolddev/tfchain:1
       env:
         DEBIAN_FRONTEND: noninteractive
     steps:
@@ -16,6 +16,8 @@ jobs:
       - name: Cache build
         uses: actions/cache@v3
+        timeout-minutes: 6
+        continue-on-error: true
         with:
           path: |
             ~/.cargo/bin/
           key: ${{ runner.os }}-tfchain-cargo-${{ hashFiles('**/Cargo.lock') }}
           restore-keys: ${{ runner.os }}-tfchain-cargo-
-
       - name: Build
         run: |
           cd substrate-node
@@ -39,3 +40,10 @@ jobs:
           $HOME/.cargo/bin/cargo +nightly-2022-05-11 test --no-fail-fast
           cd pallets
           $HOME/.cargo/bin/cargo +nightly-2022-05-11 test --no-fail-fast
+
+      - name: Integration tests
+        run: |
+          python3.10 -m pip install robotframework cryptography substrate-interface
+          cd substrate-node/tests
+          robot -d _output_tests/ .
+
diff --git a/substrate-node/node/src/chain_spec.rs b/substrate-node/node/src/chain_spec.rs
index 75893e1b3..5ac2d6376 100644
--- a/substrate-node/node/src/chain_spec.rs
+++ b/substrate-node/node/src/chain_spec.rs
@@ -7,7 +7,8 @@ use tfchain_runtime::opaque::SessionKeys;
 use tfchain_runtime::{
     AccountId, AuraConfig, BalancesConfig, CouncilConfig, CouncilMembershipConfig, GenesisConfig,
     GrandpaConfig, SessionConfig, Signature, SudoConfig, SystemConfig, TFTBridgeModuleConfig,
-    TFTPriceModuleConfig, TfgridModuleConfig, ValidatorSetConfig, WASM_BINARY,
+    SmartContractModuleConfig, TFTPriceModuleConfig, TfgridModuleConfig, ValidatorSetConfig,
+    WASM_BINARY,
 };
 // The URL for the telemetry server.
@@ -121,6 +122,8 @@ pub fn development_config() -> Result<ChainSpec, String> {
                 10,
                 // TFT price pallet max price
                 1000,
+                // billing frequency
+                10,
             )
         },
         // Bootnodes
@@ -209,6 +212,8 @@ pub fn local_testnet_config() -> Result<ChainSpec, String> {
                 10,
                 // TFT price pallet max price
                 1000,
+                // billing frequency
+                5,
             )
         },
         // Bootnodes
@@ -239,6 +244,7 @@ fn testnet_genesis(
     tft_price_allowed_account: AccountId,
     min_tft_price: u32,
     max_tft_price: u32,
+    billing_frequency: u64,
 ) -> GenesisConfig {
     GenesisConfig {
         system: SystemConfig {
@@ -329,5 +335,8 @@ fn testnet_genesis(
             min_tft_price,
             max_tft_price,
         },
+        smart_contract_module: SmartContractModuleConfig {
+            billing_frequency: billing_frequency,
+        },
     }
 }
diff --git a/substrate-node/pallets/pallet-smart-contract/src/lib.rs b/substrate-node/pallets/pallet-smart-contract/src/lib.rs
index 31dd248e2..fb620ae99 100644
--- a/substrate-node/pallets/pallet-smart-contract/src/lib.rs
+++ b/substrate-node/pallets/pallet-smart-contract/src/lib.rs
@@ -157,6 +157,13 @@ pub mod pallet {
     #[pallet::getter(fn pallet_version)]
     pub type PalletVersion<T> = StorageValue<_, types::StorageVersion, ValueQuery>;

+    #[pallet::type_value]
+    pub fn DefaultBillingFrequency<T: Config>() -> u64 { T::BillingFrequency::get() }
+
+    #[pallet::storage]
+    #[pallet::getter(fn billing_frequency)]
+    pub type BillingFrequency<T> = StorageValue<_, u64, ValueQuery, DefaultBillingFrequency<T>>;
+
     #[pallet::config]
     pub trait Config:
         frame_system::Config
@@ -297,6 +304,28 @@ pub mod pallet {
         SolutionProviderNotApproved,
     }

+    #[pallet::genesis_config]
+    pub struct GenesisConfig {
+        pub billing_frequency: u64,
+    }
+
+    // The default value for the genesis config type.
+    #[cfg(feature = "std")]
+    impl Default for GenesisConfig {
+        fn default() -> Self {
+            Self {
+                billing_frequency: 600,
+            }
+        }
+    }
+
+    #[pallet::genesis_build]
+    impl<T: Config> GenesisBuild<T> for GenesisConfig {
+        fn build(&self) {
+            BillingFrequency::<T>::put(self.billing_frequency);
+        }
+    }
+
     #[pallet::call]
     impl<T: Config> Pallet<T> {
         #[pallet::weight(10_000 + T::DbWeight::get().writes(1))]
@@ -901,7 +930,7 @@ impl<T: Config> Pallet<T> {
             pallet_tfgrid::Twins::<T>::get(contract.twin_id).ok_or(Error::<T>::TwinNotExists)?;
         let usable_balance = Self::get_usable_balance(&twin.account_id);

-        let mut seconds_elapsed = T::BillingFrequency::get() * 6;
+        let mut seconds_elapsed = BillingFrequency::<T>::get() * 6;
         // Calculate amount of seconds elapsed based on the contract lock struct
         let now = <pallet_timestamp::Pallet<T>>::get().saturated_into::<u64>() / 1000;
@@ -1324,7 +1353,7 @@ impl<T: Config> Pallet<T> {
         let now = <frame_system::Pallet<T>>::block_number().saturated_into::<u64>();

         // Save the contract to be billed in now + BILLING_FREQUENCY_IN_BLOCKS
-        let future_block = now + T::BillingFrequency::get();
+        let future_block = now + BillingFrequency::<T>::get();
         let mut contracts = ContractsToBillAt::<T>::get(future_block);
         contracts.push(contract_id);
         ContractsToBillAt::<T>::insert(future_block, &contracts);
diff --git a/substrate-node/pallets/pallet-tfgrid/src/lib.rs b/substrate-node/pallets/pallet-tfgrid/src/lib.rs
index ace209581..8a4e8e0d9 100644
--- a/substrate-node/pallets/pallet-tfgrid/src/lib.rs
+++ b/substrate-node/pallets/pallet-tfgrid/src/lib.rs
@@ -1134,7 +1134,7 @@ pub mod pallet {
             ensure!(
                 NodeIdByTwinID::<T>::contains_key(twin_id),
-                Error::<T>::TwinNotExists
+                Error::<T>::NodeNotExists
             );

             let node_id = NodeIdByTwinID::<T>::get(twin_id);
diff --git a/substrate-node/runtime/src/lib.rs b/substrate-node/runtime/src/lib.rs
index 67d606509..a331a338e 100644
--- a/substrate-node/runtime/src/lib.rs
+++ b/substrate-node/runtime/src/lib.rs
@@ -708,7 +708,7 @@ construct_runtime!(
         Sudo: pallet_sudo::{Pallet, Call, Config<T>, Storage, Event<T>},
         Authorship: pallet_authorship::{Pallet, Call, Storage, Inherent},
         TfgridModule: pallet_tfgrid::{Pallet, Call, Storage, Event<T>, Config<T>},
-        SmartContractModule: pallet_smart_contract::{Pallet, Call, Storage, Event<T>},
+        SmartContractModule: pallet_smart_contract::{Pallet, Call, Config, Storage, Event<T>},
         TFTBridgeModule: pallet_tft_bridge::{Pallet, Call, Config<T>, Storage, Event<T>},
         TFTPriceModule: pallet_tft_price::{Pallet, Call, Storage, Config<T>, Event<T>},
         Scheduler: pallet_scheduler::{Pallet, Call, Storage, Event<T>},
diff --git a/substrate-node/tests/SubstrateNetwork.py b/substrate-node/tests/SubstrateNetwork.py
new file mode 100644
index 000000000..a5f64465c
--- /dev/null
+++ b/substrate-node/tests/SubstrateNetwork.py
@@ -0,0 +1,185 @@
+import argparse
+from datetime import datetime
+import logging
+import os
+from os.path import dirname, isdir, isfile, join
+import re
+from shutil import rmtree
+import signal
+import subprocess
+from substrateinterface import SubstrateInterface, Keypair
+import tempfile
+import time
+
+
+SUBSTRATE_NODE_DIR = dirname(os.getcwd())
+TFCHAIN_EXE = join(SUBSTRATE_NODE_DIR, "target", "release", "tfchain")
+
+RE_NODE_STARTED = re.compile("Running JSON-RPC WS server")
+
+TIMEOUT_STARTUP_IN_SECONDS = 600
+TIMEOUT_TERMINATE_IN_SECONDS = 1
+
+OUTPUT_TESTS = os.environ.get(
+    "TEST_OUTPUT_DIR", join(os.getcwd(), "_output_tests"))
+
+PREDEFINED_KEYS = {
+    "Alice": Keypair.create_from_uri("//Alice"),
+    "Bob": Keypair.create_from_uri("//Bob"),
+    "Charlie": Keypair.create_from_uri("//Charlie"),
+    "Dave": Keypair.create_from_uri("//Dave"),
+    "Eve": Keypair.create_from_uri("//Eve"),
+    "Ferdie": Keypair.create_from_uri("//Ferdie")
+}
+
+
+def wait_till_node_ready(log_file: str, timeout_in_seconds=TIMEOUT_STARTUP_IN_SECONDS):
+    start = datetime.now()
+    while True:
+        elapsed = datetime.now() - start
+
+        if elapsed.total_seconds() >= timeout_in_seconds:
+            raise Exception(f"Timeout on starting the node! See {log_file}")
+
+        with open(log_file, "r") as fd:
+            for line in reversed(fd.readlines()):
+                if RE_NODE_STARTED.search(line):
+                    return
+
+
+def setup_offchain_workers(port: int, worker_tft: str = "Alice", worker_smct: str = "Bob"):
+    logging.info("Setting up offchain workers")
+    substrate = SubstrateInterface(
+        url=f"ws://127.0.0.1:{port}", ss58_format=42, type_registry_preset='polkadot')
+
+    insert_key_params = [
+        "tft!", f"//{worker_tft}", PREDEFINED_KEYS[worker_tft].public_key.hex()]
+    substrate.rpc_request("author_insertKey", insert_key_params)
+
+    insert_key_params = [
+        "smct", f"//{worker_smct}", PREDEFINED_KEYS[worker_smct].public_key.hex()]
+    substrate.rpc_request("author_insertKey", insert_key_params)
+
+
+def execute_command(cmd: list, log_file: str | None = None):
+    if log_file is None:
+        log_file = tempfile.mktemp()
+
+    dir_of_log_file = dirname(log_file)
+    if not isdir(dir_of_log_file):
+        os.makedirs(dir_of_log_file)
+
+    fd = open(log_file, 'w')
+    logging.info("Running command\n\t> %s\nand saving output in file %s",
+                 " ".join([f"{arg}" for arg in cmd]), log_file)
+    p = subprocess.Popen(cmd, stdout=fd, stderr=fd)
+
+    return p, fd
+
+
+def run_node(log_file: str, base_path: str, predefined_account: str, port: int, ws_port: int, rpc_port: int, node_key: str | None = None, bootnodes: str | None = None):
+    logging.info("Starting node with logfile %s", log_file)
+
+    if not isfile(TFCHAIN_EXE):
+        raise Exception(
+            f"Executable {TFCHAIN_EXE} doesn't exist! Did you build the code?")
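+    # Note: the flags below follow the standard Substrate CLI. Each node runs
+    # as one of the well-known dev accounts (--alice, --bob, ...) and exposes
+    # unsafe RPC methods because the tests insert the offchain worker keys
+    # through the author_insertKey RPC call (see setup_offchain_workers above).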
+    cmd = [TFCHAIN_EXE,
+           "--base-path", f"{base_path}",
+           "--chain", "local",
+           f"--{predefined_account.lower()}",
+           "--port", f"{port}",
+           "--ws-port", f"{ws_port}",
+           "--rpc-port", f"{rpc_port}",
+           "--telemetry-url", "wss://telemetry.polkadot.io/submit/ 0",
+           "--validator",
+           "--rpc-methods", "Unsafe",
+           "--rpc-cors", "all"
+           ]
+
+    if node_key is not None:
+        cmd.extend(["--node-key", f"{node_key}"])
+
+    if bootnodes is not None:
+        cmd.extend(["--bootnodes", f"{bootnodes}"])
+
+    rmtree(base_path, ignore_errors=True)
+
+    return execute_command(cmd, log_file)
+
+
+class SubstrateNetwork:
+    def __init__(self):
+        self._nodes = {}
+
+    def __del__(self):
+        if len(self._nodes) > 0:
+            self.tear_down_multi_node_network()
+
+    def setup_multi_node_network(self, log_name: str = "", amt: int = 2):
+        assert amt >= 2, "at least 2 nodes are required for a multi node network"
+        assert amt <= len(PREDEFINED_KEYS), "maximum amount of nodes reached"
+
+        output_dir_network = join(OUTPUT_TESTS, log_name)
+
+        rmtree(output_dir_network, ignore_errors=True)
+
+        port = 30333
+        ws_port = 9945
+        rpc_port = 9933
+        log_file_alice = join(output_dir_network, "node_alice.log")
+        self._nodes["alice"] = run_node(log_file_alice, "/tmp/alice", "alice", port, ws_port,
+                                        rpc_port, node_key="0000000000000000000000000000000000000000000000000000000000000001")
+        wait_till_node_ready(log_file_alice)
+        setup_offchain_workers(ws_port)
+
+        log_file = ""
+        for x in range(1, amt):
+            port += 1
+            ws_port += 1
+            rpc_port += 1
+            name = list(PREDEFINED_KEYS.keys())[x].lower()
+            log_file = join(output_dir_network, f"node_{name}.log")
+            self._nodes[name] = run_node(log_file, f"/tmp/{name}", name, port, ws_port, rpc_port, node_key=None,
+                                         bootnodes="/ip4/127.0.0.1/tcp/30333/p2p/12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp")
+            wait_till_node_ready(log_file)
+            setup_offchain_workers(ws_port)
+
+        logging.info("Network is up and running.")
+
+    def tear_down_multi_node_network(self):
+        for (account, (process, log_file)) in self._nodes.items():
+            logging.info("Terminating node %s", account)
+            process.terminate()
+            process.wait(timeout=TIMEOUT_TERMINATE_IN_SECONDS)
+            process.kill()
+            logging.info("Node for %s has terminated.", account)
+            if log_file is not None:
+                log_file.close()
+        self._nodes = {}
+        logging.info("Teardown network completed!")
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="This tool allows you to start a multi node network.")
+
+    parser.add_argument("--amount", required=False, type=int, default=2,
+                        help=f"The amount of nodes to start. Should be minimum 2 and maximum {len(PREDEFINED_KEYS)}")
+    args = parser.parse_args()
+
+    logging.basicConfig(
+        format="%(asctime)s %(levelname)s %(message)s", level=logging.DEBUG)
+
+    network = SubstrateNetwork()
+    network.setup_multi_node_network(amt=args.amount)
+
+    def handler(signum, frame):
+        network.tear_down_multi_node_network()
+        exit(0)
+
+    signal.signal(signal.SIGINT, handler)
+    logging.info("Press Ctrl-c to teardown the network.")
+    while True:
+        time.sleep(0.1)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/substrate-node/tests/TfChainClient.py b/substrate-node/tests/TfChainClient.py
new file mode 100644
index 000000000..3ff42fa5e
--- /dev/null
+++ b/substrate-node/tests/TfChainClient.py
@@ -0,0 +1,791 @@
+from datetime import datetime
+import json
+import logging
+from random import randbytes
+import time
+
+from SubstrateNetwork import PREDEFINED_KEYS
+from substrateinterface import SubstrateInterface, Keypair
+
+GIGABYTE = 1024*1024*1024
+
+TIMEOUT_WAIT_FOR_BLOCK = 6
+
+DEFAULT_SIGNER = "Alice"
+DEFAULT_PORT = 9945
+
+FARM_CERTIFICATION_NOTCERTIFIED = "NotCertified"
+FARM_CERTIFICATION_GOLD = "Gold"
+FARM_CERTIFICATION_TYPES = [
+    FARM_CERTIFICATION_NOTCERTIFIED, FARM_CERTIFICATION_GOLD]
+
+NODE_CERTIFICATION_DIY = "Diy"
+NODE_CERTIFICATION_CERTIFIED = "Certified"
+NODE_CERTIFICATION_TYPES = [
+    NODE_CERTIFICATION_DIY, NODE_CERTIFICATION_CERTIFIED]
+
+UNIT_BYTES = "Bytes"
+UNIT_KILOBYTES = "Kilobytes"
+UNIT_MEGABYTES = "Megabytes"
+UNIT_GIGABYTES = "Gigabytes"
+UNIT_TERRABYTES = "Terrabytes"
+UNIT_TYPES = [UNIT_BYTES, UNIT_KILOBYTES,
+              UNIT_MEGABYTES, UNIT_GIGABYTES, UNIT_TERRABYTES]
+
+
+class TfChainClient:
+    def __init__(self):
+        self._setup()
+
+    def _setup(self):
+        self._wait_for_finalization = False
+        self._wait_for_inclusion = True
+        self._pallets_offchain_workers = ["tft!", "smct"]
+
+    def _connect_to_server(self, url: str):
+        return SubstrateInterface(url=url, ss58_format=42, type_registry_preset='polkadot')
+
+    def _check_events(self, events: list = [], expected_events: list = []):
+        logging.info("Events: %s", json.dumps(events))
+
+        # This was a sudo call that failed
+        for event in events:
+            if event["event_id"] == "Sudid" and "Err" in event["attributes"]:
+                raise Exception(event["attributes"])
+
+        for expected_event in expected_events:
+            check = False
+            for event in events:
+                check = all(item in event.keys(
+                ) and event[item] == expected_event[item] for item in expected_event.keys())
+                if check:
+                    logging.info("Found event %s", expected_event)
+                    break
+            if not check:
+                raise Exception(
+                    f"Expected the event {expected_event} in {events}, no match found!")
+
+    def _sign_extrinsic_submit_check_response(self, substrate, call, who: str, expected_events: list = []):
+        _who = who.title()
+        if _who == "Sudo":
+            call = substrate.compose_call("Sudo", "sudo", {
+                "call": call
+            })
+            _who = "Alice"
+        else:
+            assert _who in PREDEFINED_KEYS.keys(
+            ), f"{who} is not a predefined account, use one of {PREDEFINED_KEYS.keys()}"
+
+        logging.info("Sending signed transaction: %s", call)
+        signed_call = substrate.create_signed_extrinsic(
+            call, PREDEFINED_KEYS[_who])
+
+        response = substrate.submit_extrinsic(
+            signed_call, wait_for_finalization=self._wait_for_finalization,
+            wait_for_inclusion=self._wait_for_inclusion)
+        logging.info("Response is %s", response)
+        if response.error_message:
+            raise Exception(response.error_message)
+
+        self._check_events([event.value["event"]
+                            for event in response.triggered_events], expected_events)
+
+    def setup_predefined_account(self, who: str, port: int = DEFAULT_PORT):
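+        # A fresh predefined account (Alice, Bob, ...) cannot own farms or
+        # nodes yet: it must first accept the terms and conditions and create
+        # a twin, which the calls below take care of.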
logging.info("Setting up predefined account %s (%s)", who, + PREDEFINED_KEYS[who].ss58_address) + self.user_accept_tc(port=port, who=who) + self.create_twin(port=port, who=who) + + def user_accept_tc(self, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "user_accept_tc", + { + "document_link": "garbage", + "document_hash": "garbage" + }) + self._sign_extrinsic_submit_check_response(substrate, call, who) + + def create_twin(self, ip: str = "::1", port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call( + "TfgridModule", "create_twin", {"ip": ip}) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "TwinStored" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def update_twin(self, ip: str = "::1", port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "update_twin", { + "ip": ip}) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "TwinUpdated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def delete_twin(self, twin_id: int = 1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "delete_twin", { + "twin_id": twin_id}) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "TwinDeleted" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def get_twin(self, id: int = 1, port: int = DEFAULT_PORT): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + q = substrate.query("TfgridModule", "Twins", [id]) + logging.info(q.value) + return q.value + + def balance_data(self, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + assert who in PREDEFINED_KEYS.keys( + ), f"{who} is not a predefined account" + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + account_info = substrate.query( + "System", "Account", [PREDEFINED_KEYS[who].ss58_address]) + assert account_info is not None, f"Failed fetching the account data for {who} ({PREDEFINED_KEYS[who].ss58_address})" + assert "data" in account_info, f"Could not find balance data in the account info {account_info}" + + logging.info(account_info) + return account_info["data"].value + + def get_block_number(self, port: int = DEFAULT_PORT): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + q = substrate.query("System", "Number", []) + return q.value + + def wait_x_blocks(self, x: int = 1, port: int = DEFAULT_PORT): + block_to_wait_for = self.get_block_number(port=port) + x + self.wait_till_block(block_to_wait_for, port=port) + + def wait_till_block(self, x: int = 1, port: int = DEFAULT_PORT): + start_time = datetime.now() + current_block = self.get_block_number(port=port) + logging.info("Waiting till block %s. 
Current is %s", x, current_block) + timeout_for_x_blocks = TIMEOUT_WAIT_FOR_BLOCK * (x-current_block+1) + while self.get_block_number(port=port) < x: + elapsed_time = datetime.now() - start_time + if elapsed_time.total_seconds() >= timeout_for_x_blocks: + raise Exception(f"Timeout on waiting for {x} blocks") + time.sleep(6) + + def create_farm(self, name: str = "myfarm", public_ips: list = [], port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "create_farm", + { + "name": f"{name}", + "public_ips": public_ips + }) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmStored" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def update_farm(self, id: int = 1, name: str = "", pricing_policy_id: int = 1, port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "update_farm", + { + "id": id, + "name": f"{name}", + "pricing_policy_id": pricing_policy_id + }) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmUpdated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def get_farm(self, id: int = 1, port: int = DEFAULT_PORT): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + q = substrate.query("TfgridModule", "Farms", [id]) + return q.value + + def add_farm_ip(self, id: int = 1, ip: str = "", gateway: str = "", port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "add_farm_ip", + { + "id": id, + "ip": ip, + "gateway": gateway + }) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmUpdated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def remove_farm_ip(self, id: int = 1, ip: str = "", port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "remove_farm_ip", + { + "id": id, + "ip": ip + }) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmUpdated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def create_node(self, farm_id: int = 1, hru: int = 0, sru: int = 0, cru: int = 0, mru: int = 0, + longitude: str = "", latitude: str = "", country: str = "", city: str = "", interfaces: list = [], + secure_boot: bool = False, virtualized: bool = False, serial_number: str = "", port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "farm_id": farm_id, + "resources": { + "hru": hru * GIGABYTE, + "sru": sru * GIGABYTE, + "cru": cru, + "mru": mru * GIGABYTE + }, + "location": { + "longitude": f"{longitude}", + "latitude": f"{latitude}" + }, + "country": country, + "city": city, + "interfaces": interfaces, + "secure_boot": secure_boot, + "virtualized": virtualized, + "serial_number": serial_number + } + call = substrate.compose_call( + "TfgridModule", "create_node", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "NodeStored" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, 
expected_events=expected_events) + + def update_node(self, node_id: int = 1, farm_id: int = 1, hru: int = 0, sru: int = 0, cru: int = 0, mru: int = 0, + longitude: str = "", latitude: str = "", country: str = "", city: str = "", + secure_boot: bool = False, virtualized: bool = False, serial_number: str = "", port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "node_id": node_id, + "farm_id": farm_id, + "resources": { + "hru": hru * GIGABYTE, + "sru": sru * GIGABYTE, + "cru": cru, + "mru": mru * GIGABYTE + }, + "location": { + "longitude": f"{longitude}", + "latitude": f"{latitude}" + }, + "country": country, + "city": city, + "interfaces": [], + "secure_boot": secure_boot, + "virtualized": virtualized, + "serial_number": serial_number + } + call = substrate.compose_call( + "TfgridModule", "update_node", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "NodeUpdated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def add_node_public_config(self, farm_id: int = 1, node_id: int = 1, ipv4: str = "", gw4: str = "", + ipv6: str | None = None, gw6: str | None = None, domain: str | None = None, + port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + ip4_config = { + "ip": ipv4, + "gw": gw4 + } + ip6_config = None if ipv6 is None and gw6 is None else { + "ip": ipv6, + "gw": gw6 + } + public_config = { + "ip4": ip4_config, + "ip6": ip6_config, + "domain": domain + } + call = substrate.compose_call("TfgridModule", "add_node_public_config", + { + "farm_id": farm_id, + "node_id": node_id, + "public_config": public_config + }) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "NodePublicConfigStored" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def delete_node(self, id: int = 1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "delete_node", { + "id": id}) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "NodeDeleted" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def get_node(self, id: int = 1, port: int = DEFAULT_PORT): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + q = substrate.query("TfgridModule", "Nodes", [id]) + return q.value + + def create_node_contract(self, node_id: int = 1, deployment_data: bytes = randbytes(32), + deployment_hash: bytes = randbytes(32), public_ips: int = 0, + solution_provider_id: int | None = None, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "node_id": node_id, + "deployment_data": deployment_data, + "deployment_hash": deployment_hash, + "public_ips": public_ips, + "solution_provider_id": solution_provider_id + } + call = substrate.compose_call( + "SmartContractModule", "create_node_contract", params) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": "ContractCreated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def update_node_contract(self, contract_id: int = 1, deployment_data: bytes = randbytes(32), + deployment_hash: bytes = randbytes(32), port: 
int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("SmartContractModule", "update_node_contract", { + "contract_id": contract_id, + "deployment_data": deployment_data, + "deployment_hash": deployment_hash + }) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": "ContractUpdated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def create_rent_contract(self, node_id: int = 1, solution_provider_id: int | None = None, port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("SmartContractModule", "create_rent_contract", + { + "node_id": node_id, + "solution_provider_id": solution_provider_id + }) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": "ContractCreated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def create_name_contract(self, name: str = "", port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("SmartContractModule", "create_name_contract", + { + "name": name + }) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": "ContractCreated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def _cancel_contract(self, contract_id: int = 1, type: str = "Name", port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("SmartContractModule", "cancel_contract", + { + "contract_id": contract_id + }) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": f"{type}ContractCanceled" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def cancel_name_contract(self, contract_id: int = 1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + self._cancel_contract(contract_id=contract_id, + type="Name", port=port, who=who) + + def cancel_rent_contract(self, contract_id: int = 1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + self._cancel_contract(contract_id=contract_id, + type="Rent", port=port, who=who) + + def cancel_node_contract(self, contract_id: int = 1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + self._cancel_contract(contract_id=contract_id, + type="Node", port=port, who=who) + + def report_contract_resources(self, contract_id: int = 1, hru: int = 0, sru: int = 0, cru: int = 0, mru: int = 0, + port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "contract_resources": [{ + "contract_id": contract_id, + "used": { + "hru": hru * GIGABYTE, + "sru": sru * GIGABYTE, + "cru": cru, + "mru": mru * GIGABYTE + } + }] + } + call = substrate.compose_call( + "SmartContractModule", "report_contract_resources", params) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": "UpdatedUsedResources" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def add_nru_reports(self, contract_id: int = 1, nru: int = 1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + block_number = self.get_block_number(port=port) + 
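+        # Test-only simplification (assumption): the timestamp and window of
+        # the consumption report are derived from the current block number,
+        # on the premise that one block is produced roughly every 6 seconds.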
substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + reports = [{ + "contract_id": contract_id, + "nru": nru * GIGABYTE, + "timestamp": block_number, + "window": 6 * block_number + }] + call = substrate.compose_call( + "SmartContractModule", "add_nru_reports", {"reports": reports}) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": "NruConsumptionReportReceived" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def add_stellar_payout_v2address(self, farm_id: int = 1, stellar_address: str = "", port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "farm_id": farm_id, + "stellar_address": stellar_address + } + call = substrate.compose_call( + "TfgridModule", "add_stellar_payout_v2address", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmPayoutV2AddressRegistered" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def get_farm_payout_v2address(self, farm_id: int = 1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + q = substrate.query( + "TfgridModule", "FarmPayoutV2AddressByFarmID", [farm_id]) + return q.value + + def set_farm_certification(self, farm_id: int = 1, certification: str = FARM_CERTIFICATION_NOTCERTIFIED, + port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "farm_id": farm_id, + "certification": f"{certification}" + } + call = substrate.compose_call( + "TfgridModule", "set_farm_certification", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmCertificationSet" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def set_node_certification(self, node_id: int = 1, certification: str = NODE_CERTIFICATION_DIY, port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "node_id": node_id, + "node_certification": f"{certification}" + } + call = substrate.compose_call( + "TfgridModule", "set_node_certification", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "NodeCertificationSet" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def add_node_certifier(self, account_name: str = "", port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "add_node_certifier", { + "who": f"{PREDEFINED_KEYS[account_name].ss58_address}"}) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "NodeCertifierAdded" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def remove_node_certifier(self, account_name: str = "", port: int = DEFAULT_PORT, who=DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + call = substrate.compose_call("TfgridModule", "remove_node_certifier", { + "who": f"{PREDEFINED_KEYS[account_name].ss58_address}"}) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "NodeCertifierRemoved" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, 
+            expected_events=expected_events)
+
+    def report_uptime(self, uptime: int, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER):
+        substrate = self._connect_to_server(f"ws://127.0.0.1:{port}")
+
+        call = substrate.compose_call(
+            "TfgridModule", "report_uptime", {"uptime": uptime})
+        expected_events = [{
+            "module_id": "TfgridModule",
+            "event_id": "NodeUptimeReported"
+        }]
+        self._sign_extrinsic_submit_check_response(
+            substrate, call, who, expected_events=expected_events)
+
+    def create_pricing_policy(self, name: str = "", unit: str = UNIT_GIGABYTES, su: int = 0, cu: int = 0, nu: int = 0,
+                              ipu: int = 0, unique_name: str = "", domain_name: str = "",
+                              foundation_account: str = "", certified_sales_account: str = "",
+                              discount_for_dedication_nodes: int = 0, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER):
+        substrate = self._connect_to_server(f"ws://127.0.0.1:{port}")
+
+        params = {
+            "name": f"{name}",
+            "su": {"value": su, "unit": unit},
+            "cu": {"value": cu, "unit": unit},
+            "nu": {"value": nu, "unit": unit},
+            "ipu": {"value": ipu, "unit": unit},
+            "unique_name": {"value": unique_name, "unit": unit},
+            "domain_name": {"value": domain_name, "unit": unit},
+            "foundation_account": f"{PREDEFINED_KEYS[foundation_account].ss58_address}",
+            "certified_sales_account": f"{PREDEFINED_KEYS[certified_sales_account].ss58_address}",
+            "discount_for_dedication_nodes": discount_for_dedication_nodes
+        }
+        call = substrate.compose_call(
+            "TfgridModule", "create_pricing_policy", params)
+        expected_events = [{
+            "module_id": "TfgridModule",
+            "event_id": "PricingPolicyStored"
+        }]
+        self._sign_extrinsic_submit_check_response(
+            substrate, call, who, expected_events=expected_events)
+
+    def update_pricing_policy(self, id: int = 1, name: str = "", unit: str = UNIT_GIGABYTES, su: int = 0, cu: int = 0,
+                              nu: int = 0, ipu: int = 0, unique_name: str = "", domain_name: str = "",
+                              foundation_account: str = "", certified_sales_account: str = "",
+                              discount_for_dedication_nodes: int = 0, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER):
+        substrate = self._connect_to_server(f"ws://127.0.0.1:{port}")
+
+        params = {
+            "id": id,
+            "name": f"{name}",
+            "su": {"value": su, "unit": unit},
+            "cu": {"value": cu, "unit": unit},
+            "nu": {"value": nu, "unit": unit},
+            "ipu": {"value": ipu, "unit": unit},
+            "unique_name": {"value": unique_name, "unit": unit},
+            "domain_name": {"value": domain_name, "unit": unit},
+            "foundation_account": f"{PREDEFINED_KEYS[foundation_account].ss58_address}",
+            "certified_sales_account": f"{PREDEFINED_KEYS[certified_sales_account].ss58_address}",
+            "discount_for_dedication_nodes": discount_for_dedication_nodes
+        }
+        call = substrate.compose_call(
+            "TfgridModule", "update_pricing_policy", params)
+        expected_events = [{
+            "module_id": "TfgridModule",
+            "event_id": "PricingPolicyStored"
+        }]
+        self._sign_extrinsic_submit_check_response(
+            substrate, call, who, expected_events=expected_events)
+
+    def get_pricing_policy(self, id: int = 1, port: int = DEFAULT_PORT):
+        substrate = self._connect_to_server(f"ws://127.0.0.1:{port}")
+
+        q = substrate.query("TfgridModule", "PricingPolicies", [id])
+        return q.value
+
+    def create_farming_policy(self, name: str = "", su: int = 0, cu: int = 0, nu: int = 0, ipv4: int = 0,
+                              minimal_uptime: int = 0, policy_end: int = 0, immutable: bool = False,
+                              default: bool = False, node_certification: str = NODE_CERTIFICATION_DIY,
+                              farm_certification: str = FARM_CERTIFICATION_NOTCERTIFIED, port: int = DEFAULT_PORT,
+                              who: str = DEFAULT_SIGNER):
+        substrate =
self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "name": f"{name}", + "su": su, + "cu": cu, + "nu": nu, + "ipv4": ipv4, + "minimal_uptime": minimal_uptime, + "policy_end": policy_end, + "immutable": immutable, + "default": default, + "node_certification": f"{node_certification}", + "farm_certification": f"{farm_certification}" + } + call = substrate.compose_call( + "TfgridModule", "create_farming_policy", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmingPolicyStored" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def update_farming_policy(self, id: int = 1, name: str = "", su: int = 0, cu: int = 0, nu: int = 0, ipv4: int = 0, + minimal_uptime: int = 0, policy_end: int = 0, immutable: bool = False, default: bool = False, + node_certification: str = NODE_CERTIFICATION_DIY, + farm_certification: str = FARM_CERTIFICATION_NOTCERTIFIED, port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + params = { + "id": id, + "name": f"{name}", + "su": su, + "cu": cu, + "nu": nu, + "ipv4": ipv4, + "minimal_uptime": minimal_uptime, + "policy_end": policy_end, + "immutable": immutable, + "default": default, + "node_certification": f"{node_certification}", + "farm_certification": f"{farm_certification}" + } + call = substrate.compose_call( + "TfgridModule", "update_farming_policy", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmingPolicyUpdated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def get_farming_policy(self, id: int = 1, port: int = DEFAULT_PORT): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + q = substrate.query("TfgridModule", "FarmingPoliciesMap", [id]) + return q.value + + def attach_policy_to_farm(self, farm_id: int = 1, farming_policy_id: int | None = None, cu: int | None = None, + su: int | None = None, end: int | None = None, node_count: int | None = 0, + node_certification: bool = False, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + limits = { + "farming_policy_id": farming_policy_id, + "cu": cu, + "su": su, + "end": end, + "node_count": node_count, + "node_certification": node_certification + } + params = { + "farm_id": farm_id, + "limits": limits if farming_policy_id is not None else None + } + call = substrate.compose_call( + "TfgridModule", "attach_policy_to_farm", params) + expected_events = [{ + "module_id": "TfgridModule", + "event_id": "FarmingPolicySet" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def create_solution_provider(self, description: str = "", link: str = "", providers: dict = {}, port: int = DEFAULT_PORT, + who: str = DEFAULT_SIGNER): + substrate = self._connect_to_server(f"ws://127.0.0.1:{port}") + + providers = [{"who": PREDEFINED_KEYS[who].ss58_address, + "take": take} for who, take in providers.items()] + call = substrate.compose_call("SmartContractModule", "create_solution_provider", + { + "description": f"{description}", + "link": f"{link}", + "providers": providers + }) + expected_events = [{ + "module_id": "SmartContractModule", + "event_id": "SolutionProviderCreated" + }] + self._sign_extrinsic_submit_check_response( + substrate, call, who, expected_events=expected_events) + + def get_solution_provider(self, id: int = 
1, port: int = DEFAULT_PORT, who: str = DEFAULT_SIGNER):
+        substrate = self._connect_to_server(f"ws://127.0.0.1:{port}")
+
+        q = substrate.query("SmartContractModule", "SolutionProviders", [id])
+        return q.value
+
+    def approve_solution_provider(self, solution_provider_id: int = 1, approve: bool = True, port: int = DEFAULT_PORT,
+                                  who: str = DEFAULT_SIGNER):
+        substrate = self._connect_to_server(f"ws://127.0.0.1:{port}")
+
+        call = substrate.compose_call("SmartContractModule", "approve_solution_provider",
+                                      {
+                                          "solution_provider_id": solution_provider_id,
+                                          "approve": approve
+                                      })
+        expected_events = [{
+            "module_id": "SmartContractModule",
+            "event_id": "SolutionProviderApproved"
+        }]
+        self._sign_extrinsic_submit_check_response(
+            substrate, call, who, expected_events=expected_events)
diff --git a/substrate-node/tests/integration_tests.robot b/substrate-node/tests/integration_tests.robot
new file mode 100644
index 000000000..fe571a4a9
--- /dev/null
+++ b/substrate-node/tests/integration_tests.robot
@@ -0,0 +1,488 @@
+*** Settings ***
+Documentation    Suite for integration tests on tfchain
+Library          Collections
+Library          SubstrateNetwork.py
+Library          TfChainClient.py
+Library          OperatingSystem
+
+
+*** Keywords ***
+Public Ips Should Contain Ip
+    [Arguments]    ${list}    ${ip}
+
+    FOR    ${pub_ip_config}    IN    @{list}
+        IF    "${pub_ip_config}[ip]" == "${ip}"
+            Return From Keyword
+        END
+    END
+
+    Fail    msg=The list of public ips ${list} does not contain ip ${ip}
+
+Public Ips Should Not Contain Ip
+    [Arguments]    ${list}    ${ip}
+    ${status} =    Run Keyword And Return Status    Public Ips Should Contain Ip    ${list}    ${ip}
+
+    Run Keyword If    ${status}    Fail    The list of public ips ${list} contains the ip ${ip}, it shouldn't!
+
+Setup Network And Create Farm
+    [Documentation]    Helper keyword to quickly set up a network with 2 nodes and create a farm using Alice's key
+    Setup Predefined Account    who=Alice
+    Setup Predefined Account    who=Bob
+    Create Farm    name=alice_farm
+
+Setup Network And Create Node
+    [Documentation]    Helper keyword to quickly set up a network with 2 nodes and create both a farm and a node using Alice's key
+    Setup Network And Create Farm
+    Create Node    farm_id=${1}    hru=${1024}    sru=${512}    cru=${8}    mru=${16}    longitude=2.17403    latitude=41.40338    country=Belgium    city=Ghent
+
+Create Interface
+    [Arguments]    ${name}    ${mac}    ${ips}
+    ${dict} =    Create Dictionary    name    ${name}    mac    ${mac}    ips    ${ips}
+    [Return]    ${dict}
+
+Ensure Account Balance Increased
+    [Arguments]    ${balance_before}    ${balance_after}
+    IF    ${balance_before}[free] >= ${balance_after}[free]-${balance_after}[fee_frozen]
+        Fail    msg=It looks like the billing did not take place.
+    END
+
+Ensure Account Balance Decreased
+    [Arguments]    ${balance_before}    ${balance_after}
+    IF    ${balance_before}[free] <= ${balance_after}[free]-${balance_after}[fee_frozen]
+        Fail    msg=It looks like the billing did not take place.
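+        # Note: both balance keywords compare the spendable balance, i.e. the
+        # free balance minus fee_frozen, from before and after billing.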
+ END + + + +*** Test Cases *** +Test Start And Stop Network + [Documentation] Starts and immediately stops the network (4 nodes) once correctly started + Setup Multi Node Network log_name=test_start_stop_network amt=${4} + + Tear Down Multi Node Network + +Test Create Update Delete Twin + [Documentation] Testing api calls (create, update, delete) for managing twins + Setup Multi Node Network log_name=test_create_update_delete_twin + + User Accept Tc + + Create Twin ip=::1 + ${twin} = Get Twin ${1} + Should Not Be Equal ${twin} ${None} + Should Be Equal ${twin}[ip] ::1 + + Update Twin ip=0000:0000:0000:0000:0000:0000:0000:0001 + ${twin} = Get Twin ${1} + Should Not Be Equal ${twin} ${None} + Should Be Equal ${twin}[ip] 0000:0000:0000:0000:0000:0000:0000:0001 + + Delete Twin ${1} + + ${twin} = Get Twin ${1} + Should Be Equal ${twin} ${None} + + Tear Down Multi Node Network + +Test Create Update Farm + [Documentation] Testing api calls (create, update) for managing farms + Setup Multi Node Network log_name=test_create_update_farm + + Setup Predefined Account who=Alice + + Create Farm name=this_is_the_name_of_the_farm + ${farm_before} = Get Farm ${1} + Should Not Be Equal ${farm_before} ${None} + Should Be Equal ${farm_before}[name] this_is_the_name_of_the_farm + + Update Farm id=${1} name=name_change pricing_policy_id=1 + ${farm_after} = Get Farm ${1} + Should Not Be Equal ${farm_after} ${None} + Should Be Equal ${farm_after}[name] name_change + + Tear Down Multi Node Network + +Test Add Stellar Payout V2ADDRESS + [Documentation] Testing adding a stellar payout address + Setup Multi Node Network log_name=test_add_stellar_address + + Setup Network And Create Farm + + Add Stellar Payout V2address farm_id=${1} stellar_address=address + ${payout_address} = Get Farm Payout V2address farm_id=${1} + Should Be Equal ${payout_address} address + + Add Stellar Payout V2address farm_id=${1} stellar_address=changed address + ${payout_address} = Get Farm Payout V2address farm_id=${1} + Should Be Equal ${payout_address} changed address + + Run Keyword And Expect Error *'CannotUpdateFarmWrongTwin'* + ... Add Stellar Payout V2address farm_id=${1} who=Bob + + Tear Down Multi Node Network + +Test Set Farm Certification + [Documentation] Testing setting a farm certification + Setup Multi Node Network log_name=test_farm_certification + + Setup Network And Create Farm + + # only possible with sudo + Run Keyword And Expect Error *'BadOrigin'* + ... Set Farm Certification farm_id=${1} certification=Gold + + Set Farm Certification farm_id=${1} certification=Gold who=sudo + + Tear Down Multi Node Network + +Test Set Node Certification + [Documentation] Testing setting a node certification + Setup Multi Node Network log_name=test_node_certification + + Setup Network And Create Node + + # Make Alice a node certifier + Add Node Certifier account_name=Alice who=Sudo + + Set Node Certification node_id=${1} certification=Certified + + Remove Node Certifier account_name=Alice who=Sudo + + # Alice is no longer able to set node certification + Run Keyword And Expect Error *'NotAllowedToCertifyNode'* + ... 
Set Node Certification    node_id=${1}    certification=Certified
+
+    Tear Down Multi Node Network
+
+Test Add Remove Public Ips
+    [Documentation]    Testing api calls (adding, removing) for managing public ips
+    Setup Multi Node Network    log_name=test_add_remove_pub_ips
+
+    Setup Network And Create Farm
+
+    # Add an ip to the farm
+    Add Farm Ip    id=${1}    ip=185.206.122.125/16    gateway=185.206.122.1
+    ${farm} =    Get Farm    ${1}
+    Should Not Be Equal    ${farm}    ${None}
+    Public Ips Should Contain Ip    ${farm}[public_ips]    185.206.122.125/16
+
+    # Remove the ip that we added
+    Remove Farm Ip    id=${1}    ip=185.206.122.125/16
+    ${farm} =    Get Farm    ${1}
+    Should Not Be Equal    ${farm}    ${None}
+    Public Ips Should Not Contain Ip    ${farm}[public_ips]    185.206.122.125/16
+
+    Tear Down Multi Node Network
+
+Test Add Public Ips: Failure InvalidPublicIP
+    [Documentation]    Testing adding an invalid public IP
+    Setup Multi Node Network    log_name=test_add_pub_ips_failure_invalidpubip
+
+    Setup Network And Create Farm
+    # Add an ip in an invalid format
+    Run Keyword And Expect Error    *'InvalidPublicIP'*
+    ...    Add Farm Ip    id=${1}    ip="185.206.122.125"    gateway=185.206.122.1
+
+    Tear Down Multi Node Network
+
+Test Create Update Delete Node
+    [Documentation]    Testing api calls (create, update, delete) for managing nodes
+    Setup Multi Node Network    log_name=test_create_update_delete_node    amt=${3}
+
+    Setup Network And Create Farm
+    Create Node    farm_id=${1}    hru=${1024}    sru=${512}    cru=${8}    mru=${16}    longitude=2.17403    latitude=41.40338    country=Belgium    city=Ghent
+    ${node} =    Get Node    ${1}
+    Should Not Be Equal    ${node}    ${None}
+    Should Be Equal    ${node}[city]    Ghent
+
+    Update Node    node_id=${1}    farm_id=${1}    hru=${1024}    sru=${512}    cru=${8}    mru=${16}    longitude=2.17403    latitude=41.40338    country=Belgium    city=Celles
+    ${node} =    Get Node    ${1}
+    Should Not Be Equal    ${node}    ${None}
+    Should Be Equal    ${node}[city]    Celles
+
+    Delete Node    ${1}
+    ${node} =    Get Node    ${1}
+    Should Be Equal    ${node}    ${None}
+
+    Tear Down Multi Node Network
+
+Test Reporting Uptime
+    [Documentation]    Testing reporting uptime, including a failed attempt to report uptime to a non-existing node
+    Setup Multi Node Network    log_name=test_reporting_uptime
+
+    Run Keyword And Expect Error    *'TwinNotExists'*
+    ...    Report Uptime    ${500}
+
+    Setup Predefined Account    who=Alice
+
+    Run Keyword And Expect Error    *'NodeNotExists'*
+    ...
Report Uptime ${500} + + Create Farm name=alice_farm + Create Node farm_id=${1} hru=${1024} sru=${512} cru=${8} mru=${16} longitude=2.17403 latitude=41.40338 country=Belgium city=Ghent + + Report Uptime ${500} + + Tear Down Multi Node Network + +Test Add Public Config On Node: Success + [Documentation] Testing adding a public config on a node + Setup Multi Node Network log_name=test_add_pub_config_node + + Setup Network And Create Node + + Add Node Public Config farm_id=${1} node_id=${1} ipv4=185.206.122.33/24 gw4=185.206.122.1 ipv6=2a10:b600:1::0cc4:7a30:65b5/64 gw6=2a10:b600:1::1 domain=some-domain + ${node} = Get Node ${1} + Should Not Be Equal ${node} ${None} + Should Not Be Equal ${node}[public_config] ${None} + Should Not Be Equal ${node}[public_config][ip4] ${None} + Should Be Equal ${node}[public_config][ip4][ip] 185.206.122.33/24 + Should Be Equal ${node}[public_config][ip4][gw] 185.206.122.1 + Should Not Be Equal ${node}[public_config][ip6] ${None} + Should Be Equal ${node}[public_config][ip6][ip] 2a10:b600:1::0cc4:7a30:65b5/64 + Should Be Equal ${node}[public_config][ip6][gw] 2a10:b600:1::1 + Should Be Equal ${node}[public_config][domain] some-domain + + Tear Down Multi Node Network + +Test Add Public Config On Node: Failure InvalidIP4 + [Documentation] Testing adding a public config on a node with an invalid ipv4 + Setup Multi Node Network log_name=test_add_pub_config_node_failure_ipv4 + + Setup Network And Create Node + + Run Keyword And Expect Error *'InvalidIP4'* + ... Add Node Public Config farm_id=${1} node_id=${1} ipv4=185.206.122.33 gw4=185.206.122.1 domain=some-domain + + Tear Down Multi Node Network + +Test Add Public Config On Node: Failure InvalidIP6 + [Documentation] Testing adding a public config on a node with an invalid ipv6 + Setup Multi Node Network log_name=test_add_pub_config_node_failure_ipv6 + + Setup Network And Create Node + + Run Keyword And Expect Error *'InvalidIP6'* + ... Add Node Public Config farm_id=${1} node_id=${1} ipv4=185.206.122.33/24 gw4=185.206.122.1 ipv6=2a10:b600:1::0cc4:7a30:65b5 gw6=2a10:b600:1::1 domain=some-domain + + Tear Down Multi Node Network + +Test Add Public Config On Node: Failure InvalidDomain + [Documentation] Testing adding a public config on a node with an invalid domain + Setup Multi Node Network log_name=test_add_pub_config_node_failure_invaliddomain + + Setup Network And Create Node + Run Keyword And Expect Error *'InvalidDomain'* + ... 
Add Node Public Config    farm_id=${1}    node_id=${1}    ipv4=185.206.122.33/24    gw4=185.206.122.1    ipv6=2a10:b600:1::0cc4:7a30:65b5/64    gw6=2a10:b600:1::1    domain=some_invalid_domain
+
+    Tear Down Multi Node Network
+
+Test Create Update Cancel Node Contract: Success
+    [Documentation]    Testing api calls (create, update, cancel) for managing a node contract
+    Setup Multi Node Network    log_name=test_create_node_contract
+
+    Setup Predefined Account    who=Alice
+    Setup Predefined Account    who=Bob
+
+    ${ip_1} =    Create Dictionary    ip    185.206.122.33/24    gw    185.206.122.1
+    ${public_ips} =    Create List    ${ip_1}
+    Create Farm    name=alice_farm    public_ips=${public_ips}
+
+    ${interface_ips} =    Create List    10.2.3.3
+    ${interface_1} =    Create Interface    name=zos    mac=00:00:5e:00:53:af    ips=${interface_ips}
+    ${interfaces} =    Create List    ${interface_1}
+    Create Node    farm_id=${1}    hru=${1024}    sru=${512}    cru=${8}    mru=${16}    longitude=2.17403    latitude=41.40338    country=Belgium    city=Ghent    interfaces=${interfaces}
+
+    # Bob is the one creating the contract and thus the one being billed
+    Create Node Contract    node_id=${1}    public_ips=${1}    who=Bob    port=9946
+
+    ${farm} =    Get Farm    ${1}
+    Should Not Be Equal    ${farm}    ${None}    msg=Farm with id 1 doesn't exist
+    Dictionary Should Contain Key    ${farm}    public_ips    msg=The farm doesn't have a key public_ips
+    Length Should Be    ${farm}[public_ips]    1    msg=There should only be one public ip in public_ips
+    Should Be Equal    ${farm}[public_ips][0][ip]    185.206.122.33/24    msg=The public ip address should be 185.206.122.33/24
+    Should Be Equal    ${farm}[public_ips][0][gateway]    185.206.122.1    msg=The gateway should be 185.206.122.1
+    Should Be Equal    ${farm}[public_ips][0][contract_id]    ${1}    msg=The public ip was claimed in contract with id 1 while the farm contains a different contract id for it
+
+    Update Node Contract    contract_id=${1}    who=Bob    port=9946
+
+    Cancel Node Contract    contract_id=${1}    who=Bob    port=9946
+
+    Tear Down Multi Node Network
+
+Test Create Node Contract: Failure Not Enough Public Ips
+    [Documentation]    Testing creating a node contract that requests too many public ips
+    Setup Multi Node Network    log_name=test_create_node_contract_failure_notenoughpubips
+
+    # the keyword below creates a farm containing 0 public ips and a node with 0 configured interfaces
+    Setup Network And Create Node
+    # let's request 2 public ips which should result in an error
+    Run Keyword And Expect Error    *'FarmHasNotEnoughPublicIPs'*
+    ...    Create Node Contract    node_id=${1}    public_ips=${2}
+
+    Tear Down Multi Node Network
+
+Test Create Rent Contract: Success
+    [Documentation]    Testing api calls (create, cancel) for managing a rent contract
+    Setup Multi Node Network    log_name=test_create_rent_contract
+
+    Setup Network And Create Node
+
+    Create Rent Contract    node_id=${1}
+
+    Cancel Rent Contract    contract_id=${1}
+
+    Tear Down Multi Node Network
+
+Test Create Name Contract: Success
+    [Documentation]    Testing api calls (create, cancel) for managing a name contract
+    Setup Multi Node Network    log_name=test_create_name_contract
+
+    Setup Network And Create Node
+
+    Create Name Contract    name=my_name_contract
+
+    Cancel Name Contract    contract_id=${1}
+
+    Tear Down Multi Node Network
+
+Test Create Update Pricing Policy
+    [Documentation]    Testing api calls (create, update) for managing pricing policies, including failed attempts
+    Setup Multi Node Network    log_name=test_create_update_pricing_policy
+
+    # only possible with sudo
+    Run Keyword And Expect Error    *'BadOrigin'*
+    ...
Create Pricing Policy name=mypricingpolicy unit=Gigabytes su=${55000} cu=${90000} nu=${20000} ipu=${35000} unique_name=${3000} domain_name=${6000} foundation_account=Bob certified_sales_account=Bob discount_for_dedication_nodes=45 + + Create Pricing Policy name=mypricingpolicy unit=Gigabytes su=${55000} cu=${90000} nu=${20000} ipu=${35000} unique_name=${3000} domain_name=${6000} foundation_account=Bob certified_sales_account=Bob discount_for_dedication_nodes=45 who=Sudo + ${pricing_policy} = Get Pricing Policy id=${2} + Should Not Be Equal ${pricing_policy} ${None} + Should Be Equal ${pricing_policy}[name] mypricingpolicy + + # only possible with sudo + Run Keyword And Expect Error *'BadOrigin'* + ... Update Pricing Policy id=${2} name=mypricingpolicyupdated unit=Gigabytes su=${55000} cu=${90000} nu=${20000} ipu=${35000} unique_name=${3000} domain_name=${6000} foundation_account=Bob certified_sales_account=Bob discount_for_dedication_nodes=45 + + Update Pricing Policy id=${2} name=mypricingpolicyupdated unit=Gigabytes su=${55000} cu=${90000} nu=${20000} ipu=${35000} unique_name=${3000} domain_name=${6000} foundation_account=Bob certified_sales_account=Bob discount_for_dedication_nodes=45 who=Sudo + ${pricing_policy} = Get Pricing Policy id=${2} + Should Not Be Equal ${pricing_policy} ${None} + Should Be Equal ${pricing_policy}[name] mypricingpolicyupdated + + Tear Down Multi Node Network + +Test Create Update Farming Policy + [Documentation] Testing api calls (create, update) for managing farming policies including failed attempts + Setup Multi Node Network log_name=test_create_update_farming_policy + + # only possible with sudo + Run Keyword And Expect Error *'BadOrigin'* + ... Create Farming Policy name=myfarmingpolicy su=${12} cu=${15} nu=${10} ipv4=${8} minimal_uptime=${9999} policy_end=${10} immutable=${True} default=${True} node_certification=Diy farm_certification=Gold + + Create Farming Policy name=myfarmingpolicy su=${12} cu=${15} nu=${10} ipv4=${8} minimal_uptime=${9999} policy_end=${15} immutable=${True} default=${True} node_certification=Diy farm_certification=Gold who=Sudo + ${farming_policy} = Get Farming Policy id=${3} + Should Not Be Equal ${farming_policy} ${None} + Should Be Equal ${farming_policy}[name] myfarmingpolicy + + # only possible with sudo + Run Keyword And Expect Error *'BadOrigin'* + ... 
Update Farming Policy id=${3} name=myfarmingpolicyupdated su=${12} cu=${15} nu=${10} ipv4=${8} minimal_uptime=${9999} policy_end=${10} immutable=${True} default=${True} node_certification=Diy farm_certification=Gold + + Update Farming Policy id=${3} name=myfarmingpolicyupdated su=${12} cu=${15} nu=${10} ipv4=${8} minimal_uptime=${9999} policy_end=${10} immutable=${True} default=${True} node_certification=Diy farm_certification=Gold who=Sudo + ${farming_policy} = Get Farming Policy id=${3} + Should Not Be Equal ${farming_policy} ${None} + Should Be Equal ${farming_policy}[name] myfarmingpolicyupdated + + Tear Down Multi Node Network + +Test Attach Policy To Farm + [Documentation] Testing attaching a policy to a farm including a failed attempt to attach an expired policy + Setup Multi Node Network log_name=test_attach_policy_to_farm + + Setup Network And Create Farm + Create Farming Policy name=myfarmingpolicy su=${12} cu=${15} nu=${10} ipv4=${8} minimal_uptime=${9999} policy_end=${5} immutable=${True} default=${True} node_certification=Diy farm_certification=Gold who=Sudo + ${policy} = Get Farming Policy id=${3} + Should Not Be Equal ${policy} ${None} + Should Be Equal ${policy}[name] myfarmingpolicy + + # only possible with sudo + Run Keyword And Expect Error *'BadOrigin'* + ... Attach Policy To Farm farm_id=${1} farming_policy_id=${3} cu=${20} su=${2} end=${1654058949} node_certification=${False} node_count=${10} + + Attach Policy To Farm farm_id=${1} farming_policy_id=${3} cu=${20} su=${2} end=${1654058949} node_certification=${False} node_count=${10} who=Sudo + + # the farming policy expires after 5 blocks: attaching it should then fail with the raw module error below + Wait X Blocks x=${5} + Run Keyword And Expect Error {'Err': {'Module': {'index': 11, 'error': '0x52000000'}}} + ... Attach Policy To Farm farm_id=${1} farming_policy_id=${3} cu=${20} su=${2} end=${1654058949} node_certification=${False} node_count=${10} who=Sudo + + Tear Down Multi Node Network + +Test Billing + [Documentation] Testing billing. Alice and Bob each create a twin. Alice creates a farm and a node in that farm while Bob creates a node contract to use her node. Alice then reports the contract resources. We wait 6 blocks so that Bob is billed exactly once.
+ Setup Multi Node Network log_name=test_billing + + # Setup + Setup Predefined Account who=Alice + Setup Predefined Account who=Bob + Create Farm name=alice_farm + Create Node farm_id=${1} hru=${1024} sru=${512} cru=${8} mru=${16} longitude=2.17403 latitude=41.40338 country=Belgium city=Ghent + + ${balance_alice} = Balance Data who=Alice + ${balance_bob} = Balance Data who=Bob port=${9946} + # Bob will be using the node: let's create a node contract in his name + Create Node Contract node_id=${1} port=${9946} who=Bob + Report Contract Resources contract_id=${1} hru=${20} sru=${20} cru=${2} mru=${4} + Add Nru Reports contract_id=${1} nru=${3} + + # Let it run 6 blocks so that the user is billed exactly once + Wait X Blocks ${6} + Cancel Node Contract contract_id=${1} who=Bob + + # Bob's balance should have decreased + ${balance_alice_after} = Balance Data who=Alice + ${balance_bob_after} = Balance Data who=Bob port=${9946} + Ensure Account Balance Decreased ${balance_bob} ${balance_bob_after} + + Tear Down Multi Node Network + +Test Solution Provider + [Documentation] Testing creating and validating a solution provider + Setup Multi Node Network log_name=test_create_approve_solution_provider amt=${2} + + # Setup + Setup Predefined Account who=Alice + Setup Predefined Account who=Bob + Create Farm name=alice_farm + Create Node farm_id=${1} hru=${1024} sru=${512} cru=${8} mru=${16} longitude=2.17403 latitude=41.40338 country=Belgium city=Ghent + + # let's add two providers: Charlie gets 30% and Dave 10% + ${providers} = Create Dictionary Charlie ${30} Dave ${10} + Create Solution Provider description=mysolutionprovider providers=${providers} + ${solution_provider} = Get Solution Provider id=${1} + Should Not Be Equal ${solution_provider} ${None} + Should Be Equal ${solution_provider}[description] mysolutionprovider + Should Be Equal ${solution_provider}[approved] ${False} + Length Should Be ${solution_provider}[providers] ${2} + + # The solution provider has to be approved + Approve Solution Provider solution_provider_id=${1} who=Sudo + ${solution_provider} = Get Solution Provider id=${1} + Should Not Be Equal ${solution_provider} ${None} + Should Be Equal ${solution_provider}[approved] ${True} + + ${balance_charlie_before} = Balance Data who=Charlie + ${balance_dave_before} = Balance Data who=Dave + # Bob will be using the node: let's create a node contract in his name + Create Node Contract node_id=${1} port=9946 who=Bob solution_provider_id=${1} + Report Contract Resources contract_id=${1} hru=${20} sru=${20} cru=${2} mru=${4} + Add Nru Reports contract_id=${1} nru=${3} + # Wait 6 blocks: after 5 blocks Bob should be billed + Wait X Blocks ${6} + # Cancel the contract so that the bill is distributed and the providers get their part + Cancel Node Contract contract_id=${1} who=Bob + + # Verification: both providers should have received their part + ${balance_charlie_after} = Balance Data who=Charlie + ${balance_dave_after} = Balance Data who=Dave + Ensure Account Balance Increased ${balance_charlie_before} ${balance_charlie_after} + Ensure Account Balance Increased ${balance_dave_before} ${balance_dave_after} + + Tear Down Multi Node Network \ No newline at end of file diff --git a/substrate-node/tests/readme.md b/substrate-node/tests/readme.md new file mode 100644 index 000000000..92dec3c26 --- /dev/null +++ b/substrate-node/tests/readme.md @@ -0,0 +1,83 @@ +# Integration tests +In this directory you will find the integration tests of the tfchain repository.
The following paragraphs will teach you how to write and execute the tests. + +## Robot Framework +We are using an automation framework called [Robot Framework][1] for running the tests. The framework is Python based and its syntax can easily be extended via custom Python modules. + + + +### Creating a test suite +A test suite in the [Robot Framework][1] is a single file with the extension *.robot* containing one or more tests. You can find an example below; reading it should teach you the syntax. + + *** Settings *** + Documentation    Please write this documentation. Below you can import custom Python modules, which is very useful! + ...    Also notice that the word separator is two or more spaces (or a tab) and that keyword names can contain spaces. + + Library my_custom_python_module.py + Suite Setup Function Name Which Will Be Executed At Suite Setup + Suite Teardown Function Name Which Will Be Executed At Suite Teardown + Test Setup Function Name Which Will Be Executed At Test Setup + Test Teardown Function Name Which Will Be Executed At Test Teardown + + + *** Variables *** + ${VARIABLE_NAME} Value here. This one is a string. All variables declared here are known in all tests + + + *** Keywords *** + My Keyword + [Documentation] Please write something useful here. Keywords are basically reusable functions. + # Execute some "Robot" operations here (this is how you comment things, by the way) + # Or call functions from your custom Python module + + + *** Test Cases *** + Test 1: This is the first test case (all test cases should pass for the suite to pass) + # Execute "Robot" operations here or function calls to the module + My Function 56 + +### Test hierarchy +You can create a hierarchy of tests by creating subfolders and placing robot files in those folders. Just as in Python you can create init files (*\_\_init__.robot*) in which you can define variables, define keywords and import modules. They will be known by all test suites inside that subfolder. The suite setup and teardown in those files allow you to set up and tear down things only once, while whatever they set up can be used by all child test suites. A minimal sketch of such an init file follows.
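+The sketch below shows one possible *\_\_init__.robot*. It reuses the module and keyword names from the example above, which are placeholders rather than conventions; the suite setup and teardown run once around all suites in the folder: + +    *** Settings *** +    Documentation    Configuration shared by every suite in this folder +    Library    my_custom_python_module.py +    Suite Setup    Function Name Which Will Be Executed At Suite Setup +    Suite Teardown    Function Name Which Will Be Executed At Suite Teardown + +Every suite in that folder can then rely on this setup without repeating the import or the setup itself.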
+### Opening python functions to the Robot Framework +As shown in the prior example you can call Python functions in your test suites. This section shows the Python side, more specifically what the Python side should look like. Reading the snippet below should teach you the necessary features. + + # all python functions will be accessible in the test suites + def my_function(my_argument): + # this function can be called from a robot test suite via: + # My Function value_my_argument + pass + + def myfunction(my_argument): + # this function can be called from a robot test suite via: + # myfunction value_my_argument + pass + +The first way is preferred as the Python code then complies with the PEP 8 style guide and the call to the function resembles the Robot coding style. + +### Keeping state throughout function calls +You can keep state using Python classes. There is a limitation though: the class should have the same name as the Python file. So if you name your class *my_class* then the file should be *my_class.py* and similarly if you name your class *MyClass* then the Python file should be named *MyClass.py*. Once that is done you can use all functions from the class in your robot test suites. The state is kept during the whole suite, meaning that only one object of that class is instantiated per suite that imports the module. If you wish to alter this behavior you can define the variable *ROBOT_LIBRARY_SCOPE* inside the class and set it to the value *GLOBAL* or *TEST*. The former will keep the state for all test suites that import it while the latter will reset the state for every new test. + + class my_class: + ROBOT_LIBRARY_SCOPE = "SUITE" + + # all functions inside this class are accessible from Robot the same way as described above + def my_function(self, my_argument): + # can be called the same way as in the prior example (you don't have to pass the self object) + # do something with my_argument and save state in self so that it can be used in later function calls + pass + + +## Running the tests +As the [Robot Framework][1] is Python based you can easily install it using pip: +> pip install robotframework + +Now you can run the tests using: +> robot -d output_tests root_directory_containing_tests/ + +I highly recommend using the argument **-d** so that the output of the tests (HTML files) is saved in a separate directory. That avoids accidentally pushing the test results to the remote. + +The argument **-s** allows you to select which suites you want to run. The value can contain the asterisk character (*) as a wildcard, which is very useful when you are testing specific parts of the code. Please run all tests before creating a PR though. + +The argument **-t** allows you to select the test(s) you want to run. The good thing is that the suite setups and teardowns of the parent test suites will be executed too. + + +[1]:https://robotframework.org/
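+For instance, to run only the suites whose names match a pattern, or one specific test, you could combine these arguments as follows (the *\*contract\** pattern is just an illustration; *Test Billing* is one of the tests in this directory): + +> robot -d output_tests -s "*contract*" root_directory_containing_tests/ + +> robot -d output_tests -t "Test Billing" root_directory_containing_tests/ + +Quoting values that contain spaces or wildcards keeps the shell from expanding or splitting them before they reach Robot.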