Install pre-commit and use GitHub Actions (krkn-chaos#94)
* added pre-commit and code-cleaning

* removed tox and TravisCI
amitsagtani97 authored May 5, 2021
1 parent 70b1495 commit d00d6ec
Showing 43 changed files with 424 additions and 438 deletions.
22 changes: 22 additions & 0 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,22 @@
name: Code Quality Check

on:
- push
- pull_request

jobs:
lint-ci:
runs-on: ubuntu-latest
name: Run pre-commit and install test
steps:
- name: Check out source repository
uses: actions/checkout@v2
- name: Set up Python environment
uses: actions/setup-python@v1
with:
python-version: "3.8"
- name: Run pre-commit
uses: pre-commit/[email protected]
- name: Install Kraken
run: |
python setup.py develop
1 change: 0 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -31,7 +31,6 @@ tags
# Unittest and coverage
htmlcov/*
.coverage
.tox
junit.xml
coverage.xml
.pytest_cache/
Expand Down
30 changes: 30 additions & 0 deletions .pre-commit-config.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,30 @@
---
repos:
- repo: git://github.com/Lucas-C/pre-commit-hooks
rev: v1.1.1
hooks:
- id: remove-tabs

- repo: git://github.com/pre-commit/pre-commit-hooks
rev: v2.0.0
hooks:
- id: trailing-whitespace
- id: check-merge-conflict
- id: end-of-file-fixer
- id: check-case-conflict
- id: detect-private-key
- id: check-ast

- repo: https://github.com/psf/black
rev: 19.10b0
hooks:
- id: black
args: ['--line-length', '120']

- repo: https://gitlab.com/PyCQA/flake8
rev: '3.7.8'
hooks:
- id: flake8
additional_dependencies: ['pep8-naming']
# Ignore all format-related checks as Black takes care of those.
args: ['--ignore', 'E123,E125', '--select', 'E,W,F', '--max-line-length=120']
15 changes: 0 additions & 15 deletions .travis.yml

This file was deleted.

2 changes: 1 addition & 1 deletion CI/scenarios/post_action_etcd.yml
Original file line number Diff line number Diff line change
Expand Up @@ -18,4 +18,4 @@ scenarios:
# The actions will be executed in the order specified
actions:
- checkPodCount:
count: 3
count: 3
6 changes: 3 additions & 3 deletions CI/scenarios/post_action_etcd_example_py.py
Original file line number Diff line number Diff line change
Expand Up @@ -5,9 +5,9 @@

def run(cmd):
try:
output = subprocess.Popen(cmd, shell=True,
universal_newlines=True, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
output = subprocess.Popen(
cmd, shell=True, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
(out, err) = output.communicate()
logging.info("out " + str(out))
except Exception as e:
Expand Down
1 change: 0 additions & 1 deletion CI/scenarios/post_action_prometheus.yml
Original file line number Diff line number Diff line change
Expand Up @@ -19,4 +19,3 @@ scenarios:
actions:
- checkPodCount:
count: 2

13 changes: 8 additions & 5 deletions CI/scenarios/post_action_regex.py
Original file line number Diff line number Diff line change
Expand Up @@ -15,8 +15,11 @@ def list_namespaces():
cli = client.CoreV1Api()
ret = cli.list_namespace(pretty=True)
except ApiException as e:
logging.error("Exception when calling \
CoreV1Api->list_namespaced_pod: %s\n" % e)
logging.error(
"Exception when calling \
CoreV1Api->list_namespaced_pod: %s\n"
% e
)
for namespace in ret.items:
namespaces.append(namespace.metadata.name)
return namespaces
Expand Down Expand Up @@ -47,9 +50,9 @@ def check_namespaces(namespaces):

def run(cmd):
try:
output = subprocess.Popen(cmd, shell=True,
universal_newlines=True, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
output = subprocess.Popen(
cmd, shell=True, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
(out, err) = output.communicate()
except Exception as e:
logging.error("Failed to run %s, error: %s" % (cmd, e))
Expand Down
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -42,7 +42,7 @@ Monitoring the Kubernetes/OpenShift cluster to observe the impact of Kraken chao
- Blog post emphasizing the importance of making Chaos part of Performance and Scale runs to mimic the production environments: https://www.openshift.com/blog/making-chaos-part-of-kubernetes/openshift-performance-and-scalability-tests

### Contributions
We are always looking for more enhancements, fixes to make it better, any contributions are most welcome. Feel free to report or work on the issues filed on github.
We are always looking for more enhancements, fixes to make it better, any contributions are most welcome. Feel free to report or work on the issues filed on github.

[More information on how to Contribute](docs/contribute.md)

Expand Down
2 changes: 1 addition & 1 deletion ansible/vars/kraken_vars.yml
Original file line number Diff line number Diff line change
Expand Up @@ -25,7 +25,7 @@ scenarios: "{{ lookup('env', 'SCENARIOS')|default('[[scenarios/etcd.yml, scenari

exit_on_failure: "{{ lookup('env', 'EXIT_ON_FAILURE')|default(false, true) }}"

# Cerberus enabled by user
# Cerberus enabled by user
cerberus_enabled: "{{ lookup('env', 'CERBERUS_ENABLED')|default(false, true) }}"
cerberus_url: "{{ lookup('env', 'CERBERUS_URL')|default('', true) }}"

Expand Down
6 changes: 3 additions & 3 deletions containers/README.md
Original file line number Diff line number Diff line change
@@ -1,12 +1,12 @@
### Kraken image

Container image gets automatically built by quay.io at [Kraken image](https://quay.io/repository/openshift-scale/kraken).
Container image gets automatically built by quay.io at [Kraken image](https://quay.io/repository/openshift-scale/kraken).

### Run containerized version
Refer [instructions](https://github.com/cloud-bulldozer/kraken/blob/master/docs/installation.md#run-containerized-version) for information on how to run the containerized version of kraken.


### Run Custom Kraken Image
### Run Custom Kraken Image
Refer to [instructions](https://github.com/cloud-bulldozer/kraken/blob/master/containers/build_own_image-README.md) for information on how to run a custom containerized version of kraken using podman


Expand All @@ -25,4 +25,4 @@ To run containerized Kraken as a Kubernetes/OpenShift Deployment, follow these s
8. In Openshift, add privileges to service account and execute `oc adm policy add-scc-to-user privileged -z useroot`.
9. Create a Deployment and a NodePort Service using `kubectl apply -f kraken.yml`

NOTE: It is not recommended to run Kraken internal to the cluster as the pod which is running Kraken might get disrupted.
NOTE: It is not recommended to run Kraken internal to the cluster as the pod which is running Kraken might get disrupted.
2 changes: 1 addition & 1 deletion containers/build_own_image-README.md
Original file line number Diff line number Diff line change
Expand Up @@ -10,4 +10,4 @@
1. Git clone the Kraken repository using `git clone https://github.com/cloud-bulldozer/kraken.git` on an IBM Power Systems server.
2. Modify the python code and yaml files to address your needs.
3. Execute `podman build -t <new_image_name>:latest -f Dockerfile-ppc64le` in the containers directory within kraken to build an image from the Dockerfile for Power.
4. Execute `podman run --detach --name <container_name> <new_image_name>:latest` to start a container based on your new image.
4. Execute `podman run --detach --name <container_name> <new_image_name>:latest` to start a container based on your new image.
17 changes: 8 additions & 9 deletions docs/contribute.md
Original file line number Diff line number Diff line change
Expand Up @@ -22,15 +22,17 @@ $ git push
```

## Fix Formatting
You can do this before your first commit but please take a look at the formatting outlined using tox.
Kraken uses the [pre-commit](https://pre-commit.com) framework to maintain code linting and Python code styling.
The CI runs the pre-commit check on each pull request.
We encourage contributors to follow the same pattern when contributing to the code.

To run:
The pre-commit configuration file, `.pre-commit-config.yaml`, is present in the repository.
It contains the code styling and linting rules that we use for the application.

```pip install tox ```(if not already installed)
The following command can be used to run pre-commit:
`pre-commit run --all-files`

```tox```

Fix all spacing, import issues and other formatting issues
If pre-commit is not installed on your system, it can be installed with: `pip install pre-commit`

## Squash Commits
If there are multiple commits, please rebase/squash them into one
Expand All @@ -50,6 +52,3 @@ Push your rebased commits (you may need to force), then issue your PR.
```
$ git push origin <my-working-branch> --force
```



19 changes: 8 additions & 11 deletions docs/litmus_scenarios.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
### Litmus Scenarios
Kraken consumes [Litmus](https://github.com/litmuschaos/litmus) under the hood for some infrastructure, pod, and node scenarios

The official Litmus documentation, with more information on the specifics of Litmus resources, can be found [here](https://docs.litmuschaos.io/docs/next/getstarted/)


Expand All @@ -10,32 +10,29 @@ There are 3 custom resources that are created during each Litmus scenario. Below
* ChaosExperiment: A resource to group the configuration parameters of a chaos experiment. ChaosExperiment CRs are created by the operator when experiments are invoked by ChaosEngine.
* ChaosResult : A resource to hold the results of a chaos-experiment. The Chaos-exporter reads the results and exports the metrics into a configured Prometheus server.

### Understanding Litmus Scenarios
### Understanding Litmus Scenarios

To run Litmus scenarios we need to apply 3 different resources/yaml files to our cluster
1. **Chaos experiments** contain the actual chaos details of a scenario

i. This is installed automatically by Kraken (does not need to be specified in kraken scenario configuration)
2. **Service Account**: should be created to allow chaosengine to run experiments in your application namespace. Usually sets just enough permissions to a specific namespace to be able to run the experiment properly

2. **Service Account**: should be created to allow chaosengine to run experiments in your application namespace. Usually sets just enough permissions to a specific namespace to be able to run the experiment properly

i. This can be defined using either a link to a yaml file or a downloaded file in the scenarios folder
3. **Chaos Engine** connects the application instance to a Chaos Experiment. This is where you define the specifics of your scenario; ie: the node or pod name you want to cause chaos within

3. **Chaos Engine** connects the application instance to a Chaos Experiment. This is where you define the specifics of your scenario; ie: the node or pod name you want to cause chaos within

i. This is a downloaded yaml file in the scenarios folder, full list of scenarios can be found [here](https://hub.litmuschaos.io/)

**NOTE**: By default all chaos experiments will be installed based on the version you give in the config file.
**NOTE**: By default all chaos experiments will be installed based on the version you give in the config file.

Adding a new Litmus based scenario is as simple as adding references to 2 new yaml files (the Service Account and Chaos engine files for your scenario ) in the Kraken config.

### Current Scenarios

Following are the start of scenarios for which a chaos scenario config exists today.
Following are the start of scenarios for which a chaos scenario config exists today.

Component | Description | Working
------------------------ | ---------------------------------------------------------------------------------------------------| ------------------------- |
Node CPU Hog | Chaos scenario that hogs up the CPU on a defined node for a specific amount of time | :heavy_check_mark: |



16 changes: 8 additions & 8 deletions docs/node_scenarios.md
Original file line number Diff line number Diff line change
Expand Up @@ -16,17 +16,17 @@ Following node chaos scenarios are supported:

**NOTE**: node_start_scenario, node_stop_scenario, node_stop_start_scenario, node_termination_scenario, node_reboot_scenario and stop_start_kubelet_scenario are supported only on AWS and GCP as of now.

#### AWS
#### AWS

**NOTE**: For clusters with AWS make sure [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) is installed and properly [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) using an AWS account

#### GCP
**NOTE**: For clusters with GCP make sure [GCP CLI](https://cloud.google.com/sdk/docs/install#linux) is installed.

A google service account is required to give proper authentication to GCP for node actions. See [here](https://cloud.google.com/docs/authentication/getting-started) for how to create a service account.

**NOTE**: A user with 'resourcemanager.projects.setIamPolicy' permission is required to grant project-level permissions to the service account.

After creating the service account you'll need to enable the account using the following: ```export GOOGLE_APPLICATION_CREDENTIALS="<serviceaccount.json>"```

#### OPENSTACK
Expand All @@ -47,7 +47,7 @@ You will also need to create a service principal and give it the correct access,

To run properly, the service principal requires the “Azure Active Directory Graph/Application.ReadWrite.OwnedBy” API permission and the “User Access Administrator” role

Before running you'll need to set the following:
Before running you'll need to set the following:
1. Login using ```az login```

2. ```export AZURE_TENANT_ID=<tenant_id>```
Expand Down Expand Up @@ -87,15 +87,15 @@ node_scenarios:
label_selector: node-role.kubernetes.io/infra
instance_kill_count: 1
timeout: 120
- actions:
- actions:
- stop_start_helper_node_scenario # node chaos scenario for helper node
instance_kill_count: 1
timeout: 120
instance_kill_count: 1
timeout: 120
helper_node_ip: # ip address of the helper node
service: # check status of the services on the helper node
- haproxy
- dhcpd
- named
ssh_private_key: /root/.ssh/id_rsa # ssh key to access the helper node
cloud_type: openstack
cloud_type: openstack
```
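Scenario files like the one above are plain YAML, so a config can be sanity-checked before handing it to the runner. A minimal sketch, assuming PyYAML is installed; the field names are taken from the example above, not the full Kraken schema:

```python
import yaml  # PyYAML, assumed to be installed

# Hypothetical minimal scenario document modeled on the example above.
doc = """
node_scenarios:
  - actions:
      - stop_start_helper_node_scenario
    instance_kill_count: 1
    timeout: 120
    cloud_type: openstack
"""

config = yaml.safe_load(doc)
for scenario in config["node_scenarios"]:
    # Basic sanity checks before running the chaos scenario.
    assert scenario["instance_kill_count"] >= 1
    assert scenario["timeout"] > 0
    print(scenario["actions"], scenario["cloud_type"])
```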
2 changes: 1 addition & 1 deletion docs/pod_scenarios.md
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
### Pod Scenarios
Kraken consumes [Powerfulseal](https://github.com/powerfulseal/powerfulseal) under the hood to run the pod scenarios.
Kraken consumes [Powerfulseal](https://github.com/powerfulseal/powerfulseal) under the hood to run the pod scenarios.


#### Pod chaos scenarios
Expand Down
2 changes: 1 addition & 1 deletion docs/time_scenarios.md
Original file line number Diff line number Diff line change
Expand Up @@ -27,4 +27,4 @@ time_scenarios:
- action: skew_date
object_type: node
label_selector: node-role.kubernetes.io/worker
```
```
6 changes: 3 additions & 3 deletions kraken/invoke/command.py
Original file line number Diff line number Diff line change
Expand Up @@ -5,9 +5,9 @@
# Invokes a given command and returns the stdout
def invoke(command):
try:
output = subprocess.Popen(command, shell=True,
universal_newlines=True, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
output = subprocess.Popen(
command, shell=True, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
(out, err) = output.communicate()
except Exception as e:
logging.error("Failed to run %s, error: %s" % (command, e))
Expand Down
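The reformatted `invoke` helper can be exercised in isolation. A simplified, self-contained sketch based on the diff above, not the exact Kraken module:

```python
import logging
import subprocess


# Simplified sketch of the invoke() helper shown in the diff above:
# runs a shell command and returns its combined stdout/stderr.
def invoke(command):
    try:
        output = subprocess.Popen(
            command, shell=True, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
        )
        (out, _err) = output.communicate()
    except Exception as e:
        logging.error("Failed to run %s, error: %s" % (command, e))
        return None
    return out


print(invoke("echo hello").strip())  # → hello
```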
