This repository has been archived by the owner on Jul 3, 2024. It is now read-only.

Refactored repo, added doc and linting check
Signed-off-by: Simone Magnani <[email protected]>
smagnani96 committed Apr 3, 2023
1 parent 7ff6ec3 commit a2ab842
Showing 18 changed files with 265 additions and 115 deletions.
36 changes: 36 additions & 0 deletions .github/workflows/lint.yml
@@ -0,0 +1,36 @@
name: Check linting

on:
  push

jobs:
  python-lint:
    name: Lint Python files
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          persist-credentials: false

      - name: Installing Flake8
        run: sudo pip3 install flake8
      - name: Lint with flake8
        run: sudo flake8 . --count --max-line-length=127 --statistics

  markdown-lint:
    name: Lint markdown files (check links validity)
    runs-on: ubuntu-20.04

    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          persist-credentials: false

      - name: Check the validity of the links in the documentation
        uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          use-quiet-mode: 'yes'
          folder-path: '.'
74 changes: 68 additions & 6 deletions README.md
@@ -1,21 +1,83 @@
<!-- markdown-link-check-disable -->
![Check linting](https://github.com/s41m0n/opportunistic_monitoring/workflows/Check%20linting/badge.svg)
<!-- markdown-link-check-enable -->

# Opportunistic Network Monitoring

Research project on opportunistic network traffic monitoring.

For credits, please reference [this](https://doi.org/10.1109/ACCESS.2022.3202644) publication:

*S. Magnani, F. Risso and D. Siracusa, "A Control Plane Enabling Automated and Fully Adaptive Network Traffic Monitoring With eBPF," in IEEE Access, vol. 10, pp. 90778-90791, 2022, doi: 10.1109/ACCESS.2022.3202644.*

## Installation

This project requires [DeChainy](https://github.com/dechainers/dechainy) to be correctly set up in your system, either bare-metal or using containers. For performance reasons, we preferred the bare-metal installation, which can be set up by following [this](https://github.com/dechainers/dechainy/blob/master/docs/installation.md#local) guide.

An additional requirement is *matplotlib*, used for creating charts.

Concerning the eBPF programs, you must have a recent Linux kernel (>= v5.6), since some tests use BPF_QUEUE maps, which were introduced only recently.

Finally, the tests that require network traffic to be sent need a working build of [MoonGen](https://github.com/emmericp/MoonGen) under the home directory of the user used for the tests. Please refer to the MoonGen guide for installing it and setting up the correct driver for your network interface.

## Project Structure

```bash
.
├── adaptiveness
│   ├── ebpf.c
│   ├── __init__.py
│   └── __main__.py
├── erase
│   ├── ebpf.c
│   ├── __init__.py
│   └── __main__.py
├── nprobe
│   ├── ebpf.c
│   ├── __init__.py
│   └── __main__.py
├── swap
│   ├── ebpf.c
│   ├── __init__.py
│   └── __main__.py
├── moongen.lua
├── plotter.py
└── set_irq_affinity.sh
```

* **adaptiveness**: test case containing an eBPF probe for measuring the impact of a non-adaptive (nProbe), adaptive (pre-defined list of possibilities), and fully-adaptive solution
* **erase**: test case containing an eBPF probe for measuring the impact of populate-erase operations on eBPF maps, including their most recent optimized versions (batch operations)
* **nprobe**: test case containing an eBPF probe for measuring the impact of extracting, within eBPF, the same information nProbe extracts, in terms of number of processed packets and memory used
* **swap**: test case containing an eBPF probe for measuring the impact of requesting snapshot access to the eBPF maps, both in terms of compilation time and performance degradation while swapping the underlying eBPF programs in and out
* **moongen.lua**: used by the client to generate traffic at line rate using MoonGen
* **plotter.py**: a Python script that plots the test results using matplotlib and a style suited to IEEE publications
* **set_irq_affinity.sh**: a Bash script that restricts the handling of incoming network traffic on a given interface to a single CPU

## Assumptions

![Setup](./setup.png)

This figure represents the set-up used for the tests, under the following assumptions:

1. The server is the machine that receives network traffic and runs the eBPF probes.
1. The server supports XDP_DRV mode; change it in the *\_\_main__.py* file of the desired test if needed.
1. The client and server have their SSH keys registered under the home folder of the user used for logging in.
1. The client has a working MoonGen build under `~/Moongen/build`; change the path if needed.

## Usage

To run one of the available tests (*adaptiveness*, *erase*, *nprobe*, *swap*):

1. go to the root of the project (the current directory)
1. run `python -m <test_name> --help` to read the available arguments
1. run `python -m <test_name> ...` to run the test with the desired arguments

A results file will be available once the test finishes.

## Acknowledgements

If you are using OpportunisticMonitoring's code for scientific research, whether to replicate experiments or to try new ones, please cite the related paper in your manuscript as follows:

`S. Magnani, F. Risso and D. Siracusa, "A Control Plane Enabling Automated and Fully Adaptive Network Traffic Monitoring With eBPF," in IEEE Access, vol. 10, pp. 90778-90791, 2022, doi: 10.1109/ACCESS.2022.3202644.`

I sincerely thank my Ph.D. advisor Domenico Siracusa, and my M.Sc. thesis supervisor prof. Fulvio Risso.
31 changes: 18 additions & 13 deletions src/adaptiveness/__init__.py → adaptiveness/__init__.py
@@ -1,10 +1,11 @@
import ctypes as ct
import math
import os
from dataclasses import dataclass
from enum import Enum
from typing import ClassVar, OrderedDict

from dechainy.plugins import Probe


class AdaptivenessType(Enum):
@@ -17,15 +18,16 @@ class Adaptiveness(Probe):
    nfeatures: int = 0
    adaptiveness_type: AdaptivenessType = AdaptivenessType.FULLY

    SUPPORTED_FEATURES: ClassVar[OrderedDict] = OrderedDict(
        [(f"F_{i}", ct.c_uint32) for i in range(1, 101)])

    def __post_init__(self):
        self.ingress.required = True
        states = [0, 0, 0, 0]

        with open(os.path.join(os.path.dirname(__file__), "ebpf.c"), "r") as fp:
            self.ingress.code = fp.read()

        decl_code = fun_code = ""

        if self.adaptiveness_type.value == AdaptivenessType.TRADITIONAL.value:
@@ -42,21 +44,24 @@ def __post_init__(self):
                break
            decl_code += f"u32 {k};\n\t"
            fun_code += f"features->{k} += 1;\n\t"

        self.ingress.code = self.ingress.code.replace(
            "F_DECL_PLACEHOLDER", decl_code)
        self.ingress.code = self.ingress.code.replace(
            "F_FUN_PLACEHOLDER", fun_code)
        with open(os.path.join(os.path.dirname(__file__), "ingress.c"), "w") as fp:
            fp.write(self.ingress.code)

        self.ingress.cflags += ["-D{}=1".format(self.adaptiveness_type.value), "-DDIMENSION={}".format(
            math.ceil(len(Adaptiveness.SUPPORTED_FEATURES)/32))]

        super().__post_init__(path=__file__)

        if self.adaptiveness_type.value == AdaptivenessType.TRADITIONAL.value:
            tmp = self["ingress"]["APP_STATE"][0]
            for i in range(len(states)):
                tmp.s[i] = ct.c_uint32(states[i])
            self["ingress"]["APP_STATE"][0] = tmp

    def retrieve(self):
        return self["INGRESS"]["PACKETS"][0].value
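The `__post_init__` above specializes the eBPF source by substituting generated feature declarations into placeholders before compilation. A minimal standalone sketch of that technique (the template below is illustrative, not the real `ebpf.c`):

```python
# Hypothetical template standing in for the real ebpf.c; placeholder names match the probe.
TEMPLATE = """struct features {
\tF_DECL_PLACEHOLDER
};

static void update(struct features *features) {
\tF_FUN_PLACEHOLDER
}
"""

def render(nfeatures):
    # Same naming scheme as Adaptiveness.SUPPORTED_FEATURES: F_1 .. F_100
    supported = [f"F_{i}" for i in range(1, 101)]
    decl_code = fun_code = ""
    for idx, name in enumerate(supported):
        if idx == nfeatures:
            break
        decl_code += f"u32 {name};\n\t"
        fun_code += f"features->{name} += 1;\n\t"
    # Substitute the generated snippets into the template, as the probe does.
    code = TEMPLATE.replace("F_DECL_PLACEHOLDER", decl_code)
    return code.replace("F_FUN_PLACEHOLDER", fun_code)

print(render(2))
```

The generated source then only declares and updates the features the test actually requested, which is what makes the "fully adaptive" variant pay no cost for unused features.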
13 changes: 8 additions & 5 deletions src/adaptiveness/__main__.py → adaptiveness/__main__.py
@@ -3,11 +3,11 @@
import os
import pwd
import subprocess

from bcc import BPF, XDPFlags
from dechainy.controller import Controller

from . import AdaptivenessType


def create_dir(name):
@@ -48,15 +48,18 @@ def _parse_arguments():
        if at.value != AdaptivenessType.TRADITIONAL.value:
            continue
        results[mode][at.value] = {}
        for i in [1, 5, 10, 50, 100]:
            vals = []
            for _ in range(args["ntimes"]):
                ctr.create_probe(
                    __package__, "probe", interface=args["interface"],
                    mode=mode, flags=XDPFlags.DRV_MODE, nfeatures=i, adaptiveness_type=at)
                p = ctr.get_probe(__package__, "probe")
                print(f"{at} {i} ... ", end="", flush=True)
                subprocess.check_call(
                    f'ssh -i /home/{os.getlogin()}/.ssh/id_rsa1 {args["ssh_login"]} "cd MoonGen && \
                        sudo ./build/MoonGen moongen.lua 1 --core 7 --timeout {args["timeout"]} \
                        --ipsnum 256 --portsnum 97"',
                    shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
                vals.append(p.retrieve())
                del p
File renamed without changes.
18 changes: 10 additions & 8 deletions src/erase/__init__.py → erase/__init__.py
@@ -12,16 +12,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import ctypes as ct
import time

###############################################################
# NB: Need the Docker image to be compiled with "ml" argument #
###############################################################
from dataclasses import dataclass
from enum import Enum

from dechainy.plugins import Probe


class MapType(Enum):
    HASH = "HASH"
    ARRAY = "ARRAY"
@@ -35,7 +36,8 @@ class Erase(Probe):

    def __post_init__(self):
        self.ingress.required = True
        self.ingress.cflags = [
            "-D{}=1".format(self.map_type.value), "-DN_ENTRIES={}".format(self.n_entries)]
        super().__post_init__(path=__file__)

    def _populate(self):
@@ -55,21 +57,23 @@ def _populate_batch(self):
        if self.map_type.value == MapType.QUEUE.value:
            return 0
        else:
            _, keys, values = self["INGRESS"]["TABLE"]._alloc_keys_values(
                alloc_k=True, alloc_v=True, count=self.n_entries)
            self["INGRESS"]["TABLE"].items_update_batch(keys, values)
        return time.time_ns() - t

    def _batch_erase(self):
        t = time.time_ns()
        if self.map_type.value == "ARRAY":
            _, keys, values = self["INGRESS"]["TABLE"]._alloc_keys_values(
                alloc_k=True, alloc_v=True, count=self.n_entries)
            self["INGRESS"]["TABLE"].items_update_batch(keys, values)
        elif self.map_type.value == "HASH":
            self["INGRESS"]["TABLE"].items_delete_batch()
        elif self.map_type.value == "QUEUE":
            return 0
        return time.time_ns() - t

    def _normal_erase(self):
        t = time.time_ns()
        if self.map_type.value == MapType.QUEUE.value:

@@ -86,5 +90,3 @@ def retrieve(self):
        t3 = self._populate_batch()
        t4 = self._batch_erase()
        return t1, t2, t3, t4
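The `Erase` methods above bracket each map operation with `time.time_ns()` to compare per-element and batch populate/erase costs. A minimal userspace sketch of the same measurement pattern, using a plain dict in place of the eBPF map:

```python
import time

def timeit_ns(fn):
    """Elapsed wall-clock nanoseconds for fn(), mirroring the probe's time.time_ns() bracketing."""
    t = time.time_ns()
    fn()
    return time.time_ns() - t

n_entries = 4096
table = {}  # stands in for the eBPF HASH map

# Populate all entries in one call, then erase them, timing each phase.
t_populate = timeit_ns(lambda: table.update((k, 0) for k in range(n_entries)))
t_erase = timeit_ns(table.clear)
print(t_populate, t_erase)
```

The real test reports the same kind of (populate, erase, populate_batch, batch_erase) tuple, so per-element and batch strategies can be compared on identical workloads.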


3 changes: 2 additions & 1 deletion src/erase/__main__.py → erase/__main__.py
@@ -2,12 +2,13 @@
import json
import os
import pwd

from bcc import BPF
from dechainy.controller import Controller

from . import MapType


def create_dir(name):
    try:
        os.makedirs(name)
File renamed without changes.
File renamed without changes.
10 changes: 6 additions & 4 deletions src/nprobe/__init__.py → nprobe/__init__.py
@@ -1,13 +1,14 @@
import ctypes as ct
from dataclasses import dataclass
from typing import ClassVar, OrderedDict

from dechainy.plugins import Probe


@dataclass
class Nprobe(Probe):
    nfeatures: int = 0

    NPROBE_FEATURES: ClassVar[OrderedDict] = OrderedDict([
        ("input_snmp", ct.c_uint32),
        ("output_snmp", ct.c_uint32),
@@ -28,8 +29,9 @@ class Nprobe(Probe):

    def __post_init__(self):
        self.ingress.required = True
        self.ingress.cflags = [
            "-D{}=1".format(x.upper()) for x in list(Nprobe.NPROBE_FEATURES.keys())[:self.nfeatures]]
        super().__post_init__(path=__file__)

    def retrieve(self):
        return self["INGRESS"]["PACKETS"][0].value
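`Nprobe.__post_init__` selects the first `nfeatures` entries of `NPROBE_FEATURES` and turns each into a compile-time define that gates the corresponding code in `ebpf.c`. A self-contained sketch of that cflags construction (feature list truncated for illustration):

```python
import ctypes as ct
from collections import OrderedDict

# Illustrative subset of Nprobe.NPROBE_FEATURES; the real dict lists more fields.
NPROBE_FEATURES = OrderedDict([
    ("input_snmp", ct.c_uint32),
    ("output_snmp", ct.c_uint32),
])

def build_cflags(nfeatures):
    # Each selected feature becomes a -D<NAME>=1 define used by #ifdef blocks in the eBPF source.
    return ["-D{}=1".format(x.upper()) for x in list(NPROBE_FEATURES.keys())[:nfeatures]]

print(build_cflags(1))
```

Because the selection happens via preprocessor defines rather than runtime checks, unselected features cost nothing in the compiled eBPF program.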
