Checking Energy Consumption and Runtime Performance of WebAssembly, JavaScript (asm.js) and C, using 10 microbenchmarks as a case study.
This repo contains the source code of 10 distinct benchmarks, implemented in WebAssembly, JavaScript and C. Using Emscripten as a compiler, the WebAssembly and JavaScript versions were generated from the C source code.
This framework follows a specific folder structure, which guarantees the correct workflow when the goal is to perform an operation for all benchmarks. Moreover, for each benchmark, it must be defined how to perform each of the operations considered.
Next, we explain the folder structure and how to specify, for each benchmark and language, the execution of each operation.
The main folder contains 4 elements:
- A `Benchmarks` sub-folder, containing a folder for each microbenchmark.
- A `PlotsData` sub-folder, containing all the plots generated from the Jupyter notebook.
- A `RAPL` sub-folder, containing the code of the energy measurement framework.
- An `emsdk` sub-folder, containing the Emscripten SDK.
Basically, the directory tree will look something like this:
```
| Benchmarks
    | <Benchmark-1>
        | Large_dataset
            | C
                | Results
                    | benchmarkLARGE1.rapl
                    | benchmarkLARGE1.time
                    | ...
                | Makefile
                | benchmark_runLARGE
            | JS
                | Results
                    | benchmarkLARGE1.rapl
                    | benchmarkLARGE1.time
                    | ...
                | Makefile
                | benchmark_runJS_LARGE.js
                | benchmark_runJS_LARGE.js.mem
            | WASM
                | Results
                    | benchmarkLARGE1.rapl
                    | benchmarkLARGE1.time
                    | ...
                | Makefile
                | benchmark_runWASM_Large.js
                | benchmark_runWASM_Large.wasm
            | Makefile
            | benchmarkLARGE.csv
        | Medium_dataset
            | ...
        | Small_dataset
            | ...
        | benchmark.c
        | datasets.h
        | inputgen.c
        | Makefile
    | ...
    | <Benchmark-10>
| ExampleFolder
| emsdk
| PlotsData
| RAPL
| compile_all.py
```
To understand how this system works, let's add and run an example.
- Take a microbenchmark written in C, for example, `fibonacci.c`.

- In `ExampleFolder` (change its name if you want to), replace `example.c` with `fibonacci.c`.

- Deal with input. The microbenchmarks can't receive input as an argument, so you need to add the three different input sizes in a header called `datasets.h`. For example, if you want the `Small`, `Medium` and `Large` inputs to be 1, 2 and 3, respectively, `datasets.h` will look like this:
  ```c
  #ifdef SMALL_DATASET
  #define INPUT 1
  #endif

  #ifdef LARGE_DATASET
  #define INPUT 3
  #endif

  #ifndef SMALL_DATASET
  #ifndef LARGE_DATASET
  #ifndef MEDIUM_DATASET
  #define MEDIUM_DATASET
  #endif
  #endif
  #endif

  #ifdef MEDIUM_DATASET
  #define INPUT 2
  #endif
  ```
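  These guards make `MEDIUM_DATASET` the default: if neither `SMALL_DATASET` nor `LARGE_DATASET` is defined at compile time (e.g. via `-DLARGE_DATASET`), `INPUT` resolves to 2. As a quick sanity check, here is a minimal sketch (not part of the repo) that prints whichever value was selected:

  ```c
  #include <stdio.h>
  #include "datasets.h"

  int main(void) {
      /* INPUT is 1, 2 or 3, depending on which *_DATASET macro was
         defined at compile time, e.g. gcc -DLARGE_DATASET check.c */
      printf("INPUT = %d\n", INPUT);
      return 0;
  }
  ```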
- Now you just need to change the input that was received as the argument `argv[1]` to `INPUT`, like this (don't forget to add `#include "datasets.h"`):

  Before:

  ```c
  int n = atoi(argv[1]);
  ```

  After:

  ```c
  int n = INPUT;
  ```
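  Putting it all together, a fully adapted benchmark could look like the minimal sketch below (the recursive Fibonacci implementation here is hypothetical, shown only to illustrate the adaptation; the repo's actual `fibonacci.c` may differ):

  ```c
  #include <stdio.h>
  #include "datasets.h"

  /* Hypothetical recursive Fibonacci, for illustration only. */
  long long fib(int n) {
      return n < 2 ? n : fib(n - 1) + fib(n - 2);
  }

  int main(void) {
      int n = INPUT;  /* previously: int n = atoi(argv[1]); */
      printf("fib(%d) = %lld\n", n, fib(n));
      return 0;
  }
  ```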
- The next step is to prepare all the Makefiles inside `ExampleFolder`. In each Makefile, you need to replace every occurrence of `example.c` with the name of your benchmark, in this case, `fibonacci.c`.
. -
Compile. For this, go to the
Makefile
inExampleFolder
and check if all the commands are correct and working. Then just run the following command:make compileall
- Now all the executables have been created in the correct directories. To check that the programs run correctly, for example `fibonacci_runJS_LARGE.js`, you can go to `ExampleFolder/Large_dataset/JS/` and run the command: `make run`

  If everything works, you are ready to measure the performance of each language and input size, one by one.
- Let's take the `LARGE` input as an example. In order to run the program in the `C` language, you need to go to `ExampleFolder/Large_dataset/C/` and open two terminals. In one terminal, run the `RAPL Server` by executing the following command: `make raplserver`

  In the other terminal, run the `RAPL Client` with the following command: `make raplclient`

  By default, this will run `fibonacci_runLARGE` 20 times. If you want to change that, just go to the `Results` folder and edit the `Makefile`.
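  For context, RAPL (Running Average Power Limit) exposes cumulative energy counters maintained by the CPU, and the framework in the `RAPL` folder reads such counters around each run. The following minimal sketch (illustration only, assuming the standard Linux powercap interface; the framework has its own implementation) shows what one reading looks like:

  ```c
  #include <stdio.h>

  int main(void) {
      /* Cumulative package-domain energy in microjoules; reading this
         file usually requires root or adjusted permissions. */
      FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
      if (!f) { perror("fopen"); return 1; }
      unsigned long long uj;
      if (fscanf(f, "%llu", &uj) == 1)
          printf("package energy: %llu uJ\n", uj);
      fclose(f);
      return 0;
  }
  ```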
. -
In this moment, you created in the
Results
folder all the.time
and.rapl
files of each execution. Now you just need to do the same forJS
andWASM
. After doing that, you go to theExampleFolder/Large_dataset
and run:make sum
This will create the
fibonacci.csv
file with all the measure values using thecleanresults.py
script fromRAPL
folder.
For some benchmarks, the Makefiles specify the path to the language's compiler/runner. Those paths will most likely differ on your machine. If you would like to properly test every benchmark in every language, please make sure you have all compilers/runners installed, and adapt the Makefiles accordingly.