
Comparative Evaluation of Energy Efficiency in Large Language Models: Analyzing Improvements Across Incremental Versions in Inference Tasks

This experiment is a group project for the GreenLab course, under the Computer Science Master's programme at VU Amsterdam.

Experiment Candidates

1. Alibaba Cloud’s Qwen

The following versions will be tested, in incremental order:

2. Google’s Gemma

The following versions will be tested, in incremental order:

These versions are all instruct variants of the Gemma model. This does not affect our study, since we draw no comparative conclusions about model performance across the LLM candidates.

3. Mistralai’s Mistral

The following versions will be tested, in incremental order:

These versions are all instruct variants of the open-source Mistral model. As with Gemma, this does not affect our study, since we draw no comparative conclusions about model performance across the LLM candidates.

Tool Selection

Experiment Automation

We automated the experiment using the Experiment-Runner framework.

Metrics Extraction

Running the Experiment

Installation

```shell
git clone --recursive https://github.com/andrei-calin-dragomir/greenlab-course-project.git
cd ./greenlab-course-project
python3 -m venv venv
source ./venv/bin/activate
pip install -r requirements.txt
cd ./experiment-runner
pip install -r requirements.txt
```

Execution

```shell
source ./venv/bin/activate
cd ./experiment-runner
python experiment-runner/ ../RunnerConfig.py
```

Note: the virtual environment is created during installation, so for execution it only needs to be activated.

Execution Flow

The workflow of the experiment is defined as:

  1. BEFORE_EXPERIMENT
  2. BEFORE_RUN
  3. START_RUN
  4. START_MEASUREMENT
  5. INTERACT
  6. STOP_MEASUREMENT
  7. STOP_RUN
  8. POPULATE_RUN_DATA
  9. AFTER_EXPERIMENT
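
The runner config attaches one callback to each of these stages; the experiment-level hooks fire once, while the per-run hooks (stages 2–8) repeat for every run. The stand-in class and driver below are a minimal self-contained sketch of that dispatch order, not the framework's actual code; the method names simply mirror the stage names above:

```python
class RunnerConfigSketch:
    """Illustrative stand-in for a RunnerConfig: one hook per lifecycle stage."""

    def __init__(self):
        self.log = []  # records the order in which hooks fire

    def before_experiment(self): self.log.append("BEFORE_EXPERIMENT")
    def before_run(self):        self.log.append("BEFORE_RUN")
    def start_run(self):         self.log.append("START_RUN")
    def start_measurement(self): self.log.append("START_MEASUREMENT")
    def interact(self):          self.log.append("INTERACT")  # e.g. send prompts to the LLM
    def stop_measurement(self):  self.log.append("STOP_MEASUREMENT")
    def stop_run(self):          self.log.append("STOP_RUN")
    def populate_run_data(self): self.log.append("POPULATE_RUN_DATA")
    def after_experiment(self):  self.log.append("AFTER_EXPERIMENT")


def drive(config, num_runs=1):
    """Toy driver: fires the hooks in the documented order."""
    config.before_experiment()           # stage 1, once per experiment
    for _ in range(num_runs):            # stages 2-8 repeat for every run
        config.before_run()
        config.start_run()
        config.start_measurement()
        config.interact()
        config.stop_measurement()
        config.stop_run()
        config.populate_run_data()
    config.after_experiment()            # stage 9, once per experiment


cfg = RunnerConfigSketch()
drive(cfg, num_runs=2)
print(cfg.log)
```

In the real experiment, the measurement hooks start and stop the energy profiler, and INTERACT is where the inference prompts are issued to the model under test.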