# Apache Flink Stream Application with ML Model

This application showcases how to use Apache Flink for real-time stream processing together with a remote Machine Learning (ML) model that produces predictions for elements of the data stream. The idea was to test how the two systems, stream processing and the ML model, compete for resources when running on the same machine, and how the latency of calling the model on a remote machine (but with dedicated hardware) affects performance.
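
The core of this integration is a Flink operator that sends each stream element to the model server and emits the returned prediction. The sketch below shows one way to do this with Flink's Async I/O API in Java; the `/predict` endpoint, the JSON payload, and the class name are illustrative assumptions, not details taken from this repository.

    // Hypothetical sketch: enrich a stream of feature vectors with predictions
    // from a remote model server via Flink's Async I/O API.
    // Endpoint URL and payload format are assumptions, not taken from this repo.
    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Collections;
    import java.util.concurrent.TimeUnit;

    public class RemotePredictionJob {

        // Calls the remote model server asynchronously so that waiting on HTTP
        // responses does not block the Flink task thread.
        static class RemoteModelFunction extends RichAsyncFunction<String, String> {
            private transient HttpClient client;

            @Override
            public void open(org.apache.flink.configuration.Configuration parameters) {
                client = HttpClient.newHttpClient();
            }

            @Override
            public void asyncInvoke(String features, ResultFuture<String> resultFuture) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://model-server:8080/predict")) // assumed endpoint
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(features))
                        .build();

                client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                        .thenAccept(response ->
                                resultFuture.complete(Collections.singleton(response.body())))
                        .exceptionally(t -> {
                            resultFuture.completeExceptionally(t);
                            return null;
                        });
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder source; the real application would read its actual input.
            DataStream<String> features = env.fromElements("{\"x\": 1.0}", "{\"x\": 2.0}");

            // Up to 100 in-flight requests, each timing out after 5 seconds.
            DataStream<String> predictions = AsyncDataStream.unorderedWait(
                    features, new RemoteModelFunction(), 5, TimeUnit.SECONDS, 100);

            predictions.print();
            env.execute("Remote prediction sketch");
        }
    }

Using Async I/O keeps task threads from blocking on network round trips, which matters when comparing a co-located model against a remote model server with dedicated hardware.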

## Setup

  1. Environment Requirements:

     - Ensure you have Apache Flink installed and configured; refer to the Apache Flink documentation for installation instructions.

     - Set up the remote ML model server and make sure it is reachable from the application (a minimal reachability check is sketched after this list).

  2. Clone the Repository:

    git clone https://github.com/agemcipe/in_stream_prediction.git
    cd in_stream_prediction
  3. Run the Docker container serving the ML model:

    ./run_time_eval.sh
  4. Run the Apache Flink streaming process:

    ./run_flink_application.sh
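
Before submitting the Flink job, it can help to confirm that the model server started in step 3 actually answers requests. The check below is only a sketch; the host, port, `/predict` path, and payload format are assumptions and should be adapted to whatever the container actually exposes.

    // Hypothetical smoke test: verify the model server answers on the assumed
    // endpoint before submitting the Flink job. Host, port, path, and payload
    // are illustrative assumptions.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ModelServerCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/predict")) // assumed host/port
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"x\": 1.0}")) // dummy payload
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Status: " + response.statusCode() + ", body: " + response.body());
        }
    }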