Ag manufacturing patches - update openVINO #326

Merged · 8 commits · May 19, 2024
docs/azure_jumpstart_ag/manufacturing/contoso_motors/_index.md

@@ -16,7 +16,7 @@

Additionally, Kubernetes plays a vital role in Contoso Motors' infrastructure, streamlining the deployment and management of containerized applications. With Kubernetes, Contoso Motors ensures smooth manufacturing operations while maintaining scalability to adapt to changing demands, solidifying their position as an industry leader.

Looking ahead, Contoso Motors embraces Microsoft's Adaptive cloud approach, with Azure Arc serving as the foundation. This strategic move unifies teams, sites, and systems into a cohesive operational framework, enabling the harnessing of cloud-native and AI technologies across hybrid, multicloud, edge, and IoT environments. With Azure Arc, Contoso Motors embarks on a journey towards operational agility, security, and innovation, setting new standards in the automotive industry.

## Architecture and technology stack

@@ -43,8 +43,8 @@
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|
| [Deployment guide](../contoso_motors/deployment/) | Not applicable | Not applicable |
| [Data pipeline and reporting across cloud and edge](../contoso_motors/data_opc/) | Operational technology (OT) | Azure IoT Operations, Azure Data Explorer, MQTT, Event Grid, Event Hub, AKS Edge Essentials, InfluxDB, MQTT simulators |
| [Web UI and AI Inference flow](../contoso_motors/ai_inferencing/) | Operational technology (OT) | OpenVINO Open Model Zoo, Yolo8, OpenVINO™ Model Server (OVMS), AKS Edge Essentials, RTSP, Flask, OpenCV |
| [Welding defect scenario using OpenVINO™ and Kubernetes](../contoso_motors/welding_defect/) | Welding monitoring | RTSP simulator, OpenVINO™ Model Server (OVMS), AKS Edge Essentials, Flask, OpenCV |
| [Enabling AI at the edge to enhance workers safety](../contoso_motors/workers_safety/) | Workers safety | RTSP simulator, OpenVINO™ Model Server (OVMS), AKS Edge Essentials, Flask, OpenCV |
| [Infrastructure observability for Kubernetes and Arc-enabled Kubernetes](../contoso_motors/k8s_infra_observability/) | Infrastructure | Arc-enabled Kubernetes, AKS Edge Essentials, Prometheus, Grafana |
| [Infrastructure observability for Arc-enabled servers using Azure Monitor](../contoso_motors/arc_monitoring_servers/) |Infrastructure | Arc-enabled servers, Azure Monitor |
docs/azure_jumpstart_ag/manufacturing/contoso_motors/ai_inferencing/_index.md

@@ -4,19 +4,19 @@
title: Web UI and AI Inference flow
linkTitle: Web UI and AI Inference flow
summary: |
  Contoso Motors leverages AI-powered manufacturing with a Kubernetes-based infrastructure, which provides a flexible and scalable design that can be easily extended. In this scenario, Contoso Motors wants to implement different AI inferencing models for various use cases, leveraging OpenVINO Model Server (OVMS), a high-performance inference serving software that allows users to deploy and serve multiple AI models. This scenario also explains the architecture of the AI inference flow and the steps involved in the inference/processing.
serviceOrPlatform: INFRASTRUCTURE
technologyStack:
- AI
- OpenVINO™
- AKS EDGE ESSENTIALS
- OpenVINO™ MODEL SERVER
- RTSP
---

# Web UI and AI Inference

Contoso Motors leverages AI-powered manufacturing with a Kubernetes-based infrastructure, which provides a flexible and scalable design that can be easily extended. This document covers how Contoso Motors implements different AI inferencing models for various use cases, leveraging [OpenVINO Model Server (OVMS)](https://docs.openvino.ai/2023.3/ovms_what_is_openvino_model_server.html). OVMS, by Intel, is a high-performance inference serving software that allows users to deploy and serve multiple AI models in a convenient, scalable, and efficient manner.


## Architecture

@@ -26,7 +26,7 @@

1. **Model Downloader:** This is a *Kubernetes Job* that downloads the binary files of the AI models and the corresponding configuration files. Depending on the model type, various formats can be used, such as .onnx, .h5, .bin, and others. In general, the binary files contain the model weights and architecture, while the configuration files specify the model properties, such as input/output tensor shapes, number of layers, and more.

The models and configurations needed for the OVMS deployment are hosted in a storage account in Azure. During deployment, the **Model Downloader** job pulls these models and configurations by running the ***ovms_config.sh*** script, which downloads all necessary model files and stores them in the **ovms-pvc** persistent volume claim for the OVMS pods to access and serve. All models must be placed and mounted in a particular directory structure, following the rules described in [OpenVINO - Model Serving](https://docs.openvino.ai/2022.3/ovms_docs_models_repository.html).

For more information, see *[job.yaml](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/operations/charts/ovms/templates/job.yaml)* and *[ovms_config.sh](https://raw.githubusercontent.com/microsoft/jumpstart-agora-apps/manufacturing/contoso_manufacturing/deployment/configs/ovms_config.sh)*.
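The directory structure OVMS expects can be sketched as follows. This is a minimal illustration of the layout described above (one folder per model, one numbered folder per version); the model name, version, and file names below are hypothetical, not the actual Contoso Motors artifacts.

```python
from pathlib import Path

def stage_model(repo: Path, name: str, version: int, files: dict[str, bytes]) -> Path:
    """Place downloaded model files into the per-model, per-version
    directory structure that OVMS serves from: <repo>/<name>/<version>/."""
    version_dir = repo / name / str(version)
    version_dir.mkdir(parents=True, exist_ok=True)
    for filename, payload in files.items():
        (version_dir / filename).write_bytes(payload)
    return version_dir

# Illustrative staging of a single model version (placeholder payloads).
repo = Path("/tmp/ovms-models")
stage_model(repo, "welding-defect", 1, {"model.bin": b"...", "model.xml": b"..."})
```

After staging, the repository root is what gets mounted into the OVMS pods via the **ovms-pvc** volume.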

@@ -54,17 +54,17 @@

1. **Pre-process and AI inference:** This is the code of the **Web AI Inference and UI** pod that handles the **preprocessing** of the image and sends the processed input to the AI inference server. Each model has its own Python class ([Yolo8](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/yolov8.py), [Welding](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/welding.py) and [Pose Estimator](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/pose_estimator.py)) that implements the **pre-process**, **post-process** and **run** methods according to the model requirements. Once the input data, generally a **torch** tensor, is created, it's sent to the AI inference server using the [ovmsclient library](https://pypi.org/project/ovmsclient/).

The AI inference server, in our case the [OpenVINO Model Server](https://docs.openvino.ai/2023.3/ovms_what_is_openvino_model_server.html), hosts models and makes them accessible to software components over standard network protocols: a client sends a request to the model server, which performs model inference and sends a response back to the client.

The OVMS model server is configured as part of the Helm deployment, using the **OVMS Operator**. The deployment installs the OVMS Operator, sets up the storage for AI models, and configures the OVMS pods and services. For more information about the setup, check [OVMS Helm](https://github.com/microsoft/jumpstart-agora-apps/tree/manufacturing/contoso_manufacturing/operations/charts/ovms). For more information about the OVMS Operator, check [OpenVINO Model Server with Kubernetes](https://docs.openvino.ai/archive/2021.4/ovms_docs_kubernetes.html). The OVMS model server is typically deployed as a set of pods. Each pod contains one or more instances of the OVMS software, along with any necessary dependencies. The pods are managed by a Kubernetes controller, which is responsible for ensuring that the desired number of pods is running at all times.
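The per-model class pattern described above (pre-process, run, post-process) can be sketched as a minimal skeleton. This is illustrative only: the input shape, model name, and endpoint are assumptions, and the actual classes live in the linked yolov8.py, welding.py, and pose_estimator.py files. The ovmsclient call is shown in a comment rather than executed, since it needs a live OVMS endpoint.

```python
import numpy as np

class WeldingModel:
    """Illustrative sketch of the pre-process / run / post-process pattern."""
    input_size = (224, 224)  # assumed input shape, not the real model's

    def pre_process(self, frame: np.ndarray) -> np.ndarray:
        # Scale pixels to [0, 1], move channels first, add a batch dimension.
        x = frame.astype(np.float32) / 255.0
        x = np.transpose(x, (2, 0, 1))
        return np.expand_dims(x, axis=0)

    def run(self, batch: np.ndarray) -> np.ndarray:
        # Against a live server this would use ovmsclient, e.g.:
        #   client = ovmsclient.make_grpc_client("ovms-service:9000")
        #   return client.predict(inputs={"input": batch}, model_name="welding")
        # A zero tensor stands in here so the flow is runnable end to end.
        return np.zeros((1, 2), dtype=np.float32)

    def post_process(self, output: np.ndarray) -> int:
        # Reduce the raw output to a class index.
        return int(np.argmax(output, axis=1)[0])

model = WeldingModel()
frame = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
prediction = model.post_process(model.run(model.pre_process(frame)))
```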

2. **Post-process and UI rendering:** This is the code in the **Web AI Inference and UI** pod that handles post-processing and final UI rendering for the user. Depending on the model, once OVMS provides the inference response, post-processing is applied to the image, such as adding labels, bounding boxes, or human skeleton graphs. Once the visual data is added to the frame, it's served to the UI frontend using a **Flask App** method. For more information on the Flask app, see [Web UI - app.py](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/app.py)
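Serving frames to a browser from Flask is commonly done with a multipart MJPEG stream. The generator below is a hedged sketch of that pattern, not the actual app.py implementation: the boundary name and header layout follow the common Flask streaming recipe, and a Flask view would wrap it as `Response(mjpeg_stream(frames), mimetype="multipart/x-mixed-replace; boundary=frame")`.

```python
from typing import Iterable, Iterator

def mjpeg_stream(jpeg_frames: Iterable[bytes]) -> Iterator[bytes]:
    """Yield each JPEG-encoded frame as one part of a
    multipart/x-mixed-replace stream, so the browser replaces
    the image in place as new frames arrive."""
    for jpeg in jpeg_frames:
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n")

# Illustrative use with fake JPEG payloads instead of OpenCV-encoded frames.
chunks = list(mjpeg_stream([b"\xff\xd8fake-jpeg-1", b"\xff\xd8fake-jpeg-2"]))
```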

## OpenVINO Model Server

The [OpenVINO Model Server](https://www.intel.com/content/www/us/en/developer/articles/technical/deploy-openvino-in-openshift-and-kubernetes.html), by Intel, is a high-performance inference serving software that allows users to deploy and serve AI models. Model serving means taking a trained AI model and making it available to software components over a network API. OVMS offers a native method for exposing models over a gRPC or REST API. Furthermore, it supports various deep learning frameworks such as TensorFlow, PyTorch, OpenVINO, and ONNX. By using OVMS, developers can easily deploy AI models by specifying the model configuration and the input/output formats.

The OpenVINO Model Server offers many advantages for efficient model deployment:

- Remote inference enables using lightweight clients with only the necessary functions to perform API calls to edge or cloud deployments.
- Applications are independent of the model framework, hardware device, and infrastructure.
@@ -77,4 +77,4 @@

## Next steps

Now that you have completed the data pipeline scenario, it's time to continue to the next scenario, [Welding defect scenario using OpenVINO™ and Kubernetes](../welding_defect/).
docs/azure_jumpstart_ag/manufacturing/contoso_motors/welding_defect/_index.md

@@ -1,24 +1,24 @@
---
type: docs
weight: 4
title: Welding defect scenario using OpenVINO™ and Kubernetes
linkTitle: Welding defect scenario using OpenVINO™ and Kubernetes
summary: |
The Welding Defect page provides an overview of the welding defect scenario in the Contoso Motors solution. It describes the architecture and flow of information for detecting and classifying welding defects using AI. The page also explains the steps involved in the welding defect inference process, including UI selection, RTSP video simulation, frame capturing, image pre-processing/inferencing, and post-processing/rendering.
serviceOrPlatform: Manufacturing
technologyStack:
- AKS
- OpenVINO™
- AI
- AKS EDGE ESSENTIALS
- RTSP
---

# Welding defect scenario using OpenVINO™ and Kubernetes

## Overview

Contoso Motors uses AI-enhanced computer vision to improve welding operations on its assembly lines. Welding is one of the four computer vision use cases that Contoso Motors uses, which also include object detection, human pose estimation, and safety helmet detection. While each use case has its own unique characteristics, they all follow the same inferencing architecture pattern and data flow.


Welding is a process of joining two or more metal parts by melting and fusing them together. Welding defects are flaws or irregularities that occur during or after the welding process, which can affect the quality, strength, and appearance of the weld. Welding defects can be caused by various factors, such as improper welding parameters, inadequate preparation, poor welding technique, or environmental conditions. In this scenario, an AI model is used to automatically detect and classify welding defects from a video feed. Welding defect inference can help improve the efficiency, accuracy, and safety of weld inspection and quality control.

@@ -42,7 +42,7 @@
5. Add a new dimension to the input image at the beginning of the array to create a "batch" of images.
6. Flip the order of the color channels from RGB to BGR.

After the pre-processing step is completed, the final frame data is sent to the OpenVINO model server for inference. This is achieved using gRPC and the [ovmsclient](https://pypi.org/project/ovmsclient/) library, which provides a convenient and efficient way to communicate with the server. The server uses the OpenVINO toolkit to perform the inference process, which involves running the input data through a trained machine learning model to generate predictions or classifications. Once the inference is complete, the results are returned to the client for further processing or display.
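The pre-processing steps listed above can be sketched in NumPy. This is a minimal illustration under stated assumptions: resizing (normally done with OpenCV) is assumed to have happened already, the input is an H x W x 3 RGB frame, and normalization to [0, 1] stands in for whatever scaling the real model requires.

```python
import numpy as np

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Turn an already-resized RGB frame into a model-ready batch."""
    x = frame_rgb.astype(np.float32) / 255.0  # scale pixel values to [0, 1]
    x = np.transpose(x, (2, 0, 1))            # HWC -> CHW channel ordering
    x = np.expand_dims(x, axis=0)             # step 5: add a batch dimension
    x = x[:, ::-1, :, :]                      # step 6: flip channels RGB -> BGR
    return x

frame = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
batch = preprocess(frame)
```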

1. **Frame post-processing/rendering:** This is the final step; it involves parsing the inference response and applying the required post-processing. For this welding model, the post-process involves the following transformations:

@@ -81,7 +81,7 @@

Contoso leverages their AI-enhanced computer vision to monitor the welding process and help OT managers detect welding defects through the "Control Center" interface.

- To access the "Control Center" interface, select the Control center [_env_] option from the _Control center_ Bookmarks folder. Each environment has its own "Control Center" instance with a different IP. Select one of the sites and click on the factory image to start navigating the different factory control centers.


![Screenshot showing the Control center Bookmark](./img/control-center-menu.png)

docs/azure_jumpstart_ag/manufacturing/contoso_motors/workers_safety/_index.md

@@ -8,7 +8,7 @@
serviceOrPlatform: Manufacturing
technologyStack:
- AKS
- OpenVINO™
- AI
- AKS EDGE ESSENTIALS
- RTSP
@@ -18,7 +18,7 @@

## Overview

Contoso Motors uses AI-enhanced computer vision to improve workers' safety by detecting workers with no helmets on the factory floor. Worker safety is one of the four computer vision use cases that Contoso Motors uses, which also include object detection, defect detection, and human pose estimation. While each use case has its own unique characteristics, they all follow the same inferencing architecture pattern and data flow.


## Architecture

Expand All @@ -28,7 +28,7 @@

1. **Select Site UI:** The user selects a working station from the interactive UI. Each station corresponds to a specific AI flow. In particular, when the user selects the two workers walking station (highlighted in the image above), the workers safety flow is triggered.

1. **RTSP video simulation:** The workers safety flow requires a particular video of two workers walking with helmets to apply the AI inference. In this scenario, due to the lack of a real video camera, an RTSP simulated feed is used. The simulated feed is designed to closely mimic the behavior of two workers walking on site with the appropriate safety gear, providing a reliable video for the AI inference process.


1. **Frame capturing:** This step involves using **OpenCV** to establish a connection with the RTSP video feed and retrieve the video frames. Each frame is then passed to the appropriate worker safety AI inference class, which applies the required pre-processing and post-processing to the frame. The worker safety AI inference class is implemented in [yolov8.py](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/welding.py) and is designed to handle the specific requirements of the worker safety detection process.

@@ -39,11 +39,11 @@
4. Transpose the dimensions of the input image from (height, width, channels) to (channels, height, width).
5. Add a new dimension to the input image at the beginning of the array to create a "batch" of images.

After the pre-processing step is completed, the final frame data is sent to the OpenVINO model server for inference. This is achieved using gRPC and the [ovmsclient](https://pypi.org/project/ovmsclient/) library, which provides a convenient and efficient way to communicate with the server. The server uses the OpenVINO toolkit to perform the inference process, which involves running the input data through a trained machine learning model to generate predictions or classifications. Once the inference is complete, the results are returned to the client for further processing or display.

1. **Frame post-processing/rendering:** This is the final step; it involves parsing the inference response and applying the required post-processing. For this model, the post-process involves the following transformations:

1. Use the OVMS inference result and the original frame to calculate the bounding box coordinates, scores, and class IDs for each detection in the output tensor.

1. Apply a non-maximum suppression function to filter out overlapping bounding boxes.
1. Draw each selected detection on the input image and create a table of the detection information.
1. Return the frame with the detections and labels.
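The non-maximum suppression step above can be sketched with a standard IoU-based implementation. This is the common greedy NMS used in YOLO-style post-processing, not necessarily the exact routine in the Contoso code; boxes are `(x1, y1, x2, y2)` and the IoU threshold is an assumed typical value.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.45) -> list[int]:
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop remaining boxes that overlap it above the IoU threshold, repeat."""
    order = scores.argsort()[::-1]
    keep: list[int] = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]
    return keep

# Two heavily overlapping boxes plus one separate box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
kept = nms(boxes, scores)
```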
@@ -74,7 +74,7 @@

Contoso uses AI-enhanced computer vision to monitor safety-helmet adherence for workers on the factory floor, helping OT managers ensure worker safety through the "Control Center" interface.

- To access the "Control Center" interface, select the Control center [_env_] option from the _Control center_ Bookmarks folder. Each environment has its own "Control Center" instance with a different IP. Select one of the sites and click on the factory image to start navigating the different factory control centers.


![Screenshot showing the Control center Bookmark](./img/control-center-menu.png)

@@ -40,7 +40,7 @@ To get started with the "Contoso Supermarket" Jumpstart Agora scenario, we provi
| [Contoso Supermarket deployment guide](../contoso_supermarket/deployment/) | Not applicable | Not applicable |
| [Data pipeline and reporting across cloud and edge for store orders](../contoso_supermarket/data_pos/) | Point of Sale (PoS) | Cosmos DB, Azure Data Explorer, OSS PostgreSQL, AKS Edge Essentials |
| [Data pipeline and reporting across cloud and edge for sensor telemetry](../contoso_supermarket/freezer_monitor/) | Freezer Monitoring for Food Safety | IoT Hub, Azure Data Explorer, Mosquitto MQTT Broker, Prometheus, Grafana, AKS Edge Essentials |
| [Enabling AI at the Edge & Software configurations rollout with basic GitOps flow](../contoso_supermarket/ai/) | Managers Control Center | AKS Edge Essentials, GitOps (Flux), OSS PostgreSQL, Intel OpenVINO™ Inference Engine |
| [Streamlining the Software Delivery Process using CI/CD](../contoso_supermarket/ci_cd/) | Point of Sale (PoS) | AKS, AKS Edge Essentials, Azure Arc, Flux, GitHub Actions, Azure Container Registry |
| [Infrastructure observability for Kubernetes and Arc-enabled Kubernetes](../contoso_supermarket/k8s_infra_observability/) | Infrastructure | AKS, Arc-enabled Kubernetes, AKS Edge Essentials, Prometheus, Grafana |
| [Infrastructure observability for Arc-enabled servers using Azure Monitor](../contoso_supermarket/arc_monitoring_servers/) | Infrastructure | Arc-enabled servers, Azure Monitor |