small fixes #311

Merged · 11 commits · May 17, 2024
`docs/azure_jumpstart_ag/manufacturing/contoso_motors/_index.md` (9 additions, 3 deletions)

Contoso Motors continues to lead the automotive industry with its unwavering commitment to innovation and employee safety. Positioned at the forefront of the industrial revolution, Contoso Motors seamlessly integrates cutting-edge technologies into every aspect of their operations, embodying the essence of Industrial IoT (Industry 4.0).

At the heart of Contoso Motors' digital infrastructure lies Azure IoT Operations, which provides standards-based technology for working with protocols like OPC UA and MQTT. This advanced tool empowers Contoso Motors to extract insights from various equipment and systems within their manufacturing plants, facilitating real-time decision-making and operational optimization. Through data simulation and analytics, Contoso Motors refines processes and makes informed decisions with precision.

Furthermore, Contoso Motors leverages the power of Artificial Intelligence (AI) across multiple domains, including welding, pose estimation, helmet detection, and object detection. These AI models not only optimize manufacturing processes but also play a crucial role in ensuring employee safety. By proactively identifying potential hazards and monitoring workplace conditions, Contoso Motors fosters a culture of safety and well-being among its workforce.

Additionally, Kubernetes plays a vital role in Contoso Motors' infrastructure, streamlining the deployment and management of containerized applications. With Kubernetes, Contoso Motors ensures smooth manufacturing operations while maintaining scalability to adapt to changing demands, solidifying their position as an industry leader.

Looking ahead, Contoso Motors embraces Microsoft's Adaptive cloud approach, with Azure Arc serving as the foundation. This strategic move unifies teams, sites, and systems into a cohesive operational framework, enabling the harnessing of cloud-native and AI technologies across hybrid, multicloud, edge, and IoT environments. With Azure Arc, Contoso Motors embarks on a journey towards operational agility, security, and innovation, setting new standards in the automotive industry.


## Architecture and technology stack

Contoso Motors uses an AI technology stack, services, and processes to support their digital transformation. To demonstrate the various use cases mentioned below, a set of reference use cases is included with the Jumpstart Agora Contoso Motors scenario:

- **Welding** - Optimizing welding processes for precision and efficiency, ensuring high-quality welds and minimizing defects through advanced techniques and technologies.
- **AI-driven Safety** - Implementing artificial intelligence to enhance safety protocols, proactively identifying potential hazards and mitigating risks to ensure a secure working environment for personnel.

![Applications and technology stack architecture diagram](./img/architecture_diagram.png)

## Virtual edge environment

Jumpstart Agora provides virtual sandbox environments that simulate edge infrastructure deployments for industry solutions. The automation in the Contoso Motors scenario deploys an Azure Virtual machine to support this "virtual" factory's AI technology. Additional features are included to further enhance the "virtual industry" experience in a lab setting, including simulated RTSP feeds, data emulators, and MQTT and OPC UA devices and data. Review the diagram and dedicated guides below to learn more about the virtual environment.

![Applications and technology stack architecture diagram](./img/simulation_stack.png)
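As a rough illustration of the kind of payload the data emulators publish over MQTT, the sketch below builds one simulated telemetry sample. The field names and asset IDs here are invented for illustration only and are not the scenario's actual schema:

```python
import json
import random
import time

def make_telemetry(asset_id: str) -> dict:
    """Build one simulated OPC UA-style telemetry sample.

    The field names are illustrative; the actual Contoso Motors
    emulators define their own schemas.
    """
    return {
        "assetId": asset_id,
        "timestamp": time.time(),
        "temperatureC": round(random.uniform(60.0, 90.0), 2),
        "vibrationMm": round(random.uniform(0.1, 2.5), 2),
    }

if __name__ == "__main__":
    # In the real scenario an emulator would publish this payload to the
    # MQTT broker (for example with a client library such as paho-mqtt);
    # here we just print the JSON it would send.
    sample = make_telemetry("welding-robot-01")
    print(json.dumps(sample))
```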

## Getting started

To get started with the "Contoso Motors" Jumpstart Agora scenario, we provided you with a dedicated guide for each step of the way. The guides are designed to be as simple as possible but also keep the detailed-oriented spirit of the Jumpstart.
`docs/azure_jumpstart_ag/manufacturing/contoso_motors/ai_inferencing/_index.md`

# Contoso Motors Web UI and AI Inference

Contoso Motors leverages AI-powered manufacturing with a Kubernetes-based infrastructure that provides a flexible and scalable design that can be easily extended. This document covers how Contoso Motors implements different AI inferencing models for various use cases, leveraging [OpenVINO Model Server (OVMS)](https://docs.openvino.ai/2023.3/ovms_what_is_openvino_model_server.html). OVMS, by Intel, is high-performance inference serving software that allows users to deploy and serve multiple AI models in a convenient, scalable, and efficient way.

## Architecture

![AI inference flow](./img/ai_flow.png)

The Edge AI inference flow can be divided into two configuration steps and three inference/processing steps.

1. **Model Downloader:** this is a *Kubernetes Job* that downloads the binary files of the AI models and the corresponding configuration files. Depending on the model type, various formats can be used, such as .onnx, .hd5, .bin, and others. In general, the binary files contain the model weights and architecture, while the configuration files specify the model properties, such as input/output tensor shapes, number of layers, and more.

The models and configurations needed for the OVMS deployment are hosted in a storage account in Azure. During deployment, the **Model Downloader** job pulls these models and configurations by running the **ovms_config.sh** script, which downloads all necessary model files and stores them in the **ovms-pvc** persistent volume claim for the OVMS pods to access and serve. All models need to be placed and mounted in a particular directory structure and according to the rules described in [OpenVINO - Model Serving](https://docs.OpenVINO.ai/2022.3/ovms_docs_models_repository.html).

For more information, see [job.yaml](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/operations/charts/ovms/templates/job.yaml) and [ovms_config.sh](https://raw.githubusercontent.com/microsoft/jumpstart-agora-apps/manufacturing/contoso_manufacturing/deployment/configs/ovms_config.sh).

    | Model | Scenario | Links |
    | ----- | -------- | ----- |
    | Yolo8 | Object detection | [Yolo8 Models](https://docs.ultralytics.com/modes/#introduction) |
    | Safety-Yolo | Helmet detection | [safety-yolo8.bin](https://jsfiles.blob.core.windows.net/ai-models/safety-yolo8.bin), [safety-yolo8.xml](https://jsfiles.blob.core.windows.net/ai-models/safety-yolo8.xml) |
    | Weld-porosity | Weld porosity detection (no weld, weld, porosity) | [weld-porosity-detection-0001.bin](https://jsfiles.blob.core.windows.net/ai-models/weld-porosity-detection-0001.bin), [weld-porosity-detection-0001.xml](https://jsfiles.blob.core.windows.net/ai-models/weld-porosity-detection-0001.xml) |
    | Pose-estimation | Human pose estimation | [human-pose-estimation-0007.bin](https://jsfiles.blob.core.windows.net/ai-models/human-pose-estimation-0007.bin), [human-pose-estimation-0007.xml](https://jsfiles.blob.core.windows.net/ai-models/human-pose-estimation-0007.xml) |

1. **Video Downloader:** this is an init-container in Kubernetes responsible for downloading sample video files used to simulate an RTSP video feed. All video streams are simulated using the **RTSP Simulator**. The videos are downloaded from the storage account and stored in a Kubernetes volume for use in the deployment.

    | Video | Scenario | Link |
    | ----- | -------- | ---- |
    | Worker on normal routine | Object detection | [object-detection.mp4](https://jsfiles.blob.core.windows.net/video/agora/object-detection.mp4) |
    | Worker walking with safety-gear | Helmet detection | [helmet-detection.mp4](https://jsfiles.blob.core.windows.net/video/agora/helmet-detection.mp4) |
    | Welding feed | Weld porosity detection | [welding.mp4](https://jsfiles.blob.core.windows.net/video/agora/welding.mp4) |
    | Worker interacting with robot | Human pose estimation | [object-detection.mp4](https://jsfiles.blob.core.windows.net/video/agora/object-detection.mp4) |

After the configuration is complete, the inference flow can be initiated. The **Web AI Inference and UI** application retrieves data, such as video frames from an RTSP source or images, and applies any necessary preprocessing, such as resizing an image to a specific size with a specific RGB/BGR format. The data is then sent to the model server for inference, and any required post-processing is applied to store or display the results to the user.

The steps involved in the inference/processing can be described as follows:

1. **Video Streamer Capture:** this is the code of the **Web AI Inference and UI** pod that handles the capture of the RTSP feeds. Depending on the AI scenario, this code grabs the appropriate video and, using **OpenCV**, handles the video frames for processing and AI inference. For more information on the different videos available, please refer to the [rtsp-simulator.yaml](https://github.com/microsoft/jumpstart-agora-apps/tree/manufacturing/contoso_manufacturing/operations/charts/rtsp-simulator) file.


1. **Pre-process and AI inference:** this is the code of the **Web AI Inference and UI** pod that handles the **preprocessing** of the image and sends the processed input to the AI inference server. Each model has its own Python class ([Yolo8](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/yolov8.py), [Welding](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/welding.py) and [Pose Estimator](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/pose_estimator.py)) that implements the **pre-process**, **post-process** and **run** methods according to the model requirements. Once the input data, generally a **torch** tensor, is created, it's sent to the AI inference server using the [ovmsclient library](https://pypi.org/project/ovmsclient/).

    The AI inference server, in our case the [OpenVINO Model Server](https://docs.OpenVINO.ai/2023.3/ovms_what_is_OpenVINO_model_server.html), hosts models and makes them accessible to software components over standard network protocols: a client sends a request to the model server, which performs model inference and sends a response back to the client.

    The OVMS model server is configured as part of the Helm deployment, using the **OVMS Operator**. The deployment installs the OVMS Operator, sets up the storage for AI models, and configures the OVMS pods and services. For more information about the setup, check [OVMS Helm](https://github.com/microsoft/jumpstart-agora-apps/tree/manufacturing/contoso_manufacturing/operations/charts/ovms). For more information about the OVMS Operator, check [OpenVINO Model Server with Kubernetes](https://docs.OpenVINO.ai/archive/2021.4/ovms_docs_kubernetes.html). The OVMS model server is typically deployed as a set of pods. Each pod contains one or more instances of the OVMS software, along with any necessary dependencies. The pods are managed by a Kubernetes controller, which is responsible for ensuring that the desired number of pods are running at all times.

1. **Post-process and UI rendering:** this is the code in the **Web AI Inference and UI** pod responsible for handling post-processing and final UI rendering for the user. Depending on the model, once the OVMS provides the inference response, post-processing is applied to the image, such as adding labels, bounding boxes, or human skeleton graphs. Once the visual data is added to the frame, it's served to the UI frontend using a **Flask App** method. For more information on the Flask App, please refer to the [Web UI - app.py](https://github.com/microsoft/jumpstart-agora-apps/blob/main/contoso_manufacturing/developer/webapp-decode/app.py).
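The pre-process step above can be sketched with a numpy-only stand-in for the OpenCV resize (the real pipeline uses `cv2.resize`); the ovmsclient call is shown only as a comment, and the model and service names in it are illustrative assumptions:

```python
import numpy as np

def preprocess(frame_bgr: np.ndarray, size=(640, 640)) -> np.ndarray:
    """Convert a BGR frame to the NCHW float32 tensor that detection
    models served by OVMS typically expect. Nearest-neighbour resize
    keeps this sketch dependency-free."""
    h, w = frame_bgr.shape[:2]
    ys = np.arange(size[1]) * h // size[1]    # row indices to sample
    xs = np.arange(size[0]) * w // size[0]    # column indices to sample
    resized = frame_bgr[ys][:, xs]            # nearest-neighbour resize
    rgb = resized[:, :, ::-1]                 # BGR -> RGB
    chw = rgb.transpose(2, 0, 1)              # HWC -> CHW
    return chw[np.newaxis].astype(np.float32) / 255.0  # NCHW, normalized

if __name__ == "__main__":
    fake_frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in RTSP frame
    tensor = preprocess(fake_frame)
    print(tensor.shape)
    # The app would then send the tensor to OVMS, roughly:
    #   from ovmsclient import make_grpc_client
    #   client = make_grpc_client("ovms-service:9000")   # illustrative address
    #   outputs = client.predict({"images": tensor}, model_name="yolov8")
```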

## OpenVINO Model Server

The [OpenVINO Model Server](https://www.intel.com/content/www/us/en/developer/articles/technical/deploy-OpenVINO-in-openshift-and-kubernetes.html) by Intel is high-performance inference serving software that allows users to deploy and serve AI models. Model serving means taking a trained AI model and making it available to software components over a network API. OVMS offers a native method for exposing models over a gRPC or REST API. Furthermore, it supports various deep learning frameworks such as TensorFlow, PyTorch, OpenVINO, and ONNX. By using OVMS, developers can easily deploy AI models by specifying the model configuration and the input/output formats.
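The model repository rules referenced earlier (one directory per model, numbered version subdirectories) can be sketched as follows. The paths and placeholder files are illustrative; the real ovms_config.sh downloads the actual .bin/.xml pairs from the blob storage URLs listed in the model table above:

```shell
# Sketch of the versioned directory layout OVMS expects.
# Model names follow this scenario; the exact tree is defined by ovms_config.sh.
mkdir -p models/weld-porosity/1 models/safety-yolo8/1 models/human-pose-estimation/1

# The real script fetches each model pair from blob storage, e.g.:
#   curl -O https://jsfiles.blob.core.windows.net/ai-models/weld-porosity-detection-0001.bin
# Placeholders below only show where the files land.
touch models/weld-porosity/1/weld-porosity-detection-0001.bin \
      models/weld-porosity/1/weld-porosity-detection-0001.xml

# Show the resulting tree (one versioned subdirectory per model).
find models -type d | sort
```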

The OpenVINO Model Server offers many advantages for efficient model deployment:
`docs/azure_jumpstart_ag/manufacturing/contoso_motors/troubleshooting/_index.md`
- Not enough vCPU quota available in your target Azure region - check vCPU quota and ensure you have at least 40 available vCPU.
- You can use the command *`az vm list-usage --location <your location> --output table`* to check your available vCPU quota.

![Screenshot showing az vm list-usage](./img/az_vm_list_usage.png)


- Target Azure region doesn't support all required Azure services - ensure you are running Agora in one of the supported regions listed in the [deployment guide](../deployment/).


- Not enough Microsoft Entra ID quota to create additional service principals. You may receive a message stating "The directory object quota limit for the Principal has been exceeded. Please ask your administrator to increase the quota limit or delete objects to reduce the used quota."

- If this occurs, you must delete some of your unused service principals and try the deployment again.

![Screenshot showing not enough Entra quota for new service principals](./img/aad_quota_exceeded.png)


### Exploring logs from the _Ag-VM-Client_ virtual machine

| ------- | ----------- |
| _C:\Ag\Logs\AgLogonScript.log_ | Output from the primary PowerShell script that drives most of the automation tasks. |
| _C:\Ag\Logs\ArcConnectivity.log_ | Output from the tasks that onboard servers and Kubernetes clusters to Azure Arc. |
| _C:\Ag\Logs\AzCLI.log_ | Output from Az CLI login. |

| _C:\Ag\Logs\AzPowerShell.log_ | Output from the installation of PowerShell modules. |
| _C:\Ag\Logs\Bookmarks.log_ | Output from the configuration of Microsoft Edge bookmarks. |
| _C:\Ag\Logs\Bootstrap.log_ | Output from the initial bootstrapping script that runs on _Ag-VM-Client_. |

- Click your user icon in the upper-right of Azure Data Explorer and "Switch Directory" to the correct Azure environment where you deployed Contoso Motors.

![Screenshot showing switch tenants in ADX](./img/adx_switch.png)