3 changes: 0 additions & 3 deletions metro-ai-suite/metro-sdk-manager/docs/toc.rst

This file was deleted.

@@ -1,47 +1,19 @@
Metro SDK Manager
============================================
# Metro SDK Manager


Overview
--------
## Overview

The SDK Manager is a comprehensive development tool that streamlines the process of discovering, installing, and managing multiple software development kits for edge AI applications. Designed for developers working with Intel's edge computing ecosystem, it eliminates the complexity of manually configuring SDK combinations by providing automated dependency resolution, version compatibility checking, and integrated toolchain setup. An interactive, wizard-guided selection flow lets developers download and install the SDKs they need directly into their development environment.

Built with developer productivity in mind, the SDK Manager handles cross-platform builds, maintains isolated development environments, and provides real-time compatibility validation across different SDK versions. It includes extensive documentation with code samples, API references, and troubleshooting guides. Whether you're prototyping edge AI solutions or deploying production applications, the SDK Manager ensures consistent, reproducible development environments across your team and deployment targets.

.. toctree::
:caption: Metro SDK Manager

install-sdk

.. toctree::
:caption: Metro Vision AI SDK
:hidden:

metro-vision-ai-sdk/get-started.md
metro-vision-ai-sdk/tutorial-1.md
metro-vision-ai-sdk/tutorial-2.md
metro-vision-ai-sdk/tutorial-3.md
metro-vision-ai-sdk/tutorial-4.md
metro-vision-ai-sdk/tutorial-5.md

.. toctree::
:caption: Metro Gen AI SDK
:hidden:

metro-gen-ai-sdk/get-started.md

.. toctree::
:caption: Visual AI Demo Kit
:hidden:

visual-ai-demo-kit/get-started.md
visual-ai-demo-kit/tutorial-1.md
visual-ai-demo-kit/tutorial-2.md
visual-ai-demo-kit/tutorial-3.md

.. toctree::
:caption: Community and Support
:hidden:

support
<!--hide_directive
:::{toctree}
:hidden:

install-sdk
Metro Vision AI SDK <metro-vision-ai-sdk/get-started>
Metro Gen AI SDK <metro-gen-ai-sdk/get-started>
Visual AI Demo Kit <visual-ai-demo-kit/get-started>
support
:::
hide_directive-->
@@ -2,4 +2,4 @@

<script type="module" crossorigin src="../_static/installer/iframe-resizer.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<iframe id="installerFrame" src="../_static/installer/selector.html" style="width: 100%; min-width: 350px; border-radius: 8px; overflow-x: auto;" title="Download Metro SDK"></iframe>
<iframe id="installerFrame" src="../_static/installer/selector.html" style="width: 100%; min-width: 350px; min-height:750px; border-radius: 8px; overflow-x: auto;" title="Download Metro SDK"></iframe>
@@ -7,6 +7,7 @@ The Metro Gen AI SDK provides a comprehensive development environment for genera
## Learning Objectives

Upon completion of this guide, you will be able to:

- Install and configure the Metro Gen AI SDK
- Deploy generative AI microservices for document processing and question-answering
- Understand the architecture of RAG-based applications using Intel's AI frameworks
@@ -30,7 +31,6 @@ curl https://raw.githubusercontent.com/open-edge-platform/edge-ai-suites/refs/he

![Metro Gen AI SDK Installation](images/metro-gen-ai-sdk-install.png)


## Question-Answering Application Implementation

This section demonstrates a complete RAG (Retrieval-Augmented Generation) application workflow using the installed Gen AI components.
@@ -69,6 +69,7 @@ Start the complete Gen AI application stack using Docker Compose:
```bash
docker compose up
```
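
If you prefer to keep the terminal free while the stack runs, a detached launch with a separate log follow is a common alternative; this is an optional convenience, not a step from the original guide:

```bash
# Start the stack in the background instead of the foreground
docker compose up -d

# Follow logs from all services; Ctrl+C stops the log stream, not the containers
docker compose logs -f
```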

### Step 4: Verify Deployment Status

Check that all application components are running correctly:
@@ -77,7 +78,6 @@ Check that all application components are running correctly:
docker ps
```
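
If something looks wrong, a compose-aware or formatted listing can make a stopped or restarting container easier to spot. The commands below are an optional sketch; the exact service names depend on the compose file installed by the SDK Manager:

```bash
# Compose-aware view: each service with its state and published ports
docker compose ps

# Compact status table for all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```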


### Step 5: Access the Application Interface

Open a web browser and navigate to the application dashboard:
@@ -89,13 +89,23 @@ http://localhost:8101
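
For a headless sanity check (for example on a remote development box), an HTTP probe of the same address should return a success or redirect status once the UI container is ready; this is an illustrative check rather than a required step:

```bash
# Expect an HTTP 200 (or a redirect) from the dashboard
curl -I http://localhost:8101
```
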
## Additional Resources

### Technical Documentation
- [Audio Analyzer](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/audio-analyzer/index.html) - Comprehensive documentation for multimodal audio processing capabilities
- [Document Ingestion - pgvector](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/document-ingestion/pgvector/docs/get-started.md) - Vector database integration and document processing workflows
- [Multimodal Embedding Serving](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/multimodal-embedding-serving/docs/user-guide/Overview.md) - Embedding generation service architecture and API documentation
- [Visual Data Preparation For Retrieval](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/visual-data-preparation-for-retrieval/vdms/docs/user-guide/Overview.md) - VDMS integration and visual data management workflows
- [VLM OpenVINO Serving](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/vlm-openvino-serving/docs/user-guide/Overview.md) - Vision-language model deployment and optimization guidelines
- [Edge AI Libraries](https://docs.openedgeplatform.intel.com/dev/ai-libraries.html) - Complete development toolkit documentation and microservice API references
- [Edge AI Suites](https://docs.openedgeplatform.intel.com/dev/ai-suite-metro.html) - Comprehensive application suite documentation with Gen AI implementation examples

- [Audio Analyzer](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/audio-analyzer/index.html)
\- Comprehensive documentation for multimodal audio processing capabilities
- [Document Ingestion - pgvector](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/document-ingestion/pgvector/docs/get-started.md)
\- Vector database integration and document processing workflows
- [Multimodal Embedding Serving](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/multimodal-embedding-serving/docs/user-guide/Overview.md)
\- Embedding generation service architecture and API documentation
- [Visual Data Preparation For Retrieval](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/visual-data-preparation-for-retrieval/vdms/docs/user-guide/Overview.md)
\- VDMS integration and visual data management workflows
- [VLM OpenVINO Serving](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/vlm-openvino-serving/docs/user-guide/Overview.md)
\- Vision-language model deployment and optimization guidelines
- [Edge AI Libraries](https://docs.openedgeplatform.intel.com/dev/ai-libraries.html)
\- Complete development toolkit documentation and microservice API references
- [Edge AI Suites](https://docs.openedgeplatform.intel.com/dev/ai-suite-metro.html)
\- Comprehensive application suite documentation with Gen AI implementation examples

### Support Channels
- [GitHub Issues](https://github.com/open-edge-platform/edge-ai-libraries/issues) - Technical issue tracking and community support for Gen AI applications

- [GitHub Issues](https://github.com/open-edge-platform/edge-ai-libraries/issues)
\- Technical issue tracking and community support for Gen AI applications
@@ -7,6 +7,7 @@ The Metro Vision AI SDK provides a comprehensive development environment for com
## Learning Objectives

Upon completion of this guide, you will be able to:

- Install and configure the Metro Vision AI SDK
- Execute object detection inference on video content
- Understand the basic pipeline architecture for computer vision workflows
@@ -31,6 +32,7 @@ curl https://raw.githubusercontent.com/open-edge-platform/edge-ai-suites/refs/he
![Metro Vision AI SDK Installation](images/metro-vision-ai-sdk-install.png)

The installation process configures the following components:

- Docker containerization platform
- Intel DLStreamer video analytics framework
- OpenVINO inference optimization toolkit
@@ -96,14 +98,18 @@ The resulting output displays the original video content with overlaid detection
## Technology Framework Overview

### DLStreamer Framework

DLStreamer provides a comprehensive video analytics framework built on GStreamer technology. Key capabilities include:

- Multi-format video input support (files, network streams, camera devices)
- Real-time inference execution on video frame sequences
- Flexible output rendering and storage options
- Modular pipeline architecture for custom workflow development
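
As a concrete illustration of that modular pipeline idea, the sketch below chains DLStreamer elements into a minimal file-to-screen detection pipeline. The input file and model path are placeholders, and the command is an assumption-laden example rather than one taken from this guide:

```bash
# Decode a video file, run detection with an OpenVINO IR model, draw boxes, display
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
  gvadetect model=/path/to/model.xml device=CPU ! \
  gvawatermark ! videoconvert ! autovideosink sync=false
```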

### OpenVINO Optimization Toolkit

OpenVINO delivers cross-platform inference optimization for Intel hardware architectures. Core features include:

- Model format standardization through Intermediate Representation (IR)
- Hardware-specific performance optimization
- Extensive pre-trained model repository
@@ -113,28 +119,52 @@ OpenVINO delivers cross-platform inference optimization for Intel hardware archi

Continue your learning journey with these hands-on tutorials:

### [Tutorial 1: OpenVINO Model Benchmark](tutorial-1.md)
### [Tutorial 1: OpenVINO Model Benchmark](./tutorial-1.md)

Learn to benchmark AI model performance across different Intel hardware (CPU, GPU, NPU) and understand optimization techniques for production deployments.

### [Tutorial 2: Multi-Stream Video Processing](tutorial-2.md)
### [Tutorial 2: Multi-Stream Video Processing](./tutorial-2.md)

Build scalable video analytics solutions by processing multiple video streams simultaneously with hardware acceleration.

### [Tutorial 3: Real-Time Object Detection](tutorial-3.md)
### [Tutorial 3: Real-Time Object Detection](./tutorial-3.md)

Develop a complete object detection application with interactive controls, performance monitoring, and production-ready features.

### [Tutorial 4: Advanced Video Analytics Pipelines](tutorial-4.md)
Create sophisticated video analytics using Intel® DL Streamer framework, including human pose estimation and multi-model integration.
### [Tutorial 4: Advanced Video Analytics Pipelines](./tutorial-4.md)

Create sophisticated video analytics using Intel® DL Streamer framework, including human pose estimation and multi-model integration.

## Additional Resources

### Technical Documentation
- [DLStreamer](http://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/index.html) - Comprehensive documentation for Intel's GStreamer-based video analytics framework
- [DLStreamer Pipeline Server](https://docs.openedgeplatform.intel.com/edge-ai-libraries/dlstreamer-pipeline-server/main/user-guide/Overview.html) - RESTful microservice architecture documentation for scalable video analytics deployment
- [OpenVINO](https://docs.openvino.ai/2025/get-started.html) - Complete reference for Intel's cross-platform inference optimization toolkit
- [OpenVINO Model Server](https://docs.openvino.ai/2025/model-server/ovms_what_is_openvino_model_server.html) - Model serving infrastructure documentation for production deployments
- [Edge AI Libraries](https://docs.openedgeplatform.intel.com/dev/ai-libraries.html) - Comprehensive development toolkit documentation and API references
- [Edge AI Suites](https://docs.openedgeplatform.intel.com/dev/ai-suite-metro.html) - Complete application suite documentation with implementation examples

- [DLStreamer](http://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/index.html)
\- Comprehensive documentation for Intel's GStreamer-based video analytics framework
- [DLStreamer Pipeline Server](https://docs.openedgeplatform.intel.com/edge-ai-libraries/dlstreamer-pipeline-server/main/user-guide/Overview.html)
\- RESTful microservice architecture documentation for scalable video analytics deployment
- [OpenVINO](https://docs.openvino.ai/2025/get-started.html)
\- Complete reference for Intel's cross-platform inference optimization toolkit
- [OpenVINO Model Server](https://docs.openvino.ai/2025/model-server/ovms_what_is_openvino_model_server.html)
\- Model serving infrastructure documentation for production deployments
- [Edge AI Libraries](https://docs.openedgeplatform.intel.com/dev/ai-libraries.html)
\- Comprehensive development toolkit documentation and API references
- [Edge AI Suites](https://docs.openedgeplatform.intel.com/dev/ai-suite-metro.html)
\- Complete application suite documentation with implementation examples

### Support Channels
- [GitHub Issues](https://github.com/open-edge-platform/edge-ai-suites/issues) - Technical issue tracking and community support

- [GitHub Issues](https://github.com/open-edge-platform/edge-ai-suites/issues)
\- Technical issue tracking and community support

<!--hide_directive
:::{toctree}
:hidden:

tutorial-1
tutorial-2
tutorial-3
tutorial-4
tutorial-5
:::
hide_directive-->
@@ -65,6 +65,7 @@ docker run --rm --user=root \
```

This command will:

- Download the YOLOv10s object detection model
- Convert it to OpenVINO IR format (FP16 precision)
- Store the model files in the `public/yolov10s/FP16/` directory
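
A quick way to confirm the export succeeded is to list the files it should produce; an OpenVINO IR model consists of an `.xml` topology file plus a matching `.bin` weights file. This check is a suggestion rather than part of the original tutorial:

```bash
# Both yolov10s.xml and yolov10s.bin should be present after conversion
ls -lh public/yolov10s/FP16/
```
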
@@ -123,4 +124,4 @@ docker run -it --rm \
-m /home/openvino/public/yolov10s/FP16/yolov10s.xml \
-i /home/openvino/bottle-detection.mp4 \
-d NPU
```
```
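
The `-d` argument selects the inference device. Before targeting `NPU` or `GPU`, it can help to confirm which devices OpenVINO actually detects on your platform; the one-liner below assumes the OpenVINO Python package is available (for example inside the same container) and is offered purely as a convenience check:

```bash
# Prints the detected devices, e.g. ['CPU', 'GPU', 'NPU']
python3 -c "import openvino as ov; print(ov.Core().available_devices)"
```
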
@@ -6,7 +6,7 @@ This tutorial demonstrates advanced video processing capabilities using Intel's

Multi-stream video processing is essential for applications like video surveillance, broadcasting, and media production. This tutorial showcases how Intel's hardware acceleration can efficiently decode and composite 16 simultaneous video streams into a single 4K display output, demonstrating the power of Intel® Quick Sync Video technology.

> ** Platform Compatibility**
> **Platform Compatibility**
> This tutorial requires Intel® Core™ or Intel® Core™ Ultra processors with integrated graphics. Intel® Xeon® processors without integrated graphics are not supported for this specific use case.
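
One way to check this up front is to confirm that the host exposes an Intel render device and reports an integrated graphics controller; this is a quick sanity check, not a substitute for the processor requirement above:

```bash
# An integrated GPU normally exposes a render node such as /dev/dri/renderD128
ls /dev/dri/

# Show the graphics controller(s) reported on the PCI bus
lspci -nn | grep -Ei 'vga|display'
```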

## Time to Complete
@@ -61,6 +61,7 @@ wget -O videos/Big_Buck_Bunny.mp4 "https://test-videos.co.uk/vids/bigbuckbunny/m

**Alternative Download Method:**
If the above link doesn't work, you can download from the official source:

```bash
# Alternative: Download from Internet Archive
wget -O videos/Big_Buck_Bunny.mp4 "https://archive.org/download/BigBuckBunny_124/Content/big_buck_bunny_720p_surround.mp4"
@@ -138,14 +139,16 @@ EOF
The script creates a complex pipeline with these key components:

**Pipeline Architecture:**

- **Input Sources**: 16 identical video file streams
- **Decoder**: `vah264dec` - Hardware-accelerated H.264 decoding using VAAPI
- **Scaling**: `vapostproc` - Hardware-accelerated video post-processing and scaling
- **Composition**: `vacompositor` - Hardware-accelerated video composition
- **Output**: `xvimagesink` - X11-based video display
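
To make that element chain concrete, here is a reduced two-stream variant of the same architecture (the generated script composes 16 streams). The element names follow the list above, while the compositor pad properties are assumed to follow the standard GStreamer compositor interface, so treat this as a sketch rather than a drop-in replacement:

```bash
# Two hardware-decoded streams composited side by side
gst-launch-1.0 \
  vacompositor name=comp \
    sink_0::xpos=0   sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 \
  ! xvimagesink sync=false \
  filesrc location=videos/Big_Buck_Bunny.mp4 ! qtdemux ! h264parse ! vah264dec ! vapostproc ! comp.sink_0 \
  filesrc location=videos/Big_Buck_Bunny.mp4 ! qtdemux ! h264parse ! vah264dec ! vapostproc ! comp.sink_1
```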

**Tiled Layout Configuration:**
```

```text
┌─────────┬─────────┬─────────┬─────────┐
│ Stream1 │ Stream2 │ Stream3 │ Stream4 │ ← Row 1 (y=0)
│ 0,0 │ 960,0 │1920,0 │2880,0 │
@@ -162,6 +165,7 @@ The script creates a complex pipeline with these key components:
```

**Performance Optimizations:**

- **VAAPI Acceleration**: Hardware-accelerated decoding, scaling, and composition
- **Fast Scaling**: `scale-method=fast` for optimal performance
- **Async Display**: `sync=false` to prevent frame dropping
@@ -229,7 +233,6 @@ htop
cat /sys/class/drm/card0/gt/gt0/rc6_residency_ms
```
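
For a live per-engine view that includes the video decode blocks used by this pipeline, `intel_gpu_top` (from the `intel-gpu-tools` package, which may need to be installed separately) is a useful optional aid:

```bash
# Per-engine GPU utilization, including the Video / VideoEnhance engines
sudo intel_gpu_top
```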


### Step 6: Stop the Application

To stop the video processing pipeline:
@@ -251,22 +254,23 @@ xhost -local:docker
docker system prune -f
```


## Understanding the Technology

### Intel® Quick Sync Video Technology

This tutorial leverages Intel's hardware-accelerated video processing capabilities:

**Hardware Acceleration Benefits:**

- **Dedicated Video Engines**: Separate silicon for video decode/encode operations
- **CPU Offloading**: Frees CPU resources for other computational tasks
- **CPU Offloading**: Frees CPU resources for other computational tasks
- **Power Efficiency**: Lower power consumption compared to software decoding
- **Parallel Processing**: Multiple decode engines can process streams simultaneously
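
A quick way to confirm that the driver exposes these hardware engines is to query VA-API directly. `vainfo` ships with the `libva-utils` package (an assumption about your distribution) and lists the codec profiles and entrypoints the GPU can accelerate:

```bash
# Decode support shows up as entries such as: VAProfileH264Main : VAEntrypointVLD
vainfo
```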

### VAAPI Integration

**Video Acceleration API (VAAPI)** provides:

- **Hardware Abstraction**: Unified interface across Intel graphics generations
- **Pipeline Optimization**: Direct GPU memory access without CPU copies
- **Format Support**: Hardware acceleration for H.264, H.265, VP9, and AV1 codecs
@@ -277,14 +281,16 @@ This tutorial leverages Intel's hardware-accelerated video processing capabiliti
The tutorial demonstrates advanced GStreamer concepts:

**Element Types:**

- **Source Elements**: `filesrc` - File input
- **Demuxer Elements**: `qtdemux` - Container format parsing
- **Demuxer Elements**: `qtdemux` - Container format parsing
- **Decoder Elements**: `vah264dec` - Hardware-accelerated decoding
- **Transform Elements**: `vapostproc` - Hardware scaling and format conversion
- **Compositor Elements**: `vacompositor` - Multi-stream composition
- **Sink Elements**: `xvimagesink` - Display output
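
Any of these elements can be inspected from the command line before it is used, to see its pads, supported caps, and properties; for example:

```bash
# Show the capabilities and properties of the hardware H.264 decoder
gst-inspect-1.0 vah264dec
```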

**Pipeline Benefits:**

- **Zero-Copy Operations**: Direct GPU memory transfers
- **Parallel Processing**: Concurrent decode of multiple streams
- **Dynamic Reconfiguration**: Runtime pipeline modifications