Preliminary Knowledge
Last Version: 11/09/2025
Before exploring the examples and features in this chapter, we recommend reviewing some basic background. This section will help you get started quickly and understand the capabilities and ecosystem supported by our SpacemiT RISC-V AI Platform.
Quick Start Examples
In Section 5.1 Quick Start, we provide a series of typical AI function demonstrations, including:
- Speech Related
  - Voice Activity Detection (VAD)
  - Automatic Speech Recognition (ASR)
  - Text-to-Speech (TTS)
- Large Model Related
  - Large Language Model (LLM)
  - Speech Input → LLM Output
  - Function Calling
  - Vision Language Model (VLM)
These examples cover the complete pipeline from speech to multi-modal applications, helping users quickly experience the edge-side intelligent capabilities of the RISC-V platform.
Since these examples heavily utilize Python for development, please read the Python Development Guide first for smoother setup.
Third-Party AI Framework Support
In Section 5.2 AI Framework Support, we have compiled usage guides for common third-party AI frameworks on the SpacemiT RISC-V Platform, such as:
- OCR: PaddleOCR
- Visual Detection: Ultralytics YOLO
These practices show how to port and run mainstream AI models, taking advantage of RISC-V performance and ecosystem support.
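As a sketch of what such a port looks like in practice, the snippet below runs Ultralytics YOLO detection on the CPU. The weights file name and image path are placeholders; substitute your own model and input, and note that the `ultralytics` package must be installed first.

```python
def detect_objects(image_path, weights="yolov8n.pt"):
    """Run Ultralytics YOLO detection on the CPU.

    `weights` and `image_path` are placeholder values; substitute
    your own model file and input image.
    """
    # Lazy import so the helper can be defined even before
    # `pip install ultralytics` has been run on the board.
    from ultralytics import YOLO

    model = YOLO(weights)  # load the weights file
    # device="cpu" reflects the note below: these frameworks run
    # without hardware acceleration on this platform.
    return model.predict(image_path, device="cpu")
```

On a CPU-only RISC-V board, expect noticeably higher latency than on accelerated targets; see the note below.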
Note:
- These frameworks are not optimized for hardware acceleration on our platform, so running them on the CPU alone may result in slower inference.
- For production use, we recommend the optimized spacemit-ort (Python/C++) package, which enables hardware acceleration.
- Do not run model training workflows on the SpacemiT RISC-V edge platform.
Demo Zoo & Hardware Acceleration
Beyond basic examples and third-party frameworks, we maintain a Demo Zoo — a complete collection of AI demos built for the SpacemiT RISC-V Platform.
In the Demo Zoo, we highlight support for ONNX Runtime (ORT):
- ORT has been fully adapted to the SpacemiT platform and is provided as the spacemit-ort Python package.
- With AI NPU hardware acceleration, inference achieves lower latency and higher throughput.
- A unified API makes it easy to load and run different models quickly.
This allows you to run both standard demos and hardware-accelerated AI workloads with improved performance.
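A minimal sketch of that unified API is shown below, using the standard ONNX Runtime Python interface. The provider name "SpaceMITExecutionProvider" is an assumption for illustration; check the spacemit-ort documentation for the exact identifier shipped with your release.

```python
def create_session(model_path):
    """Create an ONNX Runtime inference session, preferring the NPU.

    "SpaceMITExecutionProvider" is an assumed provider name; verify it
    against the spacemit-ort documentation for your release.
    """
    import onnxruntime as ort  # provided by the spacemit-ort package

    preferred = ["SpaceMITExecutionProvider", "CPUExecutionProvider"]
    # Keep only providers this build actually ships, so session creation
    # also works on a plain onnxruntime install (CPU fallback).
    providers = [p for p in preferred if p in ort.get_available_providers()]
    return ort.InferenceSession(model_path, providers=providers)


def run_model(session, inputs):
    """Run inference; `inputs` maps input names to numpy arrays."""
    return session.run(None, inputs)  # None = return all model outputs
```

Because the same `InferenceSession` API is used for every model, switching between demos is mostly a matter of changing `model_path` and the input dictionary.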
Recommended Learning Order
To make the most of this chapter, we suggest the following order:
1. Quick Start → Run speech and large-model demos to get an intuitive feel for platform capabilities.
2. Framework Support → Learn how to port and run popular AI frameworks on RISC-V, and validate your algorithms.
3. Demo Zoo → Explore ORT-based hardware acceleration, and consider migrating models from PyTorch or other frameworks to ONNX Runtime for performance gains.
By following this path, you will build a complete understanding — from demos, to ecosystem integration, to hardware-accelerated AI inference.