What this enables:
Run demanding simulations, AI training workloads, and cloud-native network functions across scalable edge-to-cloud compute environments with GPU acceleration and containerisation.
Capabilities on offer:
- On-prem AI/ML infrastructure (e.g. NVIDIA GPU nodes; see the scheduling sketch after this list)
- Edge clusters for ultra-low-latency deployment
- Public/private cloud integration and orchestration
- Containerised 5G/6G core emulation environments
- Platforms for AI inference on aerial and autonomous systems
- Compute server platforms for telecom service prototyping
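
To give a concrete feel for how a containerised, GPU-accelerated workload might be placed on this kind of infrastructure, the sketch below uses the Kubernetes Python client to request a single NVIDIA GPU for a container. The namespace, image name, and entrypoint are illustrative assumptions, not details of the actual platform.

```python
# Sketch: scheduling a GPU-backed container on a Kubernetes edge/on-prem cluster.
# The namespace, image name, entrypoint, and GPU count are illustrative assumptions.
from kubernetes import client, config


def launch_gpu_pod(namespace: str = "testbed",                       # assumed namespace
                   image: str = "registry.example/ran-ai:latest"):   # hypothetical image
    config.load_kube_config()  # uses the local kubeconfig for the target cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-inference-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="worker",
                    image=image,
                    command=["python", "infer.py"],  # hypothetical entrypoint
                    resources=client.V1ResourceRequirements(
                        # Request one NVIDIA GPU via the standard device-plugin resource name.
                        limits={"nvidia.com/gpu": "1"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    launch_gpu_pod()
```

The same pattern extends from the on-prem GPU nodes to the edge clusters: only the target kubeconfig (and any node selectors) changes.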
Example applications:
- Training AI models for radio control (a minimal training sketch follows this list)
- Running distributed digital twins
- Edge compute validation in transport/logistics
- Supporting private 5G/6G cloud-native functions
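
As a minimal illustration of the first application, the sketch below trains a small classifier on synthetic channel-quality features standing in for real radio measurements, the kind of job that would run on one of the GPU nodes. The feature layout, action set, and hyperparameters are assumptions for illustration only.

```python
# Sketch: training a small policy network on synthetic radio measurements.
# Feature dimensions, targets, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # use a GPU node if present

# Synthetic data: 1024 samples of 16 channel-quality features, 4 control actions.
X = torch.randn(1024, 16, device=device)
y = torch.randint(0, 4, (1024,), device=device)

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 4),
).to(device)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```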