Accelerating AI with advanced processing power
Introducing the X220, South Korea's first AI processor: a notable combination of efficiency and performance despite being built on a mature 28nm process. Its power management capabilities set it apart from products released at the same time, and its effectiveness has been proven in the field, in data center and telecom services across South Korea.
High compute density
The X220 excels in delivering high compute density, which is crucial for efficient AI processing. Its AICORE design allows the processor to sustain intensive AI workloads within a modest power budget.
Unique depth-wise scheduling
The X220 features unique depth-wise scheduling, which keeps AI inference efficient even with a relatively small on-chip SRAM and limited DRAM bandwidth, improving the overall efficiency of inference operations.
Specifications            X220 Compact             X220 Enterprise
Precision                 INT 16 / 8 / 4 bit       INT 16 / 8 / 4 bit
8-bit Performance         87 Tera OPS              174 Tera OPS
Memory Type               LPDDR4 x 5 (ECC)         LPDDR4 x 10 (ECC)
Memory Capacity           8 GB                     16 GB
Memory Bandwidth          42 GB/s                  84 GB/s
Host Interface            PCIe Gen3 16 Lane        PCIe Gen3 16 Lane
Max Power Consumption     65 W                     135 W
Form Factor               PCIe HHHL Single Slot    PCIe FHFL Dual Slot
Performance
MLPerf Datacenter Inference Server (ResNet-50)
Performance (Queries/sec)
NVIDIA A2 (8nm):                2,631
SAPEON X220 Compact (28nm):     6,145
SAPEON X220 Enterprise (28nm): 12,036
MLPerf™ Inference v2.1 Results (Press Version) | MLPerf ID 2.1-0109, 2.1-0110, 2.1-0011
Performance comparison of NVIDIA A2 versus SAPEON X220-Compact and X220-Enterprise under MLPerf Inference benchmark testing conditions.
The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.
Performance/Power (Queries/sec/Watt (TDP))
NVIDIA A2 (8nm):               44
SAPEON X220 Compact (28nm):    95
SAPEON X220 Enterprise (28nm): 89
In addition to the MLPerf™ Inference v2.1 results, power efficiency is another important feature of these two AI accelerators. The X220-Compact is a low-profile PCIe card with a maximum power consumption of 65 W, while the X220-Enterprise carries two X220 chips, delivering twice the performance at a maximum power consumption of 135 W. Measured in September 2022. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.
Application demo
Experience the performance and efficiency of the X220 in action across various AI models
Image classification
SAPEON X220 demonstrates superior performance in image classification compared to a competitor, with ResNet-50 showing the most significant improvement.
Object detection
SAPEON excels in object detection tasks. In a YOLO-v3 comparison, SAPEON outperformed the competition while consuming less power.
SUPERNOVA
SAPEON leads in real-time image upscaling with SKT's SUPERNOVA technology, achieving greater efficiency and cost-effectiveness than competitors.
Transformer-based Language Processing - SQuAD1.1
SAPEON's demo showcases BERT for natural language processing, with the X220 outperforming competitors in both vision and language tasks.