Introducing PandaSet by Hesai and Scale AI

High-quality open-source dataset for autonomous driving

Overview

Sophisticated LiDAR technology meets high-quality data annotation

PandaSet aims to promote and advance research and development in autonomous driving and machine learning.

The first open-source AV dataset available for both academic and commercial use, PandaSet combines Hesai’s best-in-class LiDAR sensors with Scale AI’s high-quality data annotation. PandaSet features data collected using a forward-facing LiDAR with image-like resolution (PandarGT) as well as a mechanical spinning LiDAR (Pandar64). The collected data was annotated with a combination of cuboid and segmentation annotations (Scale 3D Sensor Fusion Segmentation).

It features:

  • 48,000+ camera images
  • 16,000+ LiDAR sweeps
  • 100+ scenes, each 8 seconds long
  • 28 annotation classes
  • 37 semantic segmentation labels
  • Full sensor suite: 1x mechanical spinning LiDAR, 1x forward-facing LiDAR, 6x cameras, and on-board GPS/IMU
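
For programmatic access, the dataset is accompanied by an open-source devkit (github.com/scaleapi/pandaset-devkit). Below is a minimal sketch of browsing a sequence with it, assuming the devkit is installed; the dataset root path and sequence ID are placeholders:

    # Minimal sketch: browsing PandaSet with the open-source devkit.
    # The root path and sequence ID below are placeholders.
    from pandaset import DataSet

    dataset = DataSet("/data/pandaset")    # root of the extracted dataset
    print(dataset.sequences())             # available sequence IDs, e.g. ['002', ...]

    seq = dataset["002"]
    seq.load()                             # loads LiDAR sweeps, camera images, annotations

    points = seq.lidar[0]                  # frame 0: DataFrame of points (x, y, z, i, t, d)
    front = seq.camera["front_camera"][0]  # frame 0: PIL image from the front camera
    print(points.shape, front.size)
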
Point cloud captured by Hesai PandarGT
PandaSet Car
Data Collection

Complex Driving Scenarios in Urban Environments

For PandaSet we carefully planned routes and selected scenes that showcase complex urban driving scenarios, including steep hills, construction, dense traffic, and pedestrians, captured under a variety of lighting conditions across morning, afternoon, dusk, and evening.

PandaSet scenes are selected from two routes in the San Francisco Bay Area: (1) San Francisco; and (2) El Camino Real from Palo Alto to San Mateo.

Car Setup

We collected data using a Chrysler Pacifica minivan mounted with a suite of cameras and Hesai LiDARs.

5x Wide-Angle Cameras
    • 10 Hz capture frequency
    • 1/2.7” CMOS sensor of 1920x1080 resolution
    • Images are unpacked to YUV 4:4:4 format and compressed to JPEG

1x Long-Focus Camera
    • 10 Hz capture frequency
    • 1/2.7” CMOS sensor of 1920x1080 resolution
    • Images are unpacked to YUV 4:4:4 format and compressed to JPEG

1x Pandar64: Mechanical Spinning LiDAR
    • 64 channels
    • 200m range @ 10% reflectivity
    • 360° horizontal FOV; 40° vertical FOV (-25° to +15°)
    • 0.2° horizontal angular resolution (10 Hz); 0.167° vertical angular resolution (finest)
    • 10 Hz capture frequency

1x PandarGT: Forward-Facing LiDAR
    • Equivalent to 150 channels at 10 Hz
    • 300m range @ 10% reflectivity
    • 60° horizontal FOV; 20° vertical FOV (-10° to +10° with ±5° offset, configurable)
    • 0.1° horizontal angular resolution; 0.07° vertical angular resolution (finest) at 10 Hz
    • 10 Hz capture frequency
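
As a rough sanity check on these specs, the nominal point count of a single Pandar64 sweep follows from the channel count and horizontal angular resolution; the sketch below is a back-of-the-envelope upper bound (real sweeps vary with returns and range filtering):

    # Rough upper bound on points per Pandar64 sweep, derived from the specs above.
    channels = 64                 # vertical channels
    h_res_deg = 0.2               # horizontal angular resolution at 10 Hz
    steps = 360.0 / h_res_deg     # firings per revolution = 1800
    print(int(channels * steps))  # 115200 points per sweep, at most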

Sensor Calibration

To achieve a high-quality multi-sensor dataset, it is essential to calibrate the extrinsics and intrinsics of every sensor. We express extrinsic coordinates relative to the ego frame, i.e. the midpoint of the rear vehicle axle. Calibration covers:

  • LiDAR extrinsics
  • Camera extrinsics
  • IMU extrinsics
  • Camera intrinsic calibration
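
To make the extrinsic convention concrete, here is an illustrative sketch of mapping sensor-frame points into the ego frame, given an extrinsic expressed as a unit quaternion plus a translation. This is not devkit API; the function names and pose dictionary layout are assumptions for illustration:

    # Illustrative only: apply a sensor-to-ego extrinsic (quaternion + translation).
    # The pose dictionary layout here is an assumption, not the devkit's format.
    import numpy as np

    def quat_to_rot(w: float, x: float, y: float, z: float) -> np.ndarray:
        """Convert a unit quaternion to a 3x3 rotation matrix."""
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def sensor_to_ego(points: np.ndarray, heading: dict, position: dict) -> np.ndarray:
        """Map (N, 3) sensor-frame points into the ego frame (rear-axle midpoint)."""
        R = quat_to_rot(heading["w"], heading["x"], heading["y"], heading["z"])
        t = np.array([position["x"], position["y"], position["z"]])
        return points @ R.T + t
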
Data Annotation

Complex Label Taxonomy

Scale AI’s data annotation platform combines human work and review with smart tools, statistical confidence checks, and machine-learning checks to ensure the quality of annotations.

The resulting accuracy is consistently higher than what a human or synthetic labeling approach can achieve independently, as measured against seven rigorous quality areas for each annotation.

PandaSet includes 3D bounding boxes for 28 object classes, each with a rich set of class attributes covering activity, visibility, location, and pose. The dataset also includes point cloud segmentation with 37 semantic labels, including smoke, car exhaust, vegetation, and driveable surface.

For detailed definitions of each class and example images, please see the annotation instructions.
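
A minimal sketch of reading these annotations with the same devkit, assuming seq is the loaded sequence from the Overview example (the cuboids and semseg accessors follow the devkit's README; semantic segmentation may not be present for every sequence):

    # Minimal sketch: reading PandaSet annotations with the devkit.
    cuboids = seq.cuboids[0]                # frame 0: one row per 3D cuboid
    print(cuboids["label"].value_counts())  # object counts per annotation class

    semseg = seq.semseg[0]                  # frame 0: one class id per LiDAR point
    print(seq.semseg.classes)               # class-id -> name map, e.g. 'Smoke'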

About Scale AI

The Data Platform For AI

Scale AI’s mission is to accelerate the development of AI.

Scale AI’s suite of managed labeling services such as Scale 3D Sensor Fusion, Scale Video, Scale Image and Scale Text combine manual labeling with ML-enabled tools and quality assurance systems to deliver large volumes of high-quality training data.

About Hesai

LiDARs for Autonomous Driving

Hesai Technology is a global leader in 3D sensors (LiDARs).

Founded in Silicon Valley and headquartered in Shanghai, Hesai’s team has created a suite of innovative sensor solutions that combine three core strengths: industry-leading performance, manufacturability, and reliability.

Hesai’s proprietary micro-mirror and waveform fingerprint technologies continue to set the pace in sensor innovation, yielding a portfolio of 400+ patents and customers spanning 20 countries and 70 cities.