SegNet4D

This repo contains the implementation of our paper:

SegNet4D: Efficient Instance-Aware 4D LiDAR Semantic Segmentation for Driving Scenarios

Neng Wang, Ruibin Guo, Chenghao Shi, Ziyue Wang, Hui Zhang, Huimin Lu, Zhiqiang Zheng, Xieyuanli Chen

Framework

SegNet4D is an efficient instance-aware 4D LiDAR semantic segmentation framework. We first use the Motion Features Encoding Module to extract motion features from sequential LiDAR scans. These motion features are then concatenated with the spatial features of the current scan and fed into the Instance-Aware Feature Extraction Backbone. Two separate heads follow: a motion head that predicts moving states, and a semantic head that predicts semantic categories. Finally, the Motion-Semantic Fusion Module integrates the motion and semantic features to achieve 4D semantic segmentation.
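To make this data flow concrete, below is a minimal PyTorch-style sketch of such a forward pass. All module and variable names are illustrative placeholders of ours, not the released implementation.

import torch
import torch.nn as nn

class SegNet4DSketch(nn.Module):
    """Illustrative data flow only; submodule names are placeholders,
    not SegNet4D's actual API."""

    def __init__(self, motion_encoder, backbone, motion_head, semantic_head, fusion):
        super().__init__()
        self.motion_encoder = motion_encoder  # encodes motion cues from sequential scans
        self.backbone = backbone              # instance-aware feature extraction
        self.motion_head = motion_head        # predicts moving / static states
        self.semantic_head = semantic_head    # predicts semantic categories
        self.fusion = fusion                  # motion-semantic fusion module

    def forward(self, current_scan_feats, sequential_scans):
        motion_feats = self.motion_encoder(sequential_scans)
        # Concatenate motion features with spatial features of the current scan
        x = torch.cat([current_scan_feats, motion_feats], dim=1)
        feats = self.backbone(x)
        moving_logits = self.motion_head(feats)
        semantic_logits = self.semantic_head(feats)
        # Fuse motion and semantic features for the final 4D semantic labels
        fused_logits = self.fusion(feats, moving_logits, semantic_logits)
        return moving_logits, semantic_logits, fused_logits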

Related Video

Our accompanying video is now available on OneDrive.

How to use

The code has been tested with Ubuntu 20.04, Python 3.7, CUDA 11.3, and cuDNN 8.2.1.

As a first step, we have released the code for generating bounding boxes from semantic annotations and for generating multi-scan nuScenes labels, to facilitate the community's work. The full implementation of SegNet4D will be made available after our paper is accepted.

Data

We mainly train our model on the SemanticKITTI and nuScenes datasets.

1. SemanticKITTI

Download the raw LiDAR scan dataset from KITTI website and semantic annotations from SemanticKITTI website.

Generating instance bounding boxes:

python utils/generate_boundingbox.py --data_path ./demo_data/ --view --lshape --save

  • --data_path: path to the data
  • --view: visualize the instance boxes
  • --lshape: refine the boxes with L-shape fitting
  • --save: save the boxes to a .npy file

Before running this, you need to install open3d and PCL in your Python environment.

Alternatively, you can download the generated bounding boxes directly from the provided link.
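For reference, here is a minimal sketch of the underlying idea, assuming SemanticKITTI's label encoding (lower 16 bits: semantic class, upper 16 bits: instance id) and using Open3D's built-in oriented-bounding-box fitting in place of the script's L-shape refinement. File handling and thresholds are illustrative, not the exact implementation.

import numpy as np
import open3d as o3d

def boxes_from_labels(scan_path, label_path):
    """Fit one oriented box per annotated instance in a SemanticKITTI scan."""
    points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)[:, :3]
    labels = np.fromfile(label_path, dtype=np.uint32)
    instance_ids = labels >> 16          # upper 16 bits: instance id
    # The semantic class would be labels & 0xFFFF (lower 16 bits)

    boxes = []
    for inst in np.unique(instance_ids):
        if inst == 0:                    # 0 = no instance annotation
            continue
        pts = points[instance_ids == inst]
        if len(pts) < 10:                # skip instances with too few points
            continue
        cloud = o3d.geometry.PointCloud(
            o3d.utility.Vector3dVector(pts.astype(np.float64)))
        boxes.append(cloud.get_oriented_bounding_box())
    return boxes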

2. nuScenes

Download the raw dataset from the nuScenes website.

Generating the nuScenes multi-scan dataset:

You can find the detailed README here.
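As a rough illustration of what a multi-scan dataset involves, the sketch below aggregates past scans into the current scan's coordinate frame using homogeneous pose transforms. It is a generic sketch under our own assumptions (4x4 scan-to-world poses in a common world frame), not the released conversion script.

import numpy as np

def aggregate_scans(scans, poses, current_idx, n_past=2):
    """Transform the current scan and n_past previous scans into the
    current scan's coordinate frame (poses are 4x4 scan-to-world)."""
    current_from_world = np.linalg.inv(poses[current_idx])

    aggregated = []
    for i in range(max(0, current_idx - n_past), current_idx + 1):
        pts = scans[i][:, :3]                            # N x 3 points
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # N x 4 homogeneous
        # current frame <- world <- scan i
        transform = current_from_world @ poses[i]
        aggregated.append((homo @ transform.T)[:, :3])
    return np.vstack(aggregated)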

Code usage

  • The code will be released after our paper has been accepted.

Citation

If you use our code in your work, please star our repo and cite our paper.

@article{wang2024arxiv,
	title={{SegNet4D: Efficient Instance-Aware 4D LiDAR Semantic Segmentation for Driving Scenarios}},
	author={Wang, Neng and Guo, Ruibin and Shi, Chenghao and Wang, Ziyue and Zhang, Hui and Lu, Huimin and Zheng, Zhiqiang and Chen, Xieyuanli},
	journal={arXiv preprint},
	year={2024}
}

Contact

Any questions or suggestions are welcome!

Neng Wang: nwang@nudt.edu.cn and Xieyuanli Chen: xieyuanli.chen@nudt.edu.cn

Acknowledgment

We thank the authors of the open-source codebases MapMOS and AutoMOS.
