High-fidelity and Editable Autonomous Driving Simulator
The institute is developing a practical, multifunctional, high-fidelity simulation system to support research in both autonomous driving and embodied intelligence. The system will be built on 3D Gaussian-based scene representations and will provide models and tools with the following capabilities: (1) reconstructing 3D scenes from limited visual observations, enabling both accurate geometry extraction and free-viewpoint photorealistic rendering; (2) performing artifact-free 3D scene editing and generation to substantially increase the diversity of simulation scenarios; (3) leveraging intelligent agents for automated NPC behavior control, interaction modeling, and corner-case scenario creation; and (4) facilitating mutual learning between visual intelligence models and the simulator itself, achieving continuous co-evolution. By the end of this period, we expect to deliver not only a suite of models but also a unified framework that supports efficient training and validation for autonomous driving and embodied intelligence tasks.
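The 3D Gaussian-based representation mentioned above models a scene as a collection of anisotropic Gaussian primitives, each carrying a position, an orientation-and-scale-derived covariance, an opacity, and a color. The sketch below illustrates one such primitive; the function names and data layout are illustrative assumptions for exposition, not the system's actual API.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def gaussian_covariance(scale, quat):
    """Build the anisotropic covariance Sigma = R S S^T R^T used in
    3D Gaussian splatting, from per-axis scales and an orientation."""
    R = quat_to_rotmat(np.asarray(quat, dtype=float))
    S = np.diag(np.asarray(scale, dtype=float))
    return R @ S @ S.T @ R.T

# One illustrative scene primitive: position, covariance, opacity, RGB color.
mean = np.array([1.0, 0.5, -2.0])
cov = gaussian_covariance(scale=[0.3, 0.1, 0.1], quat=[1.0, 0.0, 0.0, 0.0])
opacity, color = 0.8, np.array([0.7, 0.6, 0.5])
```

Parameterizing the covariance through a rotation and per-axis scales keeps it symmetric positive semidefinite by construction, which is what makes these primitives stable to optimize during reconstruction and easy to transform during scene editing.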
