For vision-language models (VLMs), understanding the dynamic properties of objects and their interactions in 3D scenes from videos is crucial for effective reasoning about high-level temporal and action semantics. Although humans are adept at understanding these properties by constructing 3D and temporal (4D) representations of the world, current video understanding models struggle to extract these dynamic semantics, arguably because these models use cross-frame reasoning without underlying knowledge of the 3D/4D scenes.
In this work, we introduce DynSuperCLEVR, the first video question answering dataset that focuses on language understanding of the dynamic properties of 3D objects. We concentrate on three physical concepts in 4D scenes: velocity, acceleration, and collisions.
We further generate three types of questions, covering factual queries, future predictions, and counterfactual reasoning, which probe different aspects of reasoning about these 4D dynamic properties. To demonstrate the importance of explicit scene representations in answering these 4D dynamics questions, we propose NS-4DPhysics, a neural-symbolic VideoQA model that integrates a physics prior for 4D dynamic properties with an explicit scene representation of videos.
Instead of answering the questions directly from the video and text input, our method first estimates the 4D world states with a 3D generative model powered by physics priors, and then applies neural-symbolic reasoning to answer the questions based on these 4D world states. Our evaluation on all three question types in DynSuperCLEVR shows that previous video question answering models and large multimodal models struggle with questions about 4D dynamics, while our NS-4DPhysics significantly outperforms previous state-of-the-art models. Our code will be available at https://github.com/XingruiWang/DynSuperCLEVR.
DynSuperCLEVR focuses on understanding the dynamic behavior of objects in 3D space over time (i.e., 4D reasoning). The benchmark emphasizes three key physical dynamics: the velocity of objects in 3D space, their acceleration over time, and collision events between objects.
We render the videos using Kubric, a scalable video generation engine, which we extend to support dynamic acceleration properties. Our modified codebase is available here.
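As a rough illustration of the rendering setup, the sketch below builds a tiny Kubric scene with an initial per-object velocity, following Kubric's public getting-started examples. The specific objects, values, and output paths are placeholders, and per-object acceleration is supported only by our modified codebase, not by upstream Kubric.

```python
import kubric as kb
from kubric.renderer.blender import Blender as KubricRenderer
from kubric.simulator.pybullet import PyBullet as KubricSimulator

# Hypothetical minimal scene: a static floor plus one object with an
# initial 3D velocity, simulated with PyBullet and rendered with Blender.
scene = kb.Scene(resolution=(256, 256), frame_start=1, frame_end=24)
simulator = KubricSimulator(scene)
renderer = KubricRenderer(scene)

scene += kb.Cube(name="floor", scale=(10, 10, 0.1), position=(0, 0, -0.1), static=True)
scene += kb.DirectionalLight(name="sun", position=(-1, -0.5, 3), look_at=(0, 0, 0), intensity=1.5)
scene.camera = kb.PerspectiveCamera(name="camera", position=(3, -1, 4), look_at=(0, 0, 1))

# Upstream Kubric exposes an initial linear velocity on rigid objects;
# per-object acceleration is what our modified codebase adds on top.
scene += kb.Sphere(name="ball", scale=0.5, position=(0, 0, 1.0), velocity=(2.0, 0.0, 0.0))

simulator.run()                        # rigid-body simulation (PyBullet)
frames = renderer.render()             # render RGB and auxiliary frames (Blender)
kb.write_image_dict(frames, "output/")
```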
To comprehensively evaluate dynamic scene understanding, we introduce three question types (factual, predictive, and counterfactual) built upon static object properties, 4D dynamic attributes (velocity and acceleration), and collision events.
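For intuition, hypothetical instances of the three question types might look like the snippet below; the phrasing and fields are illustrative placeholders, not the actual DynSuperCLEVR annotation schema.

```python
# Illustrative placeholders for the three question types (not the actual
# DynSuperCLEVR annotation schema or question phrasing).
example_questions = [
    {"type": "factual", "sub_type": "velocity",
     "question": "Is the red car moving faster than the blue bus?"},
    {"type": "predictive",
     "question": "Will the motorcycle collide with the bus after the video ends?"},
    {"type": "counterfactual",
     "question": "If the car were moving slower, would it collide with the bus?"},
]
```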
Together, these dynamics and question types establish a rigorous benchmark for evaluating video-based VQA models on 4D physical understanding.
NS-4DPhysics is a neural-symbolic model designed to answer questions about dynamic physical interactions in 3D scenes over time (i.e., 4D). It integrates explicit physical priors and a 3D neural mesh model to parse and simulate scene dynamics.
At each time step, the object poses estimated for the previous frame, (R_{t-1}, T_{t-1}), are propagated using a differentiable physics engine (PyBullet), producing a probabilistic estimate of the next state.
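A minimal sketch of such a propagation step with the standard PyBullet Python API is shown below. The placeholder collision shape, unit mass, and time step are assumptions for illustration; the full model combines this physics prior with per-frame observation likelihoods from the 3D generative model.

```python
import pybullet as p

def propagate_state(position, orientation, linear_vel, angular_vel, dt=1.0 / 24, steps=1):
    """Roll the previous-frame pose (R_{t-1}, T_{t-1}) and velocities forward
    with a rigid-body simulation, returning the predicted next state."""
    client = p.connect(p.DIRECT)                      # headless physics server
    p.setGravity(0, 0, -9.8, physicsClientId=client)
    p.setTimeStep(dt, physicsClientId=client)

    # Placeholder body: a unit-mass sphere standing in for the object's mesh.
    collision = p.createCollisionShape(p.GEOM_SPHERE, radius=0.5, physicsClientId=client)
    body = p.createMultiBody(baseMass=1.0,
                             baseCollisionShapeIndex=collision,
                             basePosition=position,
                             baseOrientation=orientation,
                             physicsClientId=client)
    p.resetBaseVelocity(body, linearVelocity=linear_vel,
                        angularVelocity=angular_vel, physicsClientId=client)

    for _ in range(steps):
        p.stepSimulation(physicsClientId=client)      # one frame of the physics prior

    next_pos, next_orn = p.getBasePositionAndOrientation(body, physicsClientId=client)
    next_lin, next_ang = p.getBaseVelocity(body, physicsClientId=client)
    p.disconnect(physicsClientId=client)
    return next_pos, next_orn, next_lin, next_ang
```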
By explicitly modeling physical dynamics and incorporating symbolic reasoning, NS-4DPhysics significantly improves performance on complex VideoQA tasks involving 4D scene understanding.
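Once per-frame 4D states are recovered, answering a factual velocity question reduces to executing a small symbolic program over them. The toy sketch below assumes a simplified scene-state layout (object name mapped to a 3D velocity per frame) purely for illustration; it is not the model's actual program executor.

```python
import numpy as np

# Assumed per-frame scene parse: object name -> 3D velocity vector.
# In the real pipeline these states come from the 3D generative model + physics prior.
scene_states = [
    {"car": np.array([2.0, 0.0, 0.0]), "bus": np.array([0.5, 0.0, 0.0])},  # frame t
    {"car": np.array([2.1, 0.0, 0.0]), "bus": np.array([0.4, 0.0, 0.0])},  # frame t+1
]

def speed(states, name, frame):
    """Speed of an object at a given frame (magnitude of its 3D velocity)."""
    return float(np.linalg.norm(states[frame][name]))

def faster_than(states, a, b, frame=0):
    """Symbolic predicate for a factual velocity question: is `a` faster than `b`?"""
    return speed(states, a, frame) > speed(states, b, frame)

print(faster_than(scene_states, "car", "bus"))  # -> True for this toy parse
```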
We compare the NS-4DPhysics model with a range of baseline models on the DynSuperCLEVR benchmark for video question answering. The evaluation covers three question types (factual, predictive, and counterfactual), with factual questions further split into velocity, acceleration, and collision sub-types; all numbers are answer accuracies (%). As shown in the table, NS-4DPhysics significantly outperforms all baselines across all question types, highlighting the effectiveness of explicit 4D scene representations with physics priors.
| Model | Average | Factual (All) | Factual (Vel.) | Factual (Acc.) | Factual (Col.) | Predictive | Counterfactual |
|---|---|---|---|---|---|---|---|
| CNN+LSTM | 48.03 | 40.63 | 41.71 | 56.79 | 25.37 | 56.04 | 47.42 |
| FiLM (Perez et al., 2018) | 50.18 | 44.07 | 48.58 | 53.09 | 26.87 | 54.94 | 51.54 |
| NS-DR (Yi et al., 2019) | 51.44 | 51.44 | 55.63 | 46.34 | 46.86 | - | - |
| PO3D-VQA (Wang et al., 2024) | 62.93 | 61.22 | 62.21 | 73.17 | 51.20 | 65.33 | 62.24 |
| InternVideo (Wang et al., 2022) | 52.62 | 51.07 | 59.29 | 49.08 | 36.06 | 54.74 | 59.18 |
| Video-LLaVA† (Lin et al., 2023) | 38.09 | 37.04 | 37.62 | 52.76 | 23.56 | 38.78 | 40.88 |
| PLLaVA† (Xu et al., 2024) | 59.24 | 54.61 | 55.00 | 63.80 | 46.63 | 67.52 | 73.47 |
| GPT-4o† | 51.59 | 50.82 | 51.19 | 57.67 | 44.71 | 54.38 | 50.00 |
| GPT-4o + reasoning† | 56.06 | 55.50 | 58.81 | 57.67 | 47.12 | 56.93 | 58.16 |
| NS-4DPhysics | 82.64 | 87.70 | 88.66 | 83.73 | 88.46 | 85.71 | 74.51 |
@inproceedings{wang2024compositional,
title = {Compositional 4D Dynamic Scenes Understanding with Physics Priors for Video Question Answering},
author = {Wang, Xingrui and Ma, Wufei and Wang, Angtian and Chen, Shuo and Kortylewski, Adam and Yuille, Alan},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2025},
url = {https://openreview.net/pdf?id=6Vx28LSR7f}
}