CARI4D: Category Agnostic 4D Reconstruction of
Human-Object Interaction


1NVIDIA 2University of Tübingen 3Tübingen AI Center 4Max Planck Institute for Informatics
* Work done during internship at NVIDIA
Corresponding author

Abstract

Accurate capture of human-object interaction from ubiquitous sensors such as RGB cameras is important for applications in human understanding, gaming, and robot learning. However, inferring 4D interactions from a single RGB view is highly challenging: unknown object and human geometry, depth ambiguity, occlusion, and complex motion all hinder consistent 3D and temporal reconstruction. Previous methods simplify the setup by assuming a ground-truth object template or by restricting reconstruction to a limited set of object categories. We present CARI4D, the first category-agnostic method that reconstructs spatially and temporally consistent 4D human-object interaction at metric scale from monocular RGB videos. To this end, we propose a pose hypothesis selection algorithm that robustly integrates the individual predictions from foundation models, jointly refine these predictions through a learned render-and-compare paradigm to ensure spatial, temporal, and pixel alignment, and finally reason about intricate contacts to further refine the result under physical constraints. Experiments show that our method reduces reconstruction error over prior art by 38% on an in-distribution dataset and by 36% on an unseen dataset. Our model generalizes beyond the training categories and can thus be applied zero-shot to in-the-wild internet videos. Our code and pretrained models will be publicly released.



Overview

Foundation models for shape, pose, and scene reconstruction provide strong priors, yet their individual predictions are mutually inconsistent and can suffer from noisy input. Our key idea is a framework that integrates these predictions into a robust initialization, followed by a category-agnostic interaction reasoning module that improves contact coherency.
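To build intuition for the render-and-compare refinement used in our pipeline, the following is a deliberately minimal toy sketch of the classic paradigm, not our learned module: a differentiable "renderer" (here just a 2D Gaussian blob at a translational pose) is rendered at the current pose estimate, compared pixel-wise against the observed image, and the pose is updated by descending the mismatch gradient (approximated with central differences). All names, image sizes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def render(pose, size=32, sigma=4.0):
    # Toy "renderer": a soft Gaussian blob centered at pose = (x, y).
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    return np.exp(-((xs - pose[0]) ** 2 + (ys - pose[1]) ** 2) / (2 * sigma ** 2))

def refine(target, init, steps=200, lr=200.0, eps=0.25):
    # Render-and-compare loop: render the current pose estimate, measure the
    # pixel-wise mismatch against the observation, and take a gradient step.
    pose = np.asarray(init, dtype=float)
    loss = lambda p: np.mean((render(p) - target) ** 2)
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):  # central-difference gradient per pose parameter
            hi, lo = pose.copy(), pose.copy()
            hi[i] += eps
            lo[i] -= eps
            grad[i] = (loss(hi) - loss(lo)) / (2 * eps)
        pose -= lr * grad
    return pose

observed = render((18.0, 16.0))              # stand-in for image evidence
estimate = refine(observed, init=(10.0, 12.0))  # converges near (18, 16)
```

In the full problem, the pose space covers human and object 6-DoF trajectories rather than a 2D translation, and the comparison spans spatial, temporal, and pixel-alignment terms, but the optimize-by-rendering principle is the same.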



Full Video



Comparison on BEHAVE dataset



Comparison on InterCap dataset



In-the-wild internet videos



BibTeX

@article{2026cari4d,
  title={CARI4D: Category Agnostic 4D Reconstruction of Human-Object Interaction},
  author={Xianghui Xie and Bowen Wen and Yan Chang and Hesam Rabeti and Jiefeng Li and Ye Yuan and Gerard Pons-Moll and Stan Birchfield},
  journal={arXiv},
  year={2026}
}