Abstract
Recent feed-forward reconstruction models like VGGT and $\pi^3$ achieve impressive reconstruction quality but cannot process streaming videos due to quadratic memory complexity, limiting their practical deployment. While existing streaming methods address this through learned memory mechanisms or causal attention, they require extensive retraining and may not fully leverage the strong geometric priors of state-of-the-art offline models.
We propose LASER, a training-free framework that converts an offline reconstruction model into a streaming system by aligning predictions across consecutive temporal windows. We observe that simple similarity transformation ($\mathrm{Sim}(3)$) alignment fails due to layer depth misalignment: monocular scale ambiguity causes relative depth scales of different scene layers to vary inconsistently between windows. To address this, we introduce layer-wise scale alignment, which segments depth predictions into discrete layers, computes per-layer scale factors, and propagates them across both adjacent windows and timestamps. Extensive experiments show that LASER achieves state-of-the-art performance on camera pose estimation and point map reconstruction while operating at 14 FPS with 6 GB peak memory on an RTX A6000 GPU, enabling practical deployment for kilometer-scale streaming videos.
Overview
Given a video stream, we process frames in overlapping temporal windows with a frozen feed-forward reconstructor. We incrementally register each submap into the global map with $\mathrm{Sim}(3)$ estimation and our proposed layer-wise scale alignment.
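As a rough illustration of this windowed registration, the sketch below pairs a closed-form Umeyama $\mathrm{Sim}(3)$ estimate with a sliding-window loop. The `model` callable, the window and overlap sizes, and the array shapes are illustrative assumptions, not the exact interface used in LASER.

```python
import numpy as np

def umeyama_sim3(src, dst):
    """Closed-form Sim(3) (Umeyama, 1991): find s, R, t with dst ~ s * R @ src + t."""
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    x, y = src - mu_src, dst - mu_dst
    cov = y.T @ x / len(src)                       # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # avoid returning a reflection
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (x ** 2).sum() / len(src)            # mean squared deviation of src
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

def stream_reconstruct(frames, model, window=16, overlap=4):
    """Run a frozen reconstructor window by window and register each submap
    into the global map with a Sim(3) estimated on the overlapping frames."""
    global_pts, prev_tail = None, None
    for start in range(0, len(frames), window - overlap):
        chunk = frames[start:start + window]
        pts = model(chunk)                         # (T, H, W, 3) point map per frame (assumed)
        if global_pts is None:
            global_pts = pts                       # first window defines the global frame
        else:
            s, R, t = umeyama_sim3(pts[:overlap].reshape(-1, 3),
                                   prev_tail.reshape(-1, 3))
            pts = s * pts @ R.T + t                # bring the new submap into the global frame
            # layer-wise scale alignment (next section) would refine pts here
            global_pts = np.concatenate([global_pts, pts[overlap:]])
        prev_tail = global_pts[-overlap:]          # overlap frames reused by the next window
        if start + window >= len(frames):
            break
    return global_pts
```

The overlapping frames provide the correspondences for the $\mathrm{Sim}(3)$ estimate; only the non-overlapping part of each new submap is appended to the global map.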
Layer-wise Scale Alignment
(Figure: interactive comparison of the raw fused reconstruction vs. the layer-wise aligned result.)
After the global $\mathrm{Sim}(3)$ alignment, surfaces at different depths may exhibit layer-wise scale inconsistency: foreground regions appear over- or under-scaled relative to background structures across consecutive windows. This anisotropic scaling leads to visible distortions and metric drift in the fused reconstruction. We introduce Layer-wise Scale Alignment (LSA), a geometry-driven refinement that segments the depth predictions into discrete layers, estimates a scale factor per layer, and propagates these factors over a layer graph to correct the distortions.
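A minimal sketch of the per-layer scale computation is shown below, assuming co-visible depth maps for one overlapping frame. The quantile-based layering, the `num_layers` parameter, and the median-ratio scale estimate are illustrative choices; the propagation of the factors over the layer graph across adjacent windows and timestamps is omitted.

```python
import numpy as np

def layerwise_scale_alignment(depth_new, depth_ref, num_layers=4, eps=1e-6):
    """Sketch of layer-wise scale alignment for one overlapping frame.

    depth_new: (H, W) depth from the current window after global Sim(3) alignment.
    depth_ref: (H, W) depth of the same frame already in the global map.
    Returns a per-pixel scale map and the layer labels.
    """
    # 1. Segment the depth prediction into discrete layers (quantile bins here).
    edges = np.quantile(depth_new, np.linspace(0, 1, num_layers + 1))
    labels = np.clip(np.digitize(depth_new, edges[1:-1]), 0, num_layers - 1)

    # 2. Per-layer scale: robust (median) ratio between reference and new depths.
    scale_map = np.ones_like(depth_new)
    for k in range(num_layers):
        mask = labels == k
        if mask.sum() == 0:
            continue
        ratio = depth_ref[mask] / np.maximum(depth_new[mask], eps)
        scale_map[mask] = np.median(ratio)

    # 3. Multiplying depth_new by scale_map rescales each layer independently.
    return scale_map, labels
```

Rescaling each layer by its own factor (rather than a single global scale) is what corrects the anisotropic over- or under-scaling between foreground and background; in LASER the per-layer factors are additionally propagated across windows and timestamps, which this sketch does not cover.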
Qualitative Results
BibTeX
@article{ding2025laser,
  title={LASER: Layer-wise Scale Alignment for Training-Free Streaming 4D Reconstruction},
  author={Ding, Tianye and Xie, Yiming and Liang, Yiqing and Chatterjee, Moitreya and Miraldo, Pedro and Jiang, Huaizu},
  year={2025}
}