We present Intrinsics in Flux (InFlux), a real-world benchmark that provides per-frame ground truth intrinsics annotations for videos with dynamic intrinsics. Compared to prior benchmarks, InFlux covers a wider range of intrinsic variations and greater scene diversity, featuring 143K+ annotated frames from 386 high-resolution indoor and outdoor videos with dynamic camera intrinsics. To ensure accurate per-frame intrinsics, we build a comprehensive lookup table of calibration experiments and extend the Kalibr toolbox to improve its accuracy and robustness. Using our benchmark, we evaluate existing baseline methods for predicting camera intrinsics and find that most struggle to produce accurate predictions on videos with dynamic intrinsics.
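To make the lookup-table idea concrete, below is a minimal Python sketch: calibration experiments at a few discrete zoom positions yield intrinsics (fx, fy, cx, cy), and per-frame intrinsics are read off by interpolating between the nearest calibrated positions. The zoom positions, intrinsics values, and linear interpolation scheme here are illustrative assumptions, not the exact procedure used in the paper.

import bisect

# NOTE: purely illustrative; the table values and the linear interpolation
# below are assumptions, not the paper's exact calibration procedure.
# (zoom_position, (fx, fy, cx, cy)) pairs from calibration experiments,
# sorted by zoom position.
CALIB_TABLE = [
    (0.0, (1400.0, 1400.0, 960.0, 540.0)),
    (0.5, (2100.0, 2100.0, 962.0, 541.0)),
    (1.0, (2800.0, 2800.0, 963.0, 543.0)),
]

def lookup_intrinsics(zoom):
    """Interpolate intrinsics (fx, fy, cx, cy) for a zoom position in [0, 1]."""
    positions = [z for z, _ in CALIB_TABLE]
    i = bisect.bisect_left(positions, zoom)
    if i == 0:
        return CALIB_TABLE[0][1]   # clamp below the first calibrated position
    if i == len(CALIB_TABLE):
        return CALIB_TABLE[-1][1]  # clamp above the last calibrated position
    z0, k0 = CALIB_TABLE[i - 1]
    z1, k1 = CALIB_TABLE[i]
    t = (zoom - z0) / (z1 - z0)
    return tuple(a + t * (b - a) for a, b in zip(k0, k1))

print(lookup_intrinsics(0.25))  # -> (1750.0, 1750.0, 961.0, 540.5)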
Erich Liang, Roma Bhattacharjee*, Sreemanti Dey*, Rafael Moschopoulos, Caitlin Wang, Michel Liao, Grace Tan, Andrew Wang, Karhan Kayan, Stamatis Alexandropoulos, Jia Deng

* Equal contribution (random order)
Neural Information Processing Systems Datasets and Benchmarks Track (NeurIPS), 2025
If you use our benchmark, data, or method in your work, please cite our paper.
@misc{liang2025influx,
    title={InFlux: A Benchmark for Self-Calibration of Dynamic Intrinsics of Video Cameras}, 
    author={Erich Liang and Roma Bhattacharjee and Sreemanti Dey and Rafael Moschopoulos and Caitlin Wang and Michel Liao and Grace Tan and Andrew Wang and Karhan Kayan and Stamatis Alexandropoulos and Jia Deng},
    year={2025},
    eprint={2510.23589},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2510.23589}, 
}
This work was partially supported by the National Science Foundation. We thank our friends and colleagues at Princeton University for their help with filming the benchmark.