Title#
Toward a Real-Time Framework for Accurate Monocular 3D Human Pose Estimation with Geometric Priors
Abstract#
Monocular 3D human pose estimation remains a challenging and ill-posed problem, particularly in real-time settings and unconstrained environments. While direct image-to-3D approaches require large annotated datasets and heavy models, 2D-to-3D lifting offers a more lightweight and flexible alternative, especially when enhanced with prior knowledge. In this work, we propose a framework that combines real-time 2D keypoint detection with geometry-aware 2D-to-3D lifting, explicitly leveraging known camera intrinsics and subject-specific anatomical priors. Our approach builds on recent advances in self-calibration and biomechanically constrained inverse kinematics to generate large-scale, plausible 2D-3D training pairs from MoCap and synthetic datasets. We discuss how these ingredients can enable fast, personalized, and accurate 3D pose estimation from monocular images without requiring specialized hardware. This proposal aims to foster discussion on bridging data-driven learning and model-based priors to improve accuracy, interpretability, and deployability of 3D human motion capture on edge devices in the wild.
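To illustrate how known camera intrinsics can act as a geometric prior before 2D-to-3D lifting, the sketch below back-projects detected pixel keypoints into normalized camera coordinates, removing camera-dependent focal length and principal-point offsets. This is a minimal assumption-laden sketch, not the paper's actual pipeline: the function name, the standard pinhole intrinsics layout, and the keypoint array shape are all illustrative choices.

```python
import numpy as np

def normalize_keypoints(kp_2d: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Map pixel keypoints (J, 2) to normalized camera coordinates
    using a known pinhole intrinsics matrix K (3, 3).

    This factors the camera out of the lifting problem: a lifting
    network trained on normalized coordinates no longer has to learn
    each camera's focal length and principal point implicitly.
    (Hypothetical preprocessing step, not the authors' implementation.)
    """
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]   # principal point
    out = np.empty_like(kp_2d, dtype=float)
    out[:, 0] = (kp_2d[:, 0] - cx) / fx
    out[:, 1] = (kp_2d[:, 1] - cy) / fy
    return out

# Example: with fx = fy = 1000 and principal point (320, 240),
# a keypoint at the principal point maps to the origin.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
kp = np.array([[320.0, 240.0],
               [820.0, 740.0]])
normalized = normalize_keypoints(kp, K)
```

Each normalized keypoint then lies on the ray through the camera center, so subject-specific bone-length priors can resolve the remaining depth ambiguity per joint.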