
Towards End-to-End Neuromorphic Event-based 3D Object Reconstruction Without Physical Priors

2501.00741v4

Title#

Towards End-to-End Neuromorphic Event-based 3D Object Reconstruction Without Physical Priors

Abstract#

Neuromorphic cameras, also known as event cameras, are asynchronous brightness-change sensors that can capture extremely fast motion without suffering from motion blur, making them particularly promising for 3D reconstruction in extreme environments. However, existing research on 3D reconstruction using monocular neuromorphic cameras is limited, and most methods rely on estimating physical priors and employ complex multi-step pipelines. In this work, we propose an end-to-end method for dense voxel 3D reconstruction using neuromorphic cameras that eliminates the need to estimate physical priors. Our method incorporates a novel event representation to enhance edge features, enabling the proposed feature-enhancement model to learn more effectively. Additionally, we introduce the Optimal Binarization Threshold Selection Principle as a guideline for future related work, using the optimal reconstruction results achieved with threshold optimization as the benchmark. Our method achieves a 54.6% improvement in reconstruction accuracy compared to the baseline method.
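The abstract does not spell out the paper's edge-enhancing event representation, but as background, a common way to turn an asynchronous event stream into a tensor a network can consume is a polarity-signed spatio-temporal histogram: pixels that fire many events (typically along moving edges) accumulate large magnitudes. The sketch below illustrates only this generic idea, not the paper's novel representation; the function name and parameters are hypothetical.

```python
import numpy as np

def events_to_histogram(events, height, width, num_bins=5):
    """Bin events (t, x, y, polarity) into a (num_bins, H, W) signed histogram.

    Regions with dense brightness changes (usually object edges) accumulate
    large absolute values, which is the kind of edge cue the abstract alludes to.
    NOTE: illustrative sketch only, not the representation proposed in the paper.
    """
    t, x, y, p = events[:, 0], events[:, 1].astype(int), events[:, 2].astype(int), events[:, 3]
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t_norm = (t - t.min()) / max(float(t.max() - t.min()), 1e-9)   # timestamps -> [0, 1]
    b = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)  # temporal bin index
    np.add.at(grid, (b, y, x), np.where(p > 0, 1.0, -1.0))         # signed accumulation
    return grid
```

The abstract also names an Optimal Binarization Threshold Selection Principle: the dense voxel prediction is binarized at the threshold that yields the best reconstruction, and that result serves as the benchmark. Below is a minimal sketch of such a threshold sweep, assuming the network outputs per-voxel occupancy probabilities and that intersection-over-union is the accuracy metric; both assumptions, and all names here, are illustrative rather than taken from the paper.

```python
def voxel_iou(pred_bin, gt_bin):
    """Intersection-over-union between two binary voxel grids."""
    inter = np.logical_and(pred_bin, gt_bin).sum()
    union = np.logical_or(pred_bin, gt_bin).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def select_binarization_threshold(pred_prob, gt_bin, thresholds=np.linspace(0.05, 0.95, 19)):
    """Return the threshold whose binarization of pred_prob best matches gt_bin."""
    scores = [voxel_iou(pred_prob >= th, gt_bin) for th in thresholds]
    best = int(np.argmax(scores))
    return float(thresholds[best]), scores[best]

# Example: pred_prob and gt_vox would be (D, H, W) arrays from a model and a dataset.
# best_th, best_iou = select_binarization_threshold(pred_prob, gt_vox)
```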

PDF Download#

View the Chinese PDF - 2501.00741v4
