Partial scene reconstruction for close range photogrammetry using deep learning pipeline for region masking

dc.contributor.author: Eldefrawy, Mahmoud
dc.contributor.author: King, Scott
dc.contributor.author: Starek, Michael
dc.creator.orcid: https://orcid.org/0000-0002-4022-0388
dc.creator.orcid: https://orcid.org/0000-0002-7996-0594
dc.date.accessioned: 2022-09-07T21:36:37Z
dc.date.available: 2022-09-07T21:36:37Z
dc.date.issued: 2022-07-03
dc.description.abstract: 3D reconstruction is a beneficial technique to generate 3D geometry of scenes or objects for various applications such as computer graphics, industrial construction, and civil engineering. There are several techniques to obtain the 3D geometry of an object. Close-range photogrammetry is an inexpensive, accessible approach to obtaining high-quality object reconstruction. However, state-of-the-art software systems need a stationary scene or a controlled environment (often a turntable setup with a black background), which can be a limiting factor for object scanning. This work presents a method that reduces the need for a controlled environment and allows the capture of multiple objects with independent motion. We achieve this by creating a preprocessing pipeline that uses deep learning to transform a complex scene from an uncontrolled environment into multiple stationary scenes with a black background that are then fed into existing software systems for reconstruction. Our pipeline achieves this by using deep learning models to detect and track objects through the scene. The detection and tracking pipeline uses semantic-based detection and tracking and supports using available pretrained or custom networks. We develop a correction mechanism to overcome some detection and tracking shortcomings, namely, object re-identification and multiple detections of the same object. We show that detection and tracking are effective techniques to address scenes with multiple motion systems and that objects can be reconstructed with limited or no knowledge of the camera or the environment.
dc.description.sponsorship: This research received no external funding.
dc.identifier.citation: Eldefrawy, M., King, S. A., & Starek, M. (2022). Partial Scene Reconstruction for Close Range Photogrammetry Using Deep Learning Pipeline for Region Masking. Remote Sensing, 14(13), 3199. MDPI AG. Retrieved from http://dx.doi.org/10.3390/rs14133199
dc.identifier.doi: https://doi.org/10.3390/rs14133199
dc.identifier.uri: https://hdl.handle.net/1969.6/93953
dc.language.iso: en_US
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: detection
dc.subject: tracking
dc.subject: deep-learning
dc.subject: structure from motion
dc.subject: point cloud
dc.title: Partial scene reconstruction for close range photogrammetry using deep learning pipeline for region masking
dc.type: Article
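
The abstract above describes a preprocessing step that masks everything outside a detected object to black before the frames are handed to standard photogrammetry software. The sketch below illustrates that masking idea only; it is not the authors' implementation and omits their tracking and correction mechanism. It assumes an off-the-shelf pretrained instance-segmentation model (torchvision's Mask R-CNN), and the function name, score threshold, and class filter are illustrative choices.

# Minimal sketch: black out everything except one detected object in a frame.
# Assumptions: torchvision >= 0.13 (for the weights= argument), a COCO class
# index as target_label, and a simple highest-score selection policy.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

def mask_object(image_path, target_label, score_thresh=0.7):
    """Return the image tensor with non-object pixels set to zero (black)."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    img = Image.open(image_path).convert("RGB")
    tensor = to_tensor(img)                        # [3, H, W], floats in [0, 1]

    with torch.no_grad():
        pred = model([tensor])[0]                  # dict: boxes, labels, scores, masks

    # Keep only confident detections of the requested class (assumed policy).
    keep = [i for i, (lbl, s) in enumerate(zip(pred["labels"], pred["scores"]))
            if lbl.item() == target_label and s.item() >= score_thresh]
    if not keep:
        return None                                # nothing detected confidently

    best = max(keep, key=lambda i: pred["scores"][i].item())
    mask = (pred["masks"][best, 0] > 0.5).float()  # [H, W] binary object mask

    return tensor * mask                           # background pixels become black

In the paper's pipeline each per-frame mask is additionally tied to a tracked object identity, so every independently moving object yields its own "stationary" black-background image sequence for reconstruction; the sketch above handles a single frame and a single class only.
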

Files

Original bundle
Name: Partial Scene Reconstruction for Close Range Photogrammetry Using Deep Learning Pipeline for Region Masking.pdf
Size: 17.18 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.72 KB
Format: Item-specific license agreed upon to submission