VirtuaLot—A case study on combining UAS imagery and terrestrial video with photogrammetry and deep learning to track vehicle movement in parking lots

dc.contributor.author: Koskowich, Bradley
dc.contributor.author: Starek, Michael
dc.contributor.author: King, Scott A.
dc.creator.orcid: https://orcid.org/0000-0001-8071-5476
dc.creator.orcid: https://orcid.org/0000-0002-7996-0594
dc.creator.orcid: https://orcid.org/0000-0002-4022-0388
dc.date.accessioned: 2023-03-02T21:19:16Z
dc.date.available: 2023-03-02T21:19:16Z
dc.date.issued: 2022-10-29
dc.description.abstract: This study investigates the feasibility of applying monoplotting to video data from a security camera and image data from an uncrewed aircraft system (UAS) survey to create a mapping product which overlays traffic flow in a university parking lot onto an aerial orthomosaic. The framework, titled VirtuaLot, employs a previously defined computer-vision pipeline which leverages Darknet for vehicle detection and tests the performance of various object tracking algorithms. Algorithmic object tracking is sensitive to occlusion, and monoplotting is applied in a novel way to efficiently extract occluding features from the video using a digital surface model (DSM) derived from the UAS survey. The security camera is also a low-fidelity model with unstable interior parameters and is not intended for photogrammetry. As monoplotting relies on static camera parameters, this creates a challenging environment for testing its effectiveness. Preliminary results indicate that it is possible to manually monoplot between aerial and perspective views with high degrees of transition tilt, achieving coordinate transformations between viewpoints within one deviation of vehicle short and long axis measurements throughout 70.5% and 99.6% of the study area, respectively. Attempted automation of monoplotting on video was met with limited success, though this study offers insight as to why and directions for future work on the subject.
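As a rough illustration of the monoplotting step described in the abstract, the sketch below back-projects a security-camera pixel into a viewing ray and marches it against a UAS-derived DSM to recover approximate world coordinates. This is not the paper's implementation: the pinhole camera model, the function and parameter names, and the simple ray-marching scheme are assumptions introduced here for illustration only.

```python
# Minimal monoplotting sketch (not the VirtuaLot implementation): cast a ray from a
# calibrated perspective camera through an image pixel and march along it until it
# drops below the DSM surface, returning an approximate world coordinate.
# K, R, cam_pos, and dsm_height are hypothetical inputs assumed for this example.
import numpy as np

def pixel_to_world(u, v, K, R, cam_pos, dsm_height, step=0.25, max_range=500.0):
    """Intersect the viewing ray of pixel (u, v) with a digital surface model.

    K          -- 3x3 camera intrinsic matrix
    R          -- 3x3 rotation from camera to world coordinates
    cam_pos    -- camera position in world coordinates (x, y, z)
    dsm_height -- callable (x, y) -> surface elevation z sampled from the DSM
    """
    # Back-project the pixel into a unit-length viewing ray in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R @ ray_cam
    ray_world /= np.linalg.norm(ray_world)

    # March along the ray; the first sample at or below the DSM is taken as the hit.
    for t in np.arange(0.0, max_range, step):
        p = cam_pos + t * ray_world
        if p[2] <= dsm_height(p[0], p[1]):
            return p  # approximate surface coordinate seen at pixel (u, v)
    return None  # ray left the DSM extent without intersecting the surface
```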
dc.identifier.citation: Koskowich, B.; Starek, M.; King, S.A. VirtuaLot—A Case Study on Combining UAS Imagery and Terrestrial Video with Photogrammetry and Deep Learning to Track Vehicle Movement in Parking Lots. Remote Sens. 2022, 14, 5451. https://doi.org/10.3390/rs14215451
dc.identifier.doi: https://doi.org/10.3390/rs14215451
dc.identifier.uri: https://hdl.handle.net/1969.6/95560
dc.language.iso: en_US
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: monoplotting
dc.subject: photogrammetry
dc.subject: computer vision
dc.subject: object detection
dc.subject: object tracking
dc.subject: neural networks
dc.title: VirtuaLot—A case study on combining UAS imagery and terrestrial video with photogrammetry and deep learning to track vehicle movement in parking lots
dc.type: Article

Files

Original bundle

Name: VirtuaLot_A Case Study on Combining UAS Imagery and Terrestrial Video with Photogrammetry and Deep Learning to Track Vehicle Movement in Parking Lots.pdf
Size: 47.89 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 1.72 KB
Format: Item-specific license agreed upon to submission