College of Engineering Theses and Dissertations
Permanent URI for this collection: https://hdl.handle.net/1969.6/94188
Browsing College of Engineering Theses and Dissertations by Author "Belkhouche, Mohammed"
Now showing 1 - 3 of 3
Item: Detecting plant phenotypes from 3D point cloud data (2019-08)
Dani, Jimmy; King, Scott A.; Jung, Jinha; Belkhouche, Mohammed; Mandadi, Kranthi

In recent years, with the rapid development of indoor plant genotyping, there is a growing need for precise quantification of plant phenotypes. Currently, manual plant phenotyping is used, which is laborious, time-consuming, and prone to error. This served as the motivation to develop an automated greenhouse phenotyping framework that uses a 3D point cloud generated from RGB images. This study focuses on variations in plant phenotypes between two genotypes, Atlantic and Olalla, under control and drought-stress treatments, throughout the growing season. The phenotypes considered in this study are plant height, plant volume, leaf angle distribution, and Excessive Greenness Index. Images of each plant are taken by two cameras hung on a post, and a 3D point cloud is generated from those images. The phenotypes derived from the point cloud showed high correlation with manual measurements, which suggests the system could be used for a variety of indoor plant phenotyping tasks. The 99th-percentile height shows the highest correlation with manually estimated height; the volume and Excessive Greenness Index results show that the Olalla genotype is more susceptible to stress than the Atlantic genotype; and the leaf angle distribution shows greater wilting under the drought-stress treatment than under the control treatment.

Item: Development of a standardized framework for cost-effective communication system based on 3D data streaming and real-time 3D reconstruction (2017-05)
Huynh, Dang Duong Hai; King, Scott A.; Katangur, Ajay; Belkhouche, Mohammed

The common approaches for people to converse over a large geographical distance are SMS and video conferencing. A more immersive communication method over the internet, one that creates an experience closer to a face-to-face conversation, is desirable.
The closest such form is a conversation via holographic projection of the participants and their environment. Many motion pictures have featured this type of communication. While a complete system that uses holographic projection is still many years away, the core functions of such a system are not impossible to achieve now. Two such features are 3D reconstruction of the target and streaming of 3D data. At the current pace of technological development, 3D reconstruction can be achieved with cost-effective depth cameras, and 3D streaming can be done after data optimization. The focus of this work is on using such devices to create a standardized platform for the implementation of a system with the aforementioned features. Specifically, the system is able to capture 3D data from multiple depth sensors, reconstruct a 3D model of the human target to create an avatar, and stream the changes acquired from the sensors to the client to control the avatar in real time.

Item: Real-time object detection for autonomous driving based on deep learning (2017-05)
Liu, Guangrui; Rahnemoonfar, Maryam; Li, Longzhuang; Belkhouche, Mohammed

Optical vision is an essential component of autonomous cars. Accurate detection of vehicles, street buildings, pedestrians, and road signs could help self-driving cars drive as safely as humans. However, object detection has been a challenging task for decades, since images of objects in real-world environments are affected by illumination, rotation, scale, and occlusion. In recent years, many Convolutional Neural Network (CNN) based classification-after-localization methods have improved detection results in various conditions. However, the slow recognition speed of these two-stage methods limits their use in real-time situations. Recently, a unified object detection model, You Only Look Once (YOLO) [20], was proposed, which regresses directly from an input image to object class scores and positions.
Its single-network structure processes images at 45 fps on the PASCAL VOC 2007 dataset [7] and achieves higher detection accuracy than other current real-time methods. However, when applied to autonomous-driving object detection tasks, the model still has limitations. It processes images individually, even though an object's position changes continuously in a driving scene; thus, the model ignores a lot of important information between consecutive frames. In this research, we applied YOLO to three different datasets to test its general applicability. We fully analyzed its performance from various aspects on the KITTI dataset [10], which is specialized for autonomous driving. We proposed a novel technique called memory map, which considers inter-frame information, to strengthen YOLO's detection ability in driving scenes. We broadened the model's scope of applicability by applying it to a new orientation-estimation task. KITTI is our main dataset; additionally, the ImageNet [5] dataset is used for pre-training, and three other datasets, Pascal VOC 2007/2012 [7], Road Sign [2], and the Face Detection Dataset and Benchmark (FDDB) [15], are used for other class domains.
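The abstract above describes a memory map that carries detection evidence between consecutive frames. The thesis's actual formulation is not reproduced here, so the following is only a minimal sketch of the general idea, assuming a YOLO-style S x S grid; the grid size, decay factor, boosting weight, and function names are all hypothetical illustrations rather than the author's method.

```python
import numpy as np

# Hypothetical parameters, not taken from the thesis.
GRID = 7      # cells per side, mirroring YOLO's S x S output grid
DECAY = 0.5   # how quickly evidence from older frames fades

def update_memory(memory, boxes, img_w, img_h):
    """Fade the old map, then mark grid cells covered by current detections.

    boxes: list of (x, y, w, h) in pixels, with the box centre at (x, y).
    """
    memory *= DECAY
    for (x, y, w, h) in boxes:
        cx = min(int(x / img_w * GRID), GRID - 1)
        cy = min(int(y / img_h * GRID), GRID - 1)
        memory[cy, cx] = 1.0
    return memory

def boost_scores(scores, memory, weight=0.2):
    """Raise per-cell confidences where objects appeared in recent frames."""
    return np.clip(scores + weight * memory, 0.0, 1.0)

memory = np.zeros((GRID, GRID))
# Frame 1: one detection near the centre of a 448x448 input image.
memory = update_memory(memory, [(224, 224, 60, 60)], 448, 448)
# Frame 2: the detector's per-cell confidences get a boost in that cell only.
scores = np.full((GRID, GRID), 0.1)
boosted = boost_scores(scores, memory)
```

The sketch only illustrates the inter-frame intuition: because objects move continuously between frames, a cell that recently contained an object is a more plausible location for one now, so its confidence is nudged upward while the evidence decays over time.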