College of Engineering Theses and Dissertations
Permanent URI for this collection: https://hdl.handle.net/1969.6/94188
Browsing College of Engineering Theses and Dissertations by Title
Now showing 1 - 20 of 54
Item: 2D and 3D Mapping of a Littoral Zone with UAS and Structure from Motion Photogrammetry (2015-05)
Giessel, Justin Zachary
Advancements in the miniaturization of sensors and their integration in lightweight, small-scale unmanned aerial systems (UAS) have resulted in an explosion of uses for inexpensive and easily obtained remotely sensed data. This study examines the capabilities of a small-scale UAS equipped with a consumer-grade RGB camera for 2D and 3D mapping of a sandy bay shoreline using Structure from Motion (SfM) photogrammetry. Several key components are analyzed in order to assess the utility of UAS-based SfM photogrammetry for beach and boundary surveying of the littoral zone. First, the accuracy of the 3D point cloud produced by the SfM densification process over the beach is compared to high-accuracy RTK GPS transects. Results show a mean agreement of approximately 7.9 cm over the sub-aerial beach, with increased error in shallow water. Minimal effects of beach slope on vertical accuracy were observed. Second, bathymetric measurements extracted from the UAS/SfM point cloud are examined, and an optical inversion approach is implemented where the SfM method fails. Results show that a hybrid elevation model of the beach and littoral zone consisting of automatic SfM products, post-processed SfM products, and optical inversion provides the most accurate results when mapping over turbid water. Finally, the SfM-derived shoreline elevation contour (boundary) is compared to a shoreline elevation contour derived using the currently accepted RTK GPS method for conducting legal littoral boundary surveys in the state of Texas.
Results show mean planimetric offsets < 25 cm, demonstrating the potential of UAS-based SfM photogrammetry for conducting littoral boundary surveys along non-occluded, sandy shorelines.

Item: 3-D Hybrid Trajectory Modeling for Unmanned Aerial Vehicles (UAVs) (2019-08)
Wang, Baoqian; Xie, Junfei; Garcia Carrillo, Luis Rodolfo; Zhang, Ning
The burgeoning use of unmanned aerial vehicles (UAVs) evidences forthcoming environments where innumerable UAVs will appear in the National Airspace System (NAS). UAS traffic management (UTM) aims to provide solutions that enable the safe integration of numerous UAVs into the NAS, but the design of effective UTM strategies faces significant challenges. One of these challenges is developing high-fidelity trajectory models for UAVs with partially known or unknown dynamics. Traditional physics-based models that require costly system identification and field tests, and data-based models that require large amounts of real flight data, may not be feasible. To address this challenge, this paper introduces a hybrid 3-dimensional (3-D) UAV trajectory modeling framework, which integrates physics-based and data-based models to capture the dynamics of UAVs of interest with high accuracy using only a small amount of real flight data. Simulation studies and field tests validate and demonstrate the good performance of the proposed framework.

Item: Applications of deep learning and multi-perspective 2D/3D imaging streams for remote terrain characterization of coastal environments (2021-12)
Pashaei, Mohammad; Starek, Michael J.; Tissot, Philippe; King, Scott A.; Glennie, Craig L.; Lynch-Davis, Kathleen
Threats from storms, sea encroachment, and growing population demands put coastal communities at the forefront of engineering and scientific efforts to reduce vulnerabilities for their long-term prosperity.
Updated and accurate geospatial information about land cover and elevation (topography) is necessary to monitor and assess the vulnerability of natural and built infrastructure within coastal zones. Advancements in remote sensing (RS) and autonomous systems extend surveying and sensing capabilities to difficult environments, enabling more geospatial data acquisition flexibility and higher spatial resolutions, and allowing humans to “see” in ways previously unattainable. Recent years have witnessed enormous growth in the application of small, unmanned aircraft systems (UASs) equipped with digital cameras for hyperspatial resolution imaging and dense three-dimensional (3D) mapping using structure-from-motion (SfM) photogrammetry techniques. In contrast to photogrammetry, light detection and ranging (lidar) is an active RS technique that uses a pulsed laser mounted on a static or mobile platform (from air or land) to scan in high definition the 3D structure of a scene. Rapid proliferation in lidar technology has resulted in new scanning and imaging modalities with ever-increasing capabilities, such as geodetic-grade terrestrial laser scanning (TLS) with ranging distances of up to several kilometers from a static tripod. TLS enables 3D sampling of the vertical structure of occluding objects, such as vegetation, and underlying topography. Full waveform (FW) lidar systems have led to a significant increase in the level of information extracted from a backscattered laser signal returned from a scattering object. With this technological advance and increase in remote sensing capabilities and data resolution comes an increase in information gain, at the cost of far more complex and challenging big data sets from which to extract meaningful information. In this regard, end-to-end analysis techniques recently developed in artificial intelligence (AI), in particular convolutional neural networks (CNNs) developed under the deep learning (DL) framework, are well suited.
DL techniques have recently outperformed state-of-the-art analysis techniques in a wide range of applications, including RS. This work presents the application of DL for efficient exploitation of hyperspatial UAS-SfM photogrammetry and FW TLS data for land cover monitoring and topographic mapping in a coastal zone. Hyperspatial UAS images, and TLS point cloud data with additional information about the scattering properties of the illuminated target in the footprint of the laser beam encoded in returned waveform signals, provide valuable geospatial data resources to uncover the accurate 3D structure of the surveyed environment. This study presents three main contributions: 1) different deep CNN (DCNN) architectures, and their efficiencies, for classifying land cover within a complex wetland setting using UAS imagery are evaluated; 2) DCNN-based single image super-resolution (SISR) is employed as a pre-processing technique on low-resolution UAS images to predict higher-resolution images over coastal terrain with natural and built land cover, and its effectiveness for enhancing dense 3D scene reconstruction with SfM photogrammetry is tested; 3) full waveform TLS data is employed for point cloud classification and ground surface detection in vegetation using a developed DCNN framework that works directly on the raw, digitized echo waveforms. Results show that returned raw waveform signals carry more information about a target’s spatial and radiometric properties in the footprint of the laser beam compared to waveform attributes derived from traditional waveform processing techniques.
Collectively, this study demonstrates useful information retrieval from hyperspatial resolution 2D/3D RS data streams in a DL analysis framework.

Item: Automated radar heading calibration with collaborating participants and multi-sensor fusion (2021-12)
Boyd, Josh; King, Scott A.; Li, Longzhuang; Wang, Wenlu
As unmanned aerial systems (UAS) become more prolific, so will the use of radar systems for tracking UAS in the national airspace system (NAS). The future of Urban Air Mobility (UAM) involves large numbers of UAS operating autonomously and simultaneously in urban environments for the purpose of passenger or cargo transportation. Radar detection of UAS in an urban environment can be hindered by line-of-sight (LOS) blockage by large buildings, thus necessitating many surveillance devices to gain full coverage. Currently, UAM is still in development by the Federal Aviation Administration (FAA) and other airspace partners, and many cities do not have the need or resources for full radar coverage. Due to the high cost of individual radar systems and the quantity needed to cover urban areas, it is currently not practical to have full radar coverage of an area at all times. Permanent stationary radar systems are generally calibrated once, with occasional adjustments and low time constraints. Temporary radar systems must be calibrated and aligned before each mission deployment, often under short time constraints. Temporarily stationed mobile radar platforms will be utilized for specific targeted mission objectives until a more permanent solution is developed and implemented. In the case of disaster response or search and rescue, a temporary radar system needs to be quickly deployed. The key abilities required of a temporary radar system are accurate track position reporting and quick setup and breakdown. One of the bottlenecks to quick setup is heading calibration.
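The dual-GNSS alignment idea behind such heading calibration can be illustrated with a short sketch: with two RTK fixes placed along the antenna boresight, the true heading is the forward azimuth from the rear fix to the forward fix. The flat-earth approximation and function below are illustrative assumptions, not the system described in the abstract:

```python
import math

def heading_from_fixes(lat1, lon1, lat2, lon2):
    """Approximate true heading (degrees clockwise from north) from a
    rear fix (lat1, lon1) to a forward fix (lat2, lon2).

    Uses a local flat-earth approximation, which is adequate for the
    short (few-metre) baseline of a dual-antenna alignment tool.
    """
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    # Scale the longitude difference by cos(latitude) to get an
    # east-west distance comparable to the north-south one.
    d_east = math.radians(lon2 - lon1) * math.cos(mean_lat)
    d_north = math.radians(lat2 - lat1)
    return math.degrees(math.atan2(d_east, d_north)) % 360.0
```

A fix pair laid out due east of each other, for example, yields a heading of 90 degrees; the RTK centimetre-level positions are what make the short baseline usable.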
Radar antenna alignment is crucial to the performance of the system and its ability to accurately determine the position of a tracked object. In this paper, we implement and compare multiple methods of radar heading calibration for accuracy and speed, including manually with a handheld compass, manually with a web-based heading helper tool, manually with a custom dual Real-Time Kinematic (RTK) GPS alignment tool, and automated with a collaborating Radar Cross Section (RCS) device. For RCS devices we use a marine radar reflector with attached RTK GPS when unable to fly, and an unmanned aerial vehicle (UAV), also with RTK GPS, when able to fly. By leveraging our experience working with UAVs and radars, we show a method to auto-calibrate the positioning sensors using multi-sensor fusion and collaborating participants, thus reducing setup time and increasing the accuracy of the system.

Item: Automatic canopy plot boundary detection using computer vision (2020-05)
Wynn, De Kwaan; King, Scott A.; Gonzales, Xavier; Li, Longzhuang
In Corpus Christi, Texas, the United States Department of Agriculture (USDA) funded Texas A&M AgriLife research on large farmlands with hundreds of individual cotton vegetation plots. Each plot is planted uniformly in rows, but not all plots grow at the same rate. Every week the plots are photographed using an Unmanned Aircraft System (UAS) flying at a height of 100 feet to record and evaluate growth for various reasons. The research scientists' and farmers' current method of localizing individual plots of vegetation within an image has proven to be very time consuming and inefficient. The algorithm developed in this paper automates the localization process under sunny conditions. The algorithm uses the Hue, Saturation, and Value (HSV) color space during the preprocessing stage to produce a binary image that indicates where each green pixel is located. Various OpenCV functions are then used to automate the crop localization process.
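The HSV green-pixel masking step described above can be sketched in a few lines. This is an illustrative NumPy re-implementation of the idea (the thesis uses OpenCV, and the threshold values below are assumptions, not the tuned values):

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorised RGB (0-255) to HSV (H in degrees, S and V in 0-1)."""
    rgb = img.astype(np.float64) / 255.0
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)          # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    safe_c = np.maximum(c, 1e-12)
    h = np.select(
        [v == r, v == g],
        [(g - b) / safe_c, 2.0 + (b - r) / safe_c],
        default=4.0 + (r - g) / safe_c,
    )
    h = (h * 60.0) % 360.0
    return np.where(c == 0, 0.0, h), s, v

def green_mask(img, h_lo=60.0, h_hi=180.0, s_min=0.25, v_min=0.2):
    """Binary mask of 'green vegetation' pixels in an RGB image.
    Hue/saturation/value thresholds are illustrative placeholders."""
    h, s, v = rgb_to_hsv(img)
    return (h >= h_lo) & (h <= h_hi) & (s >= s_min) & (v >= v_min)
```

Thresholding on hue rather than raw RGB makes the mask far less sensitive to the brightness variation of sunny field imagery, which is why the HSV space is used here.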
Minimum and maximum threshold values are set for the size-filtering sections of the algorithm. Then, morphological operations are employed to further refine the regions of interest. The Connected Components function is used to determine how large each remaining object is, and that size is then used to determine how large each localizing polygon will be drawn. After the size of each object is found, the size of each localizing polygon to be drawn is evaluated and split into smaller polygons whenever necessary. Not only is the developed algorithm able to detect and classify cotton crop locations quickly, but it is also able to handle various complex situations. The developed method was evaluated by its accuracy, precision, and recall, which were 92.4%, 100%, and 92.4%, respectively.

Item: Autonomous mission planning for unmanned surface vehicles piloted by multiple specialized agents using heuristic and metaheuristic techniques (2018-12)
Krell, Evan Andrew; King, Scott A.; Garcia, Luis; Sheta, Alaa
State-of-the-art unmanned surface vehicles typically exhibit rudimentary autonomy, apart from navigation controllers. Sophisticated autopilots have enabled these vehicles to follow a path as a sequence of coordinates called waypoints, and related control tasks such as target following, station keeping, and obstacle avoidance are well established. However, humans typically make all the mission planning decisions. These increasingly capable platforms could offer an intelligent remote presence for the marine environment, but are used as tools with specific orders rather than as agents responsible for intelligently investigating their environments. This research attempts to increase the autonomy of unmanned surface vehicles by considering them as being controlled by multiple specialized intelligent agents, specifically the Analyst, the Surveyor, and the Navigator.
The Analyst role studies data from its environment to specify objectives. The Surveyor is responsible for conducting mission planning to efficiently meet as many objectives as possible while ensuring missions stay within constraints such as time and energy limits. Missions are then executed by the Navigator. The major challenge in increasing autonomy is the high computational complexity of many of the tasks involved, such as path planning. An emphasis is placed on heuristic and metaheuristic algorithms that sacrifice optimality to make autonomy feasible. Examples of Surveyor and Analyst agents have been implemented, and initial results of the techniques used to fulfill their roles are examined.

Item: Bringing a state government agency into the 21st century through geographic information systems (GIS): streamlining workflow by integrating database management systems and GIS for a non-riparian water use permitting program (2018-08)
Jackson, Shawn Lynnette; Huang, Yuxia; Jeffress, Gary A.; Zhang, Hua
Government entities, both federal and state, are two of the largest collectors, repositories, and disseminators of digital data. Management of the data is an asset if the system is well designed and developed. This thesis focuses on redesigning the current workflow for a state agency responsible for excess surface water management through a permitting program, as ordered through agency rules and state legislation. The current workflow includes four components: an Access database, a geospatial information system, geospatial data, and a hydrologic model, which is referred to as the watershed delineation. The watershed delineation component relies on an archaic manual process which is time-consuming and less than accurate. The manual process is the key component in the determination of an application submitted for non-riparian water use. These components, as a whole, are unwieldy, outmoded, and laborious in the information age.
To address the problems encountered with the components of the workflow, a set of research objectives is identified. This thesis focuses on the design and development of a workflow for the Non-Riparian Water Use (NRWU) Program at the Arkansas Natural Resources Commission (ANRC) through three objectives: (1) review the current workflow of the NRWU program and specifically identify the problems in the current system; (2) review the advanced technologies related to the NRWU workflow and provide suggestions for the new system; (3) redesign the NRWU system and develop a new workflow that incorporates the advanced technologies for the NRWU program at ANRC. The methodology facilitates a determination in the processing phase of the application; more specifically, the focus is on the watershed delineation. Since the NRWU program is an active program, two hydrologic modeling tools are selected and utilized to determine which model would best meet the needs of the program. The two tools incorporated into the NRWU program are the tools associated with the NHDPlus V2 dataset and USGS's StreamStats. The results of the two trials show strengths which favor the scopes of the two projects; however, for the needs of the NRWU program and ANRC, USGS's StreamStats is better suited. The state prefers a tool which is expedient and provides a reproducible product. The tool used to perform the watershed delineation is the most important aspect of the redesign of the workflow. While this component is needed, redesigning the workflow to utilize all the components is needed as well.

Item: Classification of medical images using metaheuristic feature selection methods (2019-12)
Maddula, Kuladeep Anand Kumar; King, Scott A.; Sheta, Alaa A.; Yadav, Mamta
Magnetic Resonance Imaging (MRI) is a popular non-invasive diagnostic tool for brain imaging.
Accurate analysis of brain MRI images helps in the early detection of brain tumors and could save many lives. But accurate classification of the images as normal or pathological is a challenging task from the clinical as well as the technological standpoint. Brain MRI images consist of a large information set which contains redundancy in determining the condition of the brain. The redundant information leads to an increase in the dimensionality of the data. Therefore, using a feature selection algorithm to find an optimum set of features would reduce the time and computational complexity of the classifiers for distinguishing the brain MRI images. This work studies the performance of feature selection with different metaheuristic search algorithms and multiple fitness functions. The three metaheuristic algorithms considered are the Binary Genetic Algorithm, Binary Particle Swarm Optimization, and the Binary Grey Wolf Optimizer, used for selecting an optimal set of features out of the features extracted from brain MRI images. The feature selection is performed on the 13 statistical features extracted from the brain MRI images using the Discrete Wavelet Transform, Principal Component Analysis, and the Grey Level Co-occurrence Matrix. The performance of the feature selection algorithms is compared by applying 4 different sets of features from each algorithm to seven different test classifiers. Our results show high performance using feature selection.

Item: Comparing and characterizing three-dimensional point clouds derived by structure from motion photogrammetry (2016-12)
Schwind, Michael
Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds.
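The binary metaheuristic feature selection described in the medical-imaging abstract above can be sketched with a minimal binary genetic algorithm: each candidate is a bit mask over the feature set, scored by classifier error plus a sparsity penalty. Population size, penalty weight, and operators here are illustrative assumptions, not the thesis's actual fitness functions or parameters:

```python
import random

def fitness(mask, error_fn, penalty=0.01):
    """Hypothetical fitness: classifier error plus a small cost per
    selected feature. Empty masks are disallowed."""
    if not any(mask):
        return float("inf")
    return error_fn(mask) + penalty * sum(mask)

def binary_ga(error_fn, n_features=13, pop_size=20, gens=40, p_mut=0.05, seed=1):
    """Minimal binary GA for feature selection: truncation selection,
    single-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        # Keep the better half, breed the rest from it.
        elite = sorted(pop, key=lambda m: fitness(m, error_fn))[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]
            children.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = elite + children
    return min(pop, key=lambda m: fitness(m, error_fn))
```

Swapping the bit-flip update rule for a velocity- or leader-based one yields the binary PSO and binary Grey Wolf variants the abstract compares.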
Little work has been done to show how topographic data generated by these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open-source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS). These terrain types include a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package and the results directly compared to each other. Before processing the sets of imagery, the software settings were analyzed and chosen in a manner that allowed for the most similar settings across the three packages. This was done in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner. This data served as ground truth for an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds but also in their accuracy. This study allows users of SfM photogrammetry to better understand how different processing software packages compare and the inherent sensitivity of SfM automation in 3D reconstruction.
Because this study used mostly default settings within the software, it would be beneficial for further research to investigate the effects that changing parameters have on the fidelity of point cloud datasets generated from different SfM software packages.

Item: Comparison of airborne surveying techniques for mapping submerged objects in shallow water (2016-08)
Nazeri, Behrokh; Starek, Michael J.; Smith, Richard; Jeffress, Gary A.
In this study, bathymetric lidar, high-resolution aerial imagery, and hyperspatial resolution imagery collected from a small unmanned aircraft system (UAS) were examined in order to delineate submerged objects in shallow coastal water. A region surrounding Shamrock Island in Corpus Christi Bay along the Texas Gulf Coast was chosen for this study. This area is significant because of the existence of submerged structures, including oil pipelines, which may influence the marine environment and navigation in shallow water. Therefore, mapping submerged structures is the first step of any further study in this area in terms of environmental litter and navigation hazards. The different methods were compared to each other in terms of efficiency and accuracy in mapping the bathymetric surface and detecting submerged structures. First, three different interpolation methods, including 2D Delaunay triangulated irregular network (TIN), inverse distance weighting (IDW), and multilevel B-spline, were used to create digital elevation models (DEMs) from airborne lidar data to investigate their use for submerged pipeline detection. Then three different algorithms, Sobel, Prewitt, and Canny, were examined in edge detection image processing to highlight the potential pipelines in aerial imagery. To improve visibility, glint correction methods were implemented and compared to non-glint-corrected imagery for pipeline delineation.
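The Sobel operator, one of the three edge detectors just named, reduces to convolving the image with two small gradient kernels and taking the gradient magnitude. A self-contained NumPy sketch of that idea (illustrative, not the study's implementation, which used library routines):

```python
import numpy as np

# Sobel kernels for horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'valid'-mode 2-D cross-correlation for small kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

Prewitt differs only in its kernel weights, while Canny adds smoothing, non-maximum suppression, and hysteresis thresholding on top of this gradient step, which is why it tends to produce cleaner linear features such as pipelines.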
Finally, a small UAS equipped with a digital camera was flown to evaluate structure from motion (SfM) photogrammetry for bathymetric mapping in the shallow bay. Methods examined included glint-corrected imagery and single bands vs. original multiband imagery. The goal was to determine the effectiveness of image pre-conditioning methods for improving UAS-SfM mapping of the submerged bottom and structures in shallow water. Results showed that the B-spline interpolation method was the best fit compared to the other methods for deriving bathymetric DEMs from the airborne lidar data. In edge detection image processing, the Canny method performed best among the three methods in detecting the pipelines in the aerial imagery. In the last part, using glint removal methods and green single-band imagery as inputs into the UAS-SfM photogrammetry workflow increased the quality of the produced point cloud over shallow water in terms of point density and depth estimation, respectively. In conclusion, bathymetric lidar data fused with aerial imagery improved the pipeline delineation. Due to inherent limitations in current bathymetric lidar system resolving power, it is recommended that future surveys targeted for this objective plan as best as possible for ideal water conditions in terms of visibility and employ more scan overlap. Sun glint correction improved the quality of the imagery in terms of penetrating through the water column; avoiding sun glint by choosing an appropriate place and time for data collection is the best way to deal with it. For the UAS-SfM part, using a polarizing filter on RGB cameras is recommended to assess the sun glint effect on the result.

Item: Comparison of linear and non-linear feature extraction on vegetation and oil spill hyperspectral images (2015-12)
Ramirez-Aguilar, Andres
A hyperspectral image provides a multidimensional figure rich in data, consisting of hundreds of spectral dimensions.
For this research, the method of analysis for a hyperspectral image consists of two different feature extraction algorithms: principal component analysis (PCA) and locally linear embedding (LLE). Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. In order to overcome this problem, this research proposes a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the usage of parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop is used as an open-source implementation of the MapReduce parallel programming model. The ultimate goal of this research is to provide a foundation for a simple and powerful system that is scalable and easily extendable.

Item: Constructing a digital land record system (2019-08)
Gillis, Bryan S.; Smith, Richard; Chu, Tianxing; Rudowsky, Catherine; Tissot, Philippe
Land records represent the legal bindings of a person to their property and assist in the execution of property ownership. Protecting these documents and adopting a clear system to manage them should be a priority for every private citizen with an interest in real property in the United States. Unfortunately, existing land record systems have become dated, fail to protect land records, and offer little-to-no transparency or accessibility. Fortunately, more modern digital land record systems are being developed to combat these issues.
When constructing a digital land record system, it is necessary to (1) identify the economic and functional value of using digital land record systems for a government entity, (2) establish procedures for the digitization of physical land record systems, and (3) provide digital land record system examples that meet the base needs of a land administration system with public access and that follow both geospatial data and digital library standards. This thesis evaluates the needs of a successful digital land record system and outlines the development and capabilities of BandoCat, a modern digital land record system project at the Conrad Blucher Institute for Surveying and Science (CBI) at Texas A&M University – Corpus Christi. This thesis assesses the current state of land record systems in the United States and highlights the inefficiencies and issues existing in these systems, thereby motivating the development of BandoCat as a modern solution. The design of modern land record systems is founded on the standards of digital libraries. These digital libraries serve as long-term data stewards and provide well-developed standards which land record systems can leverage. This thesis details the parameters of BandoCat, how it leverages digital library standards, its modern features (such as georectification and adherence to metadata standards), and how modern land record systems (such as BandoCat) address current digital land record systems' shortcomings, facilitate easier access for stakeholders, enable easier system interoperability, and support visualization of land records information. It is the hope of the author that this thesis will serve as a guide to improving the state of land record systems in the United States. Through the combination of modern digital land record systems, such as BandoCat, with a consistent and interoperable design, the state of land administration can be vastly improved.
The procedures and methodology created by the author provide a baseline for improving land record systems, and the BandoCat system developed by the Spatial {Query} Lab provides software to begin the transition from physical to digital land record systems.

Item: A Cooperative object transport system with behavior-based robots (2018-08)
Ramaswamy Balasubramanian, Arun Prassanth; King, Scott A.; Carrillo, Luis Garcia; Katangur, Ajay
Cooperative object transport is an intriguing research area in swarm and multi-agent robotic systems. Global view is a challenge in cooperative transport, where different aspects such as providing the global picture, what information to share, and how to share it are still being explored. One simple way of addressing global view is by using a situated agent which has an elevated view of the environment and is capable of communicating it. Various works that employ this strategy often rely on a centralized or global control where the agent with the global view makes the decisions. We propose a strategy modeled after Behavior-Based Robotics principles which enables the robots to react to the environment and thereby achieve cooperation to accomplish the task. Instead of relying on a sophisticated controller or a centralized leader, the robots simply react to stimuli in different ways by executing simple behaviors from their own repertoire. The 'Observer', a situated agent with the global view, has no decision-making responsibilities and simply serves as a means of stimulus. Also, the Observer employs simple techniques to extract and share very limited information with the other agents.
Different experiments were performed with real robots, and various metrics were collected to demonstrate and evaluate the strategy.

Item: A deep learning model to predict thunderstorms within 400 km² south Texas domains (2019-12)
Kamangir, Hamid; King, Scott A.; Tissot, Philippe; Li, Longzhuang
High-resolution predictions, both temporal and spatial, remain a challenge for the prediction of thunderstorms and related impacts such as lightning strikes. The goal of this work is to improve and extend a machine learning method to predict thunderstorms at a 3 km resolution with lead times of up to 15 hours. A deep learning neural network (DLNN) was developed to post-process deterministic High-Resolution Rapid Refresh (HRRR) numerical weather prediction (NWP) model output, yielding DLNN thunderstorm prediction models (categorical and/or probabilistic output) with performance exceeding that of the HRRR and other models currently available to National Weather Service (NWS) operational forecasters. Notwithstanding the discovery that shallow neural network models can approximate any continuous function (provided the number of hidden-layer neurons is sufficient), studies have demonstrated that DLNN models based on representation learning can perform better than shallow models with respect to weather and air quality predictions. In particular, we use the method known as stacked autoencoder representation learning, more specifically, greedy layer-wise unsupervised pretraining. The training domain is a specific area around Corpus Christi, Texas (CRP). The domain is separated into a grid of 13 × 22 equidistant points with a grid spacing of 20 km. These points serve as boundaries/centres for 286 20 × 20 km (400 km²) square regions. The strategy is to train the DLNN model on all boxes and test on the three most important boxes to evaluate the model. The target refers to the existence, or non-existence, of thunderstorms (categorical).
Cloud-to-Ground (CG) lightning was chosen as the proxy for thunderstorm occurrence. Logistic regression was then applied to the stacked autoencoder (SDAE) output to train the predictive model. An iterative technique was used to determine the optimal SDAE architecture. The performance of the optimized DLNN classifiers exceeded that of the corresponding shallow neural network models developed by Collins and Tissot [12], a classifier based on a combination of principal component analysis and logistic regression, and operational weather forecasters, based on the same dataset.

Item: A deep-learning-based fall-detection system to support aging-in-place (2017-05)
Alkittawi, Hend; Rahnemoonfar, Maryam; Mahdy, Ahmed; Sefcik, Elizabeth
Emergency departments treat around 2.5 million older people for fall injuries each year. Serious head injuries and broken bones occur in 20% of falls. Fall injuries, adjusted for inflation, have direct medical costs of $34 billion a year. Taking into account that people 65 and older are expected to comprise 21.7% of the U.S. population in 2040, compared to 14.4% in 2013, these numbers will dramatically increase as well. Preserving the elderly's right to age in a home of their own choice is imperative in today's world, as more elderly people are willing to live independently. But with statistics showing that falling is a major health problem with a huge undesirable impact on elderly lives, fall detection systems become a necessity. Different approaches have been used to design fall detection systems. One approach depends on wearable sensors that measure different physical parameters of a human body or the environment around it, such as the body's acceleration or its pressure on the floor. A second approach depends on sensors employed in the environment. These sensors mainly include wide-angle cameras, depth cameras, and microphones. Different approaches have used different classifiers for training the system to detect falls.
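The final stage of the thunderstorm-prediction pipeline above, logistic regression applied to the SDAE's learned features, can be sketched with plain batch gradient descent. This is a generic illustration under assumed hyperparameters, not the study's actual training configuration:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=500):
    """Batch gradient descent for binary logistic regression.
    X: (n_samples, n_features) feature matrix (e.g. autoencoder encodings),
    y: (n_samples,) labels in {0, 1}. Returns (weights, bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(y=1)
        grad = p - y                            # dLoss/dlogit for cross-entropy
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b, threshold=0.5):
    """Categorical output: thresholded class probabilities."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= threshold).astype(int)
```

Raising or lowering the threshold trades probability of detection against false-alarm rate, which is the usual tuning knob for categorical thunderstorm products.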
Despite these efforts to detect falls, naturally occurring fall-like activities can still trigger false alarms. Thus, current implementations of fall-detection systems need to be improved. Most recently, computer-vision-based approaches using depth cameras have become the most widely used for such improvement. Using deep neural networks to learn features from video frames has the potential to improve fall-detection accuracy and reduce false alarms. In this study, a more robust, deep fall-detection system was designed. This approach extends deep convolutional neural networks in time, allowing the capture of the spatial and temporal information present across successive video frames. The result of the new approach can be used to implement a reliable surveillance system in a real-world environment.Item DELIVERY OF ONLINE INSTRUCTION FOR LAND SURVEYING/GEOMATICS STUDENTS: ISSUES ENCOUNTERED AND BEST PRACTICES(6/23/2014) McDonald, Roger W.Many states, including Texas, now require a bachelor's degree for licensure as a professional land surveyor. Currently, only one university and a handful of junior colleges in Texas offer land surveying degrees, and none of these offer a fully online degree. In a state the size of Texas, this is problematic for individuals seeking licensure who reside in areas where there is no land surveying program within a reasonable commute. College programs in other professional fields use tools such as synchronous class meetings, podcasts, and streaming video to reach students in geographic areas far removed from their campuses. Like nursing, land surveying requires a certain amount of hands-on experience using the specific tools of the profession, which is difficult to acquire without additional expense of both time and money. Issues that emerge from the need for this hands-on experience include providing and supervising laboratory instructors and providing the necessary software and hardware to distant sites.
This paper examines the issues that must be overcome by any institution that wishes to offer an online degree program in land surveying and then proposes a set of best practices that such institutions can use.Item Designing geodatabases for the general authority for statistics of the Kingdom of Saudi Arabia(2016-05) Alghamdi, Khalid Abdullah; Huang, YuxiaA new system of both statistical surveys and geographic entities is needed for the General Authority for Statistics (GaStat) in Saudi Arabia to respond to fast-growing statistical data and to better serve users and related resources by providing a diversity of datasets. This project developed a new geodatabase conceptual model for GaStat by adopting the methods used by the United States Census Bureau. The new model consists of two main components: statistical surveys and geographic entities. First, statistical surveys use field surveys, partnership agreements, and self-response to feed the database in the Information Bank, a data warehouse to be used by GaStat. More specifically, additional types of statistical surveys are identified and included in the proposed model: the Saudi Community Survey, Saudi Housing Survey, Saudi Income Survey, Saudi Spending Survey, Saudi Economic Survey, Saudi Industry Survey, Saudi Agricultural Survey, and the Saudi Employment Survey. Further, a series of reporting periods, including quarterly, one-year, three-year, and five-year cycles, is used in the statistical systems to provide up-to-date information. Second, similar to the geographic entities used by the U.S. Census Bureau, geographic entities for GaStat are classified into two groups, legal and administrative entities and statistical entities, based on their corresponding geographic subdivision, and they are further organized in a hierarchical structure.
This design should provide a powerful tool for collecting information and creating a standard dataset for GaStat from which user requests can be served. More specifically, the legal and administrative entities include the country and its provinces, governorates, holy areas, economic zones, ZIP code areas, school districts, and voting districts. Statistical entities include regions, statistical tracts, statistical block groups, statistical blocks, urban areas, urban growth areas, places, sample-data areas, and governorate subdivisions. Additionally, a unique numeric reference code is used to integrate the data from both statistical surveys and geographic entities. The proposed geodatabase model is expected to address the limitations of the current systems in GaStat.Item Detecting plant phenotypes from 3D point cloud data(2019-08) Dani, Jimmy; King, Scott A.; Jung, Jinha; Belkhouche, Mohammed; Mandadi, KranthiIn recent years, with the rapid development of indoor plant genotyping, there is a growing need for precise quantification of plant phenotypes. Currently, manual plant phenotyping is used, which is laborious, time-consuming, and prone to errors. This motivated the development of an automated greenhouse phenotyping framework that uses a 3D point cloud generated from RGB images. This study focuses on variations in plant phenotypes across two genotypes, Atlantic and Olalla, under control and drought-stress treatments throughout the growing season. The phenotypes considered in this study are plant height, plant volume, leaf angle distribution, and Excessive Greenness Index. Images of the plant are taken from two cameras hung on a post, and a 3D point cloud is generated from those images. The phenotypes derived from the point cloud showed high correlation with manual measurements, indicating the system could be used for a variety of indoor plant phenotyping tasks.
The 99th-percentile height shows the highest correlation with manually estimated height; the volume and Excessive Greenness Index results show the Olalla genotype is more susceptible to stress than the Atlantic genotype; and the leaf angle distribution shows greater wilting under the drought-stress treatment than under the control treatment.Item Detection of plant characteristics and a comparison of effectiveness between 2d and 3d data visualization in supporting human perception of plant characteristics(2018-05) Pham, Thanh Van; King, Scott A.; Lee, Byung Cheol; Sheta, AlaaEfficient agriculture requires the assessment of plant characteristics. A higher crop yield can be achieved with good-quality plant characteristic data. In this research, a system was developed using the algorithms presented here to automatically extract plant characteristics. The automatically extracted values were compared with ground-truth data to evaluate the accuracy of the system. The effectiveness of using 2- or 3-dimensional data visualization for determining these characteristics was also studied. An experiment was conducted to investigate how effectively plant characteristics are evaluated when using 2D or 3D data visualizations. Participants were presented with either plant pictures (2D) or 3D plant models and tasked with identifying plant height and the number of leaves. Task completion times and accuracy rates were gathered for performance analysis.Item Development of a standardized framework for cost-effective communication system based on 3D data streaming and real-time 3D reconstruction(2017-05) Huynh, Dang Duong Hai; King, Scott A.; Katangur, Ajay; Belkhouche, MohammedThe common approaches for people to converse over a large geographical distance are SMS and video conferencing.
A more immersive communication method over the internet, one that creates an experience closer to a face-to-face conversation, is more desirable. The closest form is a conversation via holographic projection of the participants and their environment, a type of communication featured in many motion pictures. While a complete system using holographic projection is still many years away, the core functions of such a system are not impossible to achieve now. Two such features are 3D reconstruction of the target and streaming of 3D data. Given the current pace of technological development, 3D reconstruction can be achieved with cost-effective depth cameras, and 3D streaming can be done after data optimization. The focus of this work is on using such devices to create a standardized platform for the implementation of a system with the aforementioned features. Specifically, the system is able to capture 3D data from multiple depth sensors, reconstruct a 3D model of the human target to create an avatar, and stream the changes acquired from the sensors to the client to control the avatar in real time.
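The thesis does not publish its wire format, but the core idea of streaming only the *changes* acquired from the sensors can be sketched as follows (a hypothetical delta encoding; the `encode_frame_delta`/`apply_frame_delta` helpers, fixed point ordering, and binary layout are all assumptions for illustration):

```python
import struct

def encode_frame_delta(prev, curr, tol=1e-3):
    """Encode only the points that moved more than `tol` since the previous
    frame, as (index, x, y, z) records. Assumes both frames share a fixed
    point ordering (e.g. a registered template model of the avatar)."""
    records = [(i, *c)
               for i, (p, c) in enumerate(zip(prev, curr))
               if any(abs(a - b) > tol for a, b in zip(p, c))]
    # Wire layout: uint32 record count, then (uint32 index, 3 x float32) each.
    payload = struct.pack("<I", len(records))
    for i, x, y, z in records:
        payload += struct.pack("<Ifff", i, x, y, z)
    return payload

def apply_frame_delta(points, payload):
    """Client side: patch the local avatar's point list in place."""
    (count,) = struct.unpack_from("<I", payload, 0)
    offset = 4
    for _ in range(count):
        i, x, y, z = struct.unpack_from("<Ifff", payload, offset)
        points[i] = (x, y, z)
        offset += 16
    return points
```

Sending 16 bytes per moved point instead of the full cloud each frame is one plausible form of the "data optimization" the abstract mentions before streaming.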