COE Faculty Works

Permanent URI for this collection: https://hdl.handle.net/1969.6/94847

Recent Submissions

  • Item
    Reorder buffer
    (2024) Hadimlioglu, Ismail Alihan
Instruction pipelines: Processors execute instructions in a sequence of steps: fetch, decode, execute, write back. In-Order Execution: Instructions progress through the pipeline strictly in program order, each waiting for the previous instruction to finish before starting. Out-of-Order Execution: Instructions can be issued, executed, and completed out of order as long as there are no dependencies between them.
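The contrast between the two issue disciplines described above can be sketched with a toy timing model. This is an illustration, not material from the slides: the function names, latencies, and dependency encoding are ours, the in-order model is the fully sequential one the description uses, and the out-of-order model is the idealized dataflow limit (no issue-width or reorder-buffer capacity constraints).

```python
def in_order_finish(latency):
    """Fully sequential issue: each instruction waits for the previous
    one to finish. Returns the completion cycle of each instruction."""
    finish, t = [], 0
    for lat in latency:
        t += lat
        finish.append(t)
    return finish

def out_of_order_finish(latency, deps):
    """Dataflow limit: an instruction starts as soon as every
    instruction it depends on has finished (unlimited execution units)."""
    finish = []
    for i, lat in enumerate(latency):
        start = max((finish[d] for d in deps[i]), default=0)
        finish.append(start + lat)
    return finish

latency = [4, 1, 1]       # a slow load followed by two short ops
deps    = [[], [0], []]   # op 1 needs the load's result; op 2 is independent
# in-order completion:     [4, 5, 6]
# out-of-order completion: [4, 5, 1]  (the independent op finishes early)
```

The independent third instruction no longer hides behind the slow load, which is exactly the latency-hiding opportunity that out-of-order machinery (reservation stations plus a reorder buffer for in-order retirement) exploits.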
  • Item
    Tomasulo algorithm
    (2024) Hadimlioglu, Ismail Alihan
    Introduction: Instruction Level Parallelism (ILP): ILP refers to the potential to execute multiple instructions concurrently within a processor. Modern processors exploit ILP to achieve higher performance. The Tomasulo Algorithm is a key technique that facilitates dynamic instruction scheduling for maximizing ILP.
  • Item
    Branch prediction
    (2024) Hadimlioglu, Ismail Alihan
    The Branching Dilemma: Conditional branches are instructions that alter the program flow based on a condition. Processors typically fetch instructions sequentially. Encountering a branch creates a dilemma: Fetch the next instruction in sequence (assuming not taken). Fetch the target instruction of the branch (assuming taken).
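One common way out of this dilemma is a dynamic predictor; a minimal sketch, assuming the classic 2-bit saturating-counter scheme (the class name, initial state, and outcome sequence below are illustrative, not from the slides):

```python
class TwoBitPredictor:
    """2-bit saturating counter: states 0-1 predict not-taken,
    states 2-3 predict taken; each outcome nudges the state by one,
    so a single mispredicted loop exit does not flip the prediction."""
    def __init__(self):
        self.state = 2  # start in "weakly taken" (an arbitrary choice)

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True] * 8 + [False] + [True] * 8  # loop branch with one exit
correct = 0
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
# correct == 16 of 17: only the loop-exit iteration mispredicts
```

A 1-bit predictor would mispredict twice per loop exit (once leaving, once re-entering), which is why the extra hysteresis bit pays off on loop-heavy code.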
  • Item
    Pipeline scheduling and loop unrolling
    (2024) Hadimlioglu, Ismail Alihan
Introduction: What is Instruction Level Parallelism (ILP)? The concept of pipelining in a CPU; data dependencies and their impact on performance
  • Item
    Pipelining hazards and performance
    (2024) Hadimlioglu, Ismail Alihan
Pipelining Hazards: Structural Hazards: insufficient hardware resources; Data Hazards: reading data before it is written; Control Hazards: branch instructions and stalls
  • Item
    Instruction-Level parallelism and pipelining
    (2024) Hadimlioglu, Ismail Alihan
    Introduction: What is Computer Performance? Execution Time, Instructions Per Second (IPS), Clock Speed (GHz), Clock Cycles Per Instruction (CPI)
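The metrics listed above combine into the classic CPU performance equation, execution time = instruction count × CPI × clock cycle time. A quick sketch (function name and numbers are ours, for illustration only):

```python
def execution_time(instruction_count, cpi, clock_ghz):
    """CPU execution time in seconds:
    instructions x cycles-per-instruction x seconds-per-cycle."""
    cycle_time_s = 1.0 / (clock_ghz * 1e9)  # a 2 GHz clock -> 0.5 ns cycles
    return instruction_count * cpi * cycle_time_s

# 1 billion instructions at CPI 2.0 on a 2 GHz clock take 1 second
t = execution_time(1_000_000_000, 2.0, 2.0)
```

The equation makes the design trade-offs explicit: halving CPI (e.g. via pipelining) buys as much as doubling the clock rate, which is why architects optimize all three factors rather than clock speed alone.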
  • Item
    Memory optimizations
    (2024) Hadimlioglu, Ismail Alihan
Introduction: Importance of Memory: bottleneck in performance; large capacity gap between registers and main memory. Challenges of Memory Access: latency (access time); bandwidth (data transfer rate)
  • Item
    Memory hierarchy design
    (2024) Hadimlioglu, Ismail Alihan
    Introduction: What is Memory Hierarchy? A layered system for storing and accessing data in a computer, Organized based on speed, capacity, and cost
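A standard way to quantify how well a layered hierarchy works is average memory access time (AMAT); the abstract does not state the formula, so the following is our illustrative sketch with made-up numbers:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time for one cache level, in cycles:
    every access pays the hit time; misses additionally pay the penalty."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical L1: 1-cycle hit, 5% miss rate, 100-cycle trip to memory
level1 = amat(1.0, 0.05, 100.0)        # averages out to about 6 cycles

# The penalty of one level is itself the AMAT of the level below it,
# so adding a (hypothetical) L2 nests naturally:
level2 = amat(10.0, 0.2, 100.0)        # L2 seen from L1's misses
with_l2 = amat(1.0, 0.05, level2)      # L1 whose misses go to L2
```

The nesting shows why even a slow, modest L2 helps: it shrinks the effective penalty that the L1 miss rate multiplies.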
  • Item
    Amdahl's Law
    (2024) Hadimlioglu, Ismail Alihan
Introduction: What is Parallel Processing? Executing tasks simultaneously using multiple processing units. Benefits of Parallel Processing: increased throughput (more tasks completed per unit time) and reduced execution time for specific tasks
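Amdahl's Law itself bounds those benefits: if only a fraction p of the work parallelizes, overall speedup on n units is 1 / ((1 - p) + p/n). A minimal sketch (function and argument names are ours):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Overall speedup from Amdahl's Law when parallel_fraction of the
    work runs on n_units and the rest stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_units)

# 90% parallel work on 10 units gives about 5.26x, not 10x;
# even with unlimited units the 10% serial part caps speedup at 10x.
s = amdahl_speedup(0.9, 10)
```

The cap 1/(1 - p) is the key lesson: the serial fraction, not the unit count, dominates the limit.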
  • Item
    Power, energy, and cost in computer architecture
    (2024) Hadimlioglu, Ismail Alihan
    Introduction: We will explore the critical aspects of power, energy, and cost in computer architecture. We will discuss the design considerations, components, and mathematical models used to optimize these factors in modern computing systems.
  • Item
    Instruction set architecture (ISA) and RISC-V
    (2024) Hadimlioglu, Ismail Alihan
    What is an ISA? The bridge between software (instructions) and hardware (CPU). Defines the set of instructions a processor can understand and execute. Dictates how data is processed and programs are run.
  • Item
    Fundamentals of computer design: Advanced computer architecture
    (2024) Hadimlioglu, Ismail Alihan
Introduction: In this topic, we will explore: What is Computer Design, Historical Context, Key considerations that go into creating efficient and powerful computer systems
  • Item
    Introduction to advanced computer architecture
    (2024) Hadimlioglu, Ismail Alihan
    Introduction: What is Computer Architecture? The science and art of designing, structuring, and optimizing the core components of a computer system. Defines how instructions are processed, data is accessed, and system components communicate. Acts as the bridge between the software (instructions) and hardware (physical components). Provides a foundation for understanding the trade-offs between performance, power consumption, and cost in advanced systems. Enables an in-depth analysis of advanced techniques like pipelining, superscalar execution, and memory hierarchies. Forms the basis for optimizing software for specific architectures and exploiting their capabilities. Helps in appreciating the challenges and opportunities presented by emerging technologies like multicore processors and GPUs.
  • Item
    COSC 6351.001: Advanced computer architecture
    (2024) Hadimlioglu, Alihan
    Advanced Computer Architecture COSC 6351.001, Department of Computer Science, Fall 2024: Catalog Course Description: An overview of computer architecture, which stresses the underlying design principles and the impact of these principles on computer performance. General topics include design methodology, processor design, control design, memory organization, system organization, and parallel processing.
  • Item
    Terrestrial lidar data classification based on raw waveform samples versus online waveform attributes
    (IEEE, 2021-12-03) Pashaei, Mohammad; Starek, Michael J.; Glennie, Craig L.; Berryhill, Jacob
In this study, the potential of raw samples of digitized echo waveforms collected by full-waveform (FW) terrestrial laser scanning (TLS) for point cloud classification is investigated. Two different TLS systems are employed, both equipped with a waveform digitizer for access to the raw waveform and online waveform processing, which assigns calibrated waveform attributes to each point measurement. Point cloud classification based on samples of the raw single-peak echo waveform is compared with point cloud classification based on the calibrated online waveform attributes. A deep convolutional neural network (DCNN) is designed for the supervised classification. A random forest classifier is used as a benchmark to evaluate the performance of the proposed DCNN model. In addition, feature importance and temporal stability of the raw waveform samples versus the calibrated waveform attributes for point cloud classification are reported. Classification results are evaluated at two study sites, a built environment on a university campus and a coastal wetland environment. Results show that direct classification of the raw waveform samples outperforms classification based on the set of waveform attributes at both study sites. Results also show that the contribution of the range, as the only geometric attribute in the raw waveform feature vector, significantly increases the classification performance. Finally, the performance of the DCNN for filtering ground points to generate a digital terrain model (DTM) based on classification of the raw waveform samples is assessed and compared to a DTM generated from a progressive morphological filter and to real-time kinematic (RTK) GNSS survey data.
  • Item
The impact of digitalized community-based square-stepping exercise program on cognitive and balance functions among older adults living in senior facilities: A pilot study
    (2024-02-16) Lee, Kyoung Eun; Boham, Mikaela; Zhang, Meng; Ro, YoungHee; Cong, Xiaomei; Huang, Yuxia
Objectives: Older adults exhibit a high desire for active and healthy aging without physical or mental dysfunction, particularly those living independently in senior facilities. Preserving or improving cognitive function and minimizing fall risks are essential for older adults to live a happy and active lifestyle. The purpose of this pilot study was to examine the feasibility, safety, and preliminary effectiveness of the innovative digitalized community-based square-stepping exercise program (DC-SSEP) in improving cognitive and physical function among older adults residing in senior facilities. Methods: Guided by the Health Promotion Model and Social Cognitive Theory, this pilot study used a quasi-experimental design with one intervention group. A total of 17 older adults recruited from a senior facility in Southern Texas participated in 40 sessions of DC-SSEP over 20 weeks. Cognitive function was measured using the latest version (8.1) of the Montreal Cognitive Assessment, and balance and functional mobility were measured using the Berg Balance Scale and the Timed Up and Go test. Results: Most participants were non-Hispanic white women. The DC-SSEP was a feasible and safe exercise program for older adults living in senior facilities, and the results showed the preliminary effectiveness of the DC-SSEP in improving cognitive and balance function (P < 0.01) among older adults. Conclusion: This pilot study is distinctive as it is among the first to evaluate the multi-layered impacts of DC-SSEP using Internet of Things (IoT) technology and integrated operating software in the United States. Despite the small sample size and homogeneity of participants, this pilot study suggests multiple valuable directions for future research using DC-SSEP.
  • Item
    Stage and discharge prediction from documentary time-lapse imagery
    (2024-04-16) Chapman, Kenneth W.; Gilmore, Troy E.; Mehrubeoglu, Mehrube; Chapman, Christian D.; Mittelstet, Aaron R.; Stranzi, John E.
Imagery from fixed, ground-based cameras is rich in qualitative and quantitative information that can improve stream discharge monitoring. For instance, time-lapse imagery may be valuable for filling data gaps when sensors fail and/or during lapses in funding for monitoring programs. In this study, we used a large image archive (>40,000 images from 2012 to 2019) from a fixed, ground-based camera that is part of a documentary watershed imaging project (https://plattebasintimelapse.com/). Scalar image features were extracted from daylight images taken at one-hour intervals. The image features were fused with United States Geological Survey stage and discharge data as response variables from the site. Predictions of stage and discharge for simulated year-long data gaps (2015, 2016, and 2017 water years) were generated from Multi-layer Perceptron, Random Forest Regression, and Support Vector Regression models. A Kalman filter was applied to the predictions to remove noise. Error metrics were calculated, including Nash-Sutcliffe Efficiency (NSE) and an alternative threshold-based performance metric that accounted for seasonal runoff. NSE for the year-long gap predictions ranged from 0.63 to 0.90 for discharge and 0.47 to 0.90 for stage, with greater errors in 2016 when stream discharge during the gap period greatly exceeded discharge during the training periods. Importantly, and in contrast to gap-filling methods that do not use imagery, the high discharge conditions in 2016 could be visually (qualitatively) verified from the image data. Half-year test sets were created for 2016 to include higher discharges in the training sets, thus improving model performance. While additional machine learning algorithms and tuning parameters for selected models should be tested further, this study demonstrates the potential value of ground-based time-lapse images for filling large gaps in hydrologic time series data. Cameras dedicated to hydrologic sensing, including nighttime imagery, could further improve results.
  • Item
    Simplified indoor localization using Bluetooth beacons and received signal strength fingerprinting with smartwatch
    (2024-03-25) Bouse, Leana; King, Scott A.; Chu, Tianxing
    Variations in Global Positioning Systems (GPSs) have been used for tracking users’ locations. However, when location tracking is needed for an indoor space, such as a house or building, then an alternative means of precise position tracking may be required because GPS signals can be severely attenuated or completely blocked. In our approach to indoor positioning, we developed an indoor localization system that minimizes the amount of effort and cost needed by the end user to put the system to use. This indoor localization system detects the user’s room-level location within a house or indoor space in which the system has been installed. We combine the use of Bluetooth Low Energy beacons and a smartwatch Bluetooth scanner to determine which room the user is located in. Our system has been developed specifically to create a low-complexity localization system using the Nearest Neighbor algorithm and a moving average filter to improve results. We evaluated our system across a household under two different operating conditions: first, using three rooms in the house, and then using five rooms. The system was able to achieve an overall accuracy of 85.9% when testing in three rooms and 92.106% across five rooms. Accuracy also varied by region, with most of the regions performing above 96% accuracy, and most false-positive incidents occurring within transitory areas between regions. By reducing the amount of processing used by our approach, the end-user is able to use other applications and services on the smartwatch concurrently.
  • Item
    Lower-Dimensional model of the flow and transport processes in thin domains by numerical averaging technique
    (2023-12-25) Vasilyeva, Maria; Mbroh, Nana Adjoah; Mehrubeoglu, Mehrube
    In this work, we present a lower-dimensional model for flow and transport problems in thin domains with rough walls. The full-order model is given for a fully resolved geometry, wherein we consider Stokes flow and a time-dependent diffusion–convection equation with inlet and outlet boundary conditions and zero-flux boundary conditions for both the flow and transport problems on domain walls. Generally, discretizations of a full-order model by classical numerical schemes result in very large discrete problems, which are computationally expensive given that sufficiently fine grids are needed for the approximation. To construct a computationally efficient numerical method, we propose a model-order-reduction numerical technique to reduce the full-order model to a lower-dimensional model. The construction of the lower-dimensional model for the flow and the transport problem is based on the finite volume method and the concept of numerical averaging. Numerical results are presented for three test geometries with varying roughness of walls and thickness of the two-dimensional domain to show the accuracy and applicability of the proposed scheme. In our numerical simulations, we use solutions obtained from the finite element method on a fine grid that can resolve the complex geometry at the grid level as the reference solution to the problem.
  • Item
    Aggregation strategies to improve XAI for geoscience models that use correlated, high-dimensional rasters
    (2023-10-30) Krell, Evan; Kamangir, Hamid; Collins, Waylon; King, Scott A.; Tissot, Philippe
Complex machine learning architectures and high-dimensional gridded input data are increasingly used to develop high-performance geoscience models, but model complexity obfuscates their decision-making strategies. Understanding the learned patterns is useful for model improvement or scientific investigation, motivating research in eXplainable artificial intelligence (XAI) methods. XAI methods often struggle to produce meaningful explanations of correlated features. Gridded geospatial data tends to have extensive autocorrelation, so it is difficult to obtain meaningful explanations of geoscience models. A recommendation is to group correlated features and explain those groups. This is becoming common when using XAI to explain tabular data. Here, we demonstrate that XAI algorithms are highly sensitive to the choice of how we group raster elements. We demonstrate that reliance on a single partition scheme yields misleading explanations. We propose comparing explanations from multiple grouping schemes to extract more accurate insights from XAI. We argue that each grouping scheme probes the model in a different way so that each asks a different question of the model. By analyzing where the explanations agree and disagree, we can learn information about the scale of the learned features. FogNet, a complex three-dimensional convolutional neural network for coastal fog prediction, is used as a case study for investigating the influence of feature grouping schemes on XAI. Our results demonstrate that careful consideration of how each grouping scheme probes the model is key to extracting insights and avoiding misleading interpretations.