Found 152 Papers

Volume 2014
Joint Terminal Attack Controller-Training Rehearsal System: Competency-based Research

Year: 2014

Authors: Sean Morris, Christine Covas-Smith, Leah Rowe, Christina Kunkle, Keith Westheimer

Abstract: The Joint Terminal Attack Controller (JTAC) warfighter is responsible for supporting Army maneuver commanders by controlling aircraft and weapons employment in Close Air Support (CAS) environments. JTACs are typically co-located with Army units; however, they are required to communicate and collaborate with a number of personnel operating external to their locations. JTACs must maintain a significant level of proficiency by regularly training in both live and simulation environments. Due to the decreasing availability of aircraft to aid in live training events and limitations in simulator technology, training gaps have arisen that hinder the opportunities for JTACs to achieve required levels of proficiency. This paper introduces an ongoing effort to create a robust JTAC training environment: the Joint Terminal Attack Controller-Training Rehearsal System (JTAC-TRS). The JTAC training gaps are being assessed and explored within the JTAC-TRS using problem-based learning approaches and an analysis of Mission Essential Competencies℠ (MECs). Using MECs, we have identified the primary and supporting competencies, knowledge, skills, and developmental experiences that a JTAC must have to execute the mission effectively. Preliminary evaluations of this system demonstrate that the JTAC-TRS has reduced these training gaps by 50%. We will present data regarding the identified training gaps and how they are addressed in this unique training environment.

Downloads are free for the duration of 2018.
Development of a Microscopic Artificially Intelligent Traffic Model for Simulation

Year: 2014

Authors: Viral Raghuwanshi, Sarthak Salunke, Yunfei Hou, Kevin Hulme

Abstract: Roadway safety continues to be a major public health concern. Recent statistics show that more than 30,000 fatalities occur each year due to motor vehicle accidents, and in 2012, motor vehicle crashes resulted in more than 2 million injuries. As a result of these ongoing trends, simulators continue to become more abundant in applications ranging from Intelligent Transportation Systems (ITS) research and autonomous driving to human factors studies, rehabilitation, driver training, and workload applications. However, many current simulators lack realism with regard to accompanying traffic, which often does not satisfactorily respond to the real-time actions of the human subject operating the simulation. Artificial traffic simulation models found within many modern-day driving simulators are often "macroscopic" in nature: they aggregate the description of overall traffic flow, which is based on "idealistic" driver behavior. This lack of network realism (particularly in the vicinity of the human subject operating the simulator) limits the application scope. In this paper, we evaluate traffic simulation models for supporting next-generation ITS research applications. This survey justified the need for the design and development of a microscopic Artificially Intelligent Traffic Model (AITM) intended for civilian ground vehicle research applications. The AITM generates a fleet of semi-intelligent vehicles with which a human driver interacts within a virtual driving simulation environment. The behavior of the vehicles is based upon the basic principles of rigid body physics and real-time collision detection, and includes a rule base for road-appropriate travel speed, behavior at intersections (e.g., stop signs, street lights), and interactions with other AI and human-driven vehicles on the virtual roads (i.e., lane changing, headway distance).
In this paper, the design and development of the baseline AITM are described, and a use-case application is presented, along with recommendations for improvements identified following deployment of the preliminary model.
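A "microscopic" rule base of the kind the abstract describes updates each vehicle individually from its local situation (speed limit, headway, intersection rules) rather than from aggregate flow. The sketch below is purely illustrative; the function name and thresholds are assumptions, not the authors' AITM implementation.

```python
def update_speed(speed, speed_limit, gap_to_leader, safe_headway,
                 at_stop_sign, accel=2.0, decel=4.0, dt=0.1):
    """Return a vehicle's speed (m/s) for the next simulation step.

    A minimal per-vehicle rule base: brake when intersection rules or
    headway demand it, otherwise accelerate toward the road-appropriate
    travel speed. All parameters are illustrative.
    """
    if at_stop_sign or gap_to_leader < safe_headway:
        # Yield at the intersection or restore a safe following distance.
        return max(0.0, speed - decel * dt)
    if speed < speed_limit:
        # Free road ahead: accelerate toward the posted speed.
        return min(speed_limit, speed + accel * dt)
    return speed
```

Calling this once per vehicle per time step, with `gap_to_leader` taken from real-time collision detection, yields traffic that reacts to the human driver rather than following a precomputed aggregate flow.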

Fusing Self-Reported and Sensor Data from Mixed-Reality Training

Year: 2014

Authors: Trevor Richardson, Stephen Gilbert, Joseph Holub, Frederick Thompson, Anastacia MacAllister, Rafael Radkowski, Eliot Winer, Paul Davies, Scott Terry

Abstract: Military and industrial use of smaller, more accurate sensors is allowing increasing amounts of data to be acquired at diminishing costs during training. Traditional human subject testing often collects qualitative data from participants through self-reported questionnaires. This qualitative information is valuable but often insufficient for assessing training outcomes. Quantitative information such as motion tracking data, communication frequency, and heart rate can offer the missing pieces in training outcome assessment. The successful fusion and analysis of qualitative and quantitative information sources is necessary for collaborative, mixed-reality, and augmented-reality training to reach its full potential. The challenge is determining a reliable framework for combining these multiple types of data. Methods were developed to analyze data acquired during a formal user study assessing the use of augmented reality as a delivery mechanism for digital work instructions. A between-subjects experiment was conducted to analyze the use of a desktop computer, mobile tablet, or mobile tablet with augmented reality as a delivery method for these instructions. Study participants were asked to complete a multi-step technical assembly. Participants' head position and orientation were tracked using an infrared tracking system. User interaction in the form of interface button presses was recorded and time-stamped on each step of the assembly. A trained observer took notes on task performance during the study through a set of camera views that recorded the work area. Finally, participants each completed pre- and post-surveys involving self-reported evaluation. The combination of quantitative and qualitative data revealed trends, such as the most difficult tasks on each device, which would have been impossible to determine from self-reporting alone.
This paper describes the methods developed to fuse the qualitative data with the quantitative measurements recorded during the study.
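One basic fusion step implied above is aligning time-stamped interface events (quantitative) with per-step self-reported ratings (qualitative) so that, for example, step durations can be ranked against perceived difficulty. The sketch below shows that alignment on made-up data; the field names and schema are assumptions, not the study's actual instrumentation.

```python
def fuse_step_data(press_times, difficulty):
    """Fuse time-stamped button presses with self-reported ratings.

    press_times -- ordered list of (step_id, timestamp_s) events; each
                   press marks the start of that step.
    difficulty  -- dict step_id -> self-reported rating (1 = easy, 5 = hard).
    Returns (step_id, duration_s, rating) tuples, longest step first.
    """
    durations = {}
    for (step, t0), (_, t1) in zip(press_times, press_times[1:]):
        # Duration of a step = time until the next step's press.
        durations[step] = t1 - t0
    fused = [(s, d, difficulty.get(s)) for s, d in durations.items()]
    return sorted(fused, key=lambda row: row[1], reverse=True)

# Hypothetical session: four presses bracket three assembly steps.
presses = [(1, 0.0), (2, 42.5), (3, 61.0), (4, 130.0)]
ratings = {1: 2, 2: 1, 3: 5}
fused = fuse_step_data(presses, ratings)  # step 3 ranks longest
```

Once the two sources share a key (the step id), per-step sensor summaries such as head motion or heart rate can be joined in the same way.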

Beyond Socio-Cultural Sensemaking: Observing and Interpreting Patterns of Life

Year: 2014

Authors: Tracy Benoit, Clarissa Graffeo

Abstract: Military leaders have identified a need for socio-cultural sensemaking capabilities to support operations in irregular conflicts. However, training programs lack practical applied techniques for such sensemaking. For example, observational training programs such as Combat Hunter instruct warfighters to set socio-cultural baselines, but provide little specific instruction on relevant sensemaking processes; furthermore, little of the existing or proposed socio-cultural training robustly integrates field-tested methodologies and concepts from anthropology or other social sciences. A related issue involves the overemphasis that training and policy recommendations place on “culture” as a rigid concept. Warfighters may overcompensate by focusing too much on culture over other relevant factors, and often treat culture as a fixed entity that can be read at a superficial level. Culture, however, is a fluid construct without fixed boundaries that constantly interacts with other factors and situational exigencies; the complexity of social systems alongside cultural mixing and shifting within operational environments demands a more holistic model. In a 2012 I/ITSEC paper, our team outlined a concept of archetypal, cross-cultural Patterns of Life for training in virtual environments. In this paper we propose a revised concept of Patterns of Life as a critical thinking framework that extends beyond culture to incorporate human and non-human actors, practices, functions, environmental interactions, and temporal, cultural, and situational contexts that better reflect social science theories. 
We also draw on prior perceptual training and ethnographic methodologies to define an Ethnographically-informed Sensemaking Protocol consisting of a nested, iterative process of framing and baseline construction that supports both individual encounters and the entirety of a warfighter’s deployment; this will improve sensemaking and framing baselines in complex, uncertain environments, and allow applicability across operational environments. We discuss the theoretical foundations of this revised approach, and then provide a brief summary of the current state of the framework and protocol.

Integration of Low-Cost HMD Devices in Existing Simulation Infrastructure

Year: 2014

Authors: Tomer Michael, Yaniv Minkov, Rami Rockah

Abstract: Recent years have seen a sharp increase in the availability of low-cost Head Mounted Display (HMD) devices or, as they are colloquially referred to, "VR goggles". These devices pair live tracking of the orientation of the user's head with a full stereoscopic view of a 3D environment, providing users with the illusion that they have been transported to a virtual world where they are free to look around in a realistic manner. It is this functionality that brought the HMD to the attention of the research simulation world, in particular because of the field's vested interest in providing its test subjects with the most realistic experience possible within a virtual environment. However, the task of integrating the HMD presented a set of unique challenges, from the logistical, such as the lack of visibility between the human subject and input devices, to the physiological, such as the potential and prominent increase in so-called "simulation sickness" (a subset of motion sickness) sometimes associated with even short encounters with the device. These phenomena raise questions regarding the HMD's usefulness in research environments, where unintended side effects directly clash with the realism of the virtual environment and, by extension, with the validity of a given experiment's results. This paper describes an attempt made between 2013 and 2014 to integrate Oculus VR's "Rift" HMD with the IDF Ground Forces Command Battle-Lab's existing simulation infrastructure. It discusses solutions and lessons learned from the integration of the device, technical hurdles encountered in making an HMD work with simulators built on existing frameworks such as Vega Prime and Virtual Battlespace, and the different methodologies explored for setting up or converting simulators for use with an HMD, along with their respective effectiveness with human participants.

Lessons Learned Integrating Mobile Technology into Two Army Courses

Year: 2014

Authors: Gregory Goodwin, Michael Prevou, Heather Wolters, Holly Baxter, Mike Hower, Linda McGurn

Abstract: As the Army considers using mobile computing to improve training and assessment, it must be confident in the benefits of that technology and, more importantly, it must be able to articulate the requirements needed to achieve those benefits. Although mobile devices and software have proven extremely popular in the commercial market, research is needed to identify both the benefits and requirements of this technology before the Army considers its wholesale adoption for training and education. This paper reports on the results of using mobile devices in two Army courses: the Signal Captains Career Course (SCCC) and the School for Command Preparation (SCP). The software developed for the SCCC was an interactive performance assessment tool for the topic of power distribution, while the software developed for the SCP was a practice tool for media engagement. In the first experiment, 182 SCCC students took either the traditional paper-and-pencil practical exercise or the interactive tablet-based version. The tablet-based version significantly reduced the time needed to complete the exercise (1 h vs. 3 h) without affecting student understanding of the topic. In the second experiment, 161 SCP students practiced for the final exercise (a mock media engagement) with and without the aid of a tablet-based practice tool. Although the group using the app reported practicing more, their performance on the final exercise was the same as that of those who practiced without the app. These findings indicate that although mobile technologies have the potential to benefit students and instructors, neither the magnitude nor the type of benefit is easy to predict at this point. These findings and other lessons learned are used as the basis for a proposed strategy for developing mobile applications for use in Department of Defense training.

A Novel Approach to Determine Integrated Training Environment Effectiveness

Year: 2014

Author: Glenn Hodges

Abstract: This paper discusses the development and use of an analytical assessment methodology anchored in systems engineering principles, affordance theory, and human abilities, to measure the potential of integrated training environments (ITE) to effectively support training. An integrated training environment is defined here as any human-in-the-loop training system that includes live, virtual, constructive, or game-based training aids, devices, simulators, or simulations (TADSS), alone or in combination, used to support the deliberate practice of skills for defined mission tasks. Empirical investigation of ITE is costly, lacks formal guidance, and is therefore often unreliable. Ad hoc studies, commissioned by individual organizations, constitute the current state of Army ITE evaluation. These assessments are often based entirely on subjective opinions gained through surveys, which produce results that are linked only indirectly and loosely to the ITE. What is required is a repeatable, inexpensive, analytical approach to ITE assessment that ties the potential of a given system to the support it provides to the deliberate practice of specific tasks. The results of this research include the development and use of the integrated training environment assessment methodology (ITEAM). ITEAM was used to evaluate the ability of several ITE to support the deliberate practice of specific tasks during training. During application, ITEAM consistently predicted where training was supported by an ITE and, generally, how well. ITEAM is offered as a tool to be used early in the materiel acquisition process to affordably define and verify the requirements of candidate ITE solutions for Department of Defense needs.

The Largest Field of View Collimated Display Ever Built

Year: 2014

Authors: Justin Knaplund, Dave Fonkalsrud, Terry Linn

Abstract: As flight simulators increase in fidelity and performance, more training tasks can be transferred to the simulator, freeing up aircraft time for other tasks. For some training missions, there are tasks that can currently only be performed in the aircraft due to limitations in the simulator Field of View (FOV). In addition, a large horizontal FOV aids the pilot's peripheral cues for aircraft attitude, speed, and height above terrain, and lower chin and side displays are required for helicopter pilots to perform hover and landing tasks, especially in "brown-out" conditions. Since Mylar displays are not typically able to extend beyond 65° vertical × 225° horizontal, the customer would have to add supplemental real-image or collimated displays located outboard of the Mylar mirror plenum, resulting in a large discontinuity in the image. A better option is to use glass mirrors for the Out-the-Window collimated display, extending the FOV by adding glass mirror segments to achieve a 300° horizontal FOV. Supplemental chin and side displays can be tucked under the edge of the mirror to eliminate gaps between the displays and extend the vertical FOV down to −65°. However, designing and building such a large FOV display has its own challenges, including engineering a single-piece Back Projection Screen (BPS) to cover the full FOV, manufacturing a matched array of glass mirrors, designing a projector turret that locates the array of projectors across the top of the BPS, and fitting the cockpit and Instructor Operating Station within the wedge-shaped gap left between the ends of the mirrors and/or BPS. This paper will focus on the unique challenges our team overcame to build the largest collimated system ever designed, the 300° × 85° FOV display for the US Marine Corps UH-1Y Flight Training Device, and how these lessons can apply to enhancing the FOV of other flight simulators.

Using Virtual Reality as Part of an Intensive Treatment Program for PTSD

Year: 2014

Authors: Deborah Beidel, Sandra Neer, Clint Bowers, Christopher Frueh, Albert Rizzo

Abstract: Up to 18.5% of veterans returning from OIF/OEF are diagnosed with posttraumatic stress disorder (PTSD). In addition to symptoms of anxiety (intrusive thoughts, re-experiencing, hyperarousal, and avoidance), PTSD can result in social maladjustment, poor quality of life, and medical problems. Associated problems include guilt, anger, unemployment, impulsive or violent behavior, and family discord. Many veterans seeking treatment for PTSD also seek disability compensation for debilitating occupational impairment. There are few administrative or research data to indicate that veterans are recovering from PTSD. Exposure therapy, a form of behavior therapy, alleviates anxiety symptoms but may not address the anger, depression, and social impairment that accompany this disorder. In this presentation, we will discuss an intensive treatment program, known as Trauma Management Therapy (TMT), which combines individual virtual reality (VR)-assisted exposure therapy with group social and emotional rehabilitation skills training, delivered in a 3-week format. The presentation will demonstrate the VR environment (Virtual Iraq), discuss how often and how successfully various VR elements are integrated into a comprehensive treatment program, and address the adaptability of the program for active duty military personnel as well as veterans. We will discuss the format of the intensive program as well as factors such as compliance and drop-out rates, comparing these important clinical variables to those of more traditional outpatient treatment programs. Additionally, we will address common clinical concerns regarding the use of VR exposure therapy for individuals suffering from PTSD.

Forces Applied on Laryngoscope during Intubation: A Study on Airway Simulators

Year: 2014

Authors: Matthew Mui, Christine Allen, Mojca Konia, Jack Stubbs, David Hananel

Abstract: Excessive forces applied during endotracheal intubation may cause damage to laryngeal structures, leading to patient morbidity and mortality. Simulators are widely used for intubation training, but recent studies have shown significant differences in airway anatomy as well as in forces applied during intubation when compared to humans. This study assesses the differences in intubation training on three partial-task trainers, the TruCorp AirSim Standard, the Laerdal Airway Management Trainer, and the VBM Air Management Simulator Bill I, against cadavers. Objective force measurements and subjective ratings of difficulty and force used were collected. Using ANOVA and paired t-tests, endotracheal intubation on simulators was found to have significantly different force profiles (i.e., locations and magnitudes of the applied forces) in comparison to cadavers. In particular, the Laerdal Airway Management Trainer differed in all three measurement variables, namely torque applied on the laryngoscope, force applied at the laryngoscope tip, and force exerted on the simulator's teeth. These findings are further supported by the surveys of the participants in the Laerdal group. For the TruCorp and VBM simulators, significant differences were found only in torque and tip forces, respectively. These results suggest that a simulator offering more realistic endotracheal intubations may be necessary for airway management training. In addition, this study sets a foundation for future studies to further elucidate the effects of various airway simulators on intubation training.
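A paired t-test of the kind cited above compares matched measurements (e.g., the same participants intubating a simulator and a cadaver) by testing whether the mean of the per-pair differences is zero. The sketch below shows the statistic on made-up numbers; the data and function name are illustrative, not the study's results.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired t statistic for two matched samples of equal length."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical peak laryngoscope tip forces (N) from the same five
# participants on a simulator and on a cadaver:
simulator = [12.1, 15.3, 11.8, 14.0, 13.2]
cadaver = [9.4, 11.0, 10.1, 10.8, 9.9]
t = paired_t(simulator, cadaver)
# |t| is then compared against the critical value for n - 1 = 4
# degrees of freedom to decide significance.
```

In practice a statistics package (e.g., SciPy's `scipy.stats.ttest_rel`) would also return the p-value; the point here is only the pairing of measurements before testing.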
