Found 134 Papers

Volume 2016
Blended Training for Surgeon Education

Year: 2016

Authors: Roger Smith, Alyssa Tanaka, Danielle Julian, Patricia Mattingly

Abstract: Complexity in surgery lies at the crossroad between the complexity of the human body and the capabilities of the surgical tools available. While we continue to improve our understanding of the body, we are also inventing new tools to address and correct issues through surgery. As a result, the complexity of surgery is expanding on two fronts simultaneously. This creates a lifetime learning environment for practitioners and a challenge for the systems which educate, measure, certify, regulate, and privilege surgeons. The models of training in this field are slow to evolve and still rest on a foundation of lecture and hands-on practice which has changed little in 100 years. Surgeons largely believe that real hands-on practice with human tissue – excised organs, cadavers, and live patients – is the most effective form of training. But it is also the most expensive, difficult to facilitate, and least accessible form. The emergence and maturation of the concept of blended learning in public and military education may prove equally valuable in surgical education and training. Creating a learner-centric environment in which multiple modes of education are encouraged, available, integrated, and accredited can potentially increase the level of competence of new surgeons, maintain competence in practicing surgeons, and provide objective metrics to the public and hospital systems. This paper defines a framework for blended surgical training using principles developed for the military. This framework includes knowledge- and skills-based training in both individual and group learning environments, drawing together distance and e-learning sessions, face-to-face engagements, laboratory events, and operating room experiences, modes of surgical education that currently are not integrated into a coherent program with defined metrics.
The goal of the framework is to apply blended learning principles to the surgical education and training community, with reference to prior activities in public and military education.

Downloads are free for the duration of 2018. Log in with your account to download.
The Reference Model for Disease Progression Combines Disease Models

Year: 2016

Author: Jacob Barhak

Abstract: Simulations based on disease models can be used as a training tool for patient education or caregiver support to improve effectiveness of practice. Disease models provide predictions of patient outcomes, costs, and quality of life information over time. They are typically constructed by various teams around the world based on local data, and currently these models do not validate well against multiple external populations. Therefore, a universal understanding of disease progression remains elusive. The Reference Model for Disease Progression addresses this problem by implementing a league of disease models that compete for fitness towards publicly available clinical data. Currently, diabetic populations are of interest and data for the model is drawn from published clinical trials, while model building blocks are based on published risk equations and modeling hypotheses. The Reference Model creates model combinations from those building blocks, then simulates and validates them against multiple populations. High Performance Computing (HPC) techniques are used to cope with the combinatorial number of models that, until recently, took roughly a year of computation time on a single processor. Recently new building blocks have been added to the model, and the number of combinations became too large to compute in reasonable time without access to a large cluster. To cope with this, the structure of the model was changed from a competitive discrete ensemble model, where building blocks are selected from a discrete pool of options to construct the best model, to a cooperative continuous ensemble model, where all building blocks are merged using linear combination. Moving the model toward continuous combination space allowed creation of an assumption engine that employs optimization algorithms to better deduce the most fitting model. This significantly reduces simulation time and produces a better fitting model.
This paper will focus on recent modifications, and results obtained from the latest simulations.
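The cooperative continuous ensemble described above can be illustrated with a minimal sketch: two hypothetical risk equations are merged by a linear combination whose mixing weight is fitted against observed population event rates. The equations, population summaries, and grid search below are all invented for illustration and are not the paper's actual building blocks or assumption engine.

```python
# Illustrative sketch: merging two hypothetical risk "building blocks" into a
# cooperative continuous ensemble and fitting the mixing weight to data.

def risk_model_a(age):
    """Hypothetical published risk equation A (placeholder coefficients)."""
    return min(1.0, 0.002 * age)

def risk_model_b(age):
    """Hypothetical published risk equation B (placeholder coefficients)."""
    return min(1.0, 0.05 + 0.001 * age)

def ensemble_risk(age, w):
    """Linear combination of building blocks: the continuous ensemble."""
    return w * risk_model_a(age) + (1.0 - w) * risk_model_b(age)

def fit_weight(populations, steps=1000):
    """Pick the mixing weight that best matches observed event rates across
    several populations (a crude stand-in for an optimization-based
    assumption engine)."""
    best_w, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        err = sum((ensemble_risk(mean_age, w) - observed) ** 2
                  for mean_age, observed in populations)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Invented "clinical trial" summaries: (mean age, observed event rate).
populations = [(55, 0.108), (62, 0.118), (70, 0.13)]
w = fit_weight(populations)
print(round(w, 3))
```

In the real model the optimization runs over many building blocks and populations at once; the point here is only that a continuous weight space lets standard optimizers replace an exhaustive search over discrete model combinations.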

Assessment and Content Authoring in Semantic Virtual Environments

Year: 2016

Authors: Christian Greuel, Karen Myers, Grit Denker, Melinda Gervasio

Abstract: Virtual environments (VEs) provide an appealing vehicle for training complex skills, particularly for domains where real-world practice incurs significant time, expense, or risk. Two impediments currently block widespread use of intelligent training tools for VEs. The first impediment is that techniques for assessing performance focus on algorithmic skills that force learners to follow rigid solution paths. The second impediment is the high cost of authoring the models that drive intelligent training capabilities. This paper presents an approach to training in VEs that directly addresses these challenges and summarizes its application to a weapons maintenance task. With our approach, a learner’s actions are recorded as he completes training exercises in a semantically instrumented VE. An example-tracing methodology, in which the learner’s actions are compared to a predefined solution model, is used to generate assessment information with contextually relevant feedback. Novel graph-matching technology, grounded in edit-distance optimization, aligns student actions with solution models while tolerating significant deviation. With this robustness to learner mistakes, assessment can support exploratory learning processes rather than forcing learners down fixed solution paths. Our approach to content creation leverages predefined ontologies, enabling authoring by domain experts rather than technology experts. A semantic mark-up framework supports authors in overlaying ontologies onto VE elements and in specifying actions with their effects. Drawing on these semantics, exercises and their solutions are created through end-user programming techniques: a domain expert demonstrates one or more solutions to a task and then annotates those solutions to define a generalized solution model. A concept validation study shows that users are comfortable with this approach and can apply it to create quality solution models.
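The alignment idea can be sketched in miniature: compare a learner's recorded action sequence against a predefined solution sequence and score the deviation with edit distance. The actual system aligns graphs via edit-distance optimization rather than flat sequences, and the action names below are invented.

```python
# Minimal sketch of edit-distance-based action alignment: score how far a
# learner's action sequence deviates from a predefined solution model.

def edit_distance(learner, solution):
    """Classic Levenshtein distance between two action sequences."""
    m, n = len(learner), len(solution)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # all learner actions are extras
    for j in range(n + 1):
        d[0][j] = j  # all solution steps were skipped
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if learner[i - 1] == solution[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # extra learner action
                          d[i][j - 1] + 1,       # skipped solution step
                          d[i - 1][j - 1] + cost)  # match or substitution
    return d[m][n]

solution = ["open_panel", "disconnect_power", "remove_bolt", "swap_module"]
learner  = ["open_panel", "remove_bolt", "disconnect_power", "swap_module"]
print(edit_distance(learner, solution))  # -> 2 (two steps out of order)
```

A small distance can be treated as a tolerable deviation and annotated with feedback, rather than flagged as failure, which is what lets assessment support exploratory learning instead of a single rigid solution path.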

Alignment + Isolation + Analytics = Credible Evaluation Impact/ROI Results

Year: 2016

Author: Timothy R. Brock

Abstract: Increased demands for cost accountability and workplace performance improvement evidence are changing how government, military, and civilian organizations manage and fund their human capital development strategies. Training, simulation, and education are assuming greater importance. Global benchmarking trends indicate a shift in executive-level thinking, and expectations now require our profession to adopt business intelligence analytics capabilities to answer questions not only about what we do but also about what value we add to the bottom line, however that is defined. Training and simulation professionals are challenged with how to determine a direct cause-and-effect relationship between their training program and workplace performance. Most training programs struggle to demonstrate workplace impact and limit their evaluations to what occurs during the training event, commonly known as Level 1 and Level 2 evaluations. Training that involves simulation technology can assist, but requires real-time data to identify and target workplace behaviors that require the most urgent attention. If we do not isolate or specify the effect that our program delivered, versus the effects of other variables that can likewise influence changed behavior and resulting impact, we undermine the credibility of the training results we are claiming. How do we know our training is aligned to remedy the most pressing workplace competence needs of the organization? How do we know it was our training program that produced the results we claim? How can we collect credible, quantifiable data to answer these questions? This paper provides examples of how we can apply the two most widely used evaluation methods in the world, including military and healthcare training, to answer these questions.
This paper then integrates a new and innovative behavioral competency assessment capability using big data analytics technology to diagnose and evaluate individual and team role competency strengths and weaknesses. Insights gained identify opportunities to optimize the allocation of human capital and training resources to resolve the most pressing workplace behaviors affecting individual, team, and organizational performance.
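The isolation-then-ROI arithmetic the title alludes to can be sketched simply: a measured improvement is first discounted by the share attributed to the training program (and by confidence in that attribution) before computing ROI. The dollar figures and percentages below are invented, and this is a generic Phillips-style calculation, not the paper's specific method.

```python
# Hedged sketch of isolation-adjusted ROI arithmetic. All numbers invented.

def isolated_benefit(total_improvement, isolation_pct, confidence_pct):
    """Discount the measured improvement by the share attributed to the
    training program and by the estimator's confidence in that share."""
    return total_improvement * isolation_pct * confidence_pct

def roi_percent(benefit, cost):
    """ROI (%) = (net benefit / cost) * 100."""
    return (benefit - cost) / cost * 100.0

benefit = isolated_benefit(200_000, 0.60, 0.80)  # $96,000 attributed to training
print(round(roi_percent(benefit, cost=40_000)))  # -> 140
```

Skipping the isolation step and claiming the full $200,000 would have reported a 400% ROI, which is exactly the credibility problem the abstract warns about.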

Assembly Training Using Commodity Physiological Sensors

Year: 2016

Authors: Melynda Hoover, Anastasia MacAllister, Joseph Holub, Stephen Gilbert, Eliot Winer, Paul Davies

Abstract: Wearable technology is a thriving industry with projections for continued growth in the next decade and numerous unexplored applications. The U.S. Military has been on the forefront of this technology by supporting the research and development of these devices for today’s warfighters. Smartwatches with sensors that detect physiological responses, like heart rate, have particularly interesting applications to warfighters. These devices have the potential to detect user stress during many different tasks from field operations to maintenance. Specifically, this paper will analyze the use of commodity sensors for evaluating and improving Augmented Reality (AR) work instructions. These AR work instructions have been shown to improve accuracy and efficiency in assembly tasks, which is crucial to the maintenance of military fleets. The study described in this paper compares two different wrist sensors, the Apple Watch and the Empatica E4. The Apple Watch is a popular, low-cost commodity wrist sensor, while the Empatica E4 is a higher cost, medical grade sensor. Participants wore both sensors while assembling a mock aircraft wing using work instructions delivered through an AR system. During the study, data such as errors, completion time, and several self-reported measures were recorded in addition to heart rate. After the study was completed, the heart rate data was extracted from the devices and analyzed. The results showed that the Apple Watch was less reliable because of its lower sample rate and gaps in data possibly due to user hand movement. Alternatively, the Empatica E4 was able to identify heart rate differences in steps of high and low difficulty with a lower standard deviation within steps. Based on these results, it was determined that the Empatica E4 was a more viable sensor for evaluating AR work instructions and that commodity sensors most likely need improvement before use in an industrial or military setting.
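The per-step comparison the abstract describes boils down to summarizing heart rate within each assembly step: the mean separates high- from low-difficulty steps, and the within-step standard deviation reflects sensor consistency. The readings below are fabricated; this is not the study's data.

```python
# Sketch of a per-step heart-rate summary: mean and within-step standard
# deviation for assembly steps of differing difficulty. Readings fabricated.

from statistics import mean, stdev

def summarize_steps(readings_by_step):
    """Return {step: (mean_hr, sd_hr)} from raw heart-rate samples."""
    return {step: (mean(samples), stdev(samples))
            for step, samples in readings_by_step.items()}

e4_readings = {
    "easy_step": [72, 74, 73, 75, 74],
    "hard_step": [88, 91, 90, 92, 89],
}
for step, (m, sd) in summarize_steps(e4_readings).items():
    print(f"{step}: mean={m:.1f} bpm, sd={sd:.2f}")
```

With a sensor that drops samples (as the study reports for the Apple Watch), the within-step standard deviation inflates and the mean becomes less trustworthy, which is why sample rate and data gaps mattered so much to the comparison.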

Improving Simulation Training Debriefs: Mine Emergency Escape Training Case Study

Year: 2016

Authors: Tim Bauerle, Jennica Bellanca, Tim Orr, William Helfrich, Michael J. Brnich, Jr.

Abstract: While the debriefing literature has made clear the benefits of post-training review and feedback for trainees (e.g., Tannenbaum and Cerasoli, 2013), there appears to be little practical guidance on how to transform the wealth of information inherent in computer-based simulations into easily discernable, actionable feedback relevant to training debriefs. The present study takes an exploratory, research-to-practice approach to address this deficiency by using a mine emergency training simulation as a case study to determine some practical means of improving debriefing. This paper details the following steps of this exploratory process: (1) identify key features of successful debriefs by using relevant findings from debriefing and computer-based training literatures as an evidence-based guideline; (2) determine key trainee actions and behaviors as recorded by the Mine Emergency Escape Training (MEET) simulation, and finally, (3) determine relevant and feasible simulation variables that align with the debrief literature as suggested improvements to the current debrief program design. Future applications for mine emergency simulation training, debriefing research, and quantitative performance assessment in wayfinding tasks are discussed.

A Tactical Decision Trainer for Cross-Platform Command Teams

Year: 2016

Authors: Jared Freeman, Michael Tolland, Heather Priest, Melissa Walwanis, Charles Newton

Abstract: Warfare with near-peer adversaries will require extraordinary integration between warfare domains. For the Navy, this means that commanders of air and sea platforms must coordinate sensors and weapons through informed, timely, and decisive tactical command. Cross-platform command training is rare, as are simulations to support it. Virtual simulations enable operators to execute tactics on one platform, but the cross-platform challenge requires training in command (not execution) between (not within) aircraft and surface vessels. Constructive simulations represent multi-platform battles, but they generally do not enable trainees to coordinate realistically over decisions or manage the execution of complex orders; nor do they typically measure decision making and its effects, or directly support well-specified instructional strategies. A system of instructional techniques and technologies was developed to train cross-platform teams in tactical decision making. This solution consists of: (1) an instructional strategy in which users from air and sea platforms view the battle, argue their tactical options, and issue orders; (2) a graphical user interface on which students issue those orders simply by specifying the asset(s) and tactic(s) to apply to emerging threats; (3) the Next Generation Threat System (NGTS) simulation of the battle assets (platforms, sensors, weapons) and the battlespace; (4) an executive agent that transforms trainee orders into simulator commands without human control; (5) an engine that measures and assesses decisions and their effects; and (6) an After Action Review (AAR) system that reports events over time, reports performance on decision skills, and recommends discussion points during AAR. The Navy is putting this technology to work in training in 2016. This paper describes the instructional strategy, aspects of the system that support it, and feedback from users of the system.
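Components (2) and (4) above can be sketched as a data-structure exercise: an order is just asset(s) plus a tactic applied to a threat, and the executive agent expands it into per-asset simulator commands. All names and the command format below are invented; NGTS's real interface is not described in the abstract.

```python
# Illustrative sketch of a trainee order and an "executive agent" that
# expands it into simulator commands. Names and command format invented.

from dataclasses import dataclass, field

@dataclass
class Order:
    assets: list[str]   # platforms tasked, e.g. ["FFG-62"]
    tactic: str         # tactic to apply, e.g. "intercept"
    threat_id: str      # emerging threat the order addresses

def to_sim_commands(order: Order) -> list[str]:
    """Executive agent: expand one trainee order into per-asset commands,
    with no human in the loop between order and simulation."""
    return [f"TASK {asset} {order.tactic.upper()} TGT={order.threat_id}"
            for asset in order.assets]

order = Order(assets=["FFG-62", "F-35C"], tactic="intercept", threat_id="T-017")
for cmd in to_sim_commands(order):
    print(cmd)
```

Keeping the trainee's input at the level of "asset plus tactic" is what lets the system train command rather than execution: the simulator handles the low-level platform behavior.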

Virtual Dismounted Infantry Training: Requiem for a Dream

Year: 2016

Authors: Emilie A. Reitz, Kevin Seavey

Abstract: A fire team breaches a two story building, quickly suffering casualties; an infantry platoon comes under mortar fire while conducting a cordon and search supported by an RQ-7 Shadow and AC-130; a squad leader participates in a key leader engagement for the first time to disastrous outcomes. In all cases, the leader steps out of a virtual training environment and into a classroom where he guides his team through an after action review (AAR) that allows him to rewind, review, and truly understand the capabilities, and limitations, of his team and himself. The above scenario is still the ideal: a persistent, demanding training environment accessible from home station that provides joint and coalition context. While this ideal has not yet been achieved, it is within reach. Major strides were made toward this ideal for dismounted infantry units with the inception of the Army’s Dismounted Soldier Training System (DSTS) in 2012. However, the DSTS system suffered from a mismatch between trainee expectations and system capabilities. In response to the limitations of the system, the Army subsequently decided to downsize its DSTS inventory and transfer most of the remaining systems out of the active force. This paper focuses on gains in capability, usability and training transfer experienced with DSTS since 2011, analyzing feedback from multiple Services and nations across six mission types and multiple use cases. It provides an overview of four years of training transfer, systems feedback, and interoperability gains data spanning five experiments and over 200 participants, and the usability challenges which were recorded at every stage of the capability development. It offers recommended areas for future improvements, pitfalls to avoid, and describes future capabilities that could meet soldier expectations about training.

Recording Progress in Soldier Training and Development with an Experience API Database Environment

Year: 2016

Authors: Igor Franken, Richard Kist, Emilie A. Reitz

Abstract: When training Soldiers, it is crucial to monitor the progress being made in Soldier performance to evaluate the effectiveness of a new training method. Whether an organization is considering the effectiveness of training or evaluating Soldiers’ fitness for a task, the true performance cannot be judged without data. During two courses taken by military personnel in The Netherlands, and other data collections during the Bold Quest series, experimental use was made of an automated survey tool, QUEST, to record progress made by Soldiers and eventually to quantify course effectiveness. The data collection consisted of students’ course-content test scores and multiple-choice survey questions posed to the Soldiers. Included were factual tests and questions seeking subjective feedback on certain aspects of the training, looking for areas to improve. These aspects and demographics of the Soldiers were collected in a single database, connecting all data. The Bold Quest series of exercises provides an operational test environment where new technologies and training methods are demonstrated, and the subjective data collected was paired post-event with the objective systems data, providing another use case. When training feedback and effectiveness data is collected from several consecutive courses into a central database, insight into improvements in the training can be gained and trends clarified. It provides the means to correlate demographics, experience and skills, course scores, Soldiers’ feedback and direct tests. Training methods can be improved using this analysis by tailoring training to candidates based on backgrounds and skills. Correlating demographics, experience, and skills with demonstrated aptitude in training can also improve the effectiveness of recruiting.
This paper focuses on the benefits of using automated surveys for these training improvements, as well as the long-term goal of integrating this collection technique with other technology-enhanced learning capabilities like Experience API (xAPI) (Mueller, Dikke, Dahrendorf, 2014).
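The Experience API (xAPI) the paper points to records learner activity as "actor-verb-object" statements in JSON. Below is a minimal statement of that shape; the soldier name, email, and course IRIs are invented, but the actor/verb/object/result structure follows the xAPI specification.

```python
# Minimal xAPI-shaped statement: who did what to which learning activity.
# Names, mailto address, and course IRIs are invented placeholders.

import json

statement = {
    "actor": {
        "name": "Soldier A",
        "mbox": "mailto:soldier.a@example.mil",  # invented address
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.mil/courses/bold-quest-module-1",  # invented IRI
        "definition": {"name": {"en-US": "Bold Quest Module 1"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

print(json.dumps(statement, indent=2))
```

Because every statement carries the same structure, statements from surveys, course tests, and exercise systems can land in one Learning Record Store and be correlated, which is exactly the central-database goal the abstract describes.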

Visualization of Communications Data for Enhanced Feedback in Army Staff Training

Year: 2016

Authors: Kara L. Orvis, Tara A. Brown, Robert McCormack, Arwen H. DeCostanza

Abstract: Although the Army recognizes the critical role that effective teamwork plays in supporting mission outcomes, monitoring and providing feedback to staff organizations on teamwork remains challenging. The Army traditionally relies on self-report surveys and the expertise of observer/coach/trainers (OCTs) who mentor team-based organizations throughout collective training exercises and prepare after action reviews (AARs). While OCTs provide invaluable insight into unit strengths and weaknesses, they face multiple challenges, including (1) minimal access to and review of collaboration taking place via digital systems, and (2) reduced numbers of OCTs available for observation and coaching due to resource cuts. To address these challenges, a system was created for OCTs that collects, analyzes, and displays unit communications information from email, chat, and face-to-face interactions. The system dashboard allows OCTs to see how and when unit members are communicating; how events affect their communications (e.g., amount, timing, and content); and how communications vary across groups and time. This paper describes the exploratory use of the tool within the context of a single Army Staff training exercise that took place over 10 days. Observations regarding usability, utility, and validity of the tool were collected, with particular emphasis placed on the display of social network graphs calculated from wearable sensors capturing face-to-face interactions. Overall, OCT feedback indicated that the dashboard was intuitive and useful, and the information provided was consistent with OCT observations. OCTs also referenced the tool for specific time periods and interactions during the exercise. Additionally, social network diagrams were used as “hard data” for individual coaching sessions and the final AAR. This research supports the use of existing communications data for assessment and feedback in training.
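The social network graphs shown on the dashboard can be derived from communications logs in a straightforward way: count directed interactions between unit members across channels. The events below are fabricated and use hypothetical staff-section names; this is a sketch of the general technique, not the tool's implementation.

```python
# Sketch: build directed, weighted edges of a communication network from
# (sender, receiver, channel) events. Events and section names fabricated.

from collections import Counter

def build_edges(events):
    """events: iterable of (sender, receiver, channel) tuples.
    Returns a Counter mapping (sender, receiver) -> interaction count."""
    return Counter((s, r) for s, r, _channel in events)

events = [
    ("S3", "S2", "chat"), ("S3", "S2", "email"),
    ("S2", "S3", "chat"), ("S3", "S6", "face-to-face"),
]
edges = build_edges(events)
print(edges[("S3", "S2")])  # -> 2: S3 contacted S2 twice
```

Edge weights like these are what a social network diagram visualizes; time-slicing the event list then shows how an exercise event changes the amount, timing, and direction of communication, as the abstract describes.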
