Koglbauer / Biede-Straussberger | Aviation Psychology | E-Book | sack.de

E-book, English, 170 pages

Koglbauer / Biede-Straussberger Aviation Psychology

Applied Methods and Techniques


ISBN: 978-1-61334-588-7
Publisher: Hogrefe Publishing
Format: EPUB
Copy protection: ePub watermark



This collection of chapters on the latest methods and tools for applied research in aviation psychology guides the diverse range of professionals working within aviation on how to adapt flexibly to the continuously evolving requirements of the aeronautical landscape. Experts from industry and academia explore selected applications, spanning aviation system engineering, bridging the gap between research and industrialization, safety culture, training, and examination. Psychological tools are explored, including designing biocybernetic adaptive systems, predictive automation, and support for designing the human role in future human–machine teaming concepts. Special chapters are dedicated to spatial disorientation, reactivity, stress, eye-tracking, and electrodermal and cardiac assessment under the influence of G forces.
This is essential reading for aviation psychologists, human factors practitioners, engineers, designers, operational specialists, students, and researchers in academia, industry, and government. Practitioners and researchers working in other safety-critical domains (e.g., medicine, automotive) will also find the handbook valuable.

Target audience


Aviation psychologists, human factors practitioners, engineers, designers, operational specialists, students, and researchers in academia, industry, and government.

Further information & material


Foreword
Peter Jorna

Human Factor(s): What Do You Do With It?
That was the title of my first colloquium presentation at the Netherlands Aerospace Laboratory (NLR) around 1990. The audience at that time was made up of all kinds of engineers, some scientists, and a couple of engineering pilots, all of them wondering why all these wonderful aircraft were crashing due to pilot error. Workload was apparently an issue, and several attempts had already been made to model the human mathematically as a biological part of the aircraft control loop. But pilots did not recognize themselves or their personalities in the description of a variable amplification factor in complex equations meant to simulate the effects of their workload. This approach was not accepted as being very useful. It faded away … The presentation explained that humans as test subjects (now called “participants”) are indeed an important part of the control loops, but that the test procedures used to date either did not include the human as a to-be-tested part of the system at all, or were far too crude to have any predictive value. Test pilots were the main representatives of flight crew, but they were exceptionally well trained. Thus, they did not really represent “the minimum pilot” who sometimes has to perform under harsh working conditions: tired, distracted, recently divorced, etc. A different, more system- and context-oriented testing perspective was needed. Some steps were taken over the following years.

Go Beyond Selection
The Royal Netherlands Navy at that time had an issue with pilots who were able to fly the new maritime patrol aircraft but had problems when combining the flying with fighting. Hunting submarines at low altitude above the sea was not only exciting and risky, but also required the use of an additional computer screen on the flight deck showing tactical information and instructions, creating a “dual-task situation” in psychology language. Some licensed pilots could not do that and were not able to obtain operational status. The management response in those days was (most often) to seek the problem in the humans (blame culture), and thus improved selection was the way to go. Selection research following navy trainees through their careers confirmed that there were individual differences in the capability of (male) pilots to do two things at the same time (either in parallel or by fast serial task switching), but training was also an important factor. The aviation industry had no idea about the existence and relevance of individual differences between users of new technologies: an illustration of the fact that simply adding a display aimed at improving mission performance by presenting extra data to the pilot is no instant guarantee that it will pay off for everybody. On the contrary, the licensed pilots who could not become operational were now a major cost factor for the navy. Perhaps better testing and validation should already be recommended during the design stage?

Go Beyond Subjective Opinion(s)
Asking for user opinions is an easy and very tempting method to check your design. But do the users understand the new design? Are they in favor of it, or afraid that it will change their jobs? The EUROCONTROL PHARE program (Programme for Harmonised Air Traffic Management [ATM] Research in Europe) included my so-called ground human machine interface (GHMI) project, in which several human factors specialists and psychologists teamed up to develop a detailed specification of the human–machine interface for future ATM. Rather than explaining to others how to do it, we did it ourselves. That task allocation was a really good decision by Mick van Gool, the PHARE program manager at the time. Part of this project allowed for some experimental research. The big discussion at that time was whether automation in the form of computer advice to the controller would be a help or a burden. The reasoning was as follows: If controllers compared the advice with their own solution, that would be an extra task, and thus a burden. Under high task/traffic load, the task of comparing advice with one's own solution could simply be dropped, meaning that the advice of a software tool would be ignored. Alternatively, one could simply follow the advice under high workload, but in that case the controller would be “out of the loop.” A clear dilemma to be solved. A simulation study at NLR by Brian Hilburn compared controllers working with various levels of automation support against the “normal” manual control mode. The results revealed clear and consistent workload benefits as a function of the level of automation in comparison with manual control as a reference. The benefits were reflected in physiological measures (e.g., heart rate, heart rate variability, pupil size) indicating lower mental effort and stress, and in better performance (response times to datalink communication).
All these measures indicated the positive effects of automation, and thus less burden, not more. Except for one other measurement: the subjective workload ratings of the controllers. This measure was the only indicator that went up. A big surprise, and a clear dissociation between measures. Closer analysis and friendly discussions with the controllers revealed that their reasoning was, “I have to do my normal work and deal with additional tools,” so “more tools must mean more work.” This lesson about possible dissociations between measurements has been learned repeatedly in research on workload, and it is therefore always necessary to measure performance, mental effort, and subjective appreciation in concert. Know your methods and how to apply them!

Validate With Humans in the Loop
These experiences showed us that any technical claim assuming better human performance or reduced workload from adding some technology needs to be validated and proven. Merely adding colors to a computer screen does not justify the claim that colors will decrease workload. Evidence is always better. Asking pilots and air traffic controllers will provide you with valuable and interesting opinions, but besides the reliability and validity issues there is the popular saying: “Ask 10 pilots and you will get 20 different opinions.” Who has the right opinion? This is a necessary and informative method, but not a sufficient one.

Take Objective Measures Related to the Human Task
Making a detailed (and agreed upon) task description is the starting point, as it already helps to reduce misinterpretations between the various disciplines involved. Measures for the various aspects of task performance should also be defined as objectively as possible, for example in terms of time, the quality of human performance, and its measurable influence on system parameters. Task definition (what is allocated to the human) is a good starting point, but it is only complete once you can define measurements: When is a task performed better, and how can I detect and measure this? Note that task considerations are now also integrated into the airworthiness regulations for aircraft. Rule 25.1302 addresses the certification of “installed systems for use by the flight crew” and requires a task-based perspective for defining the system challenges in terms of the information required, the controls needed, and automation support that is understandable and predictable for the users. A real human factors regulation. But it is even better to also have an idea or hypothesis about the estimated and actual level of effort, especially mental effort, because good performance should be maintainable for a full mission or working period. In this respect, psychophysiological methods came to the rescue. Heart rate and heart rate variability (HRV) provided indications of both the physical and the mental aspects of work, including emotions. Hard data always work better, including in certification, for convincing people such as managers and agencies.

Accept the Help of Our Psychophysiology Friends
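To make the heart rate and HRV measures mentioned above concrete: two standard time-domain HRV metrics, SDNN (overall variability) and RMSSD (short-term variability, which tends to drop under mental effort), are simple functions of the recorded R-R (inter-beat) intervals. A minimal sketch, assuming intervals in milliseconds; the function name and the sample data are illustrative, not taken from the book:

```python
import math

def hrv_metrics(rr_ms):
    """Compute basic time-domain HRV metrics from a list of
    R-R (inter-beat) intervals in milliseconds.

    mean_hr -- mean heart rate in beats per minute
    sdnn    -- standard deviation of all intervals (overall variability)
    rmssd   -- root mean square of successive differences
               (short-term, beat-to-beat variability)
    """
    n = len(rr_ms)
    if n < 2:
        raise ValueError("need at least two R-R intervals")
    mean_rr = sum(rr_ms) / n
    # Sample standard deviation of the intervals (SDNN).
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # Root mean square of successive differences (RMSSD).
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    mean_hr = 60000.0 / mean_rr  # 60,000 ms per minute
    return {"mean_hr": mean_hr, "sdnn": sdnn, "rmssd": rmssd}

# Illustrative data: five beats around 800 ms (75 bpm).
metrics = hrv_metrics([800, 810, 790, 805, 795])
```

Real assessments add artifact correction and frequency-domain analysis, but even this simple form shows why HRV is attractive as an objective workload measure: it is computed directly from the physiological record, not from opinion.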
My first great helper in getting psychophysiology accepted was Glenn Wilson from the Wright Patterson...

