About this Website
Introduction: Background and Problem
Automation is the allocation of functions to machines that would otherwise be allocated to humans. The term is also used to refer to the machines which perform those functions. Flight deck automation, therefore, consists of machines on the commercial transport aircraft flight deck which perform functions otherwise performed by pilots. Current flight deck automation includes autopilots, flight management systems, electronic flight instrument systems, and warning and alerting systems.
Flight deck automation has generally been well received by pilots and the aviation industry, and accident rates for advanced technology aircraft are generally lower than those of comparable conventional aircraft (see Aircraft Accident Rates page). Nevertheless, with the advent of advanced technology, so-called "glass cockpit," commercial transport aircraft and the transfer of safety-critical functions away from human control, pilots, scientists, and aviation safety experts have expressed concerns about flight deck automation. For example, Wiener (1989) surveyed a group of pilots of advanced technology commercial transport aircraft and found significant concerns. Wise and his colleagues (1993) found similar concerns among pilots of advanced technology corporate aircraft. Based on incident and accident data, Billings (1991, 1996) cited problems with flight deck automation and proposed a more human-centered approach to design and use. Sarter and Woods (1992, 1994, 1995) have sought to further investigate and verify some of the concerns expressed by pilots and others in a series of studies exploring pilot interaction with automation.
The fact that flight deck automation human factors issues exist is widely recognized. Until now, however, no comprehensive list of such issues existed. This gap has prevented both a full understanding of flight deck automation issues and a coordinated effort to address them using limited research, development, manufacturing, operational, and regulatory resources.
Objectives and General Approach
The objectives of our study were to (1) identify possible problems with, and concerns about, flight deck automation; (2) compile evidence related to those problems and concerns; and (3) make the resulting issues and evidence widely available.
Our general approach followed our objectives. Phase 1 was completed to address objective 1, Phase 2 was completed to address objective 2, and we have created this website to address objective 3. The rest of this page describes our methodology and provides links to details of our studies and results.
Phase 1: Identification of Possible Problems and Concerns
To identify flight deck automation issues, in Phase 1 of the study we compiled a list of possible problems with, or concerns about, flight deck automation, as expressed by pilots, scientists, engineers, and flight safety experts. We reviewed 960 source documents, including papers and articles from the scientific literature as well as the trade and popular press, accident reports, incident reports, questionnaires filled out by pilots and others, and documentation from our own analyses. In these source documents, we found more than 2,000 specific citations of 114 possible problems and concerns, which we organized into two taxonomies.
It is important to note that in Phase 1 we did not attempt to substantiate the claims made about automation problems. Rather, we merely identified and recorded people's perceptions of problems and their concerns about automation as a prelude to our Phase 2 work.

Phase 1 Details
Phase 2: Compilation of Evidence Related to Issues
In Phase 2 we located and recorded evidence related to the possible problems and concerns identified in Phase 1 from a wide variety of sources. Because an issue is "[a] point of discussion, debate, or dispute ..." (Morris, 1969), we refer to these possible problems and concerns throughout this website as flight deck automation issues or just issues, except where referring to the process and results of Phase 1. Associated with each issue is an issue identifier, which we use as a concise representation of an issue; an issue statement, which suggests that a problem may exist; and an abbreviated issue statement, which we use as a concise -- yet meaningful -- representation of an issue. For example:
The sources we reviewed for evidence included accident reports, documents describing incident report studies, and documents describing scientific experiments, surveys, and other studies. We also conducted a survey of individuals with broad expertise related to human factors and flight deck automation. We reviewed these sources for data and other objective information related to an issue. For each instance of this evidence we qualitatively assessed the extent to which it supported one side of the issue or the other, and assigned a numeric strength rating between -5 and +5. We assigned a positive strength rating to evidence supporting that side of the issue suggested by its issue statement (supportive evidence) and a negative strength rating to evidence supporting the other side (contradictory evidence). Due to the nature of the sources we reviewed, we found mostly supportive evidence.

For each instance of evidence found, we recorded the following in a database:
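The evidence-rating scheme described above can be sketched as a simple record type. This is an illustrative sketch only, not the study's actual database schema; the field names and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    """One instance of evidence tied to a flight deck automation issue."""
    issue_id: int    # issue identifier, e.g. 95 (hypothetical)
    source: str      # e.g. "accident report", "incident study", "experiment", "survey"
    strength: int    # -5 (strongly contradictory) .. +5 (strongly supportive)

    def __post_init__(self) -> None:
        # Strength ratings in the study fall between -5 and +5.
        if not -5 <= self.strength <= 5:
            raise ValueError("strength rating must be between -5 and +5")

    @property
    def supportive(self) -> bool:
        # Positive ratings support the side suggested by the issue statement.
        return self.strength > 0

# Hypothetical example: moderately supportive experimental evidence.
rec = EvidenceRecord(issue_id=95, source="experiment", strength=3)
print(rec.supportive)  # True
```

A record with a negative strength would represent contradictory evidence in the same scheme.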
During the process of collecting and recording evidence, we revised, updated, consolidated, and organized the issues, yielding 92 flight deck automation issues.

Phase 2 Details
We conducted a survey of individuals with broad experience or knowledge related to human factors and flight deck automation. The participants included pilots of several automated aircraft types, university researchers, airline management pilots, industry designers and researchers, and government regulators and researchers. The survey requested general demographic information and then presented 114 statements, one for each of the problems and concerns identified in Phase 1. Each statement was presented as an unqualified assertion that a problem exists, for example, that pilots do lack mode awareness (see Issue 95 above). We asked the participants to rate their level of agreement that the assertion was true, to rate the criticality of the problem, and to provide the basis for their judgment (their own data, the data of others, personal opinion, etc.). We used their agreement ratings as evidence and the sources they listed to help guide our review of papers and reports describing experiments, surveys, and other studies (see Evidence from Experiments, Surveys, and Other Studies section).

Expert Survey Details
We identified 34 aircraft accident reports we thought might contain evidence related to the flight deck automation issues. We were able to obtain 20 of these reports from the US National Transportation Safety Board (http://www.ntsb.gov/Aviation/Aviation.htm) and other national and international agencies that conduct accident investigations. We reviewed these reports, looking for statements by the investigating board identifying one or more of the flight deck automation issues as contributing to the accident. We found evidence related to flight deck automation issues in 17 of the 20 accident reports we reviewed. In addition to accident reports prepared by official investigating boards, we included several accident reviews in our study. These were reviews conducted by qualified individuals after the official investigations, which benefited from additional information and the perspective offered by the individual's field of technical expertise.

Accident Analysis Details
We reviewed eight studies of Aviation Safety Reporting System (ASRS) (http://asrs.arc.nasa.gov) incident reports, including one we conducted ourselves. In each of the incident studies we reviewed, the investigators selected a set of incident reports from the larger ASRS database based on study-specific criteria, then reviewed the narratives for information identifying and/or describing automation-related issues. We reviewed the investigators' summaries and conclusions in search of evidence for the flight deck automation issues identified earlier in our study. We found evidence in three of the eight incident studies.

Incident Analysis Details
Based on our Phase 1 bibliography, recommendations from the experts who participated in our survey, and our review of recently published literature, we identified 63 studies of flight deck automation. These included experiments, surveys, and other studies.
We obtained documentation on each study in the form of papers, technical reports, and World Wide Web pages. We analyzed the documents and found evidence related to the flight deck automation issues in 54 of them.

Study Analysis Details
In Phase 1 we conducted a broad survey of pilots, aviation safety experts, and others with knowledge about flight deck automation merely to identify possible problems and concerns. In Phase 2 we reviewed questionnaires returned by pilots. In 21 of them, the pilots provided not only citations of the problems and concerns, but also evidence related to the flight deck automation issues, which we recorded.

Phase 1 Survey Evidence Details
Flight Deck Automation Issues (FDAI) Website
The information obtained in the Flight Deck Automation Issues (FDAI) study has been recorded in a Microsoft Access 2003 database. This database is connected to a Web application written in ASP.NET and Visual Basic that allows the data to be accessed and sorted in a variety of ways. Through the FDAI website, you can access the following types of data:
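The kind of sorting the website offers can be illustrated with a small sketch. The actual application is written in ASP.NET and Visual Basic over the Access database; the records and field names below are hypothetical stand-ins, not the FDAI schema.

```python
# Hypothetical issue records; the real data lives in the FDAI database.
issues = [
    {"id": 95, "statement": "Pilots may lack mode awareness", "evidence_count": 12},
    {"id": 40, "statement": "Training may be inadequate", "evidence_count": 7},
    {"id": 5, "statement": "Displays may be poorly designed", "evidence_count": 9},
]

# One way the data might be sorted: by amount of related evidence, most first.
by_evidence = sorted(issues, key=lambda i: i["evidence_count"], reverse=True)
print([i["id"] for i in by_evidence])  # [95, 5, 40]
```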
Summary, Conclusions and an Invitation
The issues of flight deck automation are well documented and there is evidence related to most of them. In some cases, supportive evidence suggests that problems exist and require solutions. In other cases, the existence of both supportive and contradictory evidence makes the matter less clear, suggesting the need for further clarification. The list of flight deck automation human factors issues and related evidence we compiled in this study should be a valuable resource in the search for solutions and the further clarification of issues. This website makes that information available to the aviation research, development, manufacturing, operational, and regulatory communities. We invite you to use this website and to provide feedback on its contents and format in order to increase its usefulness in improving the safety and effectiveness of commercial air transport.
Many individuals were involved in this work. Below is a listing of those who contributed to each phase of this project:
Our goal is to make this website a useful tool for the improvement of air transport safety and effectiveness. If you have comments, questions, criticisms, or suggestions about the content or format of this website, please send them to the appropriate web page authors or to the Flight Deck Automation Issues Website Team at firstname.lastname@example.org.
This work is funded by the Federal Aviation Administration, Office of the Chief Scientific and Technical Advisor for Human Factors (AAR-100) (http://www.hf.faa.gov). We gratefully acknowledge the many contributions of the two individuals from that office who have served as our technical monitor, originally John Zalenchak, currently Tom McCloy. We also thank our colleague Vic Riley of Honeywell, Inc., who has assisted us at many stages of the work. Finally, we appreciate the cooperation of the many pilots, researchers, aviation safety professionals, and designers who participated in the research.
Sarter, N.B., & Woods, D.D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the Flight Management System. International Journal of Aviation Psychology, 4(1), 1-28.
Wise, J.A., Abbott, D.W., Tilden, D., Dyck, J.L., Guide, P.C., & Ryan, L. (1993, August 27). Automation in corporate aviation: Human factors issues (CAAR-15406-93-1). Daytona Beach, FL: Center for Aviation/Aerospace Research, Embry-Riddle Aeronautical University.
Last update: 18 September 2007
© 1997-2013 Research Integrations, Inc.