
Flight Deck Automation Expert Survey Report
Overview

Introduction

The objective of this study was to gather information to determine whether the human factors concerns with flight deck automation identified in a previous study (Funk, Lyall, and Riley, 1995) could be considered problems to which future resources should be dedicated in order to identify or develop solutions. To meet this objective we conducted a survey of individuals with a broad base of experience or knowledge related to human factors and flight deck automation. In this survey we asked the respondents to indicate the degree to which they agreed that each of 114 statements of concern describes an existing problem, to rate how critical they believed each problem to be for flight safety, and to identify the information sources upon which their ratings were based.



Method

Participants

Forty-seven individuals, each with broad research or performance experience with human factors and flight deck automation, were asked to participate in the survey. Thirty-six agreed to participate and were mailed surveys, of which 30 were returned. The 30 respondents included pilots of several types of automated aircraft, university researchers, airline management pilots, industry designers and researchers, and government regulators and researchers.

Survey

The survey requested general demographic information and then presented 114 statements, one for each of the problems and concerns identified by Funk et al. (1995). For each statement, responses were requested on two Likert scales. On one scale the participants indicated the degree to which they agreed that the statement represents an existing problem, and on the other they rated the criticality of the problem. The agreement scale ranged from 1, "strongly disagree", to 5, "strongly agree", with 3 labeled as "neutral". The criticality scale ranged from 1, "not critical", to 5, "extremely critical", with 3 labeled as "moderately critical". Additionally, the respondents were asked to indicate the types of information upon which they based their ratings for each statement. The seven choices were personal experience, experience of others, personal research data, research data of others, aviation literature, personal opinion, and other (with a space to fill in). The respondents were asked to mark all of the types of information that applied for each statement. A "cannot address" box was also provided, to be marked if the respondent had no information with which to assess the statement.
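For concreteness, the following is a minimal Python sketch (not taken from the report) of how a single response to one statement might be recorded for analysis; the record structure, field names, and types are assumptions introduced here for illustration.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # The seven information-source choices listed in the survey.
    INFO_SOURCES = (
        "personal experience", "experience of others", "personal research data",
        "research data of others", "aviation literature", "personal opinion", "other",
    )

    @dataclass
    class StatementResponse:
        respondent_id: int
        statement_id: int                  # e.g., 44 or 156
        agreement: Optional[int] = None    # 1 = strongly disagree ... 5 = strongly agree
        criticality: Optional[int] = None  # 1 = not critical ... 5 = extremely critical
        info_sources: List[str] = field(default_factory=list)  # any subset of INFO_SOURCES
        cannot_address: bool = False       # marked when the respondent could not assess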

Procedure

Individuals with extensive research or performance experience with human factors and flight deck automation were identified and then contacted by phone or electronic mail and asked to participate in the study. Upon agreement to participate, the survey was sent by mail along with a postage-paid return envelope. The participants were asked to return the completed survey within two weeks of receipt.



Results

The responses were analyzed by creating histograms of the agreement and criticality ratings for each statement. The histograms depict the frequency of responses on the y-axis and the agreement or criticality rating scale on the x-axis. The agreement rating histograms include the responses of all 30 participants, while the criticality rating histograms include only the responses of participants who responded with "agree" or "strongly agree" as their agreement rating for the statement. This method was used because the criticality ratings are only meaningful if the respondent considered the statement to be a problem.
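As an illustration of this filtering rule, here is a brief Python sketch (an assumption about data handling, not code from the study) that builds the two histograms from response records like the one sketched above; only respondents who marked 4 ("agree") or 5 ("strongly agree") contribute to the criticality histogram.

    from collections import Counter

    def agreement_histogram(responses, statement_id):
        """Counts of agreement ratings 1-5 from every respondent who rated the statement."""
        ratings = [r.agreement for r in responses
                   if r.statement_id == statement_id and r.agreement is not None]
        counts = Counter(ratings)
        return [counts[k] for k in range(1, 6)]

    def criticality_histogram(responses, statement_id):
        """Counts of criticality ratings 1-5, restricted to respondents who gave an
        agreement rating of 4 ("agree") or 5 ("strongly agree") for the statement."""
        ratings = [r.criticality for r in responses
                   if r.statement_id == statement_id
                   and r.agreement in (4, 5)
                   and r.criticality is not None]
        counts = Counter(ratings)
        return [counts[k] for k in range(1, 6)]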

To evaluate the extent to which the data suggest existing problems, the agreement rating histograms were organized into six response distribution categories. The first category was used for histograms whose data strongly suggest their statements represent "problems." These histograms have extremely right-skewed distributions with two or fewer responses in the strongly disagree and disagree bins combined. The second category was used for histograms representing "probable problems," and included all right-skewed distributions that did not meet the criteria for the first category. Two similar categories were used for histograms of statements considered "not a problem" and "probably not a problem." These categories included extremely left-skewed and left-skewed distributions, held to the same criteria as the extremely right-skewed and right-skewed categories. The fifth and sixth categories were used for histograms of statements representing "possible problems": one included histograms with bipolar or flat distributions, and the other included histograms with high frequencies of "cannot address" or "neutral" responses.
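The category assignment can be summarized as a simple decision rule. The Python sketch below is an assumed operationalization, offered only for illustration: the report specifies the two-or-fewer-responses criterion for the "extreme" categories, but the tests for skew direction and for neutral-heavy distributions used here are assumptions.

    def classify_agreement(hist, cannot_address_count, total_respondents=30):
        """Assign an agreement histogram hist = [n1, n2, n3, n4, n5] (counts for
        ratings 1..5) to one of the six response-distribution categories.

        Assumptions: "right skew" is taken to mean more agree than disagree
        responses, and a histogram is treated as neutral-heavy when neutral plus
        "cannot address" responses make up at least half of the sample.
        """
        disagree = hist[0] + hist[1]          # strongly disagree + disagree
        agree = hist[3] + hist[4]             # agree + strongly agree
        neutral_like = hist[2] + cannot_address_count

        if neutral_like >= total_respondents / 2:
            return "possible problem (high cannot address / neutral)"
        if agree > disagree:
            return "problem" if disagree <= 2 else "probable problem"
        if disagree > agree:
            return "not a problem" if agree <= 2 else "probably not a problem"
        return "possible problem (bipolar or flat)"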

The criticality ratings were summarized by taking the mean of the rating values (1 through 5) for each statement. Each statement was then placed into one of the following categories according to its mean criticality rating (a short sketch of this binning follows the list):
  • less than 2.99,
  • 3.0 to 3.249,
  • 3.25 to 3.49,
  • 3.5 to 3.749,
  • 3.75 to 3.99,
  • 4.0 to 4.249, and
  • 4.25 to 4.5.
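
A minimal Python sketch of this summarization, with bin boundaries copied from the list above; how a mean falling between two bins (for example, 2.995) was handled is not stated in the report, so the closed intervals below are an assumption.

    from statistics import mean

    # Bin boundaries as listed above; closed intervals assumed.
    CRITICALITY_BINS = [
        (0.0, 2.99, "less than 2.99"),
        (3.0, 3.249, "3.0 to 3.249"),
        (3.25, 3.49, "3.25 to 3.49"),
        (3.5, 3.749, "3.5 to 3.749"),
        (3.75, 3.99, "3.75 to 3.99"),
        (4.0, 4.249, "4.0 to 4.249"),
        (4.25, 4.5, "4.25 to 4.5"),
    ]

    def criticality_category(criticality_ratings):
        """Mean of the criticality ratings (from agreeing respondents only),
        mapped to its category label."""
        m = mean(criticality_ratings)
        for low, high, label in CRITICALITY_BINS:
            if low <= m <= high:
                return label
        return None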

To represent the relationship between the agreement and criticality data for all statements, the data were combined in a matrix with the agreement categories across the top and the criticality categories down the left side.
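In code form, the matrix amounts to grouping the statements by their pair of category labels. The following Python sketch (again an assumption about data layout, not code from the study) shows one way to build it.

    from collections import defaultdict

    def build_matrix(statement_categories):
        """statement_categories maps statement id -> (agreement_category,
        criticality_category). Returns a dict keyed by (criticality_category,
        agreement_category) whose values are the lists of statement ids in
        each cell of the matrix."""
        matrix = defaultdict(list)
        for statement_id, (agree_cat, crit_cat) in statement_categories.items():
            matrix[(crit_cat, agree_cat)].append(statement_id)
        return matrix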


Mean Criticality versus Agreement Ratings Matrix: Click each cell for detail of the issues contained there.

Columns (agreement rating categories, left to right):
  • Not a problem (left skew with 0-2 responses right of neutral)
  • Probably not a problem (left skew)
  • Possible problem (high "cannot address" and neutral responses)
  • Possible problem (bipolar or flat)
  • Probable problem (right skew)
  • Problem (right skew with 0-2 responses left of neutral)

Rows (mean criticality rating categories, top to bottom):
  • less than 2.99
  • 3.0 to 3.249
  • 3.25 to 3.49
  • 3.5 to 3.749
  • 3.75 to 3.99
  • 4.0 to 4.249
  • 4.25 to 4.5
Each cell of the matrix represents statements to which responses were similar in agreement and criticality distributions. Any cell can be selected to see the specific statements and their associated agreement and criticality data.

The only statement represented in the lower right cell of the matrix is statement 44, "Automation changes modes without pilot commands to do so, producing surprising behavior." The agreement and criticality rating results for statement 44 combined to make it the only statement that was consistently rated high on both scales.

Conversely, only one statement, statement 156 ("Automation induces fatigue which leads to poor performance"), is represented in the left column of the matrix, because respondents consistently disagreed that it represents a problem. The combination of the agreement and criticality rating results for this statement suggests that the aviation industry should not expend resources addressing it as a problem.

Two other aspects of the data can be seen in the matrix. The first is that both the agreement and criticality data are distributed across their categories. This shows that the respondents considered each statement and did not simply respond that every statement represented a critical problem. The second aspect that can be seen using the matrix is conceptual consistency among the responses to related statements. Two examples are given here. In the first example, all statements related to pilot training are highlighted on a duplicate matrix. These results suggest that pilot training was considered to be a fairly critical problem by the survey respondents. The other example is a group of statements related to how automation may obscure its mode or state from the pilot. These statements are highlighted in a second duplicate matrix, and again suggest that this area was considered a problem to be addressed.



Discussion

The data resulting from this survey are useful in that they indicate which human factors concerns with flight deck automation should be considered existing problems. Though the data do not provide solutions to these problems, they provide information that can contribute to prioritization and resource allocation decisions for future work to identify solutions.



References

Funk, K., Lyall, B., & Riley, V. (1995). Perceived Human Factors Problems of Flightdeck Automation (Phase 1 Final Report for Federal Aviation Administration Grant 93-G-039). Corvallis, OR: Department of Industrial and Manufacturing Engineering, Oregon State University. http://www.flightdeckautomation.com/phase1/phase1report.aspx

