
Evidence for an Issue

There are 5 pieces of evidence for this issue.

Programming may be susceptible to error (Issue #170): Programming methods for the FMS/autopilot may be susceptible to error.

  1. Evidence Type: Excerpt from Experiment
    Evidence: "During the weapon delivery segment for nine of the twelve pilots, the voice interface had an overall error rate of 3.9%. When the single-repeat deletion errors were removed from the data set, the adjusted overall mean error rate dropped to 1.9%. After removing subject two’s data, the largest sources of error during weapon delivery were substitutions and deletions. Although both types of error only occurred 0.4% of the time, it can still be accounted for. Closer examination of the data showed that the substitution errors occurred when designating multiple targets."
    Strength: +1
    Aircraft: unspecified
    Equipment: automation
    Source: Barbato, G. (1999). Lessons learned: Integrating voice recognition and automation target cueing symbology for fighter attack. In R.S. Jensen, B. Cox, J.D. Callister, & R. Lavis (Eds.), Proceedings of the 10th International Symposium on Aviation Psychology, 203-207. Columbus, OH: The Ohio State University. See Resource details

  2. Evidence Type: Excerpt from Experiment
    Evidence: "During the navigation segment for nine of the twelve pilots (data from three pilots were not recorded or tabulated because their voice recordings were not available due to tape machine malfunction), the overall voice interface error rate averaged 2.2%. Many of the errors were 'single-repeat' deletion errors. These were resolved with a single repeat of the command. Most of these errors could have been avoided through more extensive voice template training, so another analysis was performed on an adjusted data set in which these errors were removed. This analysis revealed that, minus the single repeat errors, voice interface average error rate dropped to 0.6%."
    Strength: +1
    Aircraft: unspecified
    Equipment: automation
    Source: Barbato, G. (1999). Lessons learned: Integrating voice recognition and automation target cueing symbology for fighter attack. In R.S. Jensen, B. Cox, J.D. Callister, & R. Lavis (Eds.), Proceedings of the 10th International Symposium on Aviation Psychology, 203-207. Columbus, OH: The Ohio State University. See Resource details
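The adjustment described in the excerpt above is simple arithmetic: single-repeat deletion errors are subtracted from the error count before dividing by the number of commands. A minimal sketch follows; the command and error counts are invented round figures chosen only to reproduce the reported percentages, not the study's raw data.

```python
def error_rate(errors: int, total_commands: int) -> float:
    """Fraction of commands that resulted in a voice-interface error."""
    return errors / total_commands

# Assumed illustrative counts (not from the study):
total = 1000                  # commands issued during the navigation segment
all_errors = 22               # yields the reported 2.2% overall rate
single_repeat_deletions = 16  # deletion errors resolved by one repeat

overall = error_rate(all_errors, total)
adjusted = error_rate(all_errors - single_repeat_deletions, total)

print(f"overall: {overall:.1%}, adjusted: {adjusted:.1%}")
# overall: 2.2%, adjusted: 0.6%
```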

  3. Evidence Type: Excerpt from Experiment
    Evidence: "Figure 10 illustrates the speech recognition error rates for weapon delivery. Insertions, deletions, and substitutions (see navigation segment for descriptions of these error types) are shown for nine of the twelve pilots. The voice interface had an overall error rate during weapon delivery of 3.9%."
    Strength: +1
    Aircraft: unspecified
    Equipment: automation
    Source: Barbato, G. (1999). Lessons learned: Integrating voice recognition and automation target cueing symbology for fighter attack. In R.S. Jensen, B. Cox, J.D. Callister, & R. Lavis (Eds.), Proceedings of the 10th International Symposium on Aviation Psychology, 203-207. Columbus, OH: The Ohio State University. See Resource details

  4. Evidence Type: Excerpt from Experiment
    Evidence: "Figure 7 illustrates the incidence of three types of speech recognition errors for the navigation segment: insertions, deletions, and substitutions. Insertions are errors where the voice system inserts commands that the pilot did not speak. Deletions are errors where the voice system did not recognize what the pilot correctly spoke and took no action. Substitutions are errors where the voice system misrecognized what the pilot spoke and executed the wrong command. The figure illustrates the voice interface error rate during the navigation segment for nine of the twelve pilots—data from three pilots were not recorded. The figure shows voice interface error rate overall averaged 2.2%."
    Strength: +1
    Aircraft: unspecified
    Equipment: automation
    Source: Barbato, G. (1999). Lessons learned: Integrating voice recognition and automation target cueing symbology for fighter attack. In R.S. Jensen, B. Cox, J.D. Callister, & R. Lavis (Eds.), Proceedings of the 10th International Symposium on Aviation Psychology, 203-207. Columbus, OH: The Ohio State University. See Resource details
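The excerpt above defines a three-way taxonomy of speech recognition errors. As a minimal sketch of that classification (the function and command names are illustrative, not from the study), each event can be categorized by comparing what the pilot spoke with what the system executed:

```python
from typing import Optional

def classify_event(spoken: Optional[str], executed: Optional[str]) -> str:
    """Classify one voice-interface event per the excerpt's taxonomy.
    Names here are illustrative, not from the study."""
    if spoken is None and executed is not None:
        return "insertion"     # system executed a command the pilot never spoke
    if spoken is not None and executed is None:
        return "deletion"      # system took no action on a spoken command
    if spoken is not None and spoken != executed:
        return "substitution"  # system misrecognized and ran the wrong command
    return "correct"

print(classify_event("designate target", None))
# deletion
```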

  5. Evidence Type: Excerpt from Accident Review Study
    Evidence: "4.3.1 The pilot enters ‘R’ to retrieve a list of waypoints. Design analysis: To communicate the waypoint to the system, the pilot is required by a procedure to type an abbreviation into the FMS, as can be found on the (paper) approach chart, which shows and legislates how to approach an airport. Due to the system logic, only inputs of the exact correct identifier can be recognised. The FMS is given the function of retrieving a list of waypoints from its database that may match the intended waypoint selection. Problem analysis: There was a mismatch between the printed approach charts and the FMS database. Since there were two waypoints with the identifier ‘R’ in the same area, pilots needed to type ‘R-O-Z-O’ to get Rozo, not ‘R’ as they expected from experience and approach chart information. Collaboration analysis: The design has not allowed for the possibility that the common reference system may be faulty. Pilots needed to know what the FMS does with the instruction – they needed to understand its restriction of not truly being able to ‘guess’ from first letter, since it can only assign one meaning to one letter in a given area. Hence it could fail to match the abbreviation altogether." (page 5)
    Strength: +1
    Aircraft: B757-223
    Equipment: automation & FMS
    Source: Bruseberg, A., & Johnson, P. (not dated). Collaboration in the Flightdeck: Opportunities for Interaction Design. Department of Computer Science, University of Bath. Available at http://www.cs.bath.ac.uk/~anneb/collwn.pdf. See Resource details
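The lookup logic the excerpt describes can be sketched in a few lines. This is a hypothetical illustration, assuming a toy database in which two waypoints in the same area involve the letter ‘R’, as in the excerpt; a real FMS navigation database is far larger and structured differently.

```python
# Hypothetical sketch of the exact-identifier lookup described above.
# Database contents are invented for illustration.
waypoint_db = {
    "R": "a different beacon holding the one-letter identifier 'R'",
    "ROZO": "the Rozo waypoint, abbreviated as 'R' on the paper chart",
}

def lookup(identifier: str):
    """Exact match only: the FMS cannot 'guess' a waypoint from its first
    letter, since one identifier maps to one meaning in a given area."""
    return waypoint_db.get(identifier)

# The chart abbreviates Rozo as "R", but typing "R" retrieves the other
# beacon; only the full identifier returns Rozo.
assert lookup("R") != waypoint_db["ROZO"]
assert lookup("ROZO") == waypoint_db["ROZO"]
```

The mismatch is the crux of the excerpt's problem analysis: the pilot's mental model (chart abbreviation) and the system's model (unique identifier per area) silently diverge, with no feedback that the match was wrong.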
Flight Deck Automation Issues Website  
© 1997-2013 Research Integrations, Inc.