Exposed: FAA Academy Tower Evaluations
How the FAA has radically changed its evaluation procedures to the detriment of trainees' success.
To date, the FAA has never publicly addressed the current washout rate (a.k.a. the rate of trainees failing training) in the FAA Academy's Tower Cab Initial Qualification Training Program. In prior years, as numerous public studies and testimony support, the washout rate in this program was 20% or less. Since 2013, however, the washout rate in the tower training program has skyrocketed, with 50% washout rates per class becoming the standard. In the worst cases of which we are aware, washout rates have approached 90%.
This article is focused on the changes the FAA Academy implemented in its evaluation process for trainees in the Tower Cab Initial Qualification Training Program. These changes are largely undiscussed in public circles, but trainees and instructors who have attended the academy since the fall of 2013 can attest to this article's accuracy.
Background
In 2012, Congress passed Public Law 112-95, the "FAA Modernization and Reform Act of 2012," which required the FAA to conduct a study focused on the training program for developmental air traffic controllers. In September 2012, the U.S. Department of Transportation and the FAA completed the mandated study, "Review and Evaluation of Air Traffic Controller Training at the FAA Academy," which the same agencies published in January 2013.
2026 update: This study is no longer publicly available on the FAA website. (If someone finds a live link, please send us a tip.)
The study details the FAA's evaluation process for trainees in their Tower Training Program:
"To pass the course, developmentals must successfully complete the Performance Verification (PV) in front of examiners for each control position in the tower. The examiners evaluate how well developmentals apply learning in a simulated air traffic environment. Developmentals have two opportunities to pass the PV. If a developmental ATC does not pass the PV on the second attempt, the developmental's employment with the FAA will end and he or she will exit all training" (p. 20).
The study was mostly favorable to the operation of the FAA Academy in terms of the academy's ability to produce trainees in concert with the demands of the FAA's Air Traffic Organization (ATO), the organization tasked with managing air traffic control facilities. But within months after this study was released, the entire evaluation process was significantly altered.
There was no mention of or recommendations for any future adjustments to the evaluation scheme presented in the January 2013 study. (To FAA Academy officials and equipment contractors: Interesting timing.)
What are the features of "new and improved" evaluations at the FAA Academy?
- "Second attempts" noted in the January 2013 study were completely eliminated.
  RESULT: Trainees routinely wash out of training following their first evaluation. Why does this happen? We'll get to that in a second.
- Four evaluations are now administered.
  RESULT: Because no second attempts are authorized, each evaluation provides an opportunity for "sudden death"-style elimination from employment with the FAA. A trainee who performs excellently during training and passes two evaluations without a problem can still be INSTANTLY terminated following an anomalous performance on a third or fourth evaluation. (For names of people to whom this has happened, please contact us, and we will provide them.)
- Human pseudo pilots are no longer used; instead, the FAA relies on an automated voice recognition tool to communicate with trainees and interpret their verbal instructions.
  RESULT: Massive washout rates because of the first two changes above, and because the voice recognition tool is an unvalidated, unmitigated disaster of a software program.
More on the Automated Voice Recognition Tool
In real-world air traffic control, controllers communicate instructions to pilots, who respond by reading those instructions back to the controller and maneuvering their aircraft to comply. At the FAA Academy, this used to be modeled with pseudo pilots -- people whose job it was to serve as "pilot voices" for trainees operating the tower simulator. These pseudo pilots received, read back, and executed instructions for the aircraft to which trainees spoke during simulator-based evaluations.
However, in 2013, after the study detailing a completely different, more acceptable evaluation method was published, the FAA made significant changes to its evaluation process, including implementing automated voice recognition software to eliminate the need for human pseudo pilots. This is extremely problematic because automated voice software -- as nearly everyone has experienced via Apple's Siri, Amazon Echo, Google Home, etc. -- is extremely unreliable. The automated voice recognition system is no different.
The "tool" is called Integrated Communications Environment (ICE). It is manufactured by Adacel Technologies, the same FAA contractor that is responsible for manufacturing and supporting the simulators in place at the FAA Academy. On paper, and in the company's promotional videos, ICE works very well. In practice, however, the tool is universally hated by trainees and instructors alike. Any trainee at the FAA Academy who went through the tower program within the past 3-4 years will attest to the inadequacy of automated voice recognition in the air traffic control training environment.
The automated voice recognition tool implemented at the FAA Academy is unworthy of continued use in the training of air traffic controllers: it is completely unreliable and is the direct cause of the terminations of hundreds of trainee controllers. (Just ask the last hundred or so rounds of FAA Academy washouts about their experiences with the voice recognition system.)
Voice Recognition vs. Real Life
Here's an exchange between a tower controller and the automated speech recognition software employed at the FAA Academy:
"Tower, Lufthansa 123, short final Runway 34."
"Lufthansa 123, Tower, wind 3-5-0 degrees, 2 knots. Cleared to land Runway 32."
"Cleared to land Runway 32. Lufthansa 123."
(NOTE 1: Notice how the automated voice pilot advises the controller it is on final to Runway 34. Runway 34 does not even exist at the airport where this demonstration is taking place. Here's the original source video for proof.)
(NOTE 2: None of the phraseology used by the "controller" in this example is acceptable in U.S. civil aviation. Does the U.S.-based manufacturer of the software know that? [Probably not, hence the issues trainees have in getting the simulator to accept their FAA phraseology.])
Here's a very similar real world exchange between a tower controller and a human pilot:
"Los Angeles Tower, Southwest 2054 just over JETSN now for Runway 24R."
"Southwest 2054, Los Angeles Tower, Runway 24R, cleared to land."
Notice the difference in length between the two exchanges. Communicating the same message -- a landing clearance -- to the speech recognition computer takes 18 seconds. Doing the same in real life takes 9 seconds.
This would be no problem in itself, had the FAA set up its evaluation scenarios with traffic levels that account for the slow speech rate of the automated voice computer. That, however, is not the case. Evaluation scenarios are overloaded to the point where the automated voice recognition computer cannot deliver transmissions and read backs to the trainee controller in time. This is commonly referred to as the computer "falling behind." It is the most common problem experienced by trainees in the tower simulators at the FAA Academy.
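The "falling behind" problem can be sketched with simple throughput arithmetic. The 18-second and 9-second exchange times come from the comparison above; the scenario demand figure below is a hypothetical assumption chosen only to illustrate the effect.

```python
# Back-of-the-envelope sketch of why the simulator "falls behind."
# The 18 s and 9 s exchange times come from the article's comparison;
# the demand of 5 exchanges/minute is a hypothetical scenario load.

def max_exchanges_per_minute(seconds_per_exchange: float) -> float:
    """Upper bound on complete controller-pilot exchanges per minute."""
    return 60.0 / seconds_per_exchange

human_pilot = max_exchanges_per_minute(9)    # about 6.7 exchanges/min
voice_tool = max_exchanges_per_minute(18)    # about 3.3 exchanges/min

# A scenario demanding 5 exchanges per minute is workable with a human
# pseudo pilot but produces a growing backlog with the voice tool.
demand = 5.0
print(f"human pilot capacity: {human_pilot:.1f}/min, keeps up: {human_pilot >= demand}")
print(f"voice tool capacity:  {voice_tool:.1f}/min, keeps up: {voice_tool >= demand}")
```

Any scenario whose radio workload sits between the two capacities passes with a human pilot operator and stalls with the tool, regardless of the trainee's skill.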
Furthermore, the automated voice recognition tool fails to recognize what trainees are actually saying -- far more often than it should. Here are some examples of what happens when the tool misreads a trainee's instruction:
- Trainee instructs a departing aircraft: "Runway 28R, cleared for takeoff." The tool reads back "Holding. Runway 28R. Traffic in sight." (The proper response is "Runway 28R, cleared for takeoff," after which the aircraft should begin its departure.)
- Trainee clears an airborne aircraft to land: "Runway 28L, cleared to land." The tool instead causes the same aircraft to make a 360° turn and conflict with other traffic.
- An aircraft lands on a runway as instructed, and the trainee tells it to "Turn right at taxiway Echo. Contact Ground [control] when off." The tool reads back "Backing up."
These and dozens of other examples are readily providable if the right people are asked. The "right people" includes FAA Academy instructors and trainees.
Common-Sense Solutions to FAA Academy Evaluations
Re-institution of Human Pilot Operators
When an automation tool compromises an operational environment, that tool should be discontinued. This is not only common sense; it is what the FAA itself requires when implementing automation tools in real life -- whether inside airplane flight decks or in the control tower. (Consider how many aviation accidents have been caused by over-reliance on automation.)
Simply re-instituting human pilot operators inside the FAA Academy's tower simulators would go a long way toward reducing the problems associated with the automated voice recognition tool. Here's a fun fact: Of all the real-world control towers that have an in-house tower simulator like the ones used at the FAA Academy, none rely on the automated voice recognition tool for simulator-based initial and recurrent training. All of those towers use human pilot operators.
Re-institution of Second Attempts on Evaluations
If the FAA wants to keep its automated voice recognition tool in place at the FAA Academy, it should at minimum re-institute the practice of allowing each evaluation to be attempted twice. If the FAA permitted this when human pilot operators were in place, there is no reason it cannot permit it today, given all of the problems with the automated voice recognition tool. Second attempts would allow the agency to easily account for simulator malfunctions, especially those driven primarily by the automated voice recognition tool.
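Some rough probability arithmetic shows how much the loss of second attempts alone inflates washout rates. The four-evaluation count comes from the changes described earlier in this article; the 90% per-attempt pass rate below is a hypothetical assumption for illustration only.

```python
# Rough probability sketch: "sudden death" vs. two-attempt evaluations.
# The four-evaluation count comes from the article; the 90% per-attempt
# pass rate is a hypothetical assumption, not FAA data.

PASS_PROB = 0.90   # assumed chance of passing any single attempt
NUM_EVALS = 4      # evaluations under the current Academy scheme

# One attempt per evaluation: a trainee must pass all four on the first try.
single_attempt = PASS_PROB ** NUM_EVALS

# Two attempts per evaluation: a trainee fails only if both tries fail.
two_attempts = (1 - (1 - PASS_PROB) ** 2) ** NUM_EVALS

print(f"graduation rate, sudden death: {single_attempt:.1%}")  # ~65.6%
print(f"graduation rate, two attempts: {two_attempts:.1%}")    # ~96.1%
```

Under these assumptions, eliminating second attempts alone turns a roughly 4% washout rate into a roughly 34% one, before accounting for any voice recognition malfunctions.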
Taking a Holistic Approach to Evaluations
The FAA Academy Tower Cab Initial Qualification Training Program has instituted a point-based cumulative grading system since the January 2013 study was released. When the study was developed, the FAA Academy used a series of two pass/fail evaluations, each of which could be attempted twice, to determine whether trainees could graduate from the academy and continue training at their first real-world control tower assignment.
The current practice is to terminate an FAA Academy trainee as soon as his or her cumulative course score drops below 70%. This would be all well and good, except that the final three days of training -- the evaluation period -- account for 90% of the cumulative course score. That is unacceptably high for a course that, like the tower training program, lasts 10 weeks. The remaining 10% of the cumulative score comes from the classroom portion of the course, where students work through textbook modules.
As a result, no portion of a trainee's cumulative course score reflects his or her progress during simulator training. Once classroom training has finished, students spend 2 weeks in "low fidelity" simulators, followed by 3 weeks in "high fidelity" simulators. Although "skills checks" are administered during this 5-week period, they are conducted by the trainees' instructors (as opposed to evaluators) and are not graded; instead, remarks are recorded on a training report that the trainee signs and that ultimately ends up in the trainee's file.
A holistic approach would involve reviewing trainees' results on written tests, their scores in prior training courses, their training reports from instructors, and verbal feedback from instructors. The approach used at the FAA Academy today is extremely narrow and fails to account for three variables introduced during evaluations: new simulator scenarios, new evaluators (whom the trainee has essentially never met), and new remote pilot operators. The unfamiliarity of these variables, combined with the fact that the final evaluations account for 90% of a trainee's cumulative score, makes this practice wholly unacceptable. It is no wonder that 50% of the trainees in each class are washing out.