Speech Recognition meets Air Traffic Control

In air traffic control, as in many other fields, there is a constant need to increase the performance of the overall system, particularly at highly congested airports. At the same time, any gain in efficiency must never come at the expense of safety. A decisive factor in this balance between efficiency and safety is the air traffic controller, who has a major influence on overall system performance. A key approach to increasing controllers' efficiency is digitization and automation, realized through digital assistants that support controllers in carrying out their work. Such assistants reduce workload and allow air traffic controllers to guide traffic more efficiently while maintaining the same level of safety.

Even the most advanced digital assistants in air traffic control today have access to a large number of sensors that allow them to monitor traffic in the air and on the ground. Voice communication between air traffic controller and pilot, however, one of the most central sources of information in air traffic control, is not considered by these assistants. Whenever information from voice communication has to be digitized, controllers are burdened with entering it manually into the system via other input devices. Research results show that up to one third of controllers' working time is spent on these manual inputs. This share will increase further in the coming years, as future regulations, e.g. Commission Implementing Regulation (EU) 2021/116, will require even more manual inputs, particularly from apron controllers.

Assistant-Based Speech Recognition

Assistant-Based Speech Recognition can close this gap in digitizing air traffic control voice communication. By listening to the communication between air traffic controller and pilot, the system not only recognizes word by word what both parties are saying, but also automatically transforms the recognized words into clearly defined air traffic control concepts. The table below shows what this transformation from words to concepts can look like: it provides a clear interpretation of the relevant content of the communication. This step enables numerous applications that go far beyond merely transforming speech into text.
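As an illustration, the transformation from recognized words to a concept might be sketched as follows. This is a hypothetical, highly simplified example; the digit and airline mappings and the output format are assumptions for illustration, not the actual ontology or implementation used in the projects described here.

```python
# Hypothetical, simplified extraction of an ATC concept from a recognized
# transcript. Real systems use a full command ontology and context (e.g.
# the callsigns currently on frequency) for disambiguation.
DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
          "five": "5", "six": "6", "seven": "7", "eight": "8",
          "nine": "9", "niner": "9"}
AIRLINES = {"lufthansa": "DLH", "speedbird": "BAW"}  # assumed telephony mapping

def words_to_concept(transcript: str) -> str:
    """Map e.g. 'lufthansa three two one descend flight level one two zero'
    to the concept 'DLH321 DESCEND 120 FL'."""
    tokens = transcript.lower().split()
    airline = AIRLINES[tokens[0]]
    # Collect the digit words directly after the airline name as the callsign.
    callsign_digits = ""
    i = 1
    while i < len(tokens) and tokens[i] in DIGITS:
        callsign_digits += DIGITS[tokens[i]]
        i += 1
    callsign = airline + callsign_digits
    if "descend" in tokens:
        # Remaining digit words form the cleared flight level.
        all_digits = "".join(DIGITS[t] for t in tokens if t in DIGITS)
        level = all_digits[len(callsign_digits):]
        return f"{callsign} DESCEND {level} FL"
    return f"{callsign} UNKNOWN"
```

In this sketch the concept is a flat string; in practice a structured representation (callsign, command type, value, unit) is more useful for downstream systems.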


Assistant-Based Speech Recognition makes a wide range of applications possible:

  1. Pre-Filling of Radar Labels: Create advance entries in electronic flight strips or radar labels by listening to the air traffic communication and performing the inputs automatically.
  2. Transformation into CPDLC Messages: Transform controller voice commands directly into CPDLC messages and transmit the information to aircraft on-board computers via data link, voice, or both.
  3. Callsign Highlighting: Automatically highlight the callsign of a calling pilot on the radar screen; this requires no additional hardware and makes the relevant aircraft easy to spot.
  4. Readback-Error Detection: Recognizing and understanding air traffic controller instructions and pilot readbacks, and comparing the outputs on a conceptual level, provides an efficient and reliable way to detect readback errors.
  5. Integration into Digital Assistants (e.g. A-SMGCS): Integration into digital systems such as an A-SMGCS allows early adaptation of route planning and partial automation of simulation-pilot tasks, simply by listening to the communication.
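The conceptual comparison behind readback-error detection (application 4) can be sketched as follows. The field names and concept structure are assumptions for illustration, not the projects' actual data model; the point is that both utterances are reduced to structured concepts first, so differences in wording alone never raise an alarm.

```python
# Hypothetical sketch of readback-error detection on the concept level:
# the controller instruction and the pilot readback are each reduced to a
# structured concept, then compared field by field.
def check_readback(instruction: dict, readback: dict) -> list[str]:
    """Return the list of fields where the readback deviates from the
    instruction. Fields the pilot did not read back are not flagged."""
    errors = []
    for field in ("callsign", "type", "value", "unit"):
        if readback.get(field) is not None and readback[field] != instruction.get(field):
            errors.append(field)
    return errors

# Controller: "DLH321 descend flight level 120"; pilot reads back level 110.
instruction = {"callsign": "DLH321", "type": "DESCEND", "value": 120, "unit": "FL"}
readback = {"callsign": "DLH321", "type": "DESCEND", "value": 110, "unit": "FL"}
deviations = check_readback(instruction, readback)  # the "value" field differs
```

Because the comparison happens on extracted values rather than raw text, "descend flight level one two zero" and "down to level one twenty" would match, while a genuinely wrong level would be flagged.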


The World ATM Congress in Madrid is the largest event in the field of air navigation and air traffic control. From June 21st to 23rd, it combines a major exhibition with a conference. Almost ten thousand specialist visitors from all over the world use this annual opportunity to familiarise themselves with the latest trends and developments in the field. AT-One will be there with several research exhibits at booth no. 351.

Output of the Speech-To-Text recognition and the automatic transformation into air traffic control-related concepts


The STARFiSH project, which targets apron control with three interacting controller working positions, achieved the following results in simulations:

  • Word error rate: 1.8%
  • Command recognition rate: 95.7%
  • Command recognition error rate: 3.2%

The HAAWAII project, evaluated in operational en-route airspace on both air traffic controller and pilot speech, achieved the following results for the air traffic controllers:

  • Word error rate: 2.8%
  • Command recognition rate: 88%
  • Command recognition error rate: 3%

and for the pilots:

  • Word error rate: 10.6%
  • Command recognition rate: 72%
  • Command recognition error rate: 8%
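Word error rate, used in the figures above, is the standard speech-recognition metric: the minimum number of word substitutions, deletions, and insertions needed to turn the recognized hypothesis into the reference transcript, divided by the number of reference words. A minimal sketch of its computation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """(substitutions + deletions + insertions) / number of reference words,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

The command-level metrics (command recognition rate and command recognition error rate) are computed on the extracted concepts rather than on words; their exact definitions are project-specific and not reproduced here.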

DLR, together with different partners, has integrated and successfully tested the Assistant-Based Speech Recognition technology in multiple projects and for all stages of air traffic movements.

For more information on Assistant-Based Speech Recognition and associated applications, please get in touch or visit the project websites of HAAWAII (haawaii.de), STARFiSH (s.dlr.de/starfish) and MALORCA (malorca-project.de).


Matthias Kleinert (DLR)