The aviation industry is no stranger to automation. From autopilots in aircraft to automated baggage handling in airports, technology has long streamlined operations and improved safety. With the rise of unmanned aerial systems (UAS), or drones, however, a new term has entered the mix: autonomy. The term has caused considerable confusion, and in response the Joint Authorities for Rulemaking on Unmanned Systems (JARUS) has published a document providing a common framework for discussing the implementation and impact of the progressive automation of functions.
Automation has been a part of aviation for a long time, and its benefits are well understood by the industry. By automating certain tasks, pilots can focus on more critical aspects of the flight, and ground operations can be streamlined to reduce costs and improve efficiency. However, with UAS, the term autonomy has been increasingly used, which has led to misunderstandings.
JARUS’s document aims to clarify the use of the term autonomy and provide a common understanding of its meaning within the context of UAS operations. The document emphasizes that the use of the term autonomy should be reserved for systems that can operate entirely independently without human intervention. However, many current UAS systems do not meet this definition, and as a result, the term autonomy should not be used to describe them.
The document also acknowledges that the technology, operational procedures, and infrastructure required to achieve full autonomy may not yet be mature enough. Therefore, the aim of the document is not to advocate for the approval of specific operations or systems but rather to provide a consistent context for regulators, industry, and standardization.
The integration of UAS into our airspace presents complex challenges due to the wide variety of systems and their capabilities. Providing a single classification scheme for automation levels is difficult in this complex and varied environment. However, by developing a common understanding of UAS capabilities and limitations, we can work towards safe and efficient integration of UAS into our airspace.
The solution to this problem is the Operational Design Domain (ODD). The ODD is a mechanism that helps define the operational boundary within which a particular system or function has been designed to operate. It enables designers, operators, and regulators to assess the capabilities of an airspace system, a specific UAS operation, a particular UAS, or even a subsystem or function within a UAS. It is important to note, however, that modern aircraft are highly integrated platforms whose modes of operation and capabilities vary with the available information systems, so different levels of automation may be used for the same task in different contexts.
The ODD helps simplify complicated functional relationships. For example, describing the level of automation in the “follow-me-mode” is challenging because it involves multiple functions at different levels. With the ODD, each component of the operation (such as sensing the human, controlling flight dynamics, and responding to obstacles) can be described at their specific levels of automation, while the “follow-me-mode” function operates at a potentially different level of automated control.
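To make the idea concrete, an ODD can be thought of as an explicit set of operating limits plus a check that current conditions fall inside them. The following is a minimal illustrative sketch; the class, parameter names, and limit values are invented for this example and are not taken from the JARUS document:

```python
from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    """Hypothetical ODD for a single UAS function (illustrative only)."""
    max_altitude_m: float      # maximum altitude the function was designed for
    max_wind_speed_ms: float   # maximum wind speed the function can handle
    min_gnss_satellites: int   # minimum GNSS satellites required for navigation
    daylight_only: bool        # whether the function is restricted to daylight

    def contains(self, altitude_m, wind_speed_ms, gnss_satellites, is_daylight):
        """Return True if the current conditions fall inside the ODD."""
        return (
            altitude_m <= self.max_altitude_m
            and wind_speed_ms <= self.max_wind_speed_ms
            and gnss_satellites >= self.min_gnss_satellites
            and (is_daylight or not self.daylight_only)
        )

# Example: a "follow-me" function designed only for low-altitude daylight flight.
follow_me_odd = OperationalDesignDomain(
    max_altitude_m=120.0, max_wind_speed_ms=10.0,
    min_gnss_satellites=8, daylight_only=True,
)

print(follow_me_odd.contains(50.0, 6.0, 10, True))   # inside the ODD
print(follow_me_odd.contains(150.0, 6.0, 10, True))  # altitude exceeds the ODD
```

Each automated function can carry its own ODD, which is what lets the components of an operation be assessed separately, as described above.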
Before analyzing each of the levels of automation, it is necessary to understand the following concepts:
Human-in-the-Loop (HITL) is a system control method that involves direct human involvement in providing inputs and evaluating outputs to manage system parameters. This approach makes the human an integral part of the control loop, playing an active role in system operations.
Human-on-the-Loop (HOTL) is a method of system control in which the human monitors a machine that provides inputs and evaluates outputs to manage system parameters. This differs from the Human-in-the-Loop (HITL) method, in which the human directly provides the inputs and evaluates the outputs.
In the HOTL method, the human is not in direct control of the system but rather is monitoring its operation. The machine is responsible for making decisions and carrying out tasks, while the human is responsible for ensuring that the system is operating correctly and safely.
The HOTL method provides an additional layer of safety and reliability to the system, as the human can intervene and take control if necessary. This allows for more complex and autonomous systems to be developed while still ensuring that a human is involved in the decision-making process.
Human-out-of-the-Loop (HOOTL) refers to a method of system control with no human involvement in the monitoring or management of system parameters. Instead, a machine provides inputs and evaluates outputs to ensure that the system functions properly. This type of control is used in highly automated systems, where the machine can operate independently without human intervention.
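The difference between the three control methods comes down to who decides, who acts, and who evaluates in each cycle of the control loop. The sketch below contrasts them with toy `Machine` and `Human` stubs; all names and behaviors are hypothetical and purely illustrative:

```python
class Machine:
    """Toy machine that logs the commands it executes (illustrative only)."""
    def __init__(self):
        self.log = []
    def decide(self):
        return "hold_altitude"
    def execute(self, command):
        self.log.append(command)
        return command
    def evaluate(self, output):
        return output == "hold_altitude"

class Human:
    """Toy operator model (illustrative only)."""
    def decide(self):
        return "climb"
    def evaluate(self, output):
        return True
    def judges_unsafe(self, output):
        return output == "descend"

def hitl_step(machine, human):
    """HITL: the human provides the input and evaluates the output."""
    output = machine.execute(human.decide())
    human.evaluate(output)

def hotl_step(machine, human):
    """HOTL: the machine decides and acts; the human monitors and may
    take back control if the outcome looks unsafe."""
    output = machine.execute(machine.decide())
    if human.judges_unsafe(output):
        machine.execute(human.decide())

def hootl_step(machine):
    """HOOTL: the machine decides, acts, and self-evaluates; no human
    is involved in the control loop."""
    output = machine.execute(machine.decide())
    machine.evaluate(output)
```

In a HITL step the command in the machine's log comes from the human; in HOTL and HOOTL steps it comes from the machine, with the human either supervising or absent.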
At Level 0, the human manually executes the function, receiving no support from the machine.
At Level 1, functions are designed to assist the human in performing tasks. The machine operates in a supporting role outside the loop of human actions. Although the human still controls the execution of the function, the machine can provide limited assistance within the designated ODD, such as relevant information.
At Level 2, control and monitoring are shared between the human and the machine, with the machine taking on an in-the-loop management role to reduce the human workload and/or the skill level required to complete the task. Although the human still leads the function’s execution, the machine now provides a more substantial level of support within a clearly defined ODD.
At Level 3, the machine performs the function while the human supervises and can intervene if necessary. The human is not aware of the machine’s internal states, but supervises the outcomes for safety. The machine leads the execution within a defined ODD, while the human continuously monitors and must have the information necessary to intervene if needed. Careful human-factors design is required to ensure that the human can transition from “on-the-loop” to “in-the-loop” when necessary.
At Level 4, the machine performs the function independently and alerts the human only when an issue arises. Unlike at lower levels, the human is not required to monitor the function in real time, but must be available and able to intervene if needed. Once the machine has proven its ability to perform the entire function effectively and respond to the environment, the crew may trust it to operate without human supervision within a specified ODD. Building that trust requires ensuring the system’s trustworthiness, including meeting safety expectations for reliability, integrity, and assurance.
For example, EASA’s First Usable Guidance for Level 1 Machine Learning Applications – Issue 1 provides guidance for data-driven, AI-based methods. A system-level example of this level of automation is the “drone-in-a-box” autonomous surveillance system.
At Level 5, the function is fully automated: the machine assumes full responsibility for executing the task, while the human’s understanding of the operational parameters is minimal or non-existent. The human’s interaction with the machine is usually limited to providing strategic directives, such as pre-flight planning, and observing the outcomes. Moreover, without special authorization, the human cannot intervene in real time, due either to practical limitations or to deliberate exclusion within the ODD. Such operations are expected to require advanced technologies, such as Artificial Intelligence, or strict ODD limitations restricting the autonomous function’s operation.
The following is an example of how flight speed control would be used in a UAS according to the indicated levels:
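As a rough sketch of how speed-control logic might differ across these levels, consider the hypothetical function below. The parameter names, the blending rule, and the override behavior are invented for illustration and are not JARUS's actual example:

```python
def flight_speed_command(level, pilot_input, machine_plan, safe_limit):
    """Hypothetical speed command (m/s) selection per automation level.

    Level 0: the pilot's input is used as-is, with no machine support.
    Level 1: the pilot's input is used, but the machine caps it at a
             safe limit (assistance).
    Level 2: the pilot's input is blended with the machine's plan
             (shared control), still capped at the safe limit.
    Levels 3-4: the machine's plan is used; the pilot may override
             (pilot_input is None when the pilot does not intervene).
    Level 5: the machine's plan is used; no real-time pilot override.
    """
    if level == 0:
        return pilot_input
    if level == 1:
        return min(pilot_input, safe_limit)
    if level == 2:
        return min((pilot_input + machine_plan) / 2, safe_limit)
    if level in (3, 4):
        return pilot_input if pilot_input is not None else machine_plan
    return machine_plan  # Level 5: human excluded from real-time control
```

For instance, with a pilot input of 18 m/s, a machine plan of 12 m/s, and a safe limit of 15 m/s, Level 0 commands 18 m/s, Level 1 caps it at 15 m/s, and Level 5 follows the machine's 12 m/s plan regardless of the pilot.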