Every technology is prone to attacks and malicious hijacking by parties that actively research and develop ways to misuse it. Robotic Vehicles, or RVs, are no exception. Even before commercial robotic applications were widely deployed, their security was a hot topic, and it remains one today.
RVs are a type of Cyber-Physical System (CPS), consisting of both cyber and physical components. With the increasing use of RVs across a wide range of application domains, their security has become an essential requirement and a pressing challenge. Shooting down a drone or running over a small robot would certainly destroy the RV, but such attacks are too explicit and too easy to prevent and investigate. For that reason, attackers favor passive, indirect means of hijacking RVs that are difficult to detect and trace. To make attacks against RVs harder to detect, adversaries have started to target the physical components of a victim vehicle.
Even the most sophisticated AI programs are far from human intelligence and can be “tricked” into responding in predictable ways to different stimuli. This fact can be abused to attack an RV by effectively controlling it through its environment.
GPS or optical sensors installed on a robot can be “spoofed”, misguiding the vehicle through external, non-cyber channels. Anti-lock braking systems (ABS) on automobiles can be misled by tampering with wheel speed sensor readings via magnetic field injection.
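As a toy illustration (hypothetical, not from the paper), consider a naive waypoint follower that trusts its reported position. An attacker who can bias the GPS reading steers the vehicle without ever touching its software:

```python
# Toy sketch of GPS spoofing: the controller code is never modified,
# yet a biased position reading steers the vehicle off course.
# All names and numbers here are illustrative, not from the paper.

def step_toward(position: float, waypoint: float, gain: float = 0.5) -> float:
    """Naive 1-D controller: move a fraction of the way toward the waypoint."""
    return position + gain * (waypoint - position)

true_position = 0.0
waypoint = 10.0
gps_bias = -4.0  # attacker-injected offset on the GPS channel

for _ in range(20):
    reported = true_position + gps_bias               # spoofed sensor reading
    command = step_toward(reported, waypoint) - reported
    true_position += command                          # vehicle physically moves

# The controller believes it reached the waypoint (reported == 10.0),
# but the vehicle actually settles at waypoint - gps_bias = 14.0.
print(f"true position: {true_position:.2f}")
```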
Although there are many techniques for securing RVs and autonomous devices against attacks aimed directly at their software, such as hacking, there is a scarcity of solutions that help an RV detect attacks which do not compromise its programming at all.
One such solution is invariant checking, a well-established approach to detecting runtime anomalies caused by program bugs or exploits. Traditionally, invariants are properties of the program execution state that should always hold; they are either manually specified by developers or automatically extracted via program analysis.
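For a sense of what a traditional program invariant looks like, here is a minimal, hypothetical sketch; the variable name and bound are illustrative, not from the paper:

```python
# Minimal sketch of a traditional program invariant check.
# The invariant, variable names, and bounds are illustrative.

MAX_ALTITUDE_M = 120.0  # invariant: commanded altitude never exceeds this bound

def set_target_altitude(altitude_m: float) -> float:
    # Invariant check: this property must hold at this program point.
    assert 0.0 <= altitude_m <= MAX_ALTITUDE_M, (
        f"invariant violated: altitude {altitude_m} outside [0, {MAX_ALTITUDE_M}]"
    )
    return altitude_m

if __name__ == "__main__":
    set_target_altitude(50.0)   # passes
    set_target_altitude(500.0)  # raises AssertionError: anomaly detected
```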
This technique is explored in a research publication titled ‘Detecting Attacks Against Robotic Vehicles: A Control Invariant Approach’ by researchers from Purdue University, Indiana.
The researchers have proposed a novel Control Invariant (CI) checking framework that detects external attacks against an RV. Their solution is novel because it checks not only traditional program-based invariants but also control invariants that model both the control properties and the physical states of the vehicle.
In this framework, the control invariants are determined jointly by the RV’s physical attributes (such as weight and shape), its underlying control algorithm, and the laws of physics (such as inertia). The control invariants reflect and constrain an RV’s normal behavior given its control inputs and current physical state; any deviation from them is deemed anomalous.
The control invariant (CI) framework developed by the researchers works as follows.
First, the system leverages a control engineering methodology called system identification (SI). The SI method takes as input a control invariant template, which is simply a set of equations with unknown coefficients, together with a large set of vehicle profiling measurement data.
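For illustration, a common template form from control engineering is a discrete-time state-space model, x(k+1) = A·x(k) + B·u(k), where x(k) is the vehicle’s physical state at step k, u(k) is the control input, and the coefficients A and B are the unknowns that SI must determine from the profiling data. This generic form is given here only for concreteness; the paper’s templates are derived from the vehicle’s actual control algorithm and dynamics.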
The system then instantiates the template’s coefficients so that the resulting equations best fit the measurement data. These equations can then be used at runtime to predict the vehicle’s behavior from its inputs and states, and hence serve as the control invariants of the vehicle.
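A minimal sketch of this fitting step, assuming the simple one-dimensional template x(k+1) = a·x(k) + b·u(k) and using ordinary least squares; the template and data are hypothetical, and the paper’s SI tooling and vehicle models are more involved:

```python
import numpy as np

# Hypothetical profiling data: state x(k) and control input u(k) logged
# while the vehicle runs normally. Here it is synthesized from known
# "true" coefficients so the fit can be checked.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.1
u = rng.uniform(-1.0, 1.0, size=500)          # control inputs
x = np.zeros(501)
for k in range(500):
    x[k + 1] = a_true * x[k] + b_true * u[k] + rng.normal(0, 0.001)

# Template: x(k+1) = a*x(k) + b*u(k), with unknown coefficients a, b.
# System identification here reduces to least-squares regression.
X = np.column_stack([x[:-1], u])              # regressors [x(k), u(k)]
y = x[1:]                                     # targets x(k+1)
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"identified a={a_hat:.3f}, b={b_hat:.3f}")  # ~0.900, ~0.100
# The instantiated equation x(k+1) = a_hat*x(k) + b_hat*u(k)
# now serves as a control invariant for runtime checking.
```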
At runtime, the framework periodically observes the current system state and independently computes the expected state using the control invariant equations. If the discrepancy between the computed and observed states accumulates beyond a threshold within a monitoring window, an alarm is raised.
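Continuing the hypothetical one-dimensional example above, the runtime check might look like the following sketch; the window size and threshold are made-up values, whereas the paper derives its own per-vehicle parameters:

```python
from collections import deque

class ControlInvariantMonitor:
    """Sketch of runtime CI checking: accumulate the model-vs-observation
    error over a sliding window and raise an alarm past a threshold.
    Coefficients, window size, and threshold are illustrative values."""

    def __init__(self, a: float, b: float, window: int = 50, threshold: float = 1.0):
        self.a, self.b = a, b                 # identified invariant coefficients
        self.errors = deque(maxlen=window)    # sliding monitoring window
        self.threshold = threshold
        self.x_prev = None
        self.u_prev = None

    def observe(self, x: float, u: float) -> bool:
        """Feed one (state, input) sample; return True if an alarm fires."""
        alarm = False
        if self.x_prev is not None:
            # Expected state from the control invariant equation.
            predicted = self.a * self.x_prev + self.b * self.u_prev
            self.errors.append(abs(x - predicted))
            # Alarm when the accumulated deviation exceeds the threshold.
            alarm = sum(self.errors) > self.threshold
        self.x_prev, self.u_prev = x, u
        return alarm

# Hypothetical usage with the coefficients identified earlier:
#   monitor = ControlInvariantMonitor(a=0.9, b=0.1)
#   for x, u in sensor_stream():              # hypothetical data source
#       if monitor.observe(x, u):
#           trigger_failsafe()                # hypothetical response hook
```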
Citation: Hongjun Choi, Wen-Chuan Lee, Yousra Aafer, Fan Fei, Zhan Tu, Xiangyu Zhang, Dongyan Xu, and Xinyan Deng (Purdue University). “Detecting Attacks Against Robotic Vehicles: A Control Invariant Approach.” DOI: 10.1145/3243734.3243752. https://dl.acm.org/citation.cfm?doid=3243734.3243752