Last year on the blog, we looked at MITRE ATT&CK, a model that is highly valued in the cybersecurity community. We explained how this framework helps identify common tactics, techniques, and procedures (TTPs) employed by advanced persistent threats against IT platforms, such as Windows systems in enterprise environments.
To do this, it uses a graphic construction, called the ATT&CK Matrix, comprising a tactics axis (the reason behind a technique being used) and a techniques axis, which indicates how adversaries go about achieving their objectives.
MITRE, together with organizations such as Microsoft, IBM, Airbus, and Bosch, has recently presented a new framework with a similar layout, though this one is specifically designed to identify, respond to, and remediate cyberattacks targeting machine learning (ML) systems: the Adversarial ML Threat Matrix.
This new framework responds to a growing threat. In fact, according to Gartner's Top 10 strategic technology trends, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.
The engineer and founder of Tech Talks, Ben Dickson, explains that these techniques could have highly dangerous consequences in the future. He gives the example of self-driving vehicles that use machine learning models to interpret traffic signals. If a vehicle's model has been 'poisoned' by an attacker, the vehicle could confuse, say, a speed limit sign with a stop sign, leading to accidents. Although this is a hypothetical consumer-level case, many ML models already deployed in organizations' R&D environments or used in business processes could also be tampered with, with serious financial and security consequences. This is why, in a rapidly developing field with growing risks, reference frameworks such as the one developed by MITRE are necessary. But how does it work?
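To make the poisoning idea concrete, here is a minimal, self-contained sketch of a label-flipping attack against a toy nearest-centroid classifier. Everything in it (the classifier, the 2D "sign" data, the labels) is invented for illustration; real attacks target far more complex models, but the mechanism is the same: attacker-controlled training data drags the model's decision boundary.

```python
# Toy label-flipping poisoning attack. All data and names are
# hypothetical; this only illustrates the mechanism, not a real system.

def centroid(points):
    """Mean of a list of 2D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(samples):
    """samples: list of ((x, y), label) pairs -> per-class centroids."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Classify a point by its nearest class centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl], point))

# Clean training set: "stop" signs cluster near (0, 0),
# "limit" (speed limit) signs near (10, 10).
clean = [((0, 0), "stop"), ((1, 0), "stop"), ((0, 1), "stop"),
         ((10, 10), "limit"), ((9, 10), "limit"), ((10, 9), "limit")]

# The attacker injects mislabeled points near the "stop" cluster,
# dragging the "limit" centroid toward it.
poison = [((1, 1), "limit"), ((0, 2), "limit"), ((2, 0), "limit"),
          ((1, 2), "limit"), ((2, 1), "limit"), ((2, 2), "limit")]

clean_model = train(clean)
poisoned_model = train(clean + poison)

sample = (3, 3)  # much closer to the "stop" cluster than to "limit"
print(predict(clean_model, sample))     # -> stop
print(predict(poisoned_model, sample))  # -> limit (misclassified)
```

The poisoned model misreads a point that sits plainly in "stop" territory, which is the toy analogue of the traffic-sign scenario above.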
The matrix is structured in a very similar way to the traditional MITRE model. There is an axis with seven tactics, though in this case they are focused on ML: Reconnaissance, Initial Access, Execution, Persistence, Model Evasion, Exfiltration, and Impact.
Below these sit the techniques, categorized into two types: orange techniques are unique to ML systems, while white techniques can be used against ML systems but also in other contexts, and come directly from the Enterprise Matrix.
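As an illustration of how this tactic/technique layout could be encoded for tooling, the sketch below represents a small slice of the matrix as a data structure. The seven tactic names come from the article; the specific technique entries and the `ml_specific` flag (orange vs. white) are simplified examples chosen for illustration, not the official technique list.

```python
# Hypothetical encoding of a slice of the Adversarial ML Threat Matrix.
# Tactic names are from the framework; technique entries are examples.

TACTICS = ["Reconnaissance", "Initial Access", "Execution", "Persistence",
           "Model Evasion", "Exfiltration", "Impact"]

MATRIX = {
    "Initial Access": [
        # White: inherited directly from the Enterprise ATT&CK matrix.
        {"technique": "Valid Accounts", "ml_specific": False},
        # Orange: unique to ML systems.
        {"technique": "ML Model Inference API Access", "ml_specific": True},
    ],
    "Model Evasion": [
        {"technique": "Evasion Attack", "ml_specific": True},
    ],
}

def ml_specific_techniques(matrix):
    """Return the 'orange' techniques, i.e. those unique to ML systems."""
    return [entry["technique"]
            for entries in matrix.values()
            for entry in entries
            if entry["ml_specific"]]

print(ml_specific_techniques(MATRIX))
```

Splitting techniques by the `ml_specific` flag mirrors the matrix's color coding: a defender can immediately see which detections carry over from existing Enterprise ATT&CK coverage and which require new, ML-aware controls.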
Threat hunting against attacks on ML systems_
As explained earlier, the conventional ATT&CK framework provides cybersecurity teams with a classification of cyberattack actions that is particularly useful for threat hunters. With the patterns that can be extracted from the model, they can establish indicators of compromise (IoCs) with which to formulate a specific response to threats, saving a considerable amount of time. This new model will be just as useful in dealing with the increasingly likely attacks on ML systems.
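In practice, matching telemetry against IoCs is one of the steps a threat hunter automates. The following is a minimal sketch of that step, assuming a simple event log; the IoC values, event fields, and function names are all invented for illustration and are not from any particular product.

```python
# Minimal IoC-matching sketch. The indicators and events below are
# hypothetical, invented purely to illustrate the workflow.
import hashlib

# Indicators extracted from a framework-guided investigation (examples).
IOCS = {
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    "domain": {"malicious.example.com"},
}

def hits(events, iocs):
    """Return events whose file hash or contacted domain matches an IoC."""
    matched = []
    for event in events:
        if event.get("sha256") in iocs["sha256"] \
           or event.get("domain") in iocs["domain"]:
            matched.append(event)
    return matched

events = [
    # This hash happens to be SHA-256 of empty input, matching the IoC above.
    {"host": "ws-01", "sha256": hashlib.sha256(b"").hexdigest()},
    {"host": "ws-02", "domain": "updates.example.org"},  # clean
]

for event in hits(events, IOCS):
    print("IoC hit on", event["host"])
```

Automating this lookup across millions of events is precisely where a framework-driven catalogue of TTPs pays off: the matrix tells the hunter which behaviors to turn into indicators in the first place.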
And the same premise underpins our Managed Threat Hunting and Incident Response Service on the Cytomic Orion platform. This service automates malwareless threat hunting, alert triage, and case investigation by applying event analysis and threat intelligence, speeding up incident investigation, remediation, and response. This offers huge benefits because, as MITRE well knows having developed these frameworks, time is of the essence in combating threats.