What implications could artificial intelligence (AI) have for government security? And what policies should be implemented to promote it without incurring risks? The Center for Security and Emerging Technology (CSET) at Georgetown University is a newly established institution and the largest AI research center in the USA focusing on governance, legislation, and national security. In other words, it was created to respond to these and other questions related to emerging technologies.

This is why it published a study back in May providing legislators and public entities with several keys to understanding how AI, and in particular machine learning (ML), is increasingly present in cybersecurity.

Four points of view on machine learning in cybersecurity_

The report, titled A National Security Research Agenda for Cybersecurity and Artificial Intelligence, analyzes the possible applications of ML in national security and the consequences it could have. To do this, it is divided into four different perspectives:


  • Attack: The report considers to what extent ML could analyze or influence the cyber kill chain (the steps that make up cyberattackers’ attack sequence). In this sense, the report mentions automation tools to discover vulnerabilities, detect spear phishing, and study how cyberattacks spread. But it also takes a look from the opposite perspective: How could ML help attackers hide from forensic tools or increase their power against industrial control systems and critical infrastructure?
  • Defense: It considers the possible role of ML in three different stages of cyberdefense: Threat detection through anomalous behavior on systems; interdiction and mitigation of cyberattacks through automated processes; and help in attributing the authorship of attacks, even based on analyzing the language used by the cyberattackers.
  • Learning about the adversary: The document states that only 1% of AI research resources are dedicated to defending tools and systems that use ML. The CSET believes that this is something cyberattackers could exploit if they learn how a defensive ML system works and which algorithms it uses. This could be done using data poisoning and data pipeline manipulation (which involve intentionally altering the data handled by the ML algorithm), or model inversion, which involves studying the ML model in order to reverse it and obtain the confidential information they are after.
  • Other implications: ML could cause cyberaccidents, such as unintentional bugs that damage the security of systems, or cyberattacks whose final consequences go beyond what the cyberattackers intended. It could also aid online political influence campaigns on social networks by creating content that seems to come from real users. One such tool is GPT-2, created by the research laboratory OpenAI, which can be used to generate streams of credible text from any given input. What’s more, ML could increase the speed of cyberattacks by automating stages of the cyber kill chain. Finally, the study also analyzes ML from a more strategic perspective and asks: Does it benefit cyberattackers or cyberdefenders more? Could it be a tool that will proliferate among cyberattack groups, or will it remain in government hands? Because of its offensive and defensive potential, could it be a deterrent between states, like nuclear weapons?
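The defense perspective mentions threat detection through anomalous behavior on systems. A minimal sketch of the idea (not CSET's or any vendor's actual method) is statistical anomaly detection: model "normal" activity and flag observations that deviate too far from it. The event type, data, and threshold below are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Flag values whose z-score (distance from the mean, in
    standard deviations) exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # perfectly uniform activity: nothing to flag
        return []
    return [c for c in counts if abs(c - mu) / sigma > threshold]

# Hypothetical hourly login counts for one host; the spike at 90
# is the kind of behavioral outlier a detector would surface.
logins = [4, 5, 3, 6, 4, 5, 4, 90, 5, 4]
print(flag_anomalies(logins))  # → [90]
```

Production systems use far richer behavioral models, but the principle is the same: the detector does not need a signature of the attack, only a baseline of normal behavior to deviate from.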
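The "learning about the adversary" point names data poisoning: corrupting the training data so the model learns the wrong boundary. The toy sketch below (a deliberately simple nearest-centroid classifier on invented telemetry features, not any real detection pipeline) shows how relabeling malicious training samples as benign lets later attack traffic evade the model.

```python
from statistics import mean

def centroid_classifier(train):
    """Fit per-class feature means; classify new points by
    returning the label of the nearest centroid."""
    by_class = {}
    for features, label in train:
        by_class.setdefault(label, []).append(features)
    centroids = {lbl: [mean(col) for col in zip(*rows)]
                 for lbl, rows in by_class.items()}

    def predict(x):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return min(centroids, key=lambda lbl: dist(centroids[lbl]))
    return predict

# Hypothetical samples: (megabytes_out, failed_logins) per session.
clean = [((1, 0), "benign"), ((2, 1), "benign"),
         ((9, 8), "malicious"), ((8, 9), "malicious")]
predict = centroid_classifier(clean)
print(predict((8, 8)))  # → malicious

# Poisoned copy: the attacker relabels malicious samples as benign,
# dragging the learned "benign" region toward attack traffic.
poisoned = [(f, "benign") if lbl == "malicious" else (f, lbl)
            for f, lbl in clean]
predict_poisoned = centroid_classifier(poisoned)
print(predict_poisoned((8, 8)))  # → benign: the attack now evades detection
```

The same session is classified correctly by the clean model and waved through by the poisoned one, which is why the report treats the integrity of the data pipeline as an attack surface in its own right.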

Benefits, present reality_

With some of these questions, CSET is theorizing about potential risks, and, as an academic institution, it is logical that it should consider the future. However, the benefits of AI in cybersecurity are not a hypothetical future situation; they are already part of the advanced solutions available to Cytomic customers.

What’s more, AI is the foundation of all of Cytomic’s technology. For example, in terms of detection, Cytomic Platform correlates and analyzes over 8 million interconnected events in real time thanks to AI and deep learning algorithms. As well as continually classifying applications based on their behavior, these algorithms search for any kind of suspicious activity by applying scalable data analytics in the cloud, even if there are no indications that the process is malicious. In addition, the AI Ranker classifies over 300,000 new binaries every day using ML, as we already discussed when we talked about Alan Turing and AI as the past, present, and future of cybersecurity. For all these reasons, AI, with machine learning as its most important pillar, is a very present reality in cybersecurity.