The European Union Agency for Cybersecurity (ENISA) was set up in 2004 to help shape the European Union's cybersecurity policies. To this end, it provides recommendations and advice to the public and private sectors through activities including pan-European cybersecurity crisis management exercises, the development of national cybersecurity strategies, and the promotion of cooperation between agencies that respond to cybersecurity emergencies.
ENISA also publishes reports and studies on security, covering issues as wide-ranging as cloud security, privacy, and threat identification. Moreover, since the European Commission has made preparing member states for future challenges one of the agency's missions, Artificial Intelligence was always going to figure significantly in its research and resources. That is why ENISA decided in June to create a dedicated ad hoc working group on cybersecurity for AI.
Aims and structure of the group_
The ad hoc group comprises 15 experts and representatives from public and private entities with extensive experience in AI, including the German Federal Office for Information Security and Airbus, along with observers from other European bodies such as ETSI and Europol.
The tasks of the ad hoc group, as set out on its web page, include advising ENISA on cybersecurity issues related to AI, helping the agency develop an AI threat landscape, and providing risk-proportionate cybersecurity guidelines for AI. Although the group's specific duties have yet to be published, they will no doubt stem from the document on cybersecurity published in February by the European Commission.
Excellence and trust_
The Commission’s report, entitled White Paper on Artificial Intelligence – A European approach to excellence and trust, sets out a European Union approach to AI based on regulation and investment, with the dual objective of promoting the adoption of AI technologies and addressing the risks linked to certain uses, although it specifically excludes military applications. The white paper does, however, discuss AI and cybersecurity in several contexts:
- The Commission wants to develop an AI ecosystem that brings the benefits of the technology to European society as a whole, to citizens, and to business development “in areas where Europe is particularly strong”, including cybersecurity, according to the document.
- This technology can provide many advantages and improve the security of products and processes, yet it can also cause harm, for example when AI is deployed in critical infrastructures or deliberately misused.
- It can lead to problems in the field of product liability: manufacturers are responsible for damage caused by defective products, but with an AI-based system it may be difficult to prove that the product is defective and that there is a causal link between the defect and the damage, especially if the damage results from a cyberattack.
In any case, despite the risks, the European Commission concludes that AI is a strategic technology that brings numerous benefits to citizens, businesses, and society as a whole, provided it is human-centric, ethical, and sustainable. That is why the ENISA group has been set up to study it in greater depth.
With this same philosophy and strategic outlook, the cybersecurity experts at Cytomic developed the Cytomic Platform, in which AI plays a fundamental role across all services and solutions. Deep learning algorithms, for example, constantly analyze all applications based on their behavior, and the AI ranker classifies more than 300,000 new binaries every day using machine learning. All of this illustrates that, for Cytomic.ai, AI is not just another technology: it is part of its very essence.