The increasing focus on regulatory compliance required by the NIS2 Directive will place further pressure on SOC teams
Vectra AI, Inc., a leader in AI-driven extended detection and response (XDR), has shared the main trends it expects to shape the cybersecurity sector in the coming year. With the cost of cybercrime expected to grow in 2025, increasingly sophisticated threats enabled by new technologies, and the full implementation of regulations supporting cybersecurity, understanding these emerging trends is essential for organizations that want to keep pace with an increasingly complex environment.
Vectra AI predicts that Chief Information Security Officers (CISOs) will face growing challenges driven by the rapid adoption of artificial intelligence and tighter regulatory pressure. The ever closer integration between AI and cloud platforms, the speed and sophistication of attacks, and the constraints of NIS2 will require companies to adopt a proactive security strategy capable of detecting and stopping threats before they cause damage, a strategy that will itself rely on artificial intelligence.
Vectra AI experts have outlined what threats organizations need to prepare for to stay safe in an ever-evolving digital environment:
Christian Borst, CTO EMEA at Vectra AI:
- The future is certain: "agentic" artificial intelligence will carve out a role for itself in cybersecurity
As enthusiasm for GenAI begins to wane, in 2025 the cybersecurity industry will turn its attention to agentic AI models as the primary means of creating powerful, enterprise-grade AI systems that can stand up to customer scrutiny.
Unlike conventional large language model (LLM) systems, these new solutions increasingly rely on LLM "agents" that are built and authorized to carry out tasks independently, make decisions, and adapt to changing situations. Each agent is given access to the tools it needs to achieve a single, well-defined objective, rather than being handed a complete end-to-end mission.
The agent-based AI model breaks down high-level goals into well-defined sub-tasks and gives individual agents the ability to execute each of them. Agents can interact, review each other's work, and collaborate, further improving the accuracy and robustness of the GenAI models that the cybersecurity industry requires, as sketched below.
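A minimal sketch of that decomposition pattern, in Python, purely for illustration: every name here (Agent, plan, execute, review) is hypothetical and stands in for what would normally be LLM API calls and real tooling, not any specific vendor's implementation.

```python
# Illustrative agentic decomposition loop: a high-level goal is planned into
# sub-tasks, each handled by a narrow-purpose agent and reviewed by a peer.
# All names are hypothetical; a real system would call an LLM for planning,
# execution and cross-review rather than the stubs shown here.

from dataclasses import dataclass, field


@dataclass
class Agent:
    """A single-purpose agent with access to a narrow set of tools."""
    name: str
    tools: list = field(default_factory=list)

    def execute(self, subtask: str) -> str:
        # In practice this would prompt an LLM with the subtask plus tool output.
        return f"[{self.name}] completed: {subtask}"

    def review(self, result: str) -> bool:
        # Agents can examine each other's output before it is accepted.
        return "completed" in result


def plan(goal: str) -> list[str]:
    """Break a high-level goal into well-defined sub-tasks (normally LLM-driven)."""
    return [f"{goal} - step {i}" for i in range(1, 4)]


def run(goal: str, agents: list[Agent]) -> list[str]:
    results = []
    for subtask, agent in zip(plan(goal), agents):
        result = agent.execute(subtask)
        # A peer agent reviews the result, improving accuracy and robustness.
        reviewer = agents[(agents.index(agent) + 1) % len(agents)]
        if reviewer.review(result):
            results.append(result)
    return results


if __name__ == "__main__":
    team = [Agent("triage"), Agent("enrichment"), Agent("reporting")]
    for line in run("investigate suspicious sign-in", team):
        print(line)
```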
- Threat actors will focus on increasing AI productivity, but malicious AI agents are unlikely to be at work just yet
In the short term, attackers will focus on refining and optimizing their use of artificial intelligence, particularly to identify targets and execute large-scale spear-phishing attacks, while using GenAI to save time on the tedious, repetitive actions that can be delegated to an LLM.
Threat actors are already experimenting with AI agents, in particular testing how far they can carry out full attacks without human intervention. Nonetheless, we believe it will be some time before such agents are used effectively by cybercriminals. It should not be underestimated, however, that the time and cost savings GenAI agents offer will eventually push threat actors to create and use them to manage various aspects of an attack: from search and reconnaissance, to harvesting and collating sensitive data, through to autonomously exfiltrating that data, all without human guidance. Once that happens, with no sign of a malicious human presence, companies and agencies will have to radically transform how they spot the signs of an attack.
- Generative AI chatbots will cause high-profile data breaches
In 2025, we will hear about numerous cases in which threat actors trick an enterprise GenAI solution into handing over sensitive information, causing high-profile data breaches. Many companies are using GenAI to build customer-facing chatbots that automate and simplify tasks ranging from reservations to customer service. To do this, the underlying LLMs must have access to information and systems, and in some cases they are given unrestricted access to huge amounts of potentially sensitive data without adequate security measures.
Because of the simple, human means by which LLMs can be instructed (natural language), many organizations overlook that attackers can exploit a chat interface to make the system behave in ways that harm the business. To complicate matters, these types of jailbreaks are likely to be unfamiliar to security professionals who haven't kept up with LLM technology. Removing an LLM's restrictions can happen through seemingly innocuous interactions: an attacker might ask the model to pretend to be a novelist writing a story that happens to include all the secrets the organization is trying to keep from attackers. We will therefore face attacks on LLMs unlike anything we have seen in more traditional security contexts.
To protect against threat actors looking to outsmart GenAI tools, organizations must build robust protections that prevent LLMs from leaking sensitive information and set up ways to detect when a threat actor is probing an LLM, so the conversation can be stopped before it's too late.
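As a hedged illustration of what such a protection might look like, the sketch below combines a crude jailbreak heuristic with an output filter that blocks a chatbot reply before sensitive data leaves the system. The patterns, phrases, and function names are assumptions made for the example; production deployments would use dedicated DLP and prompt-injection detection services rather than a handful of regexes.

```python
# Minimal, illustrative guardrail for a customer-facing GenAI chatbot.
# The regexes and phrase list below are placeholders, not a complete defense.

import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                     # possible card number
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # credential-like strings
    re.compile(r"(?i)internal use only"),          # classified-document marker
]

JAILBREAK_HINTS = [
    "pretend to be", "ignore previous instructions", "act as an author",
]


def looks_like_jailbreak(user_message: str) -> bool:
    """Heuristic check for prompts that try to strip the model's restrictions."""
    text = user_message.lower()
    return any(hint in text for hint in JAILBREAK_HINTS)


def redact_or_block(model_reply: str) -> str | None:
    """Return None (block the reply) if it appears to contain sensitive data."""
    if any(p.search(model_reply) for p in SENSITIVE_PATTERNS):
        return None
    return model_reply


def handle_turn(user_message: str, call_llm) -> str:
    """Wrap a single chatbot turn: stop the conversation on suspicious input,
    and filter the model's output before it reaches the user."""
    if looks_like_jailbreak(user_message):
        return "This conversation has been ended for security reasons."
    reply = redact_or_block(call_llm(user_message))
    return reply if reply is not None else "I can't share that information."
```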
Massimiliano Galvagna, Country Manager of Vectra AI for Italy:

- Autonomous AI will gain momentum while Copilots lose steam
In 2025, the initial excitement surrounding security copilots will begin to wane as organizations weigh the costs against the actual value delivered. As a result, we will see a shift towards more autonomous AI systems which, unlike AI-based copilots, are designed to operate independently with minimal human intervention. In 2025, autonomous AI models will be held up as the next frontier in cybersecurity, with emphasis placed on their ability to detect, respond to, and even mitigate threats in real time, all without human input.
- Regulatory overhead could give attackers an advantage
The increased focus on regulatory compliance introduced by the NIS2 Directive risks overwhelming organizations and making it easier for attackers to gain the upper hand. SOC teams are already under great pressure, yet they will find themselves dedicating significant resources to meeting compliance requirements which, while undoubtedly essential, can distract from more dynamic threat detection and response strategies. An overly compliance-centric approach risks encouraging a checklist mentality, where organizations focus on ticking boxes rather than building a holistic, proactive and effective security posture.
To counter this danger, organizations will need to find a better balance between regulatory adherence and adaptive threat management, investing in technologies such as AI-powered cybersecurity tools that can help automate both compliance and defense efforts.