When AI Meets Cybersecurity: From Risk to Opportunity

AI is fundamentally reshaping the cyber landscape. The same technology that protects us is now being used by attackers – and the more AI we embed into our operations, the larger the attack surface becomes. To succeed, organizations must treat cybersecurity and AI not as two separate domains, but as two sides of the same coin.

Sofie Perslow and Pernilla Rönn at the DI Tech Strategy Summit on risks and opportunities with AI and cybersecurity.

When AI Becomes Both Defense and Weapon

AI has already begun to redefine cybersecurity. Cybercriminals use generative models to create convincing phishing emails, deepfakes, and large-scale automated fraud campaigns. Tools that once required advanced technical skills are now commercially available for just a few dozen dollars a month.
AI is also used to develop and modify malicious code, discover vulnerabilities, and manipulate existing AI systems – all with increasing precision and speed.

“We are seeing in practice how the same AI technology is used on both the attacking and defending sides. As we strengthen our security capabilities, new ways for attackers to exploit the technology emerge.”
– Pernilla Rönn, Head of Cybersecurity, HiQ

But AI is not just a threat. As cybersecurity solutions become more intelligent, AI is used to detect and stop intrusions far faster than before. In Security Operations Centers, up to half of all incidents are now handled with the support of AI, which analyzes logs, identifies anomalies, and prioritizes threats in real time.

Three Ways AI Is Transforming Cyber Threats

  1. Attackers scale up with generative AI. Hyper-personalized phishing emails, voice clones, and deepfakes no longer require advanced tools – only access to ready-made solutions.
  2. The attack surface expands rapidly. AI generates new code, finds vulnerabilities, and bypasses defenses in real time.
  3. Defenders become smarter. AI strengthens detection, analysis, and response – but requires proper implementation to avoid new risks.

A Growing Attack Surface and New Vulnerabilities

As AI becomes integrated into more parts of the organization, both efficiency and complexity increase. New attack surfaces emerge in data flows, code, and decision-making processes. At the same time, we see a growing phenomenon known as shadow AI – AI tools used within the organization without approval, risk assessment, or testing.

“Without standardized and secure AI solutions, employees turn to their own tools. The result is a shadow-AI landscape with fragmented data, low traceability, and increased compliance risks.”
– Sofie Perslow, Head of AI, HiQ

Preventing shadow AI requires three things:

  • Clear ownership of AI – with a central authority coordinating strategy, risk assessment, and guidelines.
  • A culture of awareness – where employees understand both the value and risks of AI, and receive training on proper tool usage.
  • Good and approved alternatives – secure platforms that are as smooth and useful as open tools, so employees don’t feel the need to bypass the organization.
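The third point – approved alternatives – only works if it is also enforceable. One common way to enforce it is to gate outbound AI traffic through an allowlist at the proxy or gateway layer. A minimal sketch in Python; the host names and policy shape are illustrative assumptions, not a description of any specific HiQ implementation:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vetted AI services. In practice this policy
# would live in configuration owned by the central AI authority.
APPROVED_AI_HOSTS = {
    "ai.internal.example.com",      # company-hosted model gateway
    "api.approved-vendor.example",  # contracted external provider
}

def is_approved_ai_request(url: str) -> bool:
    """Return True if the request targets an approved AI service."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS
```

A proxy that calls this check and logs every denial gives back exactly what shadow AI removes: visibility into which tools are actually in use.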

When these elements are missing, invisible risks arise around data leakage, bias, and lack of transparency. This is where governance and culture become crucial – not just to avoid incidents, but to enable sustainable, business-driven AI adoption.

AI as Part of the Cyber Defense

At the same time, AI enables entirely new defensive capabilities.

  • Detection and response: AI-driven XDR/NDR solutions identify anomalies in networks and endpoints in real time.
  • Incident analysis: Generative models summarize incidents, triage cases, and help security teams and SOC analysts act faster and more effectively.
  • Threat intelligence: AI-based knowledge graphs map leaked data, trends, and dark web activity before attackers exploit them.
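At its core, "identifying anomalies in real time" often reduces to statistical baselining: learn what normal looks like, then flag deviations. A deliberately minimal sketch – a z-score check over failed-login counts; the metric and threshold are illustrative assumptions, and production XDR/NDR systems use far richer models:

```python
import statistics

def is_anomalous(history: list[float], current: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag `current` if it deviates more than z_threshold standard
    deviations from the recent baseline in `history`."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

# Example: failed logins per minute – a sudden spike stands out
# against a stable baseline and would be escalated to an analyst.
baseline = [3, 4, 2, 5, 3, 4, 3, 4]
```

With the baseline above, a minute with 120 failed logins is flagged while a minute with 4 is not – the kind of triage that AI-assisted SOC tooling performs continuously across thousands of signals.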

“AI can make security more accurate, but only if it is built the right way. It’s about combining technological innovation with clear governance and meaningful human oversight.”
– Pernilla Rönn

How to Build Secure AI – Without Slowing Innovation

HiQ’s approach to secure AI is built on three core principles:

  • Privacy: Secure data handling through anonymization, traceability, and awareness of bias.
  • Resilience: Continuous testing, misuse detection, and rollback mechanisms built into the architecture.
  • Transparency: Logging, traceability, and human-in-the-loop – critical both for the AI Act and for trust.
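The transparency principle – logging, traceability, human-in-the-loop – can be sketched as a thin wrapper around any automated decision: low-confidence cases go to a human, and every decision leaves an auditable trace. The names and threshold below are illustrative assumptions, not a specific HiQ design or an AI Act-mandated scheme:

```python
import json
import time

AUDIT_LOG = []  # in production: an append-only, tamper-evident store

def decide_with_oversight(case_id: str, model_score: float,
                          review_threshold: float = 0.8) -> str:
    """Auto-approve only high-confidence decisions; route the rest to
    a human reviewer. Log enough context to reconstruct each decision."""
    action = "auto_approve" if model_score >= review_threshold else "human_review"
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "case_id": case_id,
        "model_score": model_score,
        "action": action,
    }))
    return action
```

The point of the pattern is that oversight is structural rather than optional: no path through the code skips either the confidence gate or the log entry.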

A key insight within resilience: AI systems integrated into critical environments rarely have a simple off switch.

“When AI is used in critical systems, it’s often technically impossible to simply turn it off. Instead, you need rollback mechanisms, misuse detection, and operational kill switches that can be activated in real time without shutting down the entire operation.”
– Sofie Perslow
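An operational kill switch of the kind described here can be as simple as a runtime flag that downgrades the AI component to a conservative fallback instead of shutting the system down. A hedged sketch, assuming a rule-based fallback path exists – the class and method names are illustrative:

```python
import threading

class AIComponent:
    """Wraps a model call behind a kill switch: when tripped, requests
    fall through to a deterministic rule instead of stopping service."""

    def __init__(self):
        self._killed = threading.Event()

    def trip_kill_switch(self):
        # Activated in real time by operators or by misuse detection.
        self._killed.set()

    def handle(self, request: str) -> str:
        if self._killed.is_set():
            return self._fallback(request)
        return self._model_inference(request)

    def _model_inference(self, request: str) -> str:
        return f"model:{request}"   # placeholder for the real model call

    def _fallback(self, request: str) -> str:
        return f"rule:{request}"    # conservative rule-based path
```

The design choice matters: because the switch changes behavior rather than availability, a critical system stays up while the AI path is disabled – which is exactly what "controllable and recoverable under pressure" requires.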

The goal is not to slow development, but to build systems that remain controllable and recoverable – even under pressure.

“Security shouldn’t slow innovation – it should enable it. When security is part of the design from the start, you create an environment where AI contributes to both efficiency and safety.”
– Sofie Perslow

From Protection to Strategy

AI and cybersecurity are no longer separate disciplines – together they shape how organizations develop, automate, and innovate. For future digital solutions to be sustainable, security must be integrated from the beginning, not added on afterward.

HiQ helps companies and public organizations build AI-driven solutions where security, business value, and innovation go hand in hand. There may be no off switch for tomorrow’s innovations – but there are ways to build them securely.

Get in touch!

Choose your nearest office – we look forward to hearing from you!