The EU’s AI Reset Is the Right Path to Safeguard Europe’s Responsible Competitiveness

Sofie Perslow, Head of AI at HiQ: The European Commission’s decision to partly revise and delay the implementation of the AI Act has sparked both relief and concern. Understandably so. Europe is a key player in the global AI race, where responsible development is essential. A common regulatory framework is important, but to work in practice, it must be clear and feasible so that innovation is not slowed down.

As Head of AI at HiQ, Sofie Perslow follows the development of the AI Act and the EU’s wider digital rulemaking closely. Here, she shares her view on the Commission’s latest announcement and what it may mean for Swedish organisations.

The Commission’s decision on November 19 to adjust and partially postpone elements of the AI Act, as part of the so-called Digital Omnibus package, did not come as a surprise. But it has generated significant reactions. As one of the world’s first comprehensive AI regulations, the AI Act is extensive, complex, and central to Europe’s position in the global AI landscape.

The current proposal represents a targeted reset. In practice, the most extensive high-risk requirements, for example those concerning biometric systems and sensitive applications, will now apply no later than December 2027 (previously August 2026), depending on when technical standards and support tools are ready.

At the same time, the EU is proposing simplifications and regulatory relief: clearer and more predictable rules, reduced administrative burden, and greater flexibility to use data and personal information for training AI models. Some simplifications apply broadly, but the largest relief measures are aimed at small and medium-sized enterprises (SMEs).

For us at HiQ, working closely with the Swedish private and public sectors, the conclusion is clear: regulation is an important foundation. But it is positive that the framework is being revisited to reduce uncertainty and avoid risk-averse behaviour that slows innovation.

Regulation Is an Important Part of the Equation – But It Must Be Clear and Proportionate

Regulation does not have to be the opposite of innovation. In fact, a unified framework can be one of the strongest enablers for European organisations striving to build responsible, long-term AI solutions.

The EU’s decision to reassess parts of the AI Act can therefore be positive, if it results in:

  • clearer definitions and obligations
  • stronger implementation guidance
  • a more balanced relationship between risk management and innovation

This is not only a discussion within the tech industry. In many of our conversations with customers and partners, we hear the same thing: it is uncertainty, not regulation itself, that constitutes the biggest barrier. There is broad support for clear rules, but vague requirements and incomplete guidance create hesitation, delay projects, and stall investments. The uncertainty hits SMEs the hardest – organisations without large legal or compliance departments. But larger organisations also hesitate.

Swedish Organisations Need Predictability on a European Playing Field

Sweden stands strong. We have advanced digital infrastructure, deep AI expertise, and a long tradition of developing technology grounded in responsibility, quality, and societal benefit. This gives Swedish organisations a strong starting point, provided the regulatory landscape is clear and stable.

Reactions to the Commission’s announcement, however, are mixed. Civil society, researchers, and experts – both Swedish and European – warn that weakened requirements and delayed rules for high-risk AI could reduce transparency and protections for citizens. This criticism is important and must be taken seriously.

At the same time, many voices, from politics to industry, welcome the simplifications and the more realistic implementation timeline. And this is where the opportunity lies: a more workable framework increases the likelihood of achieving the core mission – enabling AI that is both innovative and responsible in practice.

“A more workable framework increases the likelihood of achieving the core mission – enabling AI that is both innovative and responsible in practice.”

When organisations understand what applies and how to comply, they are more willing to invest in, develop, and deploy AI solutions aligned with European values. That is how Sweden and the EU can continue to shape global standards for responsible AI, rather than risk falling behind due to overly complex regulations.

Industry organisations such as TechSverige also stress the importance of avoiding a “Swedish special version” of the rules. I fully agree. Sweden should implement the AI Act in line with the EU’s minimum requirements, not exceed them. A harmonised EU framework is essential for scalability, competitiveness, and international expansion, especially for Swedish companies that often operate across multiple markets from day one.

Four Conditions for Regulation That Drives, Not Limits, Innovation

Based on our work with Swedish industry and the public sector, and in line with what many experts emphasise, four conditions are essential to ensure that the AI Act strengthens responsible innovation:

  1. Clear and practical guidance. Organisations must know how to comply, not just that they must. One of the biggest current gaps is the lack of detailed guidance, unclear definitions, and insufficient application support. This leads many organisations to wait rather than act.
  2. Proportionate requirements. Striking the right balance is vital. Extensive or overly administrative requirements risk overwhelming start-ups and smaller companies without their own compliance teams. It is important that high-risk areas have strict safeguards, but requirements for other applications must not become so burdensome as to impede development, pilot projects, or scaling.
  3. Safe innovation environments through regulatory sandboxes. Being able to test AI solutions in real-world settings alongside regulators before full compliance is required is a powerful way to combine innovation with integrity. It ensures that the regulation works in practice, not only on paper.
  4. Harmonisation and minimal national deviations. We cannot end up with each EU member state interpreting and implementing the AI Act differently. Fragmentation would create additional costs and, in practice, penalise companies wanting to operate internationally – especially SMEs that typically enter multiple markets early. Predictability at the EU level is essential for export and for Europe’s competitive strength.

These are areas where Sweden and the EU have a real opportunity to strengthen their position as global leaders. We now hope the Commission delivers on them.

In Closing

AI should be regulated. It is important for building trust, ensuring responsible usage, and staying aligned with European values.

But the regulation must not become so complex or unclear that it inhibits progress. With the EU now adjusting its digital rulebook and the AI Act, we have an opportunity to shape a framework that both protects people and strengthens Europe’s innovative capacity.

This balance is exactly what Sweden and Europe need in order to remain at the forefront of global AI development.
