Webinar: Security risks of generative AI at work – how to protect customer data, source code, and business secrets
The use of generative AI tools such as ChatGPT, Copilot, and other language models is rapidly increasing across organizations. They streamline workflows and create competitive advantages – but they also increase the risk of sensitive information being unintentionally shared with external services.
Today, many organizations lack visibility into how AI is actually being used in practice: What data is being entered? By whom? And how are customer information, source code, and business-critical secrets protected when employees use AI in their daily work?
In this webinar, we’ll explore how a modern AI security solution can give your organization full visibility and control over AI usage – without limiting innovation or productivity.
When: June 5, 09:00–09:30 CET
Location: Online (from your computer)
In just 30 minutes, you’ll learn:
- How sensitive information can leak into language models without being noticed, and how prompt injection attacks work in practice
- How to gain real-time visibility into all AI usage across your organization
- How an AI security solution can automatically identify and redact sensitive information before it is sent to external services
- How to control access to AI applications and monitor what information is being entered
- How implementation can be done smoothly using a browser extension – without heavy integration projects
- How the solution supports GDPR compliance and provides better control over personal data, both technically and operationally
Who should attend?
This webinar is for anyone responsible for security, data, or business development who wants to leverage generative AI without losing control over the risks.
The webinar is especially relevant for:
- IT managers or information security leaders responsible for organizational security policies
- CISOs or data protection officers assessing risks related to AI usage
- IT managers responsible for the tools and services employees use
- Business leaders driving innovation with generative AI without compromising customer trust or regulatory compliance
- Development managers or architects planning secure AI integration into enterprise systems
Speakers
Carl-Johan Wahlberg, Cybersecurity Lead, HiQ
Carl-Johan has extensive experience supporting companies in cybersecurity, risk management, and regulatory compliance. Today, he is part of HiQ’s efforts to help organizations navigate the AI era with both confidence and forward-thinking innovation.
Patrick Reischl, Staff Solutions Engineer, SentinelOne
Patrick has more than 18 years of experience in cybersecurity and technical sales, progressing from customer support at Symantec to sales and solution engineering roles at Palo Alto Networks, Cybereason, and SentinelOne. Today, he works as a Staff Solutions Engineer at SentinelOne in Stockholm, leading complex customer engagements and contributing technical leadership across endpoint, cloud, identity, and data security.
The webinar will be held in Swedish.