Table of content

Building trust and security in AI
Talk #1: Securing GenAI - Guardrails against emerging threats
Talk #2: AI-driven filtering: How LLMs cut through false security alarms
Talk #3: At the other side of the spectrum: smol (vision) language models in 2025
Why join us?

Table of content

Table of content

Building trust and security in AI
Talk #1: Securing GenAI - Guardrails against emerging threats
Talk #2: AI-driven filtering: How LLMs cut through false security alarms
Talk #3: At the other side of the spectrum: smol (vision) language models in 2025
Why join us?

Building trust and security in AI

Building trust and security in AI

Cantersteen 47, Central Gate, 1000 Brussels

Are AI systems really safe?
How can we build trust in a world where AI is everywhere?
What are the best ways to protect sensitive data while still driving innovation?

Join us for an evening of engaging talks with top experts as we dive into the key challenges of AI, trust, and security.  Learn about practical solutions to secure generative AI, safeguard open-source software, and balance privacy with progress in AI.

📅 Date: March 12th 2025
🕕 Time: 6:00 PM onwards with a drink (we'll start at 6.30 PM)
📍 Location: Cantersteen 47, Central Gate, 1000 Brussels


Talk #1: Securing GenAI - Guardrails against emerging threats

Agentic AI is on the rise, powered by the immense capabilities of LLMs. But along with new opportunities come fresh challenges. In this session, we uncover how hallucinations can derail workflow autonomy, why prompt injections pose a growing threat when impactful actions are being taken, and how the sheer breadth of the input-output space makes it tough to cover every edge case. We then share hands-on strategies to keep your AI initiatives secure and resilient. Join us to discuss how we can stay one step ahead in this rapidly evolving landscape.

By Thomas Vissers & Tim Van Hamme (Blue41, KU Leuven)

Talk #2: AI-driven filtering: How LLMs cut through false security alarms

Static Application Security Testing (SAST) is a vital approach for identifying potential vulnerabilities in source code. Typically, SAST tools rely on predefined rules to detect risky patterns, which can produce many alerts. Unfortunately, a significant number of these alerts are false positives, placing an excessive burden on developers who must distinguish genuine threats from irrelevant warnings. Large Language Models (LLMs), with their advanced contextual understanding, can effectively address this challenge by filtering out false positives. By reducing the number of alarms, developers are more willing to take action and actually solve the real vulnerabilities.

By Berg Severens (AI Specialist at Aikido)

Talk #3: At the other side of the spectrum: smol (vision) language models in 2025

Large language models have grown to ever larger sizes, but there’s another interesting development at the other side of the spectrum: small language models (SLMs) that can be self-hosted, which can be a great option for data privacy and protection. In this talk, we’ll briefly discuss what small language models are capable, the tooling around them, and how you can use them to balance innovation with privacy and security in your GenAI projects.

By Laurent Sorber (CTO at Superlinear) & Niels Rogge (Machine Learning Engineer at Hugging Face)

Why join us?

At the AI Talks by Superlinear, you’ll…

  • Network with top AI professionals from Belgium

  • Dive into practical AI applications shaping industries

  • Enjoy great finger food, drinks, and inspiring conversations

Struggling to view the form? Don’t worry, you can register here.

Contact Us

Ready to tackle your business challenges?

Stay Informed

Subscribe to our newsletter

Get the latest AI insights and be invited to our digital sessions!

Stay Informed

Subscribe to our newsletter

Get the latest AI insights and be invited to our digital sessions!

Stay Informed

Subscribe to our newsletter

Get the latest AI insights and be invited to our digital sessions!

Locations

Brussels HQ

Central Gate

Cantersteen 47



1000 Brussels

Ghent

Planet Group Arena

Ottergemsesteenweg-Zuid 808 b300

9000 Gent

© 2024 Superlinear. All rights reserved.

Locations

Brussels HQ

Central Gate

Cantersteen 47



1000 Brussels

Ghent

Planet Group Arena
Ottergemsesteenweg-Zuid 808 b300
9000 Gent

© 2024 Superlinear. All rights reserved.

Locations

Brussels HQ

Central Gate

Cantersteen 47



1000 Brussels

Ghent

Planet Group Arena
Ottergemsesteenweg-Zuid 808 b300
9000 Gent

© 2024 Superlinear. All rights reserved.