Disclaimer

This page reflects past events organized by Superlinear. AI Talks and AI Tribes are no longer part of our current initiatives. Our focus has shifted toward enterprise-wide orchestration and long-term operational performance in mission-critical European industries.

Building trust and security in AI

Mar 12, 2025

[Banner: speakers of AI Talks, March 12th 2025]

Event description

Are AI systems truly secure? Join the next AI Talks event to explore the risks and solutions shaping AI trust and security. Learn how to protect sensitive data, secure generative AI, and safeguard open-source software. Connect with experts, enjoy insightful talks, network, and discover the future of AI security in Belgium!

  • Are AI systems really safe?

  • How can we build trust in a world where AI is everywhere?

  • What are the best ways to protect sensitive data while still driving innovation?

Join us for an evening of engaging talks with top experts as we dive into the key challenges of AI, trust, and security. Learn about practical solutions to secure generative AI, safeguard open-source software, and balance privacy with progress in AI.

📅 Date: March 12th 2025
🕕 Time: 6:00 PM onwards with a drink (we'll start at 6:30 PM)
📍 Location: Cantersteen 47, Central Gate, 1000 Brussels


Talk #1: Securing GenAI - Guardrails against emerging threats

Agentic AI is on the rise, powered by the immense capabilities of LLMs. But along with new opportunities come fresh challenges. In this session, we uncover how hallucinations can derail autonomous workflows, why prompt injections pose a growing threat once agents start taking impactful actions, and how the sheer breadth of the input-output space makes it tough to cover every edge case. We then share hands-on strategies to keep your AI initiatives secure and resilient. Join us to discuss how we can stay one step ahead in this rapidly evolving landscape.

By Thomas Vissers & Tim Van Hamme (Blue41 & DistriNet, KU Leuven)
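
For a flavour of what such guardrails can look like in practice, here is a minimal, purely illustrative Python sketch: a heuristic input filter wrapped around a model call. The pattern list and the `llm` callable are placeholders for this example, not the speakers' actual approach; production guardrails layer classifiers, output validation, and allow-lists of agent actions on top of checks like these.

```python
import re

# Naive keyword heuristics for common prompt-injection phrasings.
# Illustrative only: real systems use trained classifiers as well.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?hidden instructions",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that match known injection phrasings."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

def guarded_call(user_input: str, llm) -> str:
    """Forward the input to the model only if it passes the guardrail."""
    if looks_like_injection(user_input):
        return "Request blocked: possible prompt injection."
    return llm(user_input)
```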

Talk #2: AI-driven filtering: How LLMs cut through false security alarms

Static Application Security Testing (SAST) is a vital approach for identifying potential vulnerabilities in source code. SAST tools typically rely on predefined rules to detect risky patterns, which can produce many alerts. Unfortunately, a significant number of these alerts are false positives, placing an excessive burden on the developers who must distinguish genuine threats from irrelevant warnings. Large Language Models (LLMs), with their advanced contextual understanding, can address this challenge by filtering out the false positives. With fewer alarms to triage, developers are more willing to act on, and actually fix, the real vulnerabilities.

By Berg Severens (AI Specialist at Aikido)
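
As an illustration of the idea (a sketch, not Aikido's implementation), a triage step can be as simple as asking a model for a per-alert verdict; the `llm` parameter below stands in for any chat-completion call.

```python
def triage_alert(alert_message: str, code_snippet: str, llm) -> bool:
    """Ask an LLM whether a SAST alert is a real vulnerability."""
    prompt = (
        "You are a security analyst reviewing a static-analysis alert.\n"
        "Reply with exactly one word: TRUE_POSITIVE or FALSE_POSITIVE.\n\n"
        f"Alert: {alert_message}\n"
        f"Code:\n{code_snippet}\n"
    )
    return llm(prompt).strip().upper() == "TRUE_POSITIVE"

def filter_alerts(alerts: list[dict], llm) -> list[dict]:
    """Keep only the alerts the model judges to be true positives."""
    return [a for a in alerts if triage_alert(a["message"], a["code"], llm)]
```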

Talk #3: At the other end of the spectrum: smol (vision) language models in 2025

Large language models have grown to ever larger sizes, but there's another interesting development at the other end of the spectrum: small language models (SLMs) that can be self-hosted, which makes them a great option for data privacy and protection. In this talk, we'll briefly discuss what small language models are capable of, the tooling around them, and how you can use them to balance innovation with privacy and security in your GenAI projects.

By Laurent Sorber (CTO and Co-founder at Superlinear) & Niels Rogge (Machine Learning Engineer at Hugging Face)
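
For a taste of the tooling, the sketch below loads a small instruction-tuned model locally with the Hugging Face transformers pipeline. The model ID is just one example of a self-hostable SLM; the point is that inference runs entirely on your own machine, so no data leaves it.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# The model ID is an example; any small, self-hostable model works.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # ~1.7B parameters
)

prompt = "In two sentences: why do self-hosted language models help with data privacy?"
result = generator(prompt, max_new_tokens=96, do_sample=False)
print(result[0]["generated_text"])
```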

Why join us?

At the AI Talks by Superlinear, you’ll…

  • Network with top AI professionals from Belgium

  • Dive into practical AI applications shaping industries

  • Enjoy great finger food, drinks, and inspiring conversations