Opleiding: Generative AI Security (English)
Lesmethode :
Klassikaal
Algemeen :
Generative AI is changing how we build and secure software. In this one-day training, Generative AI & Security, you will learn the inner workings of modern AI systems and why understanding them is essential for security. You will discover where today's real risks lie and how attackers can misuse large language models to extract sensitive data, leak hidden prompts, or trigger unexpected costs. This course helps you build both awareness and practical skills to work safely and confidently with AI in your daily projects.
The course is built around the OWASP Top 10 for LLM Applications, translating each risk into real-world scenarios you will actually encounter. We explore input-side threats such as prompt injection, prompt leakage, and data or model poisoning. You will also examine output-side pitfalls like insecure handling of generated text, sensitive information disclosure, and hallucinations with legal or reputational impact. Finally, we look at architectural issues: supply-chain vulnerabilities, weaknesses in vector stores and RAG pipelines, excessive agent permissions, and uncontrolled resource consumption.
The course is highly interactive and hands-on. In guided lab sessions, you will practice with real LLM applications: crafting and detecting prompt injections, simulating poisoning, extracting secrets, and testing for insecure outputs. For each vulnerability, we link the exercise to concrete defenses, such as input validation, output sanitization, guardrails, and robust deployment strategies so you immediately know how to apply safeguards in practice.
Doel :
After this course, you can identify, reproduce, and mitigate the most important security risks in LLM-powered systems and AI-enabled applications. You’ll leave with tested patterns and checklists you can apply directly in SecOps, DevSecOps, and development workflows.
Doelgroep :
Security engineers and analysts, SecOps, DevSecOps and software developers.
Voorkennis :
The following prior knowledge is required:
- Technical background
- Some experience in software development or general AI
Onderwerpen :
- Module 1: Introduction to Generative AI and its Security Implications
- How models are trained and how they work
- Why AI security is critical now
- Module 2: The OWASP LLM Top 10 - A Comprehensive Overview
- Top vulnerabilities affecting Large Language Models
- Threat landscape and real-world impact
- Module 3: Input-Related Threats
- Prompt Injection
- System Prompt Leakage
- Data & Model Poisoning
- Lab: Prompt-injection and data-poisoning exercises
- Module 4: Output-Related Threats
- Sensitive Information Disclosure
- Improper Output Handling
- Misinformation & Hallucinations (and legal implications)
- Lab: Extracting sensitive data & generating insecure outputs
- Module 5: Infrastructure & Architecture Threats
- Supply Chain Vulnerabilities (incl. slopsquatting)
- Vector & Embedding Weaknesses in RAG
- Excessive Agency
- Unbounded Consumption (cost abuse)
- Lab: Simulate supply-chain attacks & agent stress tests
- Module 6: Defensive Strategies and Best Practices
- Robust input validation & output sanitization
- Guardrails and safety mechanisms in practice
- Secure AI development lifecycle
- Data handling, model selection, deployment best practices
