Training: AI in Medicine & the AI Act – Safe AI Systems in Healthcare
Level: Intermediate
Duration: 16h / 2 days
Date: Individually arranged
Price: Individually arranged
This training focuses on designing, documenting, and implementing AI systems in medicine in compliance with the EU AI Act and the related regulations and standards: the MDR, the GDPR, ISO/IEC 42001, ISO 13485, and others. Participants will learn how to classify the risk of AI systems, conduct conformity assessments, prepare the required technical documentation, secure AI models at the data, training, and deployment levels, and implement processes for continuous monitoring, retraining, and oversight.
What will you learn?
- AI Act classification of systems – Understand whether your system is “high-risk,” “general purpose,” or “prohibited”
- Model safeguards and oversight – Learn how to design AI models resilient to errors, manipulation, and data breaches
- Conformity assessment and technical documentation – Master the structure of Technical Documentation and how to build it step by step
- Monitoring and continuous improvement – Learn to implement logging, error handling, validation, and retraining processes in line with the AI Act (a minimal logging sketch follows this list)
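
For a taste of the implementation side, here is a minimal sketch of structured prediction logging of the kind the AI Act's record-keeping requirements (Article 12) point toward for high-risk systems. The identifiers, file name, and log format are illustrative assumptions, not anything prescribed by the regulation:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical audit logger: every inference call is written as a
# structured, timestamped record (illustrative format, not mandated).
logging.basicConfig(filename="predictions.log", level=logging.INFO, format="%(message)s")


def log_prediction(model_id: str, model_version: str, inputs: dict, output, confidence: float) -> str:
    """Write one structured audit record per inference call; return its id."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,  # in practice: de-identified in line with the GDPR
        "output": output,
        "confidence": confidence,
    }
    logging.info(json.dumps(record))
    return record["record_id"]


# Example: logging a single (made-up) triage prediction.
log_prediction("triage-bot", "1.4.2", {"symptom_code": "R07.4"}, "urgent", 0.91)
```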
Prerequisites
- Good understanding of AI and ML fundamentals
- Practical knowledge of Python or another programming language used for ML
- General awareness of MDR and GDPR regulations
Who is this training for?
- AI/ML developers and engineers in the healthcare sector
- AI system architects
- Compliance and regulatory specialists
- Product owners responsible for developing AI-based medical systems
- QA/Validation teams in MedTech companies
Training Program
1. Introduction to the AI Act
- Structure and objectives of the AI Act
- Types of AI systems and their classification
- Relationship between the AI Act, MDR, GDPR, and ISO standards
2. Risk Classification and Manufacturer Obligations
- High-risk vs. other AI systems
- Requirements for “high-risk AI” systems
- Roles of provider, deployer, importer, and distributor
3. Securing AI Models in Healthcare
- Protection against errors and adversarial attacks
- Data source verification and model version control (see the provenance sketch after this module)
- Transparent vs. black-box models (in the context of explainability obligations)
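
As a flavour of the version-control topic above, the sketch below pins dataset and model artifacts by SHA-256 hash so that any silent change to training data or a deployed model becomes detectable. The paths and manifest layout are illustrative assumptions:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute a SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(data_files: list[Path], model_file: Path, out: Path) -> None:
    """Pin dataset and model artifact hashes so later changes are detectable."""
    manifest = {
        "data": {str(p): sha256_of(p) for p in data_files},
        "model": {str(model_file): sha256_of(model_file)},
    }
    out.write_text(json.dumps(manifest, indent=2))


# Example with hypothetical file names:
# write_manifest([Path("ecg_train.csv")], Path("model_v1.pkl"), Path("manifest.json"))
```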
4. AI Technical Documentation
- Documentation requirements under the AI Act
- How to prepare model cards, data sheets, and algorithmic impact assessments (a minimal model card sketch follows this module)
- Step-by-step creation of Technical Documentation
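
To make the model card idea concrete, here is a minimal, illustrative structure in Python. The field names and all values are placeholders, and real Technical Documentation under the AI Act (Annex IV) is considerably broader:

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """Minimal illustrative model card fields (not the Annex IV structure)."""
    name: str
    version: str
    intended_use: str
    out_of_scope_use: str
    training_data_summary: str
    evaluation_metrics: dict = field(default_factory=dict)
    limitations: list = field(default_factory=list)
    human_oversight: str = ""


# All values below are placeholders for illustration only.
card = ModelCard(
    name="ECG rhythm classifier",
    version="2.0.0",
    intended_use="Decision support for flagging possible atrial fibrillation.",
    out_of_scope_use="Autonomous diagnosis without clinician review.",
    training_data_summary="12-lead ECGs from three hospital sites (illustrative).",
    evaluation_metrics={"sensitivity": 0.94, "specificity": 0.97},
    limitations=["Not validated for paediatric patients."],
    human_oversight="All positive flags reviewed by a cardiologist.",
)

print(json.dumps(asdict(card), indent=2))
```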
5. Workshop: Classification & Evaluation of AI Systems
- Hands-on case studies (e.g., ECG evaluation model, medical triage chatbot)
- Defining risk level, documentation requirements, and conformity assessment process
6. Conformity Assessment and Audits
- Overview of notification procedures
- Self-assessment vs. third-party assessment
- Role of the notified body and declaration of conformity
7. Validation and Testing of AI Models in Medical Environments
- Pre-clinical and clinical testing
- Model metric validation vs. clinical validation (see the sketch after this module)
- Test cases, validation plans, traceability matrices
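
As an illustration of metric validation against predefined acceptance criteria, the sketch below compares measured sensitivity and specificity with thresholds from a hypothetical validation plan, using scikit-learn's confusion matrix. The thresholds and toy labels are placeholders:

```python
from sklearn.metrics import confusion_matrix

# Placeholder acceptance criteria; in practice these come from the
# validation plan agreed before testing starts.
ACCEPTANCE = {"sensitivity": 0.90, "specificity": 0.85}


def validate(y_true, y_pred) -> dict:
    """Compare measured sensitivity/specificity with the plan's criteria."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    measured = {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
    return {
        metric: {"measured": round(value, 3), "required": ACCEPTANCE[metric],
                 "passed": value >= ACCEPTANCE[metric]}
        for metric, value in measured.items()
    }


# Example with toy labels (1 = condition present):
print(validate([1, 1, 1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 1]))
```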
8. Post-deployment Monitoring and Oversight Systems
- Logging, alerts, and error handling (a monitoring sketch follows this module)
- Documenting corrective actions and updates
- Retraining and lifecycle management
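
One way to picture post-deployment monitoring: track rolling agreement with clinician-confirmed outcomes and raise an alert once it drops below a threshold. The window size and threshold here are placeholder values, not regulatory requirements:

```python
from collections import deque

# Placeholder monitoring parameters (illustrative, not mandated).
WINDOW, THRESHOLD = 200, 0.85
recent = deque(maxlen=WINDOW)


def record_outcome(prediction, ground_truth) -> None:
    """Track post-deployment accuracy and alert on degradation."""
    recent.append(prediction == ground_truth)
    if len(recent) == WINDOW:
        accuracy = sum(recent) / WINDOW
        if accuracy < THRESHOLD:
            # In a real system this would notify the oversight team and
            # open a documented corrective action, not just print.
            print(f"ALERT: rolling accuracy {accuracy:.2%} below {THRESHOLD:.0%}; "
                  "flagging model for retraining review")
```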
9. Ethics and Accountability under the AI Act
- Transparency, human oversight, interpretability
- Implementing “human-in-the-loop” principles (a toy gating sketch follows this module)
- Examples of violations and legal consequences
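
Finally, a toy human-in-the-loop gate: any output below a confidence threshold is routed to a clinician for review rather than acted on automatically. The threshold value and function names are illustrative assumptions:

```python
# Placeholder review threshold; in practice set and justified during validation.
REVIEW_THRESHOLD = 0.80


def triage_decision(label: str, confidence: float) -> dict:
    """Route low-confidence outputs to human review; never auto-act on them."""
    if confidence >= REVIEW_THRESHOLD:
        return {"action": "auto", "label": label, "confidence": confidence}
    return {"action": "human_review", "label": label, "confidence": confidence,
            "reason": "confidence below review threshold"}


print(triage_decision("urgent", 0.93))  # -> auto
print(triage_decision("urgent", 0.55))  # -> human_review
```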