Training: Protecting Machine Learning Models Against Attacks

Level

Advanced

Duration

16h / 2 days

Date

Individually arranged

Price

Individually arranged

An advanced, hands-on course dedicated to key aspects of machine learning model security. The training combines solid theory with intensive workshops, enabling participants to understand and practically counter threats in ML environments. Attendees will learn to identify, analyze, and effectively protect models from modern attacks, gaining unique skills at the intersection of cybersecurity and artificial intelligence.

What will you learn?

  • Identification of advanced attack vectors targeting ML models
  • Methods to prevent manipulation of training data
  • Practical techniques for securing training and inference processes
  • Tools and strategies for protecting sensitive models against cyber threats

Prerequisites

  • Basic knowledge of Python and ML libraries (NumPy, scikit-learn, TensorFlow/PyTorch)

Who is this training for?

  • AI engineers and data scientists
  • ML solution architects
  • Professionals responsible for deploying AI solutions in organizations
  • Cybersecurity specialists
  • Developers working on advanced ML models

Training Program

Day 1: Fundamentals of ML Model Security

Module 1: Introduction to ML ecosystem threats

  • Characteristics of modern AI model attacks
  • Consequences of successful breaches
  • Case studies of intrusions and manipulations in real-world projects

Module 2: Types of attacks on ML models

  • Adversarial attacks: methods of generating adversarial samples
  • Attacks on training data privacy
  • Information leakage from trained models
  • Vulnerability analysis of different ML architectures
  • Attacks targeting ML infrastructure
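For a taste of the adversarial-attack material, here is a minimal, self-contained NumPy sketch of an FGSM-style attack against a toy logistic-regression classifier. It is illustrative only, not course material: the dataset, learning rate, and perturbation budget `eps` are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: class 0 clustered around (-1, -1), class 1 around (+1, +1).
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Fit a logistic-regression model by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)

# FGSM: move the input along the sign of the loss gradient w.r.t. x.
# For this linear model that gradient is (p - y) * w; the true label here is y = 1.
x = np.array([1.0, 1.0])                     # a clear class-1 point
eps = 1.5                                    # budget, exaggerated for the toy scale
p = 1 / (1 + np.exp(-(x @ w + b)))
x_adv = x + eps * np.sign((p - 1.0) * w)

# The model's prediction flips from 1 (clean input) to 0 (adversarial input).
```

The same gradient-sign idea scales to deep networks, where the gradient is obtained by backpropagation instead of the closed form used here.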

Module 3: Workshop – Threat identification

  • Simulating attacks on sample classification and regression models
  • Analyzing traces and penetration mechanisms of ML models

Day 2: Advanced Protection Techniques

Module 4: Methods for securing ML models

  • Adversarial training techniques
  • Federated learning for enhanced privacy
  • Implementing obfuscation and data privacy mechanisms
  • Strategies for risk reduction in ML workflows
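The adversarial-training technique listed above can be sketched in a few lines: at each step the training inputs are perturbed in the direction that increases the loss, and the model is fitted on those worst-case inputs. This is a hypothetical toy (a linear model with invented constants), not the workshop code.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
eps = 0.3                                    # perturbation budget (assumed)

w, b = np.zeros(2), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    # Loss gradient w.r.t. each input of this linear model: (p - y) * w.
    X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
    # Gradient step on the perturbed (worst-case) inputs.
    p_adv = 1 / (1 + np.exp(-(X_adv @ w + b)))
    w -= 0.5 * (X_adv.T @ (p_adv - y)) / len(y)
    b -= 0.5 * np.mean(p_adv - y)

# The robust model should keep high accuracy even on eps-perturbed points.
p = 1 / (1 + np.exp(-(X @ w + b)))
X_test_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
acc = np.mean(((X_test_adv @ w + b) > 0) == (y == 1))
```

Training on perturbed inputs trades a little clean accuracy for a margin against perturbations within the chosen budget.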

Module 5: Workshop – Practical model protection

  • Designing resilient ML architectures
  • Implementing advanced defense techniques
  • Security testing of ML models
  • Developing security policies for ML teams
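For a flavor of the privacy-oriented defenses practiced here (federated learning, introduced in the previous module), the following is a hypothetical federated-averaging (FedAvg) sketch in plain NumPy. Client count, round count, and learning rates are invented for illustration; the point is that only model weights, never raw training data, leave the clients.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_client_data():
    # Each client holds its own private toy dataset.
    X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
    y = np.concatenate([np.zeros(30), np.ones(30)])
    return X, y

clients = [make_client_data() for _ in range(3)]
w, b = np.zeros(2), 0.0                      # global model

for _ in range(20):                          # communication rounds
    local = []
    for X, y in clients:
        lw, lb = w.copy(), b                 # start from the global model
        for _ in range(10):                  # local gradient steps on local data
            p = 1 / (1 + np.exp(-(X @ lw + lb)))
            lw -= 0.5 * (X.T @ (p - y)) / len(y)
            lb -= 0.5 * np.mean(p - y)
        local.append((lw, lb))
    # Server step: average the local models; only weights are shared.
    w = np.mean([lw for lw, _ in local], axis=0)
    b = np.mean([lb for _, lb in local])

# Evaluate the aggregated global model across all clients' data.
X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
acc = np.mean(((X_all @ w + b) > 0) == (y_all == 1))
```

In production, frameworks add secure aggregation and differential privacy on top of this basic averaging loop.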

Module 6: Security tools and frameworks

  • Overview of open-source tools for model protection
  • Analysis of specialized ML cybersecurity libraries
  • Automating security verification processes
  • Integrating security tools with ML pipelines

Contact us

We will organize a training course tailored to your needs.

Przemysław Wołosz

Key Account Manager

przemyslaw.wolosz@infoShareAcademy.com

    The controller of your personal data is InfoShare Academy Sp. z o.o. with its registered office in Gdańsk, al. Grunwaldzka 427B, 80-309 Gdańsk, KRS: 0000531749, NIP: 5842742121. Personal data is processed in accordance with the information clause.