Training: Retrieval Augmented Generation (RAG) with LangChain
Level: Intermediate
Duration: 24h / 3 days
Date: Individually arranged
Price: Individually arranged
The “Retrieval Augmented Generation (RAG) with LangChain” training is an intensive 2–3-day hands-on workshop on building practical RAG systems: applications that combine information retrieval with generative language models. Participants work through the complete process of creating such solutions, from data preparation and vector database indexing to LLM integration with the LangChain library. With 80% of the time spent in workshops, attendees gain real-world skills in constructing RAG applications that significantly reduce hallucinations and improve the quality of AI responses.
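For orientation, here is a minimal sketch of the pipeline the course builds. It assumes the langchain-community, langchain-openai, langchain-text-splitters, and faiss-cpu packages plus an OpenAI API key in the environment; the file name handbook.txt, the model name, and the question are illustrative placeholders, and exact import paths vary between LangChain versions.

```python
# Minimal end-to-end RAG sketch; handbook.txt, the model name, and the
# question are placeholders, and import paths vary by LangChain version.
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import FAISS

# 1. Load source documents and split them into overlapping chunks.
docs = TextLoader("handbook.txt").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_documents(docs)

# 2. Embed the chunks and index them in a vector store.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = index.as_retriever(search_kwargs={"k": 4})

# 3. Retrieve context for the question and ground the LLM answer in it.
question = "What does the handbook say about refunds?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
answer = ChatOpenAI(model="gpt-4o-mini").invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
```

Grounding the prompt in retrieved chunks is what curbs hallucinations: the model is asked to answer from supplied context rather than from its parametric memory alone.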
What will you learn?
- Prepare and index data for RAG systems in an end-to-end workflow
- Design and implement Retrieval-Augmented Generation pipelines with LangChain
- Master advanced chunking, embedding, and retrieval techniques (see the sketch after this list)
- Build efficient Q&A systems and chatbots powered by custom data
- Optimize prompting and manage conversational context
- Create scalable and secure RAG solutions ready for production deployment
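The chunking, embedding, and retrieval item above is illustrated by the sketch below. The corpus file and query are hypothetical, and FAISS with OpenAIEmbeddings is just one possible backend pair among many.

```python
# Chunking and retrieval inspection sketch; handbook.txt and the query
# are placeholders, and the backends are swappable.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

text = open("handbook.txt", encoding="utf-8").read()  # placeholder corpus

# Split on document structure first, single characters last; the overlap
# keeps sentences that straddle a boundary retrievable from either side.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=800,
    chunk_overlap=100,
    separators=["\n\n", "\n", ". ", " "],
)
chunks = splitter.create_documents([text])

index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Inspect what a query actually retrieves; FAISS returns L2 distances,
# so lower scores mean closer matches.
for doc, score in index.similarity_search_with_score("refund policy", k=3):
    print(f"{score:.3f}  {doc.page_content[:80]!r}")
</ignore>
```

Printing scored results like this is the quickest way to check whether a chunk size suits your documents before wiring up the full chain.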
Who is this training for?
- Programmers and AI engineers who want to build RAG applications
- Data scientists and NLP specialists integrating LLMs with their own data sources
- Analysts and developers interested in practical automation of knowledge access
- System architects exploring modern approaches to combining retrieval and generation
Training Program
Day 1: Fundamentals of ML Model Security
Module 1: Introduction to ML ecosystem threats
- Characteristics of modern AI model attacks
- Consequences of successful breaches
- Case studies of intrusions and manipulations in real-world projects
Module 2: Types of attacks on ML models
- Adversarial attacks: methods of generating adversarial samples (a minimal example follows this list)
- Attacks on training data privacy
- Information leakage from trained models
- Vulnerability analysis of different ML architectures
- Attacks targeting ML infrastructure
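To make the first bullet concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one classic way of generating adversarial samples; model, x, and y stand for any differentiable PyTorch classifier and a labelled batch, and are placeholders rather than course materials.

```python
# FGSM sketch: perturb the input in the direction that most increases
# the loss. "model" is a hypothetical differentiable PyTorch classifier.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Return x perturbed by eps along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A single signed-gradient step, clamped back to the valid input range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

The same few lines double as a cheap robustness probe: comparing accuracy on fgsm(model, x, y) against clean accuracy quantifies how fragile a classifier is.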
Module 3: Workshop – Threat identification
- Simulating attacks on sample classification and regression models
- Analyzing attack traces and the mechanisms used to penetrate ML models
Day 2: Advanced Protection Techniques
Module 4: Methods for securing ML models
- Adversarial training techniques (sketched after this list)
- Federated learning for enhanced privacy
- Implementing obfuscation and data privacy mechanisms
- Strategies for risk reduction in ML workflows
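Picking up the adversarial-training bullet, the sketch below folds the fgsm() helper from the Day 1 sketch into a training loop; model, loader, and optimizer are hypothetical placeholders.

```python
# Adversarial training sketch: train on FGSM-perturbed batches so the
# model learns to resist the attack it is trained against.
import torch.nn.functional as F

def adversarial_epoch(model, loader, optimizer, eps=0.03):
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)   # craft attacks on the fly
        optimizer.zero_grad()            # drop gradients left by fgsm()
        # Weight clean and adversarial loss equally; the ratio is tunable.
        loss = 0.5 * (F.cross_entropy(model(x), y)
                      + F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()
```

Training against a single fixed attack only buys robustness to that attack; stronger schemes swap in iterative perturbations at a higher compute cost.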
Module 5: Workshop – Practical model protection
- Designing resilient ML architectures
- Implementing advanced defense techniques
- Security testing of ML models
- Developing security policies for ML teams
Module 6: Security tools and frameworks
- Overview of open-source tools for model protection
- Analysis of specialized ML cybersecurity libraries
- Automating security verification processes (see the sketch below)
- Integrating security tools with ML pipelines
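As an illustration of the last two bullets, the sketch below wires one open-source library, the Adversarial Robustness Toolbox (adversarial-robustness-toolbox on PyPI), into a pipeline as an automated robustness gate; the file names, input shape, and the 70% pass threshold are invented for the example.

```python
# Automated robustness check with the Adversarial Robustness Toolbox (ART);
# model.pt, the test arrays, and the 0.70 threshold are placeholders.
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = torch.load("model.pt")          # hypothetical trained classifier
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

x_test = np.load("x_test.npy")          # hypothetical held-out data
y_test = np.load("y_test.npy")          # class indices for x_test

# Attack the test set, then fail the pipeline run if accuracy under
# attack falls below the agreed threshold.
x_adv = FastGradientMethod(estimator=classifier, eps=0.1).generate(x=x_test)
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
assert adv_acc >= 0.70, f"adversarial accuracy too low: {adv_acc:.2%}"
```

Running such a check on every model build turns robustness from a one-off audit into a standing gate, in the same way unit tests guard functional regressions.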