Training: Retrieval Augmented Generation (RAG) with LangChain

Level: Intermediate

Duration: 24h / 3 days

Date: Individually arranged

Price: Individually arranged


The “Retrieval Augmented Generation (RAG) with LangChain” training is an intensive three-day, hands-on workshop that teaches participants to build practical RAG systems: applications that combine information retrieval with generative language models. Participants work through the complete process of creating such solutions, from data preparation and vector-database indexing to integration with LLMs using the LangChain library. With 80% of the time spent in workshops, attendees gain real-world skills in constructing RAG applications that significantly reduce hallucinations and improve the quality of AI responses.
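To make that workflow concrete, here is a minimal sketch of the kind of end-to-end pipeline built during the training. It assumes the langchain-community, langchain-openai, langchain-text-splitters, and faiss-cpu packages are installed and an OpenAI API key is set in the environment; the file name knowledge_base.txt, the model name, and the sample question are placeholders, not course material:

```python
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# 1. Data preparation: load a source document and split it into chunks
docs = TextLoader("knowledge_base.txt").load()  # placeholder file
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_documents(docs)

# 2. Indexing: embed the chunks and store them in a vector database
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 3. Generation: answer questions grounded in the retrieved context
prompt = ChatPromptTemplate.from_template(
    "Answer using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(d.page_content for d in docs)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")  # any chat model works here
    | StrOutputParser()
)

print(chain.invoke("What does our refund policy say?"))  # placeholder question
```

Grounding the model's answer in retrieved chunks, rather than letting it answer from its parameters alone, is what drives the reduction in hallucinations.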

What will you learn?

  • Prepare and index data for RAG systems in a complete workflow
  • Design and implement Retrieval-Augmented Generation pipelines with LangChain
  • Master advanced chunking, embedding, and retrieval techniques (see the sketch after this list)
  • Build efficient Q&A systems and chatbots powered by custom data
  • Optimize prompting and manage conversational context
  • Create scalable and secure RAG solutions ready for production deployment
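
As a small taste of the chunking material, the sketch below splits a document with LangChain's RecursiveCharacterTextSplitter; the file name handbook.txt and the parameter values are illustrative assumptions, not recommendations from the course:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Read a source document (placeholder file name)
text = open("handbook.txt", encoding="utf-8").read()

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # maximum characters per chunk
    chunk_overlap=50,  # overlap preserves context across chunk boundaries
)
chunks = splitter.split_text(text)
print(f"Produced {len(chunks)} chunks; first chunk:\n{chunks[0]}")
```

Chunk size and overlap are typically the first parameters to tune: chunks must be small enough to retrieve precisely, yet large enough to remain meaningful on their own.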
Who is this training for?
  • Programmers and AI engineers who want to build RAG applications
  • Data scientists and NLP specialists integrating LLMs with their own data sources
  • Analysts and developers interested in practical automation of knowledge access
  • System architects exploring modern approaches to combining retrieval and generation

Training Program

  Day 1: Fundamentals of ML Model Security

  Module 1: Introduction to ML ecosystem threats

  • Characteristics of modern AI model attacks
  • Consequences of successful breaches
  • Case studies of intrusions and manipulations in real-world projects

  Module 2: Types of attacks on ML models

  • Adversarial attacks: methods of generating adversarial samples
  • Attacks on training data privacy
  • Information leakage from trained models
  • Vulnerability analysis of different ML architectures
  • Attacks targeting ML infrastructure

  Module 3: Workshop – Threat identification

  • Simulating attacks on sample classification and regression models
  • Analyzing traces and penetration mechanisms of ML models

  Day 2: Advanced Protection Techniques

  Module 4: Methods for securing ML models

  • Adversarial training techniques
  • Federated learning for enhanced privacy
  • Implementing obfuscation and data privacy mechanisms
  • Strategies for risk reduction in ML workflows

  Module 5: Workshop – Practical model protection

  • Designing resilient ML architectures
  • Implementing advanced defense techniques
  • Security testing of ML models
  • Developing security policies for ML teams

  Module 6: Security tools and frameworks

  • Overview of open-source tools for model protection
  • Analysis of specialized ML cybersecurity libraries
  • Automating security verification processes
  • Integrating security tools with ML pipelines

Contact us

We will organize a training tailored to your needs.

Przemysław Wołosz

Key Account Manager

przemyslaw.wolosz@infoShareAcademy.com

    The controller of your personal data is InfoShare Academy Sp. z o.o. with its registered office in Gdańsk, al. Grunwaldzka 427B, 80-309 Gdańsk, KRS: 0000531749, NIP: 5842742121. Personal data are processed in accordance with the information clause.