AI for Manual Testers
Level: Intermediate
Duration: 8h / 1 day
Date: Individually arranged
Price: Individually arranged
Learn practical applications of local AI in automating manual testing and preparing test data. The training shows how to generate test cases from requirements, automatically build a test base in TestRail format, report defects with reproduction steps, and create reports for the Product Owner and the development team. Participants will learn to use local AI models (Ollama / LM Studio) and automation tools (n8n, Jira) in practice. The workshop shortens the time needed to prepare tests, makes defect reports more precise, and streamlines QA processes in the company. The training is aimed at manual testers and QA analysts who want to bring intelligent automation into their daily work.
Participant Requirements
- Basic knowledge of manual testing and QA processes
- Ability to work with business requirements documentation
- Willingness to learn how to use local AI models in practice
What You Will Learn
- Generate test cases from requirements and user stories
- Automatically build a test base in TestRail format
- Create defect reports with reproduction steps
- Generate test reports for the Product Owner and the development team
- Build a simple AI pipeline supporting the daily work of testers
Who Is This Training For
- Manual testers
- QA analysts
- Specialists responsible for preparing and reporting tests
- People who want to implement AI in daily testing processes and documentation automation
Training Program
Scope of topics
- Generating test cases and test data from user stories / requirements
- Automation of test base preparation (TestRail format)
- Faster defect reporting (bug description, reproduction steps)
- Generating a test report for the Product Owner and for the dev team
Theoretical knowledge
- What “local AI” is and why requirements and logs can be processed safely
- How to prepare a prompt so the model returns the expected format (see the sketch after this list)
- What a test agent is: the logic that selects which tests are critical to run after a new release
- Minimum quality-control principles for generated tests (coverage of acceptance criteria)
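To illustrate the prompt-format point above, here is a minimal sketch, assuming the generated test cases should come back as JSON. The field names (title, steps, expected, priority) and the validation helper are illustrative choices for this example, not a format prescribed by the course.

```python
import json

# A prompt template that pins the output to a machine-readable contract.
# The keys below (title, steps, expected, priority) are example choices
# for this sketch, not a format prescribed by the course.
PROMPT_TEMPLATE = """You are a QA assistant.
Generate test cases for the requirement below.
Respond ONLY with a JSON array. Each element must have exactly these keys:
  "title"    - short test case name
  "steps"    - numbered reproduction steps as a single string
  "expected" - expected result
  "priority" - one of "High", "Medium", "Low"

Requirement:
{requirement}
"""


def build_prompt(requirement: str) -> str:
    return PROMPT_TEMPLATE.format(requirement=requirement)


def parse_model_output(raw: str) -> list[dict]:
    """Fail fast if the model ignored the format contract."""
    cases = json.loads(raw)
    for case in cases:
        missing = {"title", "steps", "expected", "priority"} - case.keys()
        if missing:
            raise ValueError(f"Test case is missing keys: {missing}")
    return cases
```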
Practical tasks
- Running a local test case generator (illustrative sketches of these tasks follow this list):
  - input: a business requirement
  - output: a list of step-by-step tests, test data, and priorities
- Generating a defect report from logs / screenshots
- Automating test report generation:
  - the AI summarizes test status and risks twice: once for the Product Owner and once in technical form for the developers
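To make the first task concrete, here is a minimal sketch of a local test case generator, assuming a model served by Ollama on its default local endpoint (http://localhost:11434) and the Python requests library. The model name and the JSON field names are illustrative choices, not part of the course materials.

```python
import json

import requests  # pip install requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "llama3.1"  # any locally pulled model works; this name is just an example


def generate_test_cases(requirement: str) -> list[dict]:
    """Ask the local model for test cases and return them as Python dicts."""
    prompt = (
        "Generate test cases for the requirement below. "
        "Respond ONLY with a JSON array of objects with the keys "
        '"title", "steps", "expected", "priority".\n\n'
        f"Requirement:\n{requirement}"
    )
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "prompt": prompt,
            "stream": False,
            "format": "json",  # Ollama's JSON mode: constrains output to valid JSON
        },
        timeout=300,
    )
    response.raise_for_status()
    # Assumes the model honoured the "JSON array" instruction from the prompt.
    return json.loads(response.json()["response"])


if __name__ == "__main__":
    requirement = "A registered user can reset their password via an e-mail link."
    for case in generate_test_cases(requirement):
        print(f'[{case["priority"]}] {case["title"]}')
```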
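For the second task, a similar sketch turns a log excerpt and the observed behaviour into a Jira-style defect description with reproduction steps. The section headings requested in the prompt are one reasonable layout, not a fixed Jira schema.

```python
import requests  # pip install requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "llama3.1"  # example model name


def draft_bug_report(log_excerpt: str, observed_behaviour: str) -> str:
    """Turn a raw log excerpt into a Jira-style defect description."""
    prompt = (
        "You are a QA engineer writing a defect report.\n"
        "Based on the observed behaviour and the log excerpt below, write:\n"
        "Summary, Steps to Reproduce (numbered), Expected Result, "
        "Actual Result, Severity.\n\n"
        f"Observed behaviour:\n{observed_behaviour}\n\n"
        f"Log excerpt:\n{log_excerpt}"
    )
    response = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    log = "2024-05-12 10:32:01 ERROR PaymentService: NullPointerException at checkout"
    print(draft_bug_report(log, "Checkout shows a blank page after clicking Pay."))
```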
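And for the third task, a sketch that reads a simple results file and asks the model for two summaries, one for the Product Owner and one for the developers. The CSV columns (title, status, notes) are assumptions made for this example.

```python
import csv

import requests  # pip install requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "llama3.1"  # example model name

AUDIENCES = {
    "product_owner": (
        "Write a short, non-technical summary: overall status, "
        "main risks, and a go/no-go recommendation."
    ),
    "developers": (
        "Write a technical summary: failing areas, suspected "
        "components, and a suggested debugging order."
    ),
}


def ask_model(prompt: str) -> str:
    response = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]


def build_reports(results_csv: str) -> dict[str, str]:
    """Read test results and produce one summary per audience."""
    with open(results_csv, newline="", encoding="utf-8") as f:
        results = list(csv.DictReader(f))  # assumed columns: title, status, notes
    facts = "\n".join(
        f"{row['title']}: {row['status']} ({row.get('notes', '')})" for row in results
    )
    return {
        audience: ask_model(f"{instruction}\n\nTest results:\n{facts}")
        for audience, instruction in AUDIENCES.items()
    }
```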
Tools
- Ollama / LM Studio (local model)
- n8n (test case generation, export to CSV/JSON, reports)
- TestRail format (as target output; a CSV export sketch follows this list)
- Jira (as defect reporting format)
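Since TestRail is only the target format here, a plain CSV is usually enough. The sketch below writes generated cases into a file that can be mapped in TestRail's CSV import wizard; the column names are one workable layout, and TestRail lets you map columns to case fields during import, so adapt them to your project template.

```python
import csv


def export_for_testrail(test_cases: list[dict], path: str = "test_cases.csv") -> None:
    """Write generated cases to a CSV that TestRail's import wizard can map.

    The column names are one workable layout; TestRail lets you map CSV
    columns to case fields during import, so adjust them to your project.
    """
    fieldnames = ["Title", "Steps", "Expected Result", "Priority"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for case in test_cases:
            writer.writerow(
                {
                    "Title": case["title"],
                    "Steps": case["steps"],
                    "Expected Result": case["expected"],
                    "Priority": case["priority"],
                }
            )
```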
Outcomes
- A ready-to-use pipeline: “new requirement → test cases ready for import”
- Automatic generator of defect reports with reproduction steps
- An agent that helps decide what to test after each system change (a sketch follows this list)
- Post-test report templates (business version and technical version), ready to use after each test cycle
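As a closing illustration of the “what to test after a change” agent, here is a minimal, model-free sketch. It assumes each test case is tagged with the components it exercises and that the change set is a list of touched components; the tagging scheme and priority values are assumptions for this example, and in the workshop a local model can take over the ranking step.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    title: str
    components: set[str]  # components the test exercises (assumed tagging scheme)
    priority: str         # "High" / "Medium" / "Low"


PRIORITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}


def select_tests(test_base: list[TestCase], changed: set[str]) -> list[TestCase]:
    """Pick tests that touch any changed component, most critical first."""
    affected = [t for t in test_base if t.components & changed]
    return sorted(affected, key=lambda t: PRIORITY_ORDER.get(t.priority, 3))


if __name__ == "__main__":
    base = [
        TestCase("Password reset e-mail", {"auth", "mail"}, "High"),
        TestCase("Invoice PDF layout", {"billing"}, "Low"),
        TestCase("Login with expired session", {"auth"}, "Medium"),
    ]
    for test in select_tests(base, changed={"auth"}):
        print(test.priority, "-", test.title)
```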