1.2 Leveraging generative AI in software testing: core principles
Now that we understand the internal mechanics of LLMs, a practical question arises for the MagicFridge QA team: what can they actually do with this technology?
Generative AI is not just a chat tool: it can read and understand code, and generate technical documents. This versatility profoundly transforms the test process.
1.2.1 Key LLM capabilities for test tasks
LLMs act as versatile assistants capable of intervening at every stage of the test process, from initial analysis to final reporting.
1. Requirements analysis and improvement:
Even before any testing begins, AI can review the specifications. Thanks to its natural language understanding, it identifies ambiguities and omissions.
Red thread: MagicFridge
The Product Owner writes a User Story: "The user can scan her fridge." The AI analyzes this sentence and challenges the team: "This lacks precision: what happens in low light? If the fridge is empty? Must the user be logged in?" It helps clarify the acceptance criteria before development begins.
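In practice, this kind of review is usually driven by a simple prompt template around the story text. A minimal sketch in Python — the template wording and the `build_review_prompt` helper are illustrative, not taken from any specific tool:

```python
def build_review_prompt(user_story: str) -> str:
    """Wrap a user story in a prompt asking an LLM to flag ambiguities.

    The instruction text below is an example, not a canonical prompt.
    """
    return (
        "You are a QA analyst. Review the following user story and list any "
        "ambiguities, missing preconditions, or unstated error cases:\n\n"
        f"User story: {user_story}"
    )

prompt = build_review_prompt("The user can scan her fridge.")
print(prompt)
```

The same template can then be reused for every new story, which keeps the review step repeatable instead of ad hoc.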
2. Test case and oracle generation:
This is the most common use. From a requirement, the AI drafts test scenarios, complete with steps and expected results (oracles).
Red thread: MagicFridge
The tester asks the AI: "Propose test cases for the 'Suggest a recipe' feature based on fridge content."
The AI generates a complete table with the scenario and the expected result (the oracle):
- Nominal case:
- Input: "Chicken, Cream, Mushrooms".
- Oracle (expected result): the app proposes a "Creamy Chicken" recipe.
- Boundary case:
- Input: a list of 150 ingredients.
- Oracle: the app does not crash, it selects the 10 best ingredients and ignores the rest.
- Negative case (safety):
- Input: "Bleach, Batteries".
- Oracle: the app displays a "Non-edible ingredients" safety alert and generates no recipe.
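The three cases above can be turned directly into automated checks. The sketch below encodes them as (input, oracle) pairs and runs them against a stub `suggest_recipe` — the stub's rules (the non-edible list, the 10-ingredient cap, the recipe names) are invented here purely to make the oracles executable:

```python
NON_EDIBLE = {"Bleach", "Batteries"}   # illustrative safety list
MAX_INGREDIENTS = 10                   # boundary rule from the test table

def suggest_recipe(ingredients):
    """Stand-in for the real MagicFridge engine (rules invented for this sketch)."""
    if NON_EDIBLE & set(ingredients):
        return {"alert": "Non-edible ingredients", "recipe": None}
    used = ingredients[:MAX_INGREDIENTS]   # keep the 10 best, ignore the rest
    recipe = "Creamy Chicken" if "Chicken" in used else "Surprise Stew"
    return {"alert": None, "recipe": recipe}

# Test cases as (input, oracle) pairs, straight from the table above
cases = [
    (["Chicken", "Cream", "Mushrooms"], lambda r: r["recipe"] == "Creamy Chicken"),
    ([f"Ingredient{i}" for i in range(150)], lambda r: r["alert"] is None),  # no crash
    (["Bleach", "Batteries"], lambda r: r["alert"] == "Non-edible ingredients"
                                        and r["recipe"] is None),
]
for ingredients, oracle in cases:
    assert oracle(suggest_recipe(ingredients))
print("all oracles passed")
```

Keeping the cases as data makes it trivial to paste additional AI-generated scenarios into the list without touching the test loop.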
3. Test data generation:
AI excels at inventing realistic but fictional data, respecting the requested format (JSON, CSV, SQL).
Red thread: MagicFridge
The tester needs to test receipt importing. She asks the AI: "Generate a CSV file simulating 20 French supermarket receipts, with varied products and dates from 2024." The AI creates the dataset instantly.
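An equivalent dataset can also be generated locally when no AI is at hand. The sketch below fabricates 20 receipt rows with Python's `csv` module — the store names, product names, and column layout are invented for illustration:

```python
import csv
import io
import random

random.seed(42)  # reproducible fake data
STORES = ["Carrefour", "Leclerc", "Intermarché", "Monoprix"]
PRODUCTS = ["Yaourt nature", "Poulet fermier", "Crème fraîche", "Champignons", "Baguette"]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["date", "store", "product", "price_eur"])
for _ in range(20):
    writer.writerow([
        f"2024-{random.randint(1, 12):02d}-{random.randint(1, 28):02d}",  # 2024 dates
        random.choice(STORES),
        random.choice(PRODUCTS),
        round(random.uniform(0.5, 15.0), 2),
    ])

rows = buffer.getvalue().splitlines()
print(len(rows) - 1)  # number of data rows after the header
```

The advantage of the LLM route over a script like this is variety: the model can invent plausible edge-case products (discounts, deposits, misspellings) that a fixed list never will.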
4. Test automation support (script generation):
The model can translate a testing intent (in natural language) into executable code (Python, Selenium, Playwright).
Red thread: MagicFridge
The QA engineer asks: "Write a Selenium script to log into the app and click on 'Add a yogurt'." The AI provides the ready-to-use code, drastically reducing the time spent writing repetitive scripts.
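A generated script typically looks like the sketch below. The element locators (`username`, `login-button`, `add-yogurt`) and the app URL are placeholders — a real script would use the app's actual identifiers, and Selenium plus a matching browser driver must be installed before it can run:

```python
def login_and_add_yogurt(base_url: str, user: str, password: str) -> None:
    """Log into MagicFridge and click 'Add a yogurt' (locators are placeholders)."""
    # Imports live inside the function so the sketch can be loaded
    # and reviewed even on a machine without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(base_url)
        driver.find_element(By.ID, "username").send_keys(user)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()
        driver.find_element(By.ID, "add-yogurt").click()
    finally:
        driver.quit()  # always release the browser, even on failure

# Example call (requires a running app and a browser driver):
# login_and_add_yogurt("https://magicfridge.example", "qa_user", "secret")
```

As the syllabus cautions elsewhere, such generated code must still be reviewed: the AI cannot know your real locators, so the tester's job shifts from writing boilerplate to verifying it.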
5. Test result analysis and reporting:
Faced with technical error logs or voluminous execution reports, AI can summarize the situation in clear language.
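Raw logs are often far too long to paste wholesale, so a common pattern is to compact them before asking the model for a summary. A minimal pre-processing sketch (the log lines are fabricated):

```python
from collections import Counter

log_lines = [
    "2024-03-01 10:01:02 ERROR RecipeService: NullPointerException in suggest()",
    "2024-03-01 10:01:05 ERROR RecipeService: NullPointerException in suggest()",
    "2024-03-01 10:02:11 WARN  ScanService: barcode unreadable",
    "2024-03-01 10:03:40 ERROR DbPool: connection timeout",
]

# Count each distinct error message so the LLM sees frequencies,
# not thousands of near-identical repeats.
errors = Counter(
    line.split(None, 3)[3]            # drop date, time, and level
    for line in log_lines
    if " ERROR " in line
)
summary = "\n".join(f"{count}x {msg}" for msg, count in errors.most_common())
print(summary)
```

The compacted summary is what gets sent to the model, which keeps the prompt short and avoids leaking more log data than necessary.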
1.2.2 AI chatbots and integrated tools: two interaction models
There are two distinct ways to use these AI capabilities in your day-to-day work as a tester. The syllabus asks us to differentiate them clearly.
1. AI chatbots (the conversational assistant)
This is the ChatGPT, Gemini, or Claude style of usage: you interact through a natural-language chat window.
* Pros: very flexible, immediate response, ideal for brainstorming or learning.
* Cons: requires copy-pasting context every time, and carries data-privacy risks (e.g. pasting sensitive code into a public chat).
Red thread: MagicFridge
A junior tester uses a chatbot to understand an SQL error message he doesn't recognize. He pastes the error, and the chatbot explains the cause and how to reproduce it.
2. LLM-powered testing tools (the integration)
Here, the AI is "hidden" inside your usual tools (Jira, Xray, IDE). It is connected directly to your project via APIs.
* Pros: automatic context (the tool already knows your user stories), large-scale automation, better data security (when hosted on a private instance).
* Cons: less conversational flexibility, and usually a paid product.
Red thread: MagicFridge
The test team uses an "AI-Assisted" Jira plugin. When the tester creates a new User Story, she simply clicks a "Generate Tests" button. The tool reads the story and automatically creates 5 linked Xray test cases, without the tester having to write a single prompt.
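Under the hood, such a plugin typically posts the story text to an LLM service and writes the returned cases back through the test tool's API. The sketch below only builds the request payload — the endpoint, field names, and plugin behaviour are entirely hypothetical, shown just to make the "integration via API" idea concrete:

```python
import json

def build_generate_tests_payload(story_key: str, story_text: str,
                                 max_cases: int = 5) -> str:
    """Assemble a JSON payload for a hypothetical 'generate tests' endpoint.

    All field names below are invented for this sketch.
    """
    payload = {
        "issueKey": story_key,      # the Jira story the tests will be linked to
        "storyText": story_text,    # context the plugin sends to the LLM
        "maxTestCases": max_cases,
        "linkToStory": True,        # ask the tool to link the cases in Xray
    }
    return json.dumps(payload)

body = build_generate_tests_payload("MF-101", "The user can scan her fridge.")
print(body)
```

The key difference from the chatbot workflow is visible here: the context (story key and text) is supplied by the tool itself, so the tester never writes a prompt.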
Syllabus Point (Key takeaways)
- LLMs support the whole test process: analysis, design (cases & data), implementation (scripts), execution, and reporting.
- AI chatbots: conversational interface, useful for ad-hoc help and exploration.
- LLM-powered tools: integration via API to automate repetitive tasks with more context and security.