Introduction to Chapter 3
Welcome to the turbulence zone: risk management.
So far, we have seen that GUS (the MagicFridge AI) is a formidable assistant for accelerating testing. But like any powerful tool, it comes with dangers: an AI can lie with confidence, leak industrial secrets, or consume as much energy as a small town.
In this chapter, we will learn how to identify, measure, and mitigate these risks to ensure impeccable software quality.
🗺️ Chapter roadmap
We will explore the dark corners of generative AI along three major axes:
- Quality failures (section 3.1): how to detect hallucinations, reasoning errors, and cognitive biases in GUS.
- Security risks (section 3.2): how to prevent personal data leaks and prompt injection attacks.
- Societal and environmental impact (section 3.3): understanding LLM energy consumption and ensuring compliance with new laws (such as the AI Act).
As a quality professional, your role is evolving: you are no longer just hunting for functional bugs; you are becoming the guardian of AI ethics and reliability.