🧪 Practice lab: master the fundamentals
Welcome to the MagicFridge test kitchen. Theory is good. Practice is better. Here are 4 interactive exercises to verify that you have mastered all Chapter 1 concepts before moving on.
Exercise 1: AI sorting hat 🧙
(Objective: distinguish AI types - LO 1.1.1)
Task: the MagicFridge application contains several features. For each one, determine which type of technology is at work.
Your turn
1. "IF today > expiration date, send an alert."β
β
β
β
β
β
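To make the distinction concrete, here is a minimal sketch of that rule in Python; the function and field names are hypothetical, not taken from the MagicFridge codebase.

```python
from datetime import date

# Hypothetical helper; illustrates why this is conventional programming:
# no training data, no model, no probability, just a deterministic rule.
def expiry_alert(item: str, expiration_date: date, today: date) -> str | None:
    if today > expiration_date:
        return f"Alert: {item} expired on {expiration_date.isoformat()}."
    return None

# The milk expired yesterday, so the alert always fires.
print(expiry_alert("milk", date(2024, 5, 1), today=date(2024, 5, 2)))
```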
Exercise 2: the token scale ⚖️
(Objective: understand tokenization - LO 1.1.2)
Context: you are testing the chatbot's input limits. You type: "I want to cook." The model uses a standard tokenizer.
How many tokens does this sentence roughly consume?
Answer: 5 to 6 tokens.
Breakdown analysis: the sentence has only 4 words, but tokenizers count punctuation separately and may split words into sub-word pieces, so the token count is usually higher than the word count.
- Probable breakdown:
[I][ want][ to][ cook][.]
Tester's lesson: if your context window is 4,000 tokens, do not think "4,000 words". The actual number of words that fit is lower and depends on the language; as a rough rule of thumb for English, 100 tokens correspond to about 75 words.
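If you want to check such counts empirically, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer; tokenizers differ between models, so another "standard tokenizer" may split the sentence slightly differently.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
tokens = enc.encode("I want to cook.")

print(len(tokens))                        # 5 with this tokenizer
print([enc.decode([t]) for t in tokens])  # ['I', ' want', ' to', ' cook', '.']
```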
Exercise 3: the right chef for the job 👨‍🍳
(Objective: select the right LLM model - LO 1.1.3)
Situation: the dev team wants to implement three new features. Which LLM type (Base, Instruction-tuned, or Reasoning) do you recommend for each to get the best result?
A. A customer service chatbot that answers complaints politely.
Tester's choice: Instruction-tuned LLM.
Why? It is specifically trained to follow guidelines, maintain a coherent dialogue, and adopt a specific tone (politeness).
B. An autocomplete feature for when the user types their shopping list.
Tester's choice: Base model (Foundation).
Why? Its primary function is to predict the most probable next word, which makes it fast and effective at completing short, simple phrases.
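To see that next-word behavior in action, here is a minimal sketch using the Hugging Face transformers library with GPT-2, a small base model chosen purely for illustration:

```python
from transformers import pipeline  # pip install transformers torch

# GPT-2 is a pure base model: it continues text, it does not follow instructions.
generator = pipeline("text-generation", model="gpt2")

result = generator("Shopping list: milk, eggs, butter,", max_new_tokens=8)
print(result[0]["generated_text"])  # the list continued with probable next words
```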
C. A "Catering Budget" module that optimizes costs for 50 people with 12 constraints.
Tester's choice: Reasoning model.
Why? This problem requires logic and calculation. The model must use a "Chain of Thought" to work through the constraints step by step without hallucinating numbers.
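As an illustration, such a feature is typically exercised with a prompt that forces the intermediate steps into the open, while the tester recomputes the arithmetic independently rather than trusting the generated numbers; everything below (prompt wording, costs) is invented for the example.

```python
# Hypothetical chain-of-thought style prompt for the "Catering Budget" module.
prompt = """Plan catering for 50 people within a 400 EUR budget.
Think step by step:
1. Compute the budget per person.
2. Group the guests by dietary constraint.
3. Propose a menu per group and sum the costs.
4. Check the total against the budget before answering.
"""

# Tester safeguard: recompute the model's arithmetic deterministically.
def within_budget(cost_per_person: float, guests: int = 50, budget: float = 400.0) -> bool:
    return cost_per_person * guests <= budget

assert within_budget(7.50)        # 375 EUR total: within budget
assert not within_budget(8.10)    # 405 EUR total: over budget
```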
Exercise 4: multimodal inspection 👁️
(Objective: test multimodal capabilities and identify risks - LO 1.1.4)
The test: you take a picture of a cucumber in the fridge. You ask the AI: "Give me a recipe with this vegetable."
The AI response: "Here is a recipe for zucchini gratin..."
Think like a QA analyst 🧐
What is the problem here?
This is a visual hallucination (or a classification error).
The "Vision-Language" model misinterpreted the image pixels (cucumber/zucchini confusion) and generated text consistent with its own mistake.
Tester action: you must add test cases with visually similar produce (apple/tomato, lemon/lime, cucumber/zucchini) to verify the vision model's robustness; one way to structure them is sketched below.
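A minimal pytest sketch for those cases, where classify_ingredient is a hypothetical wrapper around the vision model, not a real MagicFridge API:

```python
import pytest

# Hypothetical wrapper around the Vision-Language model.
def classify_ingredient(image_path: str) -> str:
    raise NotImplementedError("call the vision model here")

# Pairs of visually similar produce from the tester action above.
@pytest.mark.parametrize("image_path, expected_label", [
    ("fixtures/cucumber.jpg", "cucumber"),  # vs. zucchini
    ("fixtures/zucchini.jpg", "zucchini"),
    ("fixtures/apple.jpg", "apple"),        # vs. tomato
    ("fixtures/tomato.jpg", "tomato"),
    ("fixtures/lemon.jpg", "lemon"),        # vs. lime
    ("fixtures/lime.jpg", "lime"),
])
def test_visually_similar_produce(image_path, expected_label):
    assert classify_ingredient(image_path) == expected_label
```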