
πŸ§ͺ Practice lab: master the fundamentals

Welcome to the MagicFridge test kitchen. Theory is good. Practice is better. Here are four interactive exercises to check that you have mastered the Chapter 1 concepts before moving on.


Exercise 1: AI sorting hat 🧠

(Objective: distinguish AI types - LO 1.1.1)

Task: the MagicFridge application contains several features. For each one, determine which AI technology is at work.

Your turn

1. "IF today > expiration date, send an alert."
  • Classic programming (rules, no AI)
  • Machine Learning (pattern recognition)
  • Generative AI (content creation)
2. "Scanning a crumpled receipt and recognizing the text."
  • Classic programming (rules, no AI)
  • Machine Learning (pattern recognition)
  • Generative AI (content creation)
3. "Inventing a recipe for Chocolate Lasagna that exists nowhere else."
  • Classic programming (rules, no AI)
  • Machine Learning (pattern recognition)
  • Generative AI (content creation)

Exercise 2: the token scale βš–οΈ

(Objective: understand tokenization - LO 1.1.2)

Context: you are testing the chatbot's input limits. You type: "I want to cook." The model uses a standard tokenizer.

How many tokens does this sentence roughly consume?

Answer: 5 to 6 tokens.

Breakdown analysis: the sentence has 4 words, but tokenizers often split words into subwords and count punctuation as separate tokens.

  • Probable breakdown: [I] [ want] [ to] [ cook] [.]

Tester's lesson: if your context window is 4,000 tokens, do not read that as 4,000 words. The actual word capacity is lower and depends on the language used.
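To see why the token count exceeds the word count, here is a toy tokenizer that peels punctuation off into its own token. This is only an illustration: real LLM tokenizers (BPE, WordPiece) are subword-based and also split *inside* words, so their counts differ.

```python
import re

def toy_tokenize(text: str) -> list[str]:
    """Split on word boundaries; punctuation becomes a separate token.
    Real LLM tokenizers are subword-based and behave differently."""
    return re.findall(r"\w+|[^\w\s]", text)

sentence = "I want to cook."
print(len(sentence.split()))   # 4 words
print(toy_tokenize(sentence))  # ['I', 'want', 'to', 'cook', '.'] -> 5 tokens
```

Even this crude split already shows a 4-word sentence costing 5 tokens; a real subword tokenizer can cost more, especially in languages other than English.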


Exercise 3: the right chef for the job πŸ‘¨β€πŸ³

(Objective: select the right LLM type - LO 1.1.3)

Situation: the dev team wants to implement three new features. Which LLM type (Base, Instruction-tuned, or Reasoning) do you recommend for each?

A. A customer service chatbot that answers complaints politely.


Tester's choice: Instruction-tuned LLM.

Why? It is specifically trained to follow guidelines, maintain a coherent dialogue, and adopt a specific tone (politeness).

B. An autocomplete feature when the user types their shopping list.


Tester's choice: Base model (Foundation).

Why? Its primary function is to predict the next most probable word. It is fast and well suited to completing short, simple phrases.

C. A "Catering Budget" module that optimizes costs for 50 people with 12 constraints.


Tester's choice: Reasoning model.

Why? This problem requires logic and calculation. The model must use a "Chain of Thought" to solve this complex logical problem step-by-step without hallucinating numbers.
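In a test plan, the three recommendations above can be captured as a simple routing table, so each feature's expected model type is explicit and reviewable. Everything here (feature names, model labels, the default) is hypothetical scaffolding, not a real MagicFridge component:

```python
# Hypothetical mapping of features to the recommended LLM type.
MODEL_FOR_FEATURE = {
    "polite_support_chatbot": "instruction-tuned",  # follows guidelines and tone
    "shopping_list_autocomplete": "base",           # next-word prediction suffices
    "catering_budget_optimizer": "reasoning",       # multi-step constrained logic
}

def recommend_model(feature: str) -> str:
    """Look up the LLM type a tester would recommend for a feature."""
    # Instruction-tuned is a reasonable default for unlisted features.
    return MODEL_FOR_FEATURE.get(feature, "instruction-tuned")

print(recommend_model("catering_budget_optimizer"))  # reasoning
```

Making the expected model type explicit like this lets a tester flag mismatches early, e.g. a reasoning-heavy feature quietly wired to a base model.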


Exercise 4: multimodal inspection πŸ‘οΈ

(Objective: test multimodal capabilities and identify risks - LO 1.1.4)

The test: you take a picture of a cucumber in the fridge. You ask the AI: "Give me a recipe with this vegetable."

The AI response: "Here is a recipe for zucchini gratin..."

Think like a QA analyst 🧐

What is the problem here?

This is a visual hallucination (or a classification error).

The "Vision-Language" model misinterpreted the image pixels (cucumber/zucchini confusion) and generated text consistent with its own mistake.

Tester action: you must add test cases with visually similar vegetables (Apple/Tomato, Lemon/Lime) to verify the visual model's robustness.
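The "visually similar vegetables" idea can be turned into a data-driven regression check. The `classify_image` stub below is a placeholder for the real vision model call; the pair list is the actual test data a QA analyst would maintain and extend:

```python
# Hypothetical sketch: confusable-pair checks for a vision classifier.
CONFUSABLE_PAIRS = [
    ("cucumber", "zucchini"),
    ("apple", "tomato"),
    ("lemon", "lime"),
]

def classify_image(image_label: str) -> str:
    """Stand-in for the real vision model; replace with the actual API call."""
    return image_label  # a perfect classifier, for demonstration only

def run_confusion_checks() -> list[str]:
    """Return every label the classifier got wrong, testing both directions."""
    failures = []
    for a, b in CONFUSABLE_PAIRS:
        for expected in (a, b):
            if classify_image(expected) != expected:
                failures.append(expected)
    return failures

print(run_confusion_checks())  # [] -> no confusions with the stub
```

Testing both directions of each pair matters: a model that calls every cucumber a zucchini might still label zucchinis correctly, and only the first direction would catch the bug.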




Did these exercises help?

This is the end of Chapter 1! If you enjoyed these exercises, simply buy me a coffee β˜• to express your gratitude 😊.