Gen AI’s Path to Deeper Learning: A Socratic Approach to Staying Engaged

“Worst possible thing you can do when learning something new is, that you go on the internet, find the perfect way to do it, and you just implement that. The reason being is that you missed out on all the learning.” — Primeagen
A few weeks back I ran into the paper “What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature” (MDPI, 2023). The authors put into words something I had begun to feel in myself: I was leaning too heavily on generative AI.
- Procrastination: Higher than ever before. Why wrestle with hard problems when Gemini, ChatGPT, or Claude can spit out code?
- Memory: Suffers. My mind never stores what it never generates.
- Motivation: Fades. The joy I used to get from figuring things out was just… gone.
I’m a sucker for that dopamine hit of solving problems, so I decided to build a system that keeps the struggle while still harnessing AI’s powerful reach: Socratic AI.
Why a Socratic Approach?
Instead of a direct answer, a Socratic tutor nudges with questions, forcing you to articulate, predict, and correct. It’s slower, much slower, and that’s the point. Reflection rewires neurons; copy‑pasting doesn’t.
Architecture in One Line
Quiz → pinpoint misconceptions → Socratic Q&A → self‑evaluation
Built entirely with the Gemini API.
- Pick a topic.
- Decide if you want the diagnostic quiz.
- Type until your brain sweats.
1 · The System Prompt (excerpt)
socratic_prompt = """
Role: You are a thoughtful and patient tutor whose goal is to help the student master concepts through independent problem‑solving.
Guidelines:
1. Never immediately provide the answer.
2. Ask what the student has already tried and where they feel stuck.
3. Offer incremental hints, not full solutions.
4. Gently analyse errors and explain misconceptions.
5. Use metacognitive prompts ("What strategy did you use?", "How confident are you?").
… (more detailed version is found in the kaggle notebook)
"""
2 · Generating the Diagnostic Quiz
import json

from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

QUIZ_SCHEMA = {
    "type": "object",
    "properties": {
        "quiz": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "question": {"type": "string"},
                    "options": {"type": "array", "items": {"type": "string"}},
                    "answer": {"type": "string"}
                },
                "required": ["question", "options", "answer"]
            }
        }
    },
    "required": ["quiz"]
}

def generate_quiz(topic: str):
    """Return a 7-question multiple-choice quiz as a list of dicts."""
    prompt = f"Generate a 7-question multiple-choice quiz on {topic}."
    resp = client.models.generate_content(
        model="gemini-2.0-flash",
        config=types.GenerateContentConfig(
            response_mime_type="application/json",  # force structured JSON output
            response_schema=QUIZ_SCHEMA,
        ),
        contents=prompt,
    )
    data = json.loads(resp.text)
    return data["quiz"]
3 · Running a Learning Session
def start_learning_session(topic: str, quiz_first: bool = True):
    if quiz_first:
        quiz = generate_quiz(topic)
        answers = administer_quiz(quiz)  # CLI helper
        score, review = evaluate_quiz(quiz, answers)
        print(f"\nQuiz score: {score}/{len(quiz)}\n")
        print("\n".join(review))
    while True:
        user = input("You ▶ ")
        if user.lower() in {"quit", "exit"}:
            break
        ai = ask_agent(user)  # wraps Gemini call with socratic_prompt + history
        print("AI ◀", ai.text, "\n")
Socratic vs Unguided: Quick Showdown
Prompt: “How does a McCulloch‑Pitts (MP) neuron work?”
| | Socratic AI | Unguided AI |
|---|---|---|
| Learner Role | Active: must predict, justify, revise | Passive: reads a summary |
| Engagement | High: back-and-forth questions | Low: one-shot info dump |
| Critical-Thinking Load | High | Minimal |
| Joy Factor | Dopamine earned | Dopamine deferred |
AI: What is the most basic *purpose* of any neuron?
You: To turn inputs into an output.
AI: Why is that useful? What problem does that solve?
You: It lets a system make a decision.
AI: Excellent! If a decision is the output, what form might that output take?
… (7 rounds later) …
AI: Inputs 1 1, weights 2 ‑1, threshold 1 — what’s the output?
You: Weighted sum 1 ≥ 1 ⇒ output 1.
AI: Perfect. Now, which logic gates can an MP neuron implement? Let’s start with AND…
And here’s what an unguided LLM reply looks like (shortened for brevity).
...
• Binary model: weighted sum ≥ threshold ⇒ neuron fires.
• Implements basic logic (AND, OR, NOT).
• No learning mechanism; thresholds fixed.
• Introduced 1943 by McCulloch & Pitts.
• Limitations: binary I/O, no weights update, can’t solve XOR.
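To make the arithmetic from the dialogue concrete, the whole MP neuron fits in a few lines (illustrative code; the helper name is mine):
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (output 1) if the weighted sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The worked example from the dialogue: 2*1 + (-1)*1 = 1 >= 1, so the neuron fires.
print(mp_neuron([1, 1], [2, -1], threshold=1))  # -> 1

# AND and OR with unit weights, differing only in the threshold.
print(mp_neuron([1, 1], [1, 1], threshold=2))   # AND(1, 1) -> 1
print(mp_neuron([1, 0], [1, 1], threshold=1))   # OR(1, 0)  -> 1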
Does It Work?
A small group (n = 3) reported that they solved more practice problems on their own after a Socratic session than after receiving a straight answer from an unguided LLM. Follow-up checks a few hours later also showed better recall.
(Preliminary data, too early to gauge long term retention, but promising.)
Open Questions
- Which domains (math, programming, languages, etc.) benefit most from a Socratic AI, and why?
- Which approach helps you learn better? A Socratic dialogue or a direct answer from a regular AI?
Let me know what you think.
Happy learning and happy struggling!
Huge thanks to Google & Kaggle for the Gen‑AI Intensive. Source on GitHub soon. In the meantime, if you have questions, please reach out directly or leave a comment.