#002 · March 19, 2026

Using AI to Write Case Studies from Session Logs

  • Total time: ~45 min
  • Acceleration: 5–10×
  • Data processed: 12 MB
  • Records analyzed: 3,052
  • Human prompts: 8
  • Output: 2 case studies
From session log to published case study
12 MB · 3,052 records · 45 minutes
01 · Source — Claude Code session log
  • 12 MB JSONL file, auto-generated
  • 3,052 records: prompts, responses, tool calls

02 · AI Extraction [AI]
  • Subagent analyzes session data
  • Timeline, prompts, technologies extracted
  • Metrics computed: effort ratios, categories
  • Structured JSON + narrative summary output

03 · AI Drafting [AI]
  • Case study follows Kodulabor framework
  • 9-dimension assessment generated
  • Data-driven sections auto-populated
  • Framework compliance enforced by template

04 · Human Review [HUMAN] — context, correction, taste
  • Corrected "3 hours" to actual 5–6 h effort
  • Added context only the participant knows
  • Shaped tone from "report" to "case study"

05 · Outcome — 2 published case studies
  • Revalia Homes (#001) + this meta-study (#002)
  • Both follow identical framework structure

Impact:
  • Time: 45 min vs 4–8 hours
  • Cost: negligible vs €400–1,200
  • Effort: 20 min AI + 25 min human

Problem

Kodulabor's core proposition is to integrate AI, then measure and publish the impact. But writing detailed case studies is time-consuming. Each project assessment requires reviewing what happened, extracting metrics, benchmarking against traditional approaches, and synthesizing findings into a structured narrative. If every case study takes hours to write manually, the publishing cadence slows and the lab's public output suffers.

The question: can the case study production process itself be AI-assisted? And can we use the same data source — Claude Code session logs — that powered the original project?

This is a meta-case-study: using AI to analyze AI-assisted work and produce a structured assessment of it.


AI Approach

The approach exploited a useful property of AI-assisted development: it generates its own audit trail. Claude Code stores complete session histories as JSONL files — every user prompt, every AI response, every tool call, every file operation. This is machine-readable project documentation that exists as a byproduct of the work itself.
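Consuming such a log takes only a few lines. The sketch below tallies records by type, the first step any extraction agent would perform. The field name `type` and its values are assumptions for illustration; the actual Claude Code schema may differ.

```python
import json

def tally_record_types(jsonl_lines):
    """Count how many records of each type a session log contains."""
    counts = {}
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the log
        record = json.loads(line)
        kind = record.get("type", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts

# Synthetic records mimicking a session log (real schema may differ)
sample = [
    '{"type": "user_prompt", "text": "Build the landing page"}',
    '{"type": "assistant_response", "text": "Done."}',
    '{"type": "tool_call", "tool": "edit_file"}',
    '{"type": "tool_call", "tool": "bash"}',
]
print(tally_record_types(sample))
# → {'user_prompt': 1, 'assistant_response': 1, 'tool_call': 2}
```

The same loop, pointed at a real 3,052-record file, is how a breakdown like "prompts, responses, tool calls" falls out of the raw log.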

Pipeline:

  1. Data source: Claude Code session log (12 MB JSONL file, 3,052 records) from the Revalia Homes project (Case Study #001)
  2. Analysis agent: A Claude subagent was tasked with reading the session log and extracting structured data — timeline, prompts, technologies, challenges, effort metrics
  3. Structured output: The agent produced three deliverables: a full narrative analysis, a quick-reference summary, and a structured JSON data file
  4. Case study authoring: A second Claude session (this Cowork conversation) used the extracted data to write the formal case study following the Kodulabor Assessment Framework
  5. Human review: Author reviewed, corrected the "3 hours" claim against actual session data, and added context that only the human participant would know (e.g., the face-to-face session structure, the DNS waiting time)

Tools used:

  • Claude Code (source data)
  • Claude Cowork with subagent delegation (analysis and writing)
  • Kodulabor Assessment Framework (structure)

Human Effort

Total time for the analysis and case study writing: ~45 minutes

Breakdown:

Activity | Time | Human/AI
Deciding to do this | 0 min | Human (pre-existing idea)
Locating session files | 2 min | AI navigated the file system
Analyzing 12 MB session log | ~4 min | AI subagent (autonomous)
Reviewing extracted data | 5 min | Human
Correcting assumptions (the "3 hours" framing) | 3 min | Human
Writing the case study document | ~8 min | AI (following framework)
Writing this meta-case-study | ~8 min | AI (following framework)
Human review and context additions | ~15 min | Human
Total | ~45 min | ~25 min human, ~20 min AI processing

Prompt count: 8 human messages in the conversation to go from "I have a session history" to two completed case studies.


Traditional Benchmark

Writing a detailed technical case study manually — reviewing project artifacts, interviewing the developer, structuring the narrative, benchmarking costs — typically takes:

Approach | Time | Cost
Technical writer (external) | 8–16 hours | €400–1,200
Developer writes it themselves | 4–8 hours | Opportunity cost
Marketing agency case study | 10–20 hours | €800–2,000

These estimates assume the writer has access to the developer for interviews and the project artifacts. The write-up itself — not the research — is the bulk of the work.


Acceleration Factor

Metric | Traditional (self-authored) | AI-assisted | Factor
Time to finished case study | 4–8 hours | ~45 minutes | 5–10×
Research/data extraction | 2–3 hours | ~6 minutes (automated) | 20–30×
Writing | 2–5 hours | ~16 minutes (AI) + 15 min (human review) | 4–10×

The largest acceleration is in data extraction. The AI subagent processed 3,052 session records and produced structured analysis in ~4 minutes. A human doing the same — reading through logs, tallying prompts, categorizing work — would spend hours.

The writing acceleration is more modest because the human review step is essential. AI can draft the structure and fill in data-driven sections quickly, but the nuance — "actually, the 3 hours was face-to-face time with my friend" — requires the person who was there.


Quality Assessment

What the automated pipeline produced well:

  • Accurate extraction of all 35 user prompts and their categorization
  • Correct identification of the full technology stack
  • Precise timeline reconstruction with timestamps
  • Comprehensive listing of features, integrations, and challenges
  • Useful metrics (prompt-to-action ratio, effort distribution)
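A metric like the prompt-to-action ratio can be computed directly from the parsed records. This is a minimal sketch under the same assumed schema as above (the `type` values are illustrative, not the documented Claude Code format):

```python
import json

def prompt_to_action_ratio(records):
    """AI actions (responses + tool calls) per human prompt."""
    prompts = sum(1 for r in records if r.get("type") == "user_prompt")
    actions = sum(
        1 for r in records
        if r.get("type") in ("assistant_response", "tool_call")
    )
    # Avoid division by zero on an empty or prompt-free log
    return actions / prompts if prompts else 0.0

# One human prompt followed by three AI actions → ratio of 3.0
records = [json.loads(line) for line in (
    '{"type": "user_prompt"}',
    '{"type": "tool_call"}',
    '{"type": "tool_call"}',
    '{"type": "assistant_response"}',
)]
print(prompt_to_action_ratio(records))  # → 3.0
```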

What required human correction:

  • The initial "3 hours" framing was the user's rough estimate. The session data showed 11 hours wall clock. Neither number alone tells the true story — human context was needed to explain the actual work pattern (face-to-face session + gaps + remote follow-up)
  • Quality assessment required subjective judgment about what "good enough for a small business" means
  • The traditional benchmark costs are estimates based on the author's industry experience, not data the AI could extract from logs
  • Tone and narrative voice needed human shaping to avoid reading like a generated report

Quality verdict: The AI-assisted pipeline produces a solid first draft with accurate data. The human contribution — roughly 25 minutes of review and context — elevates it from "accurate report" to "case study worth reading." The 80/20 rule applies: AI gets you 80% of the way there in 20% of the time.


Gotchas & Limitations

1. Session logs don't capture everything. The JSONL file records prompts and AI actions but not what the human was thinking between prompts, what they saw in the browser that prompted the next request, or what happened outside the Claude Code session (DNS configuration, conversations with the client). The human fills these gaps.

2. The meta-case-study requires the original author. An AI analyzing someone else's session logs could extract the technical data but would miss the context entirely. This pipeline works best when the person who did the project reviews the output. It's an authoring accelerator, not an authoring replacement.

3. JSONL file size could be a constraint. The 12 MB file was processable but required chunked reading. Larger projects with longer sessions could exceed context windows. A preprocessing step (extracting only user prompts and key events) would make this more robust.
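Such a preprocessing step could be as small as the sketch below: filter the log down to human prompts and key events before handing it to the analysis agent. The record type names in `KEEP` are assumptions; they would need to match the actual schema.

```python
import json

# Assumed record type names; adjust to the actual session-log schema
KEEP = {"user_prompt", "tool_call"}

def shrink_log(jsonl_lines):
    """Keep only human prompts and key events, dropping bulky responses."""
    kept = []
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        if json.loads(line).get("type") in KEEP:
            kept.append(line)
    return kept

full_log = [
    '{"type": "user_prompt", "text": "Add a contact form"}',
    '{"type": "assistant_response", "text": "<thousands of tokens>"}',
    '{"type": "tool_call", "tool": "edit_file"}',
]
print(len(shrink_log(full_log)))  # → 2
```

Since assistant responses dominate the byte count of a session log, a filter like this could shrink a multi-megabyte file to something that fits comfortably in a single context window.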

4. Framework compliance requires a template. The case study structure (Problem, Approach, Effort, Benchmark, etc.) was followed because the Kodulabor framework was defined in the same conversation. Without an explicit framework, AI-generated case studies tend toward generic structures that lack the specific assessment dimensions that make this methodology distinctive.


Replicability Score

5 out of 5

This is the most replicable process Kodulabor can offer. The ingredients are:

  1. A Claude Code session log (exists automatically for any Claude Code project)
  2. The Kodulabor Assessment Framework (a template)
  3. An AI session to process the log and draft the case study
  4. 20–30 minutes of human review

Any Kodulabor project that uses Claude Code automatically generates the raw data needed for a case study. This means the assessment pipeline can be a standard part of every engagement — not an afterthought.

The implication for Kodulabor's publishing cadence: if every project naturally produces a session log, and that log can be processed into a case study draft in under an hour, then publishing one case study per project becomes sustainable even as a solo operation.


Verdict

The automated case study pipeline works. It reduces a 4–8 hour writing task to ~45 minutes while producing a more data-rich output than manual writing typically achieves. The 1:25 prompt-to-action ratio from the original Revalia Homes project is mirrored here: a small number of human inputs (8 prompts + 25 minutes of review) produces a substantial, structured output.

The key insight: AI-assisted development creates machine-readable project histories as a byproduct. Most teams ignore this data. Kodulabor's methodology treats it as the primary input for impact assessment. The session log is not just a debugging artifact — it's a dataset.

This has a compounding effect. Each project produces a session log. Each log feeds a case study. Each case study demonstrates the methodology. The methodology attracts the next project. The flywheel is self-documenting.


This case study was itself produced using the pipeline it describes. The session data is from the Cowork conversation dated March 19, 2026. Methodology and findings published openly at kodulabor.ai.


Data Appendix

Metric | Value
Source session | c8c0de2a (Revalia Homes)
Source file size | 12 MB, 3,052 JSONL records
Analysis agent processing time | ~4 minutes
Total human prompts (this session) | 8
Human review time | ~25 minutes
Total wall clock time | ~45 minutes
Output | 2 case studies, 1 project brief
Tools | Claude Cowork, Claude subagent