Problem
A household with three access points configured as a single AP group, on a connection advertised by the ISP at 500 Mbit/s. The symptoms did not match the throughput claim: once a device finished connecting, speed was acceptable, but the first connection in some rooms took several seconds, certain spots had no usable connectivity at all, and devices roaming between APs occasionally got stuck on the wrong one. The router exposes an admin UI and a per-AP API, but the management interface showed each AP as "OK" and produced no actionable diagnosis.
The traditional path here is a paid site visit from an IT consultant: an afternoon walking through the house with a spectrum analyzer, a follow-up report, and a remediation invoice. In the Estonian and broader European SMB / household market, that costs €640 to €2,400 and takes one to two weeks of calendar time once scheduling is factored in.
The question this case study set out to answer: can an AI-assisted session, with only admin-API access to the router, identify the root causes and apply fixes in under one workout window — and end up with a reusable toolkit for the next time the network misbehaves?
AI Approach
The session ran in a single Claude Code worktree against an existing repository that already had the EnGenius router credentials provisioned through 1Password and a bin/run wrapper that resolves them at process start. No new authentication scaffolding was needed.
The approach was to build a small diagnostic toolkit first, then run it repeatedly while the operator adjusted AP settings in the admin UI:
- `ap-inventory.mjs`: pulls each AP's model, firmware, MAC, location label, current channel, current Tx power, and connected-station count from the router API. Run it every few minutes to see what is actually configured vs. what the AP group says it should be.
- `station-log.mjs`: appends one line per associated station per poll to `data/stations.jsonl`. Each line captures station MAC, parent AP, RSSI, band, link rate, and timestamp. Over twenty minutes this builds a roam-history corpus.
- `roam-events.mjs`: reads the station log and emits roam transitions (station X moved from AP A to AP B between t1 and t2), including the RSSI at handoff. It also detects sticky-client cases where a station holds onto an AP for too long.
- `connect-probe.sh`: runs from a Linux client (the household's always-on server) and measures DHCP-to-first-byte time over many short-lived connections, to catch the "takes time to connect" symptom that does not show up in steady-state throughput tests.
- `engenius.mjs`: a small wrapper around the EnGenius admin API, used by all the other scripts.
- `wired-baseline.sh`: the control; the same throughput test over Ethernet, confirming the ISP link itself delivers near 500 Mbit/s and isolating the problem to the WiFi side.
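The core of the roam-event extraction above reduces to a per-station scan of the log for parent-AP changes. A minimal sketch, assuming illustrative field names (`mac`, `ap`, `rssi`, `ts`) rather than the actual `stations.jsonl` schema:

```javascript
// Sketch of the roam-event extraction idea behind roam-events.mjs.
// Field names are assumptions, not the real stations.jsonl schema.

// Parse one JSONL blob into records sorted by timestamp.
function parseLog(jsonl) {
  return jsonl
    .trim()
    .split("\n")
    .map((line) => JSON.parse(line))
    .sort((a, b) => a.ts - b.ts);
}

// Emit one event per parent-AP change, with the RSSI seen at handoff.
function roamEvents(records) {
  const lastSeen = new Map(); // station MAC -> most recent record
  const events = [];
  for (const rec of records) {
    const prev = lastSeen.get(rec.mac);
    if (prev && prev.ap !== rec.ap) {
      events.push({
        mac: rec.mac,
        from: prev.ap,
        to: rec.ap,
        t1: prev.ts,
        t2: rec.ts,
        rssiAtHandoff: prev.rssi, // last RSSI seen on the old AP
      });
    }
    lastSeen.set(rec.mac, rec);
  }
  return events;
}

// Tiny synthetic log: one station roams from ap-hall to ap-office.
const sample = `
{"mac":"aa:bb","ap":"ap-hall","rssi":-71,"ts":1}
{"mac":"aa:bb","ap":"ap-hall","rssi":-78,"ts":2}
{"mac":"aa:bb","ap":"ap-office","rssi":-52,"ts":3}
`;

console.log(roamEvents(parseLog(sample)));
// one event: aa:bb moved ap-hall -> ap-office, RSSI -78 at handoff
```

A sticky-client detector is the inverse query over the same structure: stations whose RSSI on the current AP stays below a threshold for many consecutive polls without an event being emitted.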
Claude proposed the toolkit in the first response, scaffolded all six scripts in a few minutes, and then drove the diagnostic loop: run inventory, ask the operator to apply one change in the admin UI, screenshot the result, re-run inventory, compare. The screenshots were pasted directly into the chat — admin UIs are not accessible to a headless tool — and the model read them to confirm whether the AP group had actually applied the proposed change.
What was not automated:
- Logging in to the EnGenius admin UI to apply settings: changes that touch global AP-group configuration require a human at the console.
- Walking through the house to test connectivity at the failure spots; only the operator can do that.
Human Effort
Total active engagement: ~2.3 hours, all in one sitting.
- AI-assisted (~2 hours): scaffolding the six scripts, repeatedly running inventory and station-log polls, parsing the screenshots of the admin UI, comparing before/after, formulating the next change to test.
- Manual (~20 minutes): logging into the admin UI several times, applying channel and Tx-power changes, taking screenshots of the result, walking to the previously dead spots to verify.
28 user prompts. Half of them were a single phrase: "check now." The diagnostic loop is dominated by waiting: for stations to roam, for changes to propagate, for the operator to apply a setting and screenshot the result. The cadence is "small tweak, observe, repeat" rather than "long stretch of reasoning."
Traditional Benchmark
A residential WiFi survey by an IT consultant with site visit, spectrum analyzer, and written report typically prices at:
| Item | Estimate |
|---|---|
| Site visit and survey | 4–8 hours |
| Spectrum and speed measurements at multiple spots | 2–4 hours |
| Report and recommendations | 2–4 hours |
| Total | 8–16 hours |
| Cost (€80–150/hr consultant) | €640–2,400 |
| Calendar time | 1–2 weeks (scheduling + visit + report) |
Mid-range planning value: ~15 hours, ~€1,500, 1 week.
Acceleration Factor
| Metric | Traditional (mid-range) | AI-assisted | Factor |
|---|---|---|---|
| Active human hours | ~15 | ~2.3 | ~6.5× |
| Calendar time | ~1 week | one morning (~2.3 hours) | ~25× |
| Direct cost | ~€1,500 | ~€6 + time | ~250× |
The acceleration is large because most of a traditional consultant's time is travel, scheduling, and the report. AI compresses all of that — Claude can build a diagnostic toolkit and run the diagnostic loop faster than a consultant can drive to the house.
Quality Assessment
The diagnosis identified a small set of issues that, taken together, explained the symptoms:
- Channel overlap on the 5 GHz band: "auto" channel selection put two APs on adjacent UNII-1 channels, producing co-channel interference for stations roaming between them.
- Tx power too high on the middle AP: it was at maximum Tx power, which extended its cell so far that distant stations stayed associated with it instead of roaming to a closer AP, the classic sticky-client pattern. The station log captured several real instances.
- No upper-band coverage: the AP group had not been configured to use UNII-3 (channels 149 and above), so all three APs were competing in the lower 5 GHz band.
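The first issue is mechanically detectable from inventory output alone. A simplified check, assuming inventory records with illustrative `name` and `channel` fields (the real inventory carries more):

```javascript
// Flag AP pairs on the same or immediately adjacent 5 GHz channels.
// Field names are illustrative assumptions about ap-inventory.mjs output.
function channelConflicts(aps) {
  const conflicts = [];
  for (let i = 0; i < aps.length; i++) {
    for (let j = i + 1; j < aps.length; j++) {
      const gap = Math.abs(aps[i].channel - aps[j].channel);
      // Same channel, or adjacent 20 MHz channels (numbers 4 apart
      // in 5 GHz): both are worth flagging for a three-AP home.
      if (gap <= 4) {
        conflicts.push([aps[i].name, aps[j].name]);
      }
    }
  }
  return conflicts;
}

const inventory = [
  { name: "ap-hall", channel: 36 },
  { name: "ap-office", channel: 40 }, // adjacent to 36: flagged
  { name: "ap-attic", channel: 149 }, // UNII-3: clear
];

console.log(channelConflicts(inventory));
// [["ap-hall", "ap-office"]]
```

A pairwise scan like this is O(n²), which is irrelevant at household scale but keeps the check trivially auditable.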
After setting Tx power to "auto", assigning a manual UNII-3 channel to the AP closest to the previously dead room, and verifying via the inventory script (run after each save) that the AP group had accepted the changes, the dead spot regained connectivity and first-connect latency dropped visibly on the connect-probe test.
The toolkit itself is reusable. The six scripts now live in the household's repo and run from any future session with `bin/run node scripts/wifi-diag/<name>`.
Gotchas & Limitations
1. AP-group settings did not apply where individual-AP settings were expected. Early in the session the model proposed changes that would be made per-AP in the admin UI. The reality was that all three APs are managed as a group, and individual settings cannot be set unless the AP is removed from the group. This took two iterations and one round of screenshots to discover.
2. Screenshot pasting was the bandwidth bottleneck. Several changes required the operator to take a phone-screen photo of the admin UI, paste it into the chat, and have the model read it. This works but breaks the otherwise-headless flow. A worthwhile follow-up: a headless screenshotter for the admin UI, or a small Playwright recorder.
3. "Auto" channel selection in the admin UI is opaque. The admin UI does not surface what "auto" actually decided. The inventory script had to be run twice (before and after a reset) to infer the decision. Vendor admin UIs that don't expose internal state make any kind of automated diagnostic harder than it needs to be.
4. Diagnosis depends on the corpus accumulating. The station log only becomes useful after ~15 minutes of polling. The first roam-events output had too few events to draw conclusions from. Running the toolkit "for a day" rather than "for a session" would give a better baseline.
5. The connect-probe test is client-side. It runs from the always-on Linux client, so it tests connectivity from that fixed location. To diagnose the dead spots, the operator still has to walk to those spots with a laptop or phone, which is the manual step that scaled poorly.
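The before/after comparison described in gotcha 3 reduces to a field-level diff of two inventory snapshots. A sketch, assuming snapshots keyed by AP name with illustrative field names:

```javascript
// Diff two inventory snapshots (e.g. before and after a channel reset)
// to infer what "auto" actually changed. Field names are assumptions
// about ap-inventory.mjs output, not the vendor API's schema.
function diffInventory(before, after) {
  const changes = [];
  for (const [name, prev] of Object.entries(before)) {
    const next = after[name];
    if (!next) continue; // AP missing from the second snapshot
    for (const field of ["channel", "txPower"]) {
      if (prev[field] !== next[field]) {
        changes.push({ ap: name, field, from: prev[field], to: next[field] });
      }
    }
  }
  return changes;
}

const before = { "ap-hall": { channel: 36, txPower: "max" } };
const after = { "ap-hall": { channel: 44, txPower: "max" } };

console.log(diffInventory(before, after));
// [{ ap: "ap-hall", field: "channel", from: 36, to: 44 }]
```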
Replicability Score
4 out of 5.
The pattern transfers cleanly to any home or small office with managed APs that expose an admin API. The specifics that limit it:
- The toolkit speaks the EnGenius admin API directly. For other vendors (Ubiquiti, TP-Link Omada, Aruba Instant On) the API client (`engenius.mjs`) would need rewriting, but the diagnostic logic is the same.
- The toolkit assumes the operator can apply settings via the admin UI when the API doesn't accept them directly. A fully headless version is possible but would require a vendor with full API parity.
- Households with non-managed consumer-grade routers (a single all-in-one box) cannot adopt this directly; there is no per-AP visibility to mine.
A technically-comfortable household with three managed APs can replicate this in roughly the same time. The intellectual property here is the diagnostic methodology (inventory → station log → roam events → connect probe → wired baseline), not the vendor-specific glue.
Verdict
This is the kind of project a typical homeowner does not undertake. The traditional alternative is paying a consultant or living with the problem; most people pick the second. Two hours and ~€6 of API cost produced a diagnosis that named the three actual issues and a reusable script suite for the next time the network misbehaves.
The case for AI-assisted home-network diagnostics is straightforward: the work is fundamentally script-shaped (poll an API, log results, look for patterns), the cost of the consultant alternative is high enough that most people skip it, and the household already has all the data needed once the API access is solved. The only meaningful friction was the admin UI bottleneck — screenshots pasted into chat — and that is a UX problem, not a tooling-capability problem.
A reasonable follow-up is to schedule the station-log poller to run continuously and write to a small SQLite store, so that the next time someone in the household says "the WiFi was bad earlier," there is a corpus to answer from. That work is not yet done.
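One way that not-yet-done step could look: a pure transform from a `stations.jsonl` line to a parameterized INSERT, leaving the actual execution to whichever SQLite binding the repo adopts. The table layout and field names below are assumptions, not a decided schema:

```javascript
// Sketch of the jsonl -> SQLite ingestion step. This only builds the
// parameterized statement; executing it is left to the SQLite binding
// the repo eventually adopts. Field names (ts, mac, ap, rssi, band,
// rate) are assumptions about the stations.jsonl schema.
const SCHEMA = `CREATE TABLE IF NOT EXISTS stations (
  ts INTEGER, mac TEXT, ap TEXT, rssi INTEGER, band TEXT, rate INTEGER
)`;

function toInsert(jsonlLine) {
  const rec = JSON.parse(jsonlLine);
  return {
    sql: "INSERT INTO stations (ts, mac, ap, rssi, band, rate) VALUES (?, ?, ?, ?, ?, ?)",
    params: [rec.ts, rec.mac, rec.ap, rec.rssi, rec.band, rec.rate],
  };
}

const line =
  '{"mac":"aa:bb","ap":"ap-hall","rssi":-71,"band":"5g","rate":866,"ts":1715320000}';
console.log(toInsert(line).params);
// [1715320000, "aa:bb", "ap-hall", -71, "5g", 866]
```

Keeping the transform pure makes it testable without a database and lets the same function feed a batch backfill of the existing `data/stations.jsonl` file.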
This case study was produced using the Kodulabor Assessment Framework. Methodology and findings published openly at kodulabor.ai.
Data Appendix
| Metric | Value |
|---|---|
| Session window | 10 May 2026, ~2.3 hours active |
| User prompts | 28 (about half were single-phrase iteration: "check now") |
| Assistant messages | 129 |
| Bash commands | 37 |
| Files created | 7 (six diagnostic scripts + one data directory) |
| Files edited | 0 |
| Output tokens | ~128,000 |
| Cache-read tokens | ~12,500,000 |
| Cache-create tokens | ~660,000 |
| Estimated direct cost | ~€6 (Sonnet rates, cache-heavy) |
| Toolkit location | scripts/wifi-diag/ in the household's repo |
| Routing platform | Managed AP group, three access points, EnGenius admin API |
| ISP advertised speed | 500 Mbit/s |