Hackathon Project
LangGraph-powered AI agents for OpenClaw — bringing stateful, multi-model workflows to Telegram with code-enforced safety and autonomous tool use.
LangGraph agents as OpenClaw skill backends unlock a class of scenarios that prompt-only and script-based skills structurally cannot handle.
A single conversation, multiple AI capabilities — from photo identification to guided repair to automatic escalation.
The backend returns `{"status": "processing"}` immediately, and OpenClaw polls `/poll_result/{SID}` until the result is ready. This avoids timeouts on slow vision calls (30–60 s). Duplicate requests are detected via a hash of the description plus the photo.

Four models, each chosen for its specific strength
| Task | Model | Why |
|---|---|---|
| Photo analysis | GLM-4.6V | Multimodal vision — identifies appliance from photo |
| Safety classification | GLM-4-Plus | Reliable LLM judge with ambiguity handling |
| Root cause diagnosis | GLM-4-Plus | Reasoning over symptoms and error codes |
| Repair guidance | GLM-4-Plus | Autonomous decision-making (instruct/ask/search/escalate) |
| Manual search | GLM-5 | Built-in web search tool support |
| Escalation report | GLM-4.5-Air | Structured, professional writing |
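The submit-and-poll handshake described above (reply `{"status": "processing"}` immediately, then answer `/poll_result/{SID}` polls, with deduplication on description + photo hash) can be sketched with a stdlib-only job store. The endpoint name and reply shape come from this README; `JobStore`, `submit`, and `fingerprint` are hypothetical names for illustration, not the project's actual code.

```python
import hashlib
import threading
import uuid


class JobStore:
    """In-memory store backing the submit/poll handshake (sketch only).

    The caller gets {"status": "processing"} back immediately while a
    slow vision call runs in a background thread; identical retries are
    deduplicated on a description + photo hash so the 30-60 s call is
    not repeated.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._jobs = {}    # sid -> {"status": ..., "result": ...}
        self._dedup = {}   # request fingerprint -> sid

    @staticmethod
    def fingerprint(description: str, photo_bytes: bytes) -> str:
        # Deduplication key: hash of description + photo, as in the text.
        h = hashlib.sha256()
        h.update(description.encode("utf-8"))
        h.update(photo_bytes)
        return h.hexdigest()

    def submit(self, description: str, photo_bytes: bytes, worker) -> dict:
        key = self.fingerprint(description, photo_bytes)
        with self._lock:
            if key in self._dedup:
                # Duplicate request: reuse the existing job's SID/status.
                sid = self._dedup[key]
                return {"sid": sid, **self._jobs[sid]}
            sid = uuid.uuid4().hex
            self._dedup[key] = sid
            self._jobs[sid] = {"status": "processing", "result": None}

        def run():
            result = worker(description, photo_bytes)  # slow vision call
            with self._lock:
                self._jobs[sid] = {"status": "done", "result": result}

        threading.Thread(target=run, daemon=True).start()
        # Returned before the worker finishes -- no HTTP timeout.
        return {"sid": sid, "status": "processing", "result": None}

    def poll_result(self, sid: str) -> dict:
        # Backs GET /poll_result/{SID}.
        with self._lock:
            return dict(self._jobs.get(sid, {"status": "unknown"}))
```

In an HTTP framing, `submit` backs the initial request handler and `poll_result` backs the polling route; the daemon process keeps the store alive between calls, which is what makes the 0 s startup in the comparison below possible.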
Same repair scenario, three approaches
| Metric | LangGraph Agent | Script Wrapper | Prompt-Only |
|---|---|---|---|
| Total time | 55.5 s | 81.0 s | N/A |
| Backend tokens | 1,664 | 1,510 | 0 |
| Total tokens | 4,064 | 3,610 | 11,100 |
| Safety enforcement | LLM judge + ambiguity | LLM YES/NO | Prompt (bypassable) |
| State persistence | SQLite checkpoint | JSON file | None |
| Agent autonomy | Full ✓ | None ✗ | Prompt ✗ |
| Process startup | 0 s (daemon) | ~3 s / call | N/A |
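The "LLM judge + ambiguity" row is the part that code, not prompting, enforces: the judge's free-text reply is parsed into a closed verdict set, and anything unclear blocks or escalates instead of proceeding. A minimal sketch, assuming a judge reply that begins with SAFE or UNSAFE (the project's actual reply format is not specified in this README):

```python
from enum import Enum


class Verdict(Enum):
    SAFE = "safe"
    UNSAFE = "unsafe"
    AMBIGUOUS = "ambiguous"


def parse_safety_verdict(raw: str) -> Verdict:
    """Map the judge model's reply to a closed verdict set.

    Only an explicit SAFE proceeds; an explicit UNSAFE blocks; anything
    else (empty, hedged, off-format) is treated as AMBIGUOUS and routed
    to escalation. Because this gate runs in code after the model call,
    it cannot be talked around the way a prompt-only rule can.
    """
    stripped = raw.strip()
    first = stripped.splitlines()[0].upper() if stripped else ""
    if first.startswith("UNSAFE"):
        return Verdict.UNSAFE
    if first.startswith("SAFE"):
        return Verdict.SAFE
    return Verdict.AMBIGUOUS
```

The three-way split is the difference between this and the script wrapper's plain YES/NO check: a hedged or malformed judge answer fails closed rather than being coerced into a binary outcome.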