Executive-grade decision tools for AI transformation on the Claude platform. Built for CIOs, CDOs, CTOs, and CHROs sizing Claude adoption — and architects defending the choice to leadership.
As of 2026-05. Pricing, model versions, and feature surface change. Verify at anthropic.com and docs.claude.com before final decisions.
Most “use Claude” content is developer tutorials. This repo fills the executive gap:

- Not Anthropic marketing. Decision-frame first, features second.
- Single source of truth for feature status: docs/feature-inventory.md. Refresh weekly — the rest of the artifacts cite it.
Not sure where you fit? Open decision-spine.html — a single front-door flow that walks you through the seven decisions in order (anti-use → pattern → build-vs-buy → cost → ship safely → CLI rollout → measurement) and routes you to the right artifact at each branch.
Three abstraction layers. Same artifacts, three lenses — pick the lens that matches what you’re doing.
| Layer | Reader’s question | Artifacts |
|---|---|---|
| Strategy — why and whether | “Should we use Claude here? At what cost? Build or buy?” | executive-briefing.html · anti-use-cases.md · build-vs-buy-worksheet.html · cost-calculator.html |
| Architecture — how it fits together | “Which pattern? Which features? What governance shape?” | reference-architectures.html · feature-decision-matrix.html · governance-overlay.md |
| Execution — how it ships | “How do we run the pilot? Score candidates? Roll out the CLI? Measure quality?” | adoption-playbook.md · pilot-selection-worksheet.html · claude-code-adoption-guide.md · claude-code-starter-skills.md · hooks-starter-pack.md · mcp-starter-pack.md · eval-starter-pack.md |
Cross-cutting: decision-spine.html (entry-point flow across all three layers) · decision-memes.html (8 ice-breaker memes, each pointing at a real artifact) · claude-misconceptions.md (skeptic disarmer in text form — myths that drive mis-budget / mis-architect calls) · data-advisory.md (pre-pilot data sizing — how much data, from where, governance flags per source) · docs/feature-inventory.md (single source of truth, cited by every artifact above).
Different roles enter at different layers. CIOs/CTOs read Strategy first. Architects start at Architecture. Transformation leads + engineering managers live in Execution. The spine is for anyone who doesn’t already know which layer they’re in.
| Artifact | Type | What it does |
|---|---|---|
| decision-spine.html | Reference (entry-point flow) | Single front-door flowchart routing to the right artifact for the question at hand. 7 branches (anti-use → pattern → build-vs-buy → cost → ship safely → CLI rollout → measurement). Hand-drawn SVG; print-friendly; no JS deps. |
| decision-memes.html | Ice-breaker (8 memes) | Eight CSS-drawn memes, each pointing at a real decision artifact. Slide-1 opener, skeptic disarmer, onboarding ice-breaker, workshop facilitation prompt. The artifacts the memes point to are not jokes. |
| claude-misconceptions.md | Reference (skeptic disarmer) | ~15 myths about Claude that drive measurable mis-budget / mis-architect / mis-staff calls — context window, hooks, sandbox, caching, batch, refusal, rate limits, Computer Use. Format: myth → reality → what you’d mis-decide → cite. All cites primary (docs.claude.com / anthropic.com). Text-form companion to decision-memes.html. |
| anti-use-cases.md | Reference (reject filter) | Explicit list of where Claude should not be used — 5 categories (Hard nos, Wrong tool, Wrong economics, Governance no-go, Premature). Each entry: pattern → why not → do this instead → cite. Runs before pilot-selection-worksheet. |
| data-advisory.md | Reference (pre-pilot sizing) | How much data, and from where. Context window vs. RAG threshold, eval corpus minimums, distillation trigger volume, cache eligibility shape, and a source-of-data taxonomy with governance flags per source. Pre-pilot checklist. |
| executive-briefing.html | 10-slide deck | Full-screen leadership deck: platform shift, Claude in 60s, when Claude wins, cost economics, time-to-value, governance, 90-day plan, risks. Arrow-key nav, print-to-PDF. |
| cost-calculator.html | Interactive | Inputs: monthly volume × token mix × model mix × cache hit rate × batch eligible %. Outputs: monthly $, per-request cost, savings vs naive baseline. |
| feature-decision-matrix.html | Decision grid | Use-case patterns × Claude features (caching, thinking, tool use, computer use, Files, Skills, MCP, Agent SDK, batch, citations). Hover for rationale. |
| adoption-playbook.md | Operational | 90-day rollout: week 0 pre-flight, weeks 1-4 pilot, 5-8 guardrails, 9-13 scale. 8 failure modes. Reference team structure. |
| pilot-selection-worksheet.html | Decision tool | Score 2–6 candidate pilot use cases on 5 axes (value, time-to-signal, data readiness, risk, sponsor clarity) → ranked verdicts (Strong / Viable / Risky / Not yet) with blocker flags. Operationalizes Week 0 of the playbook. |
| governance-overlay.md | Reference | Data flow taxonomy, no-train terms, ZDR scope + eligibility, HIPAA / BAA per-feature coverage, data residency (inference_geo + Workspace geo), retention defaults (30-day / 2-year AUP / 7-year T&S / 5-year feedback), certifications (ISO 27001, ISO 42001, SOC 2), EU AI Act + NIST AI RMF mapping, audit trail, prompt versioning, vendor concentration, cost-as-governance. |
| build-vs-buy-worksheet.html | Decision tool | Add use case → score 6 axes (regulated data · latency · customization · scale · expertise · strategic moat) → recommendation: Claude direct / via Bedrock or Vertex / OpenAI / open-source / packaged SaaS. TCO band + rationale. |
| reference-architectures.html | Reference | 6 canonical patterns with hand-drawn SVG diagrams: RAG copilot, agentic workflow, batch enrichment, domain expert assistant, code automation, embedded copilot. |
| claude-code-adoption-guide.md | Operational | Engineering-team rollout for Claude Code (CLI). Pilot selection, hooks, skills, settings, MCP, security model, measurement. |
| claude-code-starter-skills.md | Templates | 8 team-grade Skill templates (PR review, test gen, migration guard, bug triage, doc refresh, release notes, on-call, refactor scout). Each framed by when-to-use / failure-mode / owner before the prompt body. |
| hooks-starter-pack.md | Templates | 10 Claude Code hook templates (block-secrets, run-linter, log-cost, PII scrub, branch guard, dependency-license check, audit log, commit-msg, session-context, eval-trigger). Each framed by when-to-use / failure-mode / owner. Phased Phase 1→4 rollout matrix + blocking-vs-advisory defaults. |
| mcp-starter-pack.md | Templates | 7 read-only MCP server templates (issue tracker, internal docs, CI logs, DB read replica, observability, API catalog, code search). Each framed by when-to-use / failure-mode / owner / scope. Read-only by design; mutate variants explicitly deferred to Phase 4. |
| eval-starter-pack.md | Templates | 8 evaluation templates (regression, format compliance, tool-call accuracy, grounding, adversarial, cost-per-task, latency, refusal calibration). Each framed by what it catches / failure-mode of the eval itself / owner. Plus a blocking-vs-advisory matrix. |
Live at https://gmanch94.github.io/claude-platform-playbook/ — direct in-browser preview, JS executes, print-to-PDF works.
Or fork the repo, customize for your org, host wherever you want.
Related repos:

- ai-architect-showcase — vendor-neutral AI strategy artifacts (NIST AI RMF, EU AI Act, EEOC anchored). Read this first if you’re earlier in the AI journey.
- ai-enablement-ws — architect-grade operational reference (cookbooks, governance frameworks, audit templates, ADRs, ~25 Claude Code skills).

CC-BY-4.0 — free to share, adapt, and reuse with attribution.

© gmanch94 · CC-BY-4.0 · As of 2026-05. Verify at anthropic.com.