Decision Memes (as of 2026-05)

Eight ice-breakers for getting Claude conversations going. Each meme points at a real decision the rest of this repo takes seriously.
Read this first. These are ice-breakers, not arguments. Comedy lowers activation energy for execs who think AI strategy decks are dry; the artifacts each meme links to do the actual work — anti-use lists, scoring frameworks, governance overlays, eval templates. The artifacts are not jokes. Use the memes to start the conversation; use the artifacts to make the decision.
No: Sole-decider on credit denial · Sub-100ms trading loop · Generating prod SQL, no review
Yes: 50-page contract → human-reviewed summary · Ticket triage with confidence threshold · Batch + cache where $/task math survives
Meme 1: drake

Where Claude is "no" vs "yes"

"Anywhere a human still has to take the blame? Yes. Anywhere the machine takes the blame? No."
The first decision in this repo isn't which model — it's whether to use Claude here at all. Five reject categories: hard no, wrong tool, wrong economics, governance no-go, premature.
→ anti-use-cases.md — 5 reject categories with cite columns (HIPAA, EEOC, EU AI Act, GDPR).
"Use Opus for everything."
◉◉
"Use Sonnet for everything — cost!"
◉◉◉
"Use the right model per task."
◉◉◉◉
"Score 8 patterns × 12 features. Mix Haiku/Sonnet/Opus by step. Gate $/task. Wire evals before launch."
2galaxy brain

Model selection enlightenment ladder

"There is no smartest model. There's a model mix that survives the cost gate."
Sonnet 4.6 wins ~80% of production patterns at ~5× lower cost than Opus. Opus 4.7 earns its keep on agentic + deep-reasoning paths. Haiku 4.5 wins high-volume classification. Mix beats max.
→ feature-decision-matrix.html — 8 patterns × 12 features with model fit per cell.
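To make "mix by step, gate $/task" concrete, here is a minimal routing sketch. The per-million-token prices, the 5x output multiplier, the per-task budget, and the step names are all illustrative assumptions, not list prices or repo figures.

```python
# Hypothetical $/Mtok input prices and a per-task budget; illustrative only.
PRICE_PER_MTOK = {"haiku": 1.00, "sonnet": 3.00, "opus": 15.00}
COST_GATE_PER_TASK = 0.05  # max $ we will pay per completed task

# "Mix beats max": route each step to the cheapest model that handles it.
ROUTE = {
    "classify_ticket": "haiku",      # high-volume classification
    "summarize_contract": "sonnet",  # most production patterns
    "plan_agent_run": "opus",        # agentic + deep-reasoning paths
}

def estimated_cost(step: str, in_tokens: int, out_tokens: int) -> float:
    """Rough $ for one step; output priced at ~5x input here (assumption)."""
    rate = PRICE_PER_MTOK[ROUTE[step]]
    return (in_tokens + 5 * out_tokens) * rate / 1_000_000

def pick_model(step: str, in_tokens: int, out_tokens: int) -> str:
    """Return the routed model, or fail loudly if the step busts the cost gate."""
    cost = estimated_cost(step, in_tokens, out_tokens)
    if cost > COST_GATE_PER_TASK:
        raise RuntimeError(f"{step}: ${cost:.4f}/task exceeds ${COST_GATE_PER_TASK} gate")
    return ROUTE[step]

print(pick_model("classify_ticket", in_tokens=2_000, out_tokens=200))  # -> haiku
```
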
"We'll add governance next sprint."(no eval baseline · no kill switch · no audit log · no rollback path)
3this is fine

Shipping without guardrails

"Adding governance after launch costs 3–5× what it costs at Week 0."
"Evals exist but nobody runs them" is named failure mode #2 in the adoption playbook. The 90-day arc is pilot → guardrails → scale for a reason — guardrails are not optional, they are step 2.
→ adoption-playbook.md — 8-failure-mode heatmap with probability × cost × early signal.
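For a feel of the heatmap's ranking logic, here is a toy sketch; every failure mode, probability, dollar figure, and early signal below is invented for the example and does not come from the playbook.

```python
# Invented examples of (name, probability, cost if it hits $, early signal).
failure_modes = [
    ("evals exist but nobody runs them", 0.60, 120_000, "last eval run > 14 days ago"),
    ("no kill switch",                   0.25, 300_000, "no volume cap in config"),
    ("no rollback path",                 0.30,  90_000, "prompt changes land without tags"),
]

# Rank by expected cost = probability x cost, highest risk first.
for name, p, cost, signal in sorted(failure_modes, key=lambda m: m[1] * m[2], reverse=True):
    print(f"{name:40s} expected ${p * cost:>9,.0f}  watch: {signal}")
```
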
The butterfly: 3-day-old GitHub repo · 47 stars · no tests · MIT
The enterprise architect: "Is this our production agentic framework?"
Meme 4: is this a pigeon

The shiny-framework misclassification

"The framework you tweet about is rarely the framework you ship."
Production patterns are boring on purpose: typed inputs, retry budgets, refusal calibration, cost gates, eval coverage. The premise-check section of the decision spine names this trap — "Pick the smartest model and tune later" is Myth #3. New ≠ production-ready. Star count ≠ governance review.
→ decision-spine.html — premise-check section, Myth #3.
Button 1: Build it on direct API
Button 2: Buy the packaged SaaS
The sweating hand: "…we don't have a moat in this domain."
Meme 5: two buttons

Build vs buy without the moat axis

"Build where you have a moat. Buy where you don't. The button you can't press tells you which one."
Most build-vs-buy worksheets score 4–5 axes (regulation, latency, customization, scale, expertise). The 6th axis is strategic moat depth. If your candidate is pure commodity, the answer is buy — even if you could build.
→ build-vs-buy-worksheet.html — 6-axis scorer with ranked verdict + rationale.
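A rough sketch of the 6-axis idea, assuming a 1-to-5 scale per axis (1 favors buy, 5 favors build) and treating moat depth as a hard gate; the cutoff is invented for illustration and is not the worksheet's actual scoring.

```python
AXES = ["regulation", "latency", "customization", "scale", "expertise", "moat_depth"]

def verdict(scores: dict[str, int]) -> str:
    """Hypothetical scorer: moat depth gates first, then a simple total decides."""
    if scores["moat_depth"] <= 1:
        return "buy"  # pure commodity: buy even if you could build
    total = sum(scores[axis] for axis in AXES)
    return "build" if total >= 21 else "buy"  # 21 = 6 axes x 3.5 avg (assumed cutoff)

# A candidate that scores well on regulation and expertise but has no moat:
print(verdict({"regulation": 4, "latency": 2, "customization": 3,
               "scale": 3, "expertise": 4, "moat_depth": 1}))  # -> buy
```
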
Surprised Pikachu: "You shipped Claude in prod without a regression eval?"
Meme 6: pikachu

The shocked face when silent drift hits

"Without a regression eval, you can't tell model drift from prompt drift from data drift."
Eight eval categories cover the surface: regression · format · tool-call · grounding · adversarial · cost · latency · refusal. Each has a blocking-vs-advisory posture. Wire them before pilot launch, not after.
→ eval-starter-pack.md — 8 eval templates with blocking/advisory matrix.
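As a sketch of how the blocking-vs-advisory split can gate a release; the per-category postures below are illustrative defaults, not the pack's actual matrix.

```python
# The 8 eval categories from above, each with an assumed posture.
EVALS = {
    "regression":  "blocking",   # gate deploys on known-good answers
    "format":      "blocking",   # schema / JSON shape must hold
    "tool-call":   "blocking",   # right tool, right arguments
    "grounding":   "blocking",   # answers cite retrieved context
    "adversarial": "advisory",   # jailbreak probes: tracked, not gating
    "cost":        "advisory",   # $/task within budget
    "latency":     "advisory",   # p95 within SLO
    "refusal":     "blocking",   # refuses what it should, and only that
}

def release_ok(results: dict[str, bool]) -> bool:
    """Ship only if every blocking eval passed; advisory failures just warn."""
    for name, posture in EVALS.items():
        if posture == "blocking" and not results.get(name, False):
            return False
    return True

print(release_ok({"regression": True, "format": True, "tool-call": True,
                  "grounding": True, "refusal": True, "cost": False}))  # True
```
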
Spider-Man 1: The RAG copilot we built
Spider-Man 2: The RAG copilot the vendor sold us
Meme 7: spider-man pointing

Build vs buy converges to the same shape

"By month 18, your custom build looks like the SaaS you rejected. Plan accordingly."
Convergence is not failure — it's evidence the SaaS solved the same problem. The build-vs-buy worksheet exists to surface which axes justify the build delta (regulated data, custom workflows, scale economics, moat depth) before you spend 18 months proving the convergence.
→ build-vs-buy-worksheet.html — score before you build, not after.
Astronaut 1: "Wait — cached input still costs 10% of fresh? And the cache TTL is only 5 minutes?"
Astronaut 2 (with gun): "Always has been."
Meme 8: always has been

The prompt-cache reality check

"Cache is a discount, not a free pass. Idle break = full re-pay."
Cached input ≈ 10% of fresh, not 0%. Cache TTL is 5 minutes — step away for coffee, come back, your next turn pays full price. Long-context calls at scale still cost real money. The cost calculator models the math; the governance overlay names the kill-switch volume cap.
→ cost-calculator.html — cache-hit-rate slider with monthly $ output.
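The arithmetic in one screen: a toy calculator where the per-token price is a placeholder, and only the roughly-10% cached ratio and the 5-minute TTL come from the text above.

```python
FRESH_PER_MTOK = 3.00                     # placeholder $ per million fresh input tokens
CACHED_PER_MTOK = FRESH_PER_MTOK * 0.10   # cached input ~= 10% of fresh
CACHE_TTL_SECONDS = 5 * 60                # 5-minute cache TTL

def input_cost(tokens: int, seconds_since_last_turn: float) -> float:
    """Idle past the TTL breaks the cache: the next turn re-pays full price."""
    warm = seconds_since_last_turn < CACHE_TTL_SECONDS
    rate = CACHED_PER_MTOK if warm else FRESH_PER_MTOK
    return tokens / 1_000_000 * rate

print(input_cost(100_000, seconds_since_last_turn=60))   # warm cache:   $0.03
print(input_cost(100_000, seconds_since_last_turn=900))  # coffee break: $0.30
```
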

How to use these

Slide-deck opener. Drop one meme into slide 1 of an AI strategy deck. The room laughs; the meme dissolves to the artifact link; you're three minutes ahead of where you'd be with a feature list.

Skeptic disarmer. When a CIO says "I've seen 47 AI strategy decks this month," send them the meme page. Comedy as Trojan horse for governance discipline.

Onboarding ice-breaker. New hire on the AI team. Walk them through 8 memes. They have the repo's mental model in 12 minutes — anti-use → patterns → cost → ship → measure → CLI → premises.

Workshop opener. Print 8 cards. Hand them out face-down. Each attendee picks one, reads the punchline, then the group debates which artifact it points to and why. The repo becomes a 30-minute facilitated exercise.
What this is not. Not a substitute for the artifacts. Not a marketing piece. Not a victory lap. The memes mock the bad decisions the rest of this repo helps you avoid — that's the point.