Enterprise AI Architecture  ·  Companion to the Skills Trap Series

Five Patterns That Complete the Skills Layer — And Why That’s Not Enough

Google Cloud’s ADK design pattern library is the most precise public articulation of the Skills Execution Plane available today. It is technically rigorous, practically grounded, and compositionally elegant. It also stops exactly where the architectural gap begins.

March 2026 · Tom M. Gomez · 9 min read

This post is a companion to The Skills Trap in Enterprise AI, which introduced the distinction between the Skills Execution Plane and the Decision Intelligence Layer. A recent LinkedIn post from Google Cloud, shared alongside the launch of their GEAR skilling initiative, presented a taxonomy of five agent skill design patterns for ADK developers. This piece uses that taxonomy as a lens to examine where the Skills Layer ends — and what architecture begins on the other side of that boundary.

The five patterns arrived well-argued and precisely named. A Tool Wrapper that gives an agent on-demand expertise in a specific library. A Generator that produces structured artifacts from reusable templates. A Reviewer that scores outputs against a severity checklist. An Inversion pattern where the agent interviews the user before acting. A Pipeline that enforces multi-step workflows with gated checkpoints. The composability observation at the end — that these patterns are not mutually exclusive, that a Pipeline can contain a Reviewer, that a Generator can open with an Inversion — is the most underappreciated point in the entire post.

It is also a complete and authoritative description of the Skills Execution Plane. Every pattern, mapped against the two-plane architectural framework introduced in the Skills Trap post, lands in the same place: execution surface area. Faster agents. Better artifacts. More reliable workflows. Real and measurable productivity gains — and not a single pattern that addresses what happens to the reasoning after the pipeline closes.

Mapping the Patterns to the Architecture

The Skills Execution Plane has five components: a trigger surface where human or system initiation occurs; a context assembly layer that gathers documents, data, and tool state; a stateless reasoning engine; a tool and artifact execution layer; and a workflow outcome. The five ADK patterns map onto this plane with near-perfect fidelity.

The Tool Wrapper operates at the context assembly layer — its function is to load domain knowledge into the agent’s active context on demand, expanding what the reasoning engine can draw on. The Generator operates at the execution layer — its function is to compress artifact synthesis velocity by pairing a style guide with a structured output template, producing a document faster and more consistently than unguided generation would allow. The Inversion pattern operates at the trigger and context assembly boundary — it gathers structured input before the reasoning engine fires, ensuring that context is complete before generation begins. The Pipeline pattern is orchestration runtime design, enforcing sequencing and gate conditions across the execution plane. The fifth pattern, the Reviewer, also sits at the execution layer as a quality gate; it deserves separate treatment and is taken up below.
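The composability of these patterns inside the execution plane can be sketched in plain Python. This is an illustration only, under stated assumptions — the function names, checklist shape, and pipeline structure here are hypothetical stand-ins, not the ADK API:

```python
# Hypothetical sketch of three composed patterns; not the ADK API.

def generator(template: str, variables: dict) -> str:
    """Generator: fill a reusable template to produce a structured artifact."""
    return template.format(**variables)

def reviewer(artifact: str, checklist: list) -> list:
    """Reviewer: score an artifact against a fixed, design-time checklist."""
    findings = []
    for severity, check, message in checklist:
        if not check(artifact):
            findings.append((severity, message))
    return findings

def pipeline(artifact_vars: dict) -> dict:
    """Pipeline: a Generator step followed by a gated Reviewer checkpoint."""
    doc = generator("# {title}\n\nOwner: {owner}\n", artifact_vars)
    checklist = [
        ("blocker", lambda a: "Owner:" in a, "document must name an owner"),
        ("warning", lambda a: len(a) < 10_000, "document exceeds length budget"),
    ]
    findings = reviewer(doc, checklist)
    gate_passed = not any(sev == "blocker" for sev, _ in findings)
    return {"artifact": doc, "findings": findings, "gate_passed": gate_passed}
```

Note what the composition optimizes: artifact quality at the gate. Nothing in this structure retains the reasoning after `pipeline` returns.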

The Skills Execution Plane — Where All Five Patterns Live

The Skills Execution Plane is the architectural layer where almost all current enterprise AI investment is concentrated. It optimizes for execution speed, automation coverage, workflow compression, and instruction responsiveness. The five ADK patterns represent the most sophisticated public design language for this plane available today.

This plane generates real and measurable productivity gains. It does not, by itself, create compounding enterprise intelligence. Reasoning generated in the execution plane is ephemeral — consumed and discarded with each inference cycle. Without an orthogonal layer to capture and compound that reasoning, the organization’s judgment does not improve even as its execution accelerates.

The Reviewer Pattern — Where the Argument Crystallizes

Of the five, the Reviewer pattern deserves the most careful attention — because it is the one that most closely resembles evaluation infrastructure. It loads a checklist. It scores outputs by severity. It produces findings and recommendations. In the language of the two-plane framework, it looks like it might reach the Decision Intelligence Layer. It does not.

What the Reviewer pattern evaluates is whether an artifact meets a predefined specification — whether generated code violates documented standards, whether a document conforms to a style guide, whether an output scores above a quality threshold. This is execution quality control. It is a gate on the output of the Skills Execution Plane, and it is genuinely valuable. Production pipelines should have it.

The Decision Intelligence Layer’s Outcome Evaluation Engine does something fundamentally different. It evaluates whether the judgment embedded in the artifact was sound — whether the reasoning that produced a decision led to the right outcome when measured against real business results over time. It links decision quality to observable consequences. It feeds that signal back into the context model so that future decisions are informed by what prior decisions actually produced. It is not a quality gate on execution output. It is a learning loop grounded in causality.

Reviewer Pattern — Skills Execution Plane

Execution Quality Control

Evaluates whether an artifact meets a predefined specification. Loads a severity checklist. Scores outputs. Produces findings and recommendations. The evaluation target is the output — the document, the code, the generated artifact.

The standard being evaluated against is fixed at design time. The Reviewer does not observe what happens after the artifact leaves the pipeline. It has no mechanism to determine whether the judgment inside the artifact was correct, or whether the organization is improving its ability to make that judgment over time.

Execution Layer · Output Quality

Outcome Evaluation Engine — Decision Intelligence Layer

Judgment Quality Measurement

Evaluates whether the reasoning that produced a decision led to the right outcome when measured against real business results. The evaluation target is the judgment — the causal chain from context to inference to consequence.

The standard being evaluated against evolves as outcomes are observed. The engine feeds that signal back into the enterprise context model so that future decisions are informed by what prior decisions actually produced. It is a learning loop, not a quality gate.

Intelligence Layer · Judgment Quality

An organization that deploys the Reviewer pattern has better execution hygiene. It does not have a feedback loop that improves the quality of its decisions over time. The distinction is architectural, and it is not bridged by combining the five patterns in increasingly sophisticated compositions.

The Token Efficiency Tell

The LinkedIn caption accompanying the pattern taxonomy contained a phrase that is worth examining closely. Describing the benefit of the ADK SkillToolset’s progressive disclosure architecture, the post noted that agents using it “only spend context tokens on the exact patterns they need at runtime.”

This is correct, and it reflects genuinely thoughtful infrastructure design. It is also a precise description of the optimization target: computational efficiency within the inference cycle. Minimize token cost. Maximize execution throughput. Load what is needed, discard what is not.

The optimization target of the Skills Layer is inference efficiency. The optimization target of the Decision Intelligence Layer is judgment quality over time. These are not the same objective, and infrastructure designed for one does not produce the other as a byproduct.

— Enterprise AI Architecture, Luminity Digital

The Decision Intelligence Layer has no corresponding token economy. Decision traces do not expire at the end of an inference cycle — they are captured, stored, and linked to outcomes that may not be observable for days, weeks, or quarters. Semantic knowledge graphs persist institutional context across decisions. Outcome evaluation telemetry accumulates over time. The infrastructure required is not a more efficient version of the Skills Execution Plane. It is an orthogonal layer with different persistence requirements, different data models, and different success metrics.
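As a rough illustration of those different persistence requirements — the schema, table names, and sample rows below are invented for this sketch — a decision trace store must survive across inference cycles and accept outcome rows that arrive much later:

```python
import sqlite3

# Illustrative schema only; a real system would add lineage, provenance,
# and links into the enterprise context model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE decision_trace (
    decision_id TEXT PRIMARY KEY,
    decided_at  TEXT NOT NULL,
    context     TEXT NOT NULL,
    reasoning   TEXT NOT NULL
);
CREATE TABLE decision_outcome (
    decision_id TEXT REFERENCES decision_trace(decision_id),
    observed_at TEXT NOT NULL,  -- may lag decided_at by quarters
    result      TEXT NOT NULL
);
""")

# The inference cycle ends here in the Skills Plane; the trace persists anyway.
conn.execute(
    "INSERT INTO decision_trace VALUES (?, ?, ?, ?)",
    ("d-001", "2026-03-01", "renewal pricing, segment A", "hold price, add support tier"),
)

# Weeks later, the observed business result is linked back to the judgment.
conn.execute(
    "INSERT INTO decision_outcome VALUES (?, ?, ?)",
    ("d-001", "2026-05-15", "churn flat, expansion +4%"),
)

rows = conn.execute("""
    SELECT t.decision_id, t.reasoning, o.result
    FROM decision_trace t JOIN decision_outcome o USING (decision_id)
""").fetchall()
```

There is no token budget to optimize here; the cost model is storage and linkage over time, which is why this layer cannot fall out of inference-efficiency infrastructure as a byproduct.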

What GEAR Confirms About the Broader Landscape

The five patterns appeared alongside Google Cloud’s GEAR program — Gemini Enterprise Agent Ready — a skilling initiative designed, in Google’s own framing, to help developers and practitioners build and deploy enterprise-grade agents. The program offers learning paths covering agent anatomy, Gemini Enterprise workflows, deployment patterns, security guardrails, and scalability architecture on Vertex AI.

Every learning path sits in the Skills Execution Plane. There is no published curriculum for decision trace capture. No path for outcome evaluation infrastructure. No path for building the semantic knowledge graph that allows organizational judgment to compound. GEAR is an excellent on-ramp to the execution layer — and it confirms that the execution layer now has a mature, well-resourced, hyperscaler-backed design pattern library that practitioners can learn and deploy.

~25%

The share of enterprises that have moved even 40% of their AI pilots to production, according to Deloitte’s 2026 State of AI in the Enterprise report — cited in InfoWorld’s coverage of the GEAR launch. The gap between pilot and production is not primarily a skills gap. It is a governance and architectural gap that the Skills Layer, however well-designed, was not built to close.

Google Cloud’s own Office of the CTO published a reflection in late 2025 that articulates the missing layer with unusual clarity: every GenAI project rapidly becomes an evaluation project, and successful AI deployment requires building infrastructure for systems that learn, evaluation frameworks that measure improvement, and trust mechanisms that integrate AI into workflows gradually. That is a direct description of the Decision Intelligence Layer. The GEAR curriculum, launched months later, does not teach any of it.

The tension between Google Cloud’s own strategic insight and their flagship developer program is not a contradiction — it is a market reality. Hyperscalers have structural incentives to make the Skills Layer feel like the whole game. Training a million developers on Vertex AI creates ecosystem lock-in. The “skills” framing is a go-to-market strategy as much as it is a technical position. Which is precisely why the architectural distinction matters for enterprises trying to make durable investment decisions rather than platform adoption decisions.

The Open Question the Patterns Don’t Answer

The five ADK patterns answer a well-formed question: given that an agent needs to execute a task reliably, what design patterns govern how it acquires knowledge, produces structured output, validates quality, gathers context, and enforces workflow sequencing? That is the right question for practitioners building production execution infrastructure, and the patterns answer it well.

The question the patterns do not address is different: after the pipeline closes and the artifact is delivered, where does the reasoning go? Who captures the decision trace? What system links the judgment embedded in that artifact to the business outcome it was meant to produce? How does the organization’s next decision benefit from observing what this one actually caused?

The Structural Gap

The five patterns compose beautifully inside the execution plane. A Pipeline with a Reviewer gate at the end. A Generator opened by an Inversion to gather variables before filling a template. These compositions produce faster, more reliable, better-structured execution. None of them produce a decision trace. None of them link output quality to outcome quality. None of them feed a learning signal back into the institutional knowledge model. The gap is not a missing sixth pattern. It is an orthogonal layer of infrastructure.

Enterprises that build both layers — that invest in the Skills Execution Plane and then build the Decision Intelligence infrastructure that allows judgment to compound — will develop structural advantages over time: faster strategic learning cycles, reduced reasoning drift, explainable decision lineage, and measurable improvement in the quality of decisions, not just the speed of producing them. Enterprises that stop at the execution plane will have powerful, well-patterned agents and shallow organizational intelligence. The five patterns are excellent. They are also, in this sense, a ceiling — the ceiling of what execution architecture alone can achieve.

The Central Insight

The Skills Layer now has a complete design pattern library. Five patterns, composable, well-documented, backed by the largest cloud vendor in the space. The execution architecture is mature. And it still does not compound judgment. That is not a limitation of the patterns — it is an architectural fact about what the execution plane was built to do.

The Skills Trap Series — Foundation Reading

This post builds on the two-plane architectural framework introduced in The Skills Trap in Enterprise AI. Part 2 of that series examines the Decision Intelligence Layer in depth — reference architectures, evaluation infrastructure, and the governance frameworks required to operationalize compounding judgment.

The Skills Trap Series · Related Reading
Part 1: The Skills Trap in Enterprise AI: Why Faster Agents Are Not the Same as Smarter Organizations
Companion (this post): Five Patterns That Complete the Skills Layer — And Why That’s Not Enough
