The Three-Layer Patent Ideation Drill
A structured worksheet that takes engineers from “we built something clever” to “here’s an invention disclosure your patent attorney can use.” Three layers. One page. Works every time.
The Problem
Most engineers describe their inventions at the wrong level of abstraction. That kills the patent before it starts.
of software patent applications are rejected under Alice/Section 101
wasted per filing when ideas are too abstract to survive examination
engineering teams have never filed a patent despite shipping patentable work weekly
Where This Fits
The principle is the compass.
The mechanism is the patent.
The Three-Layer Drill is Step 3 of a unified patent ideation workflow. TRIZ tells you where to look. The drill tells you how deep to go. The Alice pre-screen tells you if it will survive.
Three-Layer Drill
Obvious → Architectural → Inventive mechanism
Alice Pre-Screen
4 questions to verify your Layer 3 is patent-safe
Start at Step 1 with a TRIZ contradiction, or jump straight into the drill if you already know what you built. Either way, always run the Alice pre-screen before filing.
The Method
Three layers of description. One invention.
Every patentable system can be described at three levels. Most engineers stop at Layer 1. The patent lives in Layer 3.
If you started with a TRIZ contradiction, the inventive principle you found is your compass for Layer 3. If not, you can use this drill standalone — just describe what you built at three levels of depth.
The Obvious Description
How you'd describe it in a standup
This is how most engineers describe what they built. Generic, high-level, and devoid of the specifics that make it novel. A recruiter would understand it. That's the problem.
Example
“We use caching to reduce latency for our API responses.”
Gut check: Would a junior engineer say 'yeah obviously'?
The Architectural Detail
How you'd explain it in a design review
Now you're adding the specific architectural choices: data structures, topology, protocols. This is where things get interesting, but we're not done yet.
Example
“We use a two-tier cache — L1 in-process with LFU eviction, L2 in Redis with TTL-based expiry — where L1 keys are promoted based on access frequency weighted by recency.”
Gut check: Would a senior engineer at another company say 'interesting, we didn't think of that'?
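The Layer 2 description above can be sketched in code. This is a minimal, illustrative sketch only: a plain dict stands in for the Redis L2 tier, and the capacity and decay constants are assumptions, not details from the example.

```python
import time

class TwoTierCache:
    """Sketch of the Layer 2 architecture: an in-process L1 whose keys are
    promoted by access frequency weighted by recency, backed by an L2 store.
    The L2 here is a plain dict standing in for Redis; `capacity` and
    `half_life` are illustrative values."""

    def __init__(self, capacity=3, half_life=60.0):
        self.capacity = capacity
        self.half_life = half_life  # seconds for a hit's weight to halve
        self.l1 = {}                # key -> value (in-process tier)
        self.score = {}             # key -> (weighted_freq, last_seen)
        self.l2 = {}                # stand-in for Redis with TTL expiry

    def _bump(self, key, now):
        freq, last = self.score.get(key, (0.0, now))
        decay = 0.5 ** ((now - last) / self.half_life)  # recency weighting
        self.score[key] = (freq * decay + 1.0, now)

    def get(self, key):
        now = time.monotonic()
        self._bump(key, now)
        if key in self.l1:
            return self.l1[key]
        if key in self.l2:          # promote hot L2 keys into L1
            self._promote(key, self.l2[key])
            return self.l2[key]
        return None

    def put(self, key, value):
        self.l2[key] = value
        self._bump(key, time.monotonic())
        self._promote(key, value)

    def _promote(self, key, value):
        self.l1[key] = value
        if len(self.l1) > self.capacity:
            # LFU eviction: drop the lowest-scoring L1 key
            victim = min(self.l1, key=lambda k: self.score.get(k, (0.0, 0))[0])
            del self.l1[victim]
```

Note that even at this level, nothing here is patentable on its own — tiered caches with LFU eviction are textbook material. That is exactly why the drill keeps digging.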
The Inventive Mechanism
The part you were quietly proud of building
This is where patents live. The mechanism is specific, the combination is non-obvious, and it produces a measurable technical improvement. If you used a TRIZ principle as your compass, this is where you describe exactly how you applied it. The principle gives the direction; your Layer 3 gives the coordinates.
Example
“The eviction policy per cache key is dynamically selected by a lightweight gradient-boosted classifier trained on access-pattern features. The classifier is retrained incrementally every 10 minutes using the cache-miss stream as ground truth, creating a closed-loop adaptive system.”
Gut check: Can you point to a specific, measurable technical improvement?
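To make the Layer 3 mechanism concrete, here is a heavily simplified sketch. A hand-rolled threshold rule stands in for the gradient-boosted classifier, and the feature names, policy set, and retraining logic are all illustrative assumptions — the point is the shape of the closed loop, not the model.

```python
class AdaptiveEvictionSelector:
    """Sketch of the Layer 3 mechanism: per-key eviction policy selection
    driven by a model retrained from the cache-miss stream. A threshold
    rule stands in for the gradient-boosted classifier described in the
    example; features and cadence are illustrative."""

    POLICIES = ("lru", "lfu", "ttl")

    def __init__(self):
        self.miss_log = []    # (features, best_policy) ground-truth pairs
        self.threshold = 0.5  # learned boundary on reuse rate

    def features(self, key_stats):
        # Access-pattern features computed per cache key
        return {
            "reuse_rate": key_stats["hits"] / max(1, key_stats["accesses"]),
            "burstiness": key_stats.get("burstiness", 0.0),
        }

    def select_policy(self, key_stats):
        f = self.features(key_stats)
        # Stand-in inference: frequently reused keys get LFU, bursty get LRU
        if f["reuse_rate"] > self.threshold:
            return "lfu"
        return "lru" if f["burstiness"] > 0.5 else "ttl"

    def record_miss(self, key_stats, policy_that_would_have_hit):
        # The cache-miss stream doubles as labeled training data
        self.miss_log.append(
            (self.features(key_stats), policy_that_would_have_hit)
        )

    def retrain(self):
        # Incremental retraining pass (every 10 minutes in the example):
        # nudge the decision boundary toward the observed labels
        lfu_reuse = [f["reuse_rate"] for f, p in self.miss_log if p == "lfu"]
        if lfu_reuse:
            self.threshold = 0.5 * (self.threshold + min(lfu_reuse))
        self.miss_log.clear()
```

The closed loop — misses feed the model, the model steers eviction, eviction shapes future misses — is the unconventional component interaction that the Alice pre-screen below asks about.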
Stuck on Layer 3? Use a TRIZ principle as your compass.
If you can describe Layer 1 and Layer 2 but struggle to find the inventive mechanism, go back to the Software TRIZ Contradiction Matrix. Identify the trade-off your system resolves, look up the cell, and use the suggested inventive principles to guide what your Layer 3 should describe.
Open the TRIZ Contradiction Matrix →
Step 4 — Alice / Section 101 Pre-Screen
Four questions before you file
Over 60% of software patent rejections cite Alice v. CLS Bank. Before you spend $15–25K filing, run your Layer 3 through these four questions. All four must pass.
Does it improve a technical process?
✅ Alice-safe
“Reduces p99 latency by 40% using per-key learned eviction”
❌ Alice-risky
“Makes the user experience faster”
The improvement must be to a technical process, not just a business outcome. Alice rejects claims directed at abstract business methods.
Is the improvement tied to a specific mechanism?
✅ Alice-safe
“A gradient-boosted classifier selects eviction policies per cache key based on access-pattern features”
❌ Alice-risky
“Uses AI to optimize caching for better performance”
Vague references to “AI” or “machine learning” without describing how they work are fatal under Alice. Specificity is survival.
Would it require a specific implementation to work?
✅ Alice-safe
“The classifier is retrained every 10 minutes using the cache-miss stream as ground truth labels”
❌ Alice-risky
“The system learns and adapts over time”
If your claim could be implemented a hundred different ways, it’s probably too abstract. The narrower the implementation, the safer it is.
Is there something unconventional about how the components interact?
✅ Alice-safe
“Cache-miss stream doubles as training data for the eviction model, creating a closed-loop adaptive system”
❌ Alice-risky
“Components work together to improve performance”
The Supreme Court looks for an “inventive concept” — a non-conventional arrangement of components. If every part is standard and the combination is obvious, it fails Step 2 of the Alice test.
If any answer is “no,” go back to Layer 3 and add more specificity. The drill’s built-in Alice toggle checks each layer automatically.
Interactive Worksheet
Try it yourself
Think of a system or feature your team built that felt clever. Walk through each layer. See where the invention emerges.
Patent Ideation Drill
Three-layer worksheet: from obvious to invention
Your System
Name the system or feature you built that you think might contain an invention.
The Obvious Description
How you'd describe it in a standup
Describe what your system does in one plain sentence. Use generic terms. This is the 'everyone does this' version.
The Architectural Detail
How you'd explain it in a design review
Now add the specific architectural choices. What data structures? What topology? What protocol? What makes your approach different from the textbook version?
The Inventive Mechanism
The part you were quietly proud of building
Now describe the novel mechanism — the clever bit. The part where you solved a contradiction in a way that wasn't taught in any textbook, blog post, or StackOverflow answer. Be specific about how it works, not what it achieves.
Alice / Section 101 Pre-Screen
Quick check: is your Layer 3 anchored to a technical improvement?
Does this improve a technical process (not just a business outcome)?
Good: 'reduces p99 latency by 40%'. Bad: 'increases revenue'.
Is the improvement tied to a specific mechanism (not an abstract idea)?
Good: 'by dynamically partitioning the hash ring based on load signals'. Bad: 'by using AI to optimize'.
Would this require a specific technical implementation to work?
If someone can't build it from your description alone, add more detail. If they can build it 10 different ways, you might be too abstract.
Is there something unconventional about how components interact?
Novelty often hides in the wiring between known components, not in the components themselves.
Anchor Your Invention
What specific, measurable technical improvement does your Layer 3 mechanism produce?
Prior Art Notes
What existing solutions come closest? Why is your approach different?
Worked Examples
See the full workflow in action
Two real engineering scenarios walked through the complete method: TRIZ contradiction → inventive principle → three-layer drill. Notice how the same system transforms as you add specificity.
Adaptive Rate Limiter with Behavioral Trust Scoring
Throughput vs. Security
🧭 TRIZ compass: Principle 15 — Dynamics (adaptive algorithms, self-tuning systems)
We use rate limiting to prevent abuse.
We use a token bucket rate limiter with per-user quotas stored in Redis, with sliding window counters to handle burst patterns.
Each user's token replenishment rate is dynamically adjusted based on a behavioral trust score computed from request entropy (URL diversity, temporal distribution, payload variance). High-entropy users get higher limits automatically; low-entropy users (bot-like patterns) get progressively throttled. The trust score is updated per-request using an exponentially weighted moving average.
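A minimal sketch of the Layer 3 mechanism, for illustration only. URL diversity stands in for the full entropy feature set (temporal distribution and payload variance are omitted), and the base rate, EWMA factor, and scaling constants are assumptions.

```python
import math
from collections import Counter

class TrustScoredLimiter:
    """Sketch of the worked example: token replenishment scaled by a
    behavioral trust score updated per request via an exponentially
    weighted moving average over request entropy. Only URL diversity
    is modeled here; constants are illustrative."""

    def __init__(self, base_rate=10.0, alpha=0.1):
        self.base_rate = base_rate  # tokens/sec for a neutral user
        self.alpha = alpha          # EWMA smoothing factor
        self.trust = {}             # user -> trust score in [0, 1]
        self.urls = {}              # user -> Counter of observed URLs

    def _url_entropy(self, user):
        counts = self.urls[user]
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())

    def observe(self, user, url):
        self.urls.setdefault(user, Counter())[url] += 1
        # Normalize entropy to [0, 1]: diverse (human-like) traffic scores
        # high, single-endpoint (bot-like) traffic scores near zero
        h = self._url_entropy(user)
        max_h = math.log2(max(2, len(self.urls[user])))
        signal = min(1.0, h / max_h)
        prev = self.trust.get(user, 0.5)
        self.trust[user] = (1 - self.alpha) * prev + self.alpha * signal

    def replenish_rate(self, user):
        # High-trust users earn faster replenishment; low-trust users
        # are progressively throttled
        return self.base_rate * (0.25 + 1.5 * self.trust.get(user, 0.5))
```

Notice what changed from Layer 2: the limit is no longer a static quota but a function of a continuously learned signal. That shift is the inventive mechanism.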
ML Feature Store with Point-in-Time Consistency
Data Freshness vs. Consistency
🧭 TRIZ compass: Principle 13 — The Other Way Around (invert the dependency)
We store features for our ML models in a feature store.
We use a dual-write feature store where online features go to Redis and offline features go to a Parquet-based lake, with a reconciliation job that checks for drift.
Feature reads for inference are point-in-time consistent by attaching a logical timestamp (derived from the triggering event's Kafka offset) to every feature request. The store maintains a per-feature versioned log and serves the latest version that precedes the request timestamp, guaranteeing train-serve parity without duplicating storage.
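The core of this Layer 3 can be sketched in a few lines. This is an in-memory toy, assuming a monotonically increasing logical timestamp (the Kafka offset in the original); the versioned log and point-in-time read are the real mechanism, everything else is simplified away.

```python
import bisect

class PointInTimeFeatureStore:
    """Sketch of the worked example: a per-feature versioned log keyed
    by a logical timestamp. Reads return the latest version at or
    before the request's timestamp, so training and serving see the
    same values. In-memory stand-in; names are illustrative."""

    def __init__(self):
        # (entity, feature) -> parallel lists of timestamps and values
        self.log = {}

    def write(self, entity, feature, logical_ts, value):
        ts_list, values = self.log.setdefault((entity, feature), ([], []))
        # Writes must arrive in logical-timestamp order for bisect reads
        assert not ts_list or logical_ts >= ts_list[-1]
        ts_list.append(logical_ts)
        values.append(value)

    def read(self, entity, feature, logical_ts):
        ts_list, values = self.log.get((entity, feature), ([], []))
        # Latest version whose timestamp is <= the request timestamp
        i = bisect.bisect_right(ts_list, logical_ts)
        return values[i - 1] if i else None
```

The guarantee falls out of the read path: a training job replaying historical events and a live inference request with the same logical timestamp hit the same log entry, which is what "train-serve parity without duplicating storage" means in practice.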
Applications
When to use this drill
Invention Disclosure Meetings
Give this worksheet to every engineer before they walk into an invention disclosure session. They arrive with Layer 3 specificity instead of Layer 1 generalities.
Patent Sprint Kickoffs
Use it as the first exercise in a 72-hour patent sprint. Teams complete one worksheet per candidate idea, then the group reviews which Layer 3s are strongest.
Architecture Reviews
After a design review or ADR approval, ask the team: can you fill out all three layers for what you just decided? If they can, it's worth a patent conversation.
Onboarding Senior Engineers
New hires often bring patentable ideas from their previous work. The worksheet helps them articulate what was novel in their past systems without revealing trade secrets.
Ready for more?
The worksheet is free.
The platform does the rest.
IP Ramp combines TRIZ contradiction analysis, the three-layer drill, and AI-powered Alice scoring into one continuous workflow. Add prior art search and claim generation, and you go from trade-off to filing in a single tool.