Source: This framework comes from Sam Gaudet's appearance on the Agency Podcast; Sam attributes it to Paddy Galloway. Sam is the Creative Director for Dan Martell, whose content generates 200M views/month using this approach to content planning.
Content teams face a constant tension. Play it safe and your content gets stale. Experiment too much and your metrics tank. The 70/20/10 rule gives you a structure for managing this tradeoff deliberately rather than accidentally.
The Framework
| Allocation | Content Type | Risk Level | Purpose |
|---|---|---|---|
| 70% | Proven formats and topics | Low | Reliable performance, baseline growth |
| 20% | Modifications on what works | Medium | Incremental improvement, testing variables |
| 10% | Completely new concepts | High | Breakthrough potential, learning |
The ratio isn't arbitrary. It's designed to maintain momentum while creating space for the experiments that occasionally become your next proven format.
70%: What You Know Works
This is your foundation. Topics and formats that have already proven demand, either from your past content or from modeling successful content in your space.
What counts as "proven":
- Your own past outliers (videos that hit 2x+ your average)
- Formats you've repeated successfully multiple times
- Topics with consistent engagement regardless of execution
- Content types your audience explicitly requests
How to apply it:
Before each content planning cycle, audit your last 20-30 pieces. Identify which topics and formats consistently outperform. These become your 70% baseline.
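As a rough sketch of that audit, you could flag outliers programmatically using the 2x-average threshold mentioned above. The data and topic labels here are hypothetical; in practice you'd pull them from your platform's analytics export:

```python
from statistics import mean

# Hypothetical performance data: (title, topic, views).
past_content = [
    ("Scaling frameworks pt. 1", "frameworks", 120_000),
    ("Founder morning routine", "mindset", 45_000),
    ("Hiring your first exec", "hiring", 30_000),
    ("Scaling frameworks pt. 2", "frameworks", 210_000),
    ("Why founders burn out", "mindset", 95_000),
]

avg_views = mean(v for _, _, v in past_content)

# Outliers: pieces at 2x+ the channel average, per the rule above.
outliers = [(t, topic, v) for t, topic, v in past_content if v >= 2 * avg_views]

# Topics that produced outliers become candidates for the 70% baseline.
proven_topics = {topic for _, topic, _ in outliers}
```

Run this over your last 20-30 pieces rather than five; the point is that "proven" should be a computed list, not a gut feeling.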
The trap is letting 70% become 100%. Reliable performance feels good, so teams unconsciously drift toward only making what's safe. This works until it doesn't, and then you have no pipeline of new ideas ready to replace what stopped working.
Example: Dan Martell's team knows that "frameworks for scaling" and "founder mindset" content reliably performs. These topics form the 70% core. They don't have to guess whether these will work because they've already proven it.
20%: Slight Modifications
This is your testing ground. Take something that works and change one variable: a different thumbnail style, a new hook structure, a variation on the topic angle.
What counts as a modification:
- Same topic, different format (turn a talking head into a story-driven piece)
- Same format, different topic (apply your proven structure to a new subject)
- Same content, different packaging (test thumbnail colors, title structures)
- Proven concept from another creator, adapted to your voice
How to apply it:
For every 10 pieces of content, 2 should be deliberate experiments on variables you want to test. Track what you changed and measure the impact.
The key word is "deliberate." Random variation isn't testing. You need a hypothesis: "If I change X, I expect Y to happen." Then you measure whether it did.
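One way to enforce that discipline is to record each test as a structured object, so every experiment has exactly one variable, a stated hypothesis, and a measurable outcome. This is a minimal sketch, not a tool the source describes; the field names and numbers are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One deliberate test: change exactly one variable, state the hypothesis up front."""
    piece: str              # the content piece carrying the test
    variable_changed: str   # the single variable under test
    hypothesis: str         # "If I change X, I expect Y to happen"
    baseline_metric: float  # e.g. average views of the proven format
    result_metric: Optional[float] = None  # filled in after publishing

    def outcome(self) -> str:
        if self.result_metric is None:
            return "pending"
        return "supported" if self.result_metric > self.baseline_metric else "not supported"

# Hypothetical record modeled loosely on the 59-second video test below.
exp = Experiment(
    piece="59-second framework video",
    variable_changed="length",
    hypothesis="Extreme brevity with a complete payoff outperforms the channel average",
    baseline_metric=1_000_000,
)
exp.result_metric = 7_000_000
```

If you can't fill in `variable_changed` with a single item, you're doing random variation, not testing.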
Example: Sam's team tested a 59-second YouTube video when conventional wisdom said longer watch time wins. They hypothesized that extreme brevity with a complete payoff might outperform. The video got 7 million views and became their top YouTube performer.
10%: Completely New
This is your moonshot allocation. Ideas you've never tried, formats that feel risky, topics outside your normal lane.
What counts as completely new:
- A topic you've never covered
- A format you've never used
- A platform you've never seriously invested in
- A collaboration or concept that breaks your usual patterns
How to apply it:
Protect this allocation. Teams under pressure will cannibalize the 10% to make more "safe" content. This is a mistake. The 10% is where your next 70% comes from.
Accept that most of your 10% will underperform. That's the point. You're buying lottery tickets with a small portion of your output. The ones that hit become your new proven formats.
What happens when 10% works:
When a new concept outperforms, it graduates. First it becomes part of your 20% (you test variations). If those variations also work, it joins the 70%. This is how your content strategy evolves over time.
Example: Sam's team made their first AI-focused video as a 10% experiment. It hit 1 million views. AI content moved into the 20% for testing variations, then into the 70% as a core content bucket.
What happens when 10% fails:
Nothing catastrophic. A flopped 10% video (like their "quitting drinking" video that got only 10K views) doesn't damage your channel because 90% of your content is still performing. You learn what doesn't resonate and move on.
When This Framework Works
- Teams producing enough volume to have meaningful percentages (10+ pieces/month)
- Channels past the pure experimentation phase (you know what generally works)
- Content strategies that need to balance growth with stability
- Teams that have been playing it too safe and need permission to experiment
When It Doesn't
- Very early-stage creators still finding their voice (100% should be experimentation)
- News or reactive content where you can't plan allocations
- Teams producing so little content that percentages don't apply (if you make 4 videos/month, this is just "make 3 proven, 1 new")
Quick Reference
| Allocation | Question to Ask | If Missing |
|---|---|---|
| 70% Proven | "Has this topic/format worked before?" | You're gambling instead of compounding |
| 20% Modified | "What variable am I testing here?" | You're not learning from what works |
| 10% New | "Have we ever tried anything like this?" | You're not building your future 70% |
Planning check: Before finalizing any content calendar, label each piece as 70, 20, or 10. If your ratio is off, adjust before production starts.
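The planning check above is mechanical enough to automate. A minimal sketch, assuming each calendar entry has already been labeled (the label names and tolerance are my own choices, not part of the framework):

```python
from collections import Counter

TARGETS = {"proven": 0.70, "modified": 0.20, "new": 0.10}

def check_allocation(calendar, tolerance=0.05):
    """Compare a labeled content calendar against the 70/20/10 targets.

    `calendar` is one label per planned piece: "proven", "modified", or "new".
    Returns {label: (actual_share, within_tolerance)}.
    """
    counts = Counter(calendar)
    total = len(calendar)
    return {
        label: (counts.get(label, 0) / total,
                abs(counts.get(label, 0) / total - target) <= tolerance)
        for label, target in TARGETS.items()
    }

# A 10-piece month at 7 proven, 2 modified, 1 new hits the ratio exactly.
on_target = check_allocation(["proven"] * 7 + ["modified"] * 2 + ["new"])

# A month of all-safe content fails the "new" check: no future 70% in the pipeline.
all_safe = check_allocation(["proven"] * 10)
```

A failed check before production starts costs you a planning conversation; a failed check discovered six months later costs you a pipeline.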