Backtesting the InDecision Framework
A 67% win rate means nothing without the methodology to verify it. Backtesting is how you separate a real edge from a lucky streak — and how you avoid building your trading career on a foundation of survivorship bias and overfitted curves.
Trust, but Verify
I've told you that the InDecision Framework has a 67%+ historical win rate across 7+ years of data. You should be skeptical of that number. Not because I'm lying — because healthy skepticism about any claimed edge is exactly the discipline that separates serious traders from followers.
A claimed win rate without a backtesting methodology is a marketing number. A claimed win rate with a rigorous, transparent backtesting process is an edge you can actually build on. This lesson shows you the difference, explains how to backtest the 6-factor model yourself, and — more importantly — teaches you the pitfalls that turn honest backtesting into self-deception.
Because the most dangerous thing in trading isn't not having an edge. It's believing you have one when you don't.
What Backtesting Actually Is
Backtesting is the process of applying your trading system's rules to historical data and measuring the results as if you had traded that system in real time.
The key word is "rules." If your system is discretionary — "I look at the chart and trade what feels right" — it can't be backtested, because there are no consistent rules to apply. One of the structural advantages of the InDecision Framework is that the 6-factor model produces a quantifiable conviction score. That score has rules. Rules can be tested.
The backtesting process for InDecision:
- Select a historical period (minimum 2 years to capture varying market conditions)
- Walk through the data chronologically, day by day
- At each point, calculate the 6-factor conviction score using only information that would have been available at that time
- Record signals: where the conviction score exceeded the signal threshold (55%+ for entry consideration, 80%+ for high conviction)
- Track the outcome of each signal against defined rules: entry price, stop loss placement, target price, time horizon
- Calculate: win rate, average R-multiple on winners, average R-multiple on losers, maximum drawdown, expectancy
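The steps above can be sketched as a minimal walk-forward loop. This is an illustration, not the framework's implementation: `conviction_score` is a placeholder for your own 6-factor calculation, and the 5% stop / 10% target are stand-ins for whatever exit rules you define.

```python
from dataclasses import dataclass

ENTRY_THRESHOLD = 55   # minimum conviction for entry consideration
HIGH_CONVICTION = 80   # tier for high-conviction setups

@dataclass
class Signal:
    date: str
    score: float
    entry: float
    stop: float
    target: float
    outcome_r: float = 0.0  # R-multiple, filled in when the trade resolves

def run_backtest(bars, conviction_score):
    """Walk the data chronologically; score each bar using only
    information available up to and including that bar's close."""
    signals = []
    for i, bar in enumerate(bars):
        history = bars[: i + 1]          # never includes future bars
        score = conviction_score(history)
        if score >= ENTRY_THRESHOLD:
            entry = bar["close"]
            stop = entry * 0.95          # placeholder exit rules
            target = entry * 1.10
            signals.append(Signal(bar["date"], score, entry, stop, target))
    return signals
```

The one discipline the loop enforces is the slice `bars[: i + 1]`: the scoring function physically cannot see data past the current bar.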
Backtesting proves the edge exists. Signal quality tiers turn a 67% system into a 78% system on best setups.
The output isn't just a win rate. It's a complete performance profile that tells you how the system behaves across different market conditions — trending, ranging, volatile, quiet, bullish, bearish.
The 67% Win Rate: What It Means and What It Doesn't
What the 67% number captures:
- High-conviction signals only. Low-conviction readings (below 55%) are filtered out. The win rate applies to the signals the system actually endorses, not to every possible trade.
- Defined exit rules. Each signal has a target and a stop. The win rate is measured against those exit rules, not against some hypothetical optimal exit.
- Multiple market conditions. The 7-year dataset includes bull markets, bear markets, consolidation ranges, and extreme volatility events. The 67% is an aggregate across all of them.
What the 67% doesn't capture:
- Execution variance. Slippage, delayed entries, early exits — the gap between the system's theoretical signals and your actual execution will degrade the realized win rate.
- Regime-specific performance. The system performs better in some market conditions than others. The aggregate 67% is an average that blends periods running above 75% with periods closer to 55%.
- Your psychological compliance. The framework can produce a high-conviction bearish signal, but if you override it because "this time feels different," the win rate of your implementation drops.
The Three Sins of Backtesting
More trading strategies have been destroyed by bad backtesting than by bad markets. Here are the three most common sins, and how to recognize them in your own work.
Sin 1: Overfitting
Overfitting is the process of adjusting your system's parameters until they perfectly match historical data — producing a spectacular backtest that fails immediately in live trading.
The mechanism: you test a set of parameters, see mediocre results, adjust, see slightly better results, adjust again. After 50 iterations, you've found the parameter combination that produces an 85% win rate on historical data. What you've actually found is the parameter combination that happens to match the noise in that specific dataset.
How to detect overfitting:
- If your system has more parameters than it has independent signals, it's probably overfit
- If small changes to any parameter dramatically degrade performance, it's probably overfit
- If the system works spectacularly on your test data but fails on new data, it's definitely overfit
The fix: Use out-of-sample testing. Split your historical data into two periods. Develop your system on the first period. Test it, without modification, on the second period. If the results are consistent, the edge is likely real. If the results collapse, you overfit to the first period.
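The out-of-sample check itself is simple once you have per-trade results from both periods. A sketch, assuming trades are recorded as R-multiples (positive means a win) and using a 10-percentage-point tolerance as an illustrative threshold:

```python
def out_of_sample_check(trades_in_sample, trades_out_of_sample, tolerance=0.10):
    """Compare win rates between the development and validation periods.
    A collapse in the out-of-sample rate is the signature of overfitting."""
    def win_rate(trades):
        return sum(1 for t in trades if t > 0) / len(trades)

    in_wr = win_rate(trades_in_sample)
    out_wr = win_rate(trades_out_of_sample)
    return {
        "in_sample": in_wr,
        "out_of_sample": out_wr,
        "consistent": abs(in_wr - out_wr) <= tolerance,
    }
```

The tolerance is a judgment call: some degradation out of sample is normal, but a drop from 85% to 50% is a verdict, not variance.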
The InDecision Framework was developed on data from 2017-2021 and validated on data from 2022-2025. The consistency across both periods — including a severe bear market in the second period — is what gives the 67% number its credibility.
Sin 2: Survivorship Bias
Survivorship bias in backtesting means testing your system only on assets that survived the test period — and ignoring the ones that didn't.
The classic version of this trap: backtesting a strategy against today's top 20 coins. That universe contains only the assets that made it, and silently excludes every coin that collapsed along the way, so the results are flattered by hindsight.
The fix: Use the data as it existed at each point in time. When backtesting signals from 2020, use the top 20 as they were in 2020, including coins that later failed. This is harder to set up but produces honest results.
For the InDecision Framework, backtesting is run primarily on Bitcoin and other major-cap assets with continuous price histories across the test period, which avoids the survivorship problem by restricting the universe to assets that had sufficient history and liquidity throughout.
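One way to set up point-in-time data is a mapping from snapshot date to the assets that qualified at that date, so each historical signal is evaluated against the universe as it stood then. The dates and tickers below are purely illustrative; in practice the snapshots come from archived market-cap rankings, not today's listings.

```python
# Hypothetical point-in-time snapshots (illustrative tickers only).
UNIVERSE_SNAPSHOTS = {
    "2020-01-01": ["BTC", "ETH", "XRP", "BCH", "LTC"],
    "2022-01-01": ["BTC", "ETH", "BNB", "SOL", "ADA"],
}

def universe_as_of(date):
    """Return the most recent snapshot at or before `date`,
    so a 2020 backtest never sees the 2022 universe."""
    valid = [d for d in sorted(UNIVERSE_SNAPSHOTS) if d <= date]
    if not valid:
        raise ValueError("no universe snapshot available before " + date)
    return UNIVERSE_SNAPSHOTS[valid[-1]]
```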
Sin 3: Lookahead Bias
Lookahead bias means using information in your backtest that wouldn't have been available at the time of the trade. This is the most insidious sin because it's often unintentional.
Examples:
- Using a "confirmed" daily candle pattern at 2pm when the candle doesn't close until midnight
- Referencing a volume spike that you can see on the chart but occurred after the entry signal
- Using a moving average value that includes the current day's data in a signal that should only use prior days
Each of these introduces future information into a past decision. The backtest looks brilliant because the system "knew" things that a real-time trader couldn't have known.
The fix: Walk-forward analysis. Process the data sequentially, one bar at a time, using only information available at each bar's close. Never reference data from a period after the signal timestamp. This is tedious but non-negotiable for honest results.
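As a concrete guard against the moving-average example above, compute indicators from prior bars only. A minimal sketch (the helper name is mine):

```python
def prior_sma(closes, i, period=20):
    """Simple moving average of the `period` closes strictly before bar i.
    Returns None until enough history exists; never touches closes[i:]."""
    if i < period:
        return None
    window = closes[i - period : i]   # excludes the current bar entirely
    return sum(window) / period
```

Whether your rule uses strictly-prior bars or includes the current bar's close is your choice; the non-negotiable part is that the slice boundary never reaches past the signal timestamp.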
How to Backtest the Framework Yourself
If you want to verify the InDecision Framework's performance — and you should — here's the practical approach:
Step 1: Gather data. You need historical price data with volume at your preferred timeframe. For swing trading signals, daily candles are sufficient. Sources: TradingView export, CryptoDataDownload, or exchange APIs.
Step 2: Define the rules precisely. Write down exactly how you calculate each of the 6 factors. Not "volume looks strong" — but "volume exceeds the 20-period average by 1.5x or more." Every factor needs quantifiable criteria.
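The volume criterion translates directly into code. The 20-period window and 1.5x multiple come from the text above; the function itself is an illustrative sketch of what "quantifiable criteria" means in practice.

```python
def volume_factor(volumes, i, period=20, multiple=1.5):
    """True when volume at bar i exceeds `multiple` times the average
    of the prior `period` bars -- 'volume exceeds the 20-period
    average by 1.5x or more', made testable."""
    if i < period:
        return False  # not enough history to evaluate the rule
    avg = sum(volumes[i - period : i]) / period
    return volumes[i] >= multiple * avg
```

Every one of the 6 factors needs a definition at this level of precision, or the backtest is measuring your mood on the day you eyeballed the chart.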
Step 3: Walk through chronologically. Starting from the beginning of your dataset, evaluate each bar using only information available at that bar's close. Record the conviction score.
Step 4: Log every signal. When the conviction score crosses your threshold, record: date, direction, entry price, stop placement, target, and conviction score.
Step 5: Track outcomes. For each signal, record whether the target or stop was hit first. Calculate R-multiple for each trade.
Step 6: Compute statistics. Win rate, average winner (in R), average loser (in R), expectancy, maximum drawdown, longest losing streak.
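Given the trade log from Steps 4-5, every Step 6 statistic falls out of the list of per-trade R-multiples. A minimal sketch (the function name and output keys are mine):

```python
def performance_stats(r_multiples):
    """Summary statistics from a chronological list of R-multiples."""
    wins = [r for r in r_multiples if r > 0]
    losses = [r for r in r_multiples if r <= 0]
    win_rate = len(wins) / len(r_multiples)
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = sum(losses) / len(losses) if losses else 0.0
    expectancy = win_rate * avg_win + (1 - win_rate) * avg_loss

    # Longest losing streak and max drawdown of the cumulative R curve
    streak = longest = 0
    equity = peak = max_dd = 0.0
    for r in r_multiples:
        streak = streak + 1 if r <= 0 else 0
        longest = max(longest, streak)
        equity += r
        peak = max(peak, equity)
        max_dd = max(max_dd, peak - equity)

    return {"win_rate": win_rate, "avg_win_r": avg_win,
            "avg_loss_r": avg_loss, "expectancy_r": expectancy,
            "max_drawdown_r": max_dd, "longest_losing_streak": longest}
```

Expectancy in R per trade is the single number that matters most: a 67% win rate with 1.5R winners and 1R losers is a business; a 67% win rate with 0.3R winners and 1R losers is a slow leak.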
What Good Backtesting Results Look Like
A legitimate edge doesn't look like a perfect equity curve. Here's what honest backtesting results look like for a real system:
- Win rate between 55% and 70%. A system claiming a 90%+ win rate is either overfit, contaminated by survivorship bias, or using stops so wide that wins are tiny relative to losses.
- Losing streaks of 5-8 trades. Even at a 67% win rate, strings of 5+ consecutive losses occur regularly. If your backtest never shows a losing streak, something is wrong with the methodology.
- Drawdown periods of weeks to months. Real systems go through periods of underperformance. If your backtest shows consistent profits every month for 5 years, you've overfit.
- Better performance in trending markets, weaker in ranges. Most directional systems — including InDecision — perform best when markets are moving and struggle in extended chop. If your system shows equal performance in all conditions, be suspicious.
The 67% number for InDecision includes the drawdowns, the losing streaks, and the flat periods. The number is honest because it includes the ugly months along with the good ones.
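The losing-streak claim is easy to sanity-check: at a 67% win rate, any given trade loses 33% of the time, so a specific run of 5 straight losses has probability 0.33^5, about 0.4%, and over hundreds of trades such runs become near-certain. A quick Monte Carlo sketch (the seed, trade count, and run count are all illustrative choices):

```python
import random

def longest_loss_streak(n_trades, win_rate, rng):
    """Longest run of consecutive losses in one simulated trade history."""
    streak = longest = 0
    for _ in range(n_trades):
        if rng.random() < win_rate:
            streak = 0
        else:
            streak += 1
            longest = max(longest, streak)
    return longest

rng = random.Random(42)  # fixed seed so the sketch is reproducible
# Fraction of simulated 500-trade histories at a 67% win rate
# that contain at least one streak of 5 or more losses
runs = [longest_loss_streak(500, 0.67, rng) for _ in range(1000)]
frequency = sum(1 for s in runs if s >= 5) / len(runs)
```

Run this and the majority of simulated histories contain a 5-loss streak, which is exactly why a backtest with no losing streaks should make you suspicious of the methodology, not proud of the system.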
What This Means for Your Trading
Backtesting is not a one-time exercise. It's an ongoing discipline — especially as you develop your personal implementation of the framework.
Keep backtesting as market conditions evolve. Test on new data as it becomes available. Track your live results against your backtested expectations. If your live performance deviates significantly from your backtest, investigate whether the deviation is due to execution issues, market regime changes, or flaws in the original methodology.
The 67% win rate is a benchmark. Your job is to verify it, understand its limitations, and then build your trading process around a number you've confirmed with your own data — not one you read on a website.
Trust the framework. Verify the framework. Trade the framework. In that order.