The Question This Report Is Answering
Every four weeks, we run the same uncomfortable exercise: rank every live strategy by what it actually returned, not what the backtest promised. The goal is not to celebrate winners or bury losers. It's to ask a harder question: when the market environment shifts, which signal types hold up, which collapse, and what does the distribution of outcomes tell us about the edge we thought we had?
| # | Strategy | Type | Win Rate | Sharpe | Profit Factor | Signals (N) | Status |
|---|---|---|---|---|---|---|---|
| #1 | Triple Oversold Extreme | multi | 63.5% | 0.94 | 2.68 | 263 | TEST |
| #2 | RSI + Volume Combo Long | multi | 56.6% | 0.94 | 1.92 | 106 | TEST |
| #3 | Breakout + Volume Surge | multi | 50.7% | 0.90 | 2.10 | 69 | TEST |
| #4 | VWAP Mean Reversion Long | mean_reversion | 53.6% | 0.72 | 1.57 | 7,147 | EDGE |
| #5 | BB + Stochastic Double Oversold | multi | 58.9% | 0.69 | 1.61 | 2,501 | EDGE |
The trailing 30-day window closing April 17, 2026 produced 30 closed signals. That's a meaningful sample: just large enough to take seriously, not large enough to declare anything permanent. What it shows is a strategy environment under genuine stress: a win rate of 33.3%, an average return per trade of -0.74%, and a spread between best and worst that should give every systematic trader pause. This report unpacks that spread, examines where the damage came from, and draws the narrow set of conclusions the data actually supports.
The Headline Numbers Don't Flatter Us
Let's start with the aggregate and resist the temptation to spin it. Across 30 closed signals over the trailing 30 days, our strategies produced a win rate of 33.3%, meaning exactly 10 of 30 trades closed in positive territory. The average return per trade was -0.74%. That is a net-negative period by any honest accounting.
For context: a strategy running at 33.3% win rate is not automatically broken. If the average winner is meaningfully larger than the average loser (what practitioners call a favorable payoff ratio), a sub-50% win rate can still produce positive expectancy over time. The critical unknown here, which we will address directly in the limitations section, is that our current data release does not disaggregate average winner size from average loser size. We know the count. We know the mean. We know the extremes. The full distribution remains partially obscured.
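The expectancy arithmetic described above can be sketched in a few lines. The win rate below is this period's actual figure; the payoff ratios are purely illustrative, since this release does not disclose actual winner and loser sizes:

```python
def expectancy(win_rate: float, avg_win: float, avg_loss: float) -> float:
    """Expected return per trade: p * W - (1 - p) * L, with L a positive magnitude."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

p = 10 / 30  # this period's win rate: 10 winners out of 30 closed signals

# Illustrative payoff ratios only -- the release does not break out winner/loser means.
print(expectancy(p, avg_win=2.0, avg_loss=1.0))  # 2:1 payoff -> roughly zero (break-even)
print(expectancy(p, avg_win=0.8, avg_loss=1.0))  # 0.8:1 payoff -> clearly negative
```

The takeaway is mechanical: at a one-third win rate, the system's sign flips entirely on a number this release does not report.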
What we can say: the best single trade was ZM (Zoom Video Communications), a BUY signal, returning +7.82%. The worst single trade was NFLX (Netflix), also a BUY signal, returning -10.6%. The asymmetry between those two numbers is notable. The best winner captured roughly 74 basis points for every 100 basis points lost by the worst loser. That is not a favorable ratio if it's representative of the broader distribution, and we don't yet know whether it is.
ZM Leads the Board, and Why That Matters
ZM topping the 30-day leaderboard with a +7.82% return on a BUY signal is worth examining in isolation, not because one trade defines a strategy, but because it illustrates something specific about where alpha surfaced during this window.
Zoom has spent much of the post-pandemic period in a prolonged re-rating process: revenue growth decelerating, valuation multiples compressing, institutional sentiment shifting from growth to show-me. A BUY signal generating +7.82% in a 30-day window suggests either a mean-reversion setup triggered near a local trough, a sentiment catalyst that our signal system captured ahead of broader consensus, or simple favorable timing within a volatile name. We cannot determine which from the data available. What we can note is that ZM was the only trade in this cohort to return above +5%, a threshold that, given the -0.74% average, means it almost certainly functioned as a partial offset to a heavier distribution of losers.
That is actually an important structural observation. When your win rate is 33.3% and your average return is -0.74%, the arithmetic implies that your 10 winners are not collectively outrunning your 20 losers. ZM's +7.82% was a meaningful contribution to keeping that average from being worse. One trade's outsized return carrying the leaderboard is a pattern worth watching. It suggests concentration of positive outcomes rather than broad-based signal strength, a fragile form of performance that is harder to rely on systematically.
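The carry effect described above can be made concrete from the aggregate figures alone. Every input below comes from this report; nothing about the individual returns of the other 29 trades is assumed:

```python
n, mean_ret = 30, -0.74   # closed signals and average return per trade (%)
total = n * mean_ret      # sum of all 30 returns: about -22.2%
zm_best = 7.82            # best trade this window (ZM BUY)

# Strip out ZM's contribution to see what the other 29 trades summed to.
rest = total - zm_best    # about -30.0% across the remaining 29 trades
rest_mean = rest / 29     # about -1.04% per trade without ZM

print(round(total, 2), round(rest, 2), round(rest_mean, 2))
```

In other words, removing one trade moves the per-trade average from -0.74% to roughly -1.04%, which is the "concentration of positive outcomes" point in numerical form.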
NFLX and the Cost of Being Wrong on BUYs
The worst performer, NFLX at -10.6% on a BUY signal, deserves equal analytical attention. Netflix is not an obscure small-cap where liquidity or information asymmetry explains a bad trade. It is one of the most liquid, most-covered equities in the U.S. market. A -10.6% loss on a BUY signal in a 30-day window reflects either a significant adverse price move in the underlying, poor entry timing relative to a known catalyst, or a signal framework that did not adequately account for the macro or sector environment at time of entry.
We are not in a position to diagnose the precise failure mode from aggregate data alone. But the NFLX loss matters to the leaderboard calculus in a specific way: the spread between best and worst performer is 18.42 percentage points (from +7.82% to -10.6%). That is a wide dispersion. In a 30-trade sample where 20 trades are losers, wide dispersion on the downside pulls the mean sharply negative. The -0.74% average return is almost certainly being held closer to zero by a small number of solid wins (ZM prominently among them) while the loss-side distribution is doing more damage.
One additional observation: both the best and worst performers this period were BUY signals. This is not a sell-side vs. buy-side performance story. It's a signal quality and timing story. We are not generating differentiated outcomes through directional variety; we are generating them, or failing to, within the same directional category.
What the Data Does Not Support
This section exists because intellectual honesty requires us to state the limits of our own conclusions. Here is what the trailing 30-day data does not tell us, and where we could be drawing the wrong inferences:
- We cannot confirm strategy-level attribution. The aggregate numbers (30 signals, 10 wins, -0.74% average) do not tell us which specific strategy or signal type generated the wins versus the losses. ZM and NFLX are identified by symbol, but we do not have a breakdown of performance by signal category, sector, market-cap tier, or holding period. Conclusions about which strategy is leading are therefore premature.
- We cannot assess payoff ratio. As noted above, without the average winner and average loser broken out separately, we cannot calculate expectancy. At a 33.3% win rate, a 2:1 payoff ratio is roughly break-even before costs; anything materially above that is profitable, and a 0.8:1 ratio is a clear loser. The current data does not resolve which regime we are in.
- 30 trades is not a large enough sample to declare structural underperformance. Thirty observations give us a directional read on recent conditions, but variance is high at this sample size. A single 30-day window with a negative mean return is consistent with both a broken strategy and a profitable strategy experiencing a normal drawdown period. We don't know which this is yet.
- Survivorship and timing effects may be present. We are reporting on closed signals. Open positions that have not yet closed are excluded. If our best-performing open positions are still running, the closed-signal picture may be systematically pessimistic. Conversely, if we closed winners early and held losers longer, the reverse could apply.
The honest summary: this period's data raises questions. It does not yet answer them definitively.
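One way to frame the payoff-ratio unknown from the list above: at any fixed win rate there is a break-even winner-to-loser size ratio below which the system loses money. This is a minimal sketch using standard expectancy algebra; only the win rate comes from this report:

```python
def breakeven_payoff(win_rate: float) -> float:
    """Winner/loser size ratio at which expectancy is zero: b* = (1 - p) / p."""
    return (1 - win_rate) / win_rate

p = 10 / 30  # this period's win rate
print(breakeven_payoff(p))  # 2.0 -- winners must average twice the losers just to break even
```

Until a release discloses the actual winner and loser means, we cannot say which side of that 2.0 threshold the system sits on.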
What Traders Should Actually Do With This
Despite the limitations, the 30-day leaderboard data supports a narrow set of actionable inferences for traders running or following our strategies:
- Reduce position sizing until win rate recovers. A 33.3% win rate over 30 closed signals is a concrete, recent data point. Regardless of longer-run expectations, the prudent response to a documented negative-mean-return period is to reduce per-trade exposure. This is not panic; it's Kelly-consistent risk management. When your measured edge is negative, you size down.
- Watch ZM-type setups for pattern repeatability. The +7.82% return on ZM's BUY signal is the one bright spot in this cohort. Whether that setup (whatever its specific trigger) is repeatable in the next 30-day window is worth tracking explicitly. If a similar configuration appears in another high-volatility, post-derating name, it deserves scrutiny as a potential high-conviction candidate, with the caveat that one data point is not a pattern.
- Treat NFLX as a case study, not an outlier to ignore. The instinct when a large-cap liquid name generates a -10.6% loss on a BUY signal is to attribute it to bad luck and move on. The more productive response is to examine the entry conditions carefully. Was there a known catalyst risk at time of entry? Did the signal fire into a technically extended setup? If the answer to either is yes, that's a process note, not just a loss.
- Do not change strategy rules based on a single 30-day window. This is the mirror image of the sizing point. Reducing exposure is appropriate. Abandoning or redesigning signal logic based on 30 trades is premature. Strategy evaluation requires larger samples, typically 100+ closed signals at minimum, before structural conclusions about edge are warranted.
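The "Kelly-consistent" sizing point in the first bullet can be illustrated with the standard binary-bet Kelly formula, f* = p - (1 - p) / b, where b is the payoff ratio. The payoff ratios below are hypothetical, since this release does not disclose them; the point of the sketch is that at a one-third win rate, every plausible b yields a small or negative optimal fraction:

```python
def kelly_fraction(win_rate: float, payoff_ratio: float) -> float:
    """Kelly criterion for a binary bet: f* = p - (1 - p) / b.
    A negative result means the measured edge is negative -- size down, don't size up."""
    return win_rate - (1 - win_rate) / payoff_ratio

p = 10 / 30  # this period's measured win rate

# Hypothetical payoff ratios -- not disclosed in this data release.
for b in (1.5, 2.0, 2.5):
    print(b, kelly_fraction(p, b))
```

Even the optimistic 2.5:1 case produces a Kelly fraction under 7% of capital, which is the quantitative version of "reduce per-trade exposure."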
| Metric | Value | Context |
|---|---|---|
| Total Closed Signals | 30 | Full 30-day trailing window |
| Win Rate | 33.3% | 10 of 30 signals closed positive |
| Average Return per Trade | -0.74% | Net negative period |
| Best Performer | ZM (BUY) +7.82% | Top of leaderboard by realized return |
| Worst Performer | NFLX (BUY) -10.6% | Largest single-trade loss this period |
| Best/Worst Spread | 18.42 pp | Wide dispersion across closed signals |
Methodology: What We're Measuring and Where the Gaps Are
This leaderboard covers 30 closed signals generated by Stocks365's live signal system over the trailing 30 calendar days ending April 17, 2026. "Closed" means the position was entered and exited within the measurement window; return percentages reflect the move from signal entry price to exit price, before transaction costs and slippage, which are not modeled in this release. The sample of 30 is the minimum threshold we use before publishing aggregate performance commentary; below that number, the noise-to-signal ratio is too high to report responsibly.
We do not have, in this release, a per-strategy or per-category breakdown. ZM and NFLX are identified as the distribution extremes; the remaining 28 closed signals are represented only in aggregate. This is a significant limitation and one we are actively working to address in future leaderboard releases. Additionally, this analysis covers one trailing window only. We make no claim that 30-day performance is predictive of the next 30 days; the data here is descriptive, not predictive. Anyone using this report to size up exposure based on ZM's outperformance alone would be drawing conclusions well beyond what the evidence supports. We are publishing this because our readers deserve to see the numbers as they are, not as we would prefer them to be.
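For reproducibility, every aggregate metric in the summary table reduces to a few lines over the list of closed-signal returns. The helper below is a sketch of that computation; the sample input is illustrative, since the full 30-return list is not published in this release:

```python
def leaderboard_stats(returns: list[float]) -> dict:
    """Aggregate stats in the form reported in the summary table (returns in %)."""
    wins = [r for r in returns if r > 0]
    return {
        "n": len(returns),                          # total closed signals
        "win_rate": len(wins) / len(returns),       # fraction closing positive
        "avg_return": sum(returns) / len(returns),  # mean return per trade
        "best": max(returns),
        "worst": min(returns),
        "spread_pp": max(returns) - min(returns),   # best/worst spread, pct. points
    }

# Illustrative input only -- the full 30-return list is not published in this release.
sample = [7.82, -10.6, 1.3, -2.1]
print(leaderboard_stats(sample))
```

Running it on this window's actual 30 returns would reproduce the 33.3% win rate, the -0.74% mean, and the 18.42 pp spread reported above.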