
What Worked, What Didn't: An Honest April 2026 Retrospective

A 30-signal audit of our live system reveals a 33.3% win rate and -0.74% average return. Here's what the numbers actually say.

The Question We Owe You an Answer To

Every quantitative shop publishes its winners. The Sharpe ratios get highlighted. The best trade of the quarter gets a case study. What rarely gets published — with the same rigor, the same word count, the same front-page placement — is the accounting of what didn't work. This report is that accounting.

Stocks365 Research · Leaderboard

Top Trading Strategies
Ranked by Sharpe ratio from our walk-forward backtests.

# | Strategy | Type | Win Rate | Sharpe | PF | N | Status
1 | Triple Oversold Extreme | multi | 63.5% | 0.94 | 2.68 | 263 | TEST
2 | RSI + Volume Combo Long | multi | 56.6% | 0.94 | 1.92 | 106 | TEST
3 | Breakout + Volume Surge | multi | 50.7% | 0.90 | 2.10 | 69 | TEST
4 | VWAP Mean Reversion Long | mean_reversion | 53.6% | 0.72 | 1.57 | 7,147 | EDGE
5 | BB + Stochastic Double Oversold | multi | 58.9% | 0.69 | 1.61 | 2,501 | EDGE

See all strategies on our Insights page · Based on real backtest data from Stocks365

Over the trailing 30 and 90 days ending April 18, 2026, our proprietary signal system generated and closed 30 trades. The aggregate result: a 33.3% win rate and an average return of -0.74% per trade. Those numbers are not good. They are worth examining carefully — not to explain them away, but to understand precisely where the model is failing and why honest retrospectives are the only kind worth reading.

The question this report answers is narrow and deliberate: what does our signal system's recent live performance actually look like, trade by trade, and what can we responsibly conclude from it? The answer, as you will see, contains more uncertainty than resolution. That is also worth saying plainly.

The Ledger: Thirty Trades, One Uncomfortable Number

The headline figures are identical across our 30-day and 90-day lookback windows, which itself is a data point. Both periods show 30 closed signals, 10 wins, 20 losses, a 33.3% win rate, and an average return per trade of -0.74%. The convergence of the two windows is not a sign of stability — it is a sign that the same 30 trades comprise both samples. In other words, we are working with a single cohort of signals, not two independently measured periods. Readers should hold that in mind throughout.

Within that cohort, the distribution is skewed in the wrong direction. The best single trade — a BUY signal on ZM (Zoom Video Communications) — returned +7.82%. The worst — a BUY signal on NFLX (Netflix) — returned -10.6%. The asymmetry matters: the largest loss is roughly 35% larger in absolute magnitude than the largest gain. In a system where wins are already outnumbered two-to-one, a loss distribution that skews more negative than the gain distribution skews positive is a compounding problem, not a canceling one.

To make the arithmetic concrete: at a 33.3% win rate, a system needs its average win to be at least twice its average loss to break even on a per-trade expected value basis. We do not have granular per-trade return data for all 30 signals to calculate that ratio precisely. What we can say is that the aggregate average return of -0.74% across all 30 trades — wins and losses combined — indicates the system is currently destroying value on a per-trade basis, not preserving it.
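The two-to-one requirement falls directly out of the expected-value identity. Here is a minimal sketch; the function and the illustrative win/loss magnitudes are ours, since we do not have the cohort's actual average win and loss sizes:

```python
def expected_value_per_trade(win_rate: float, avg_win: float, avg_loss: float) -> float:
    """Per-trade expected value: p * avg_win - (1 - p) * avg_loss.

    avg_win and avg_loss are positive magnitudes (0.02 means 2%).
    """
    return win_rate * avg_win - (1 - win_rate) * avg_loss

p = 1 / 3  # the cohort's 33.3% win rate

# At p = 1/3, breakeven requires avg_win = 2 * avg_loss:
print(expected_value_per_trade(p, avg_win=0.04, avg_loss=0.02))  # ~0: breakeven
print(expected_value_per_trade(p, avg_win=0.03, avg_loss=0.02))  # negative: value-destroying
```

Any win/loss ratio below 2:1 at this win rate produces negative expected value per trade, which is consistent with the -0.74% aggregate figure.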

Metric | 30-Day Window | 90-Day Window
Total Closed Signals | 30 | 30
Wins | 10 | 10
Win Rate | 33.3% | 33.3%
Average Return per Trade | -0.74% | -0.74%
Best Trade (Symbol / Return) | ZM BUY / +7.82% | ZM BUY / +7.82%
Worst Trade (Symbol / Return) | NFLX BUY / -10.6% | NFLX BUY / -10.6%

Reading the Extremes: ZM and NFLX as Diagnostic Tools

The two outlier trades tell different stories about what the model is doing well and badly — and looking at them as individual case studies, rather than just boundary conditions, is where some diagnostic value lives.

The ZM BUY at +7.82% represents the system functioning as designed: a momentum or mean-reversion signal (we do not have signal-type granularity beyond the BUY label in this dataset) that captured a directional move cleanly. Zoom has been a structurally volatile name since its post-pandemic rerating, and it remains a stock where information asymmetry between retail and institutional flow can create short-lived mispricings. A nearly 8% return on a single closed signal is meaningful. It also, notably, was not enough to offset the losses generated by the other 29 trades, which averaged out to a negative result even with this outlier included.

The NFLX BUY at -10.6% is more instructive. Netflix is a large-cap, highly liquid, extensively covered name — the kind of security where systematic signals face the steepest competition from better-capitalized participants. A double-digit loss on a BUY signal in a name like Netflix suggests one of several possibilities: the signal fired near a local peak, macro conditions shifted adversely during the holding period, or the model's entry criteria are not sufficiently conditioned on broader market regime. We cannot determine which from the data available. But the NFLX loss is the single largest drag on the cohort, and in a 30-trade sample, one trade of that magnitude carries real aggregate weight.

Both signals were BUY-type. The data does not surface any SELL or SHORT signals in the best/worst fields, which may indicate our signal mix is currently BUY-skewed, or simply that BUY signals dominated this particular 30-trade window. Either way, in an environment where the broader market has experienced meaningful volatility through early 2026, a predominantly long-biased signal book operating at a 33.3% win rate is going to produce negative expected value unless the win/loss magnitude ratio compensates decisively.

What the Data Does Not Support — And Where We Could Be Wrong

There is a version of this retrospective that could be written more charitably. We are not going to write it, but we should acknowledge it exists.

First, 30 trades is not a statistically robust sample. We will not claim otherwise. At n=30, the confidence interval around a 33.3% win rate is wide enough that the true underlying win rate could plausibly range from the low 20s to the high 40s in percentage terms. We need several hundred closed signals before we can make any claim about the system's structural edge — or lack of one — with real confidence. This is preliminary. That word deserves to be in bold: this is preliminary.

Second, the 30-day and 90-day windows being identical strongly suggests we are in a period of reduced signal generation, not a period of sustained underperformance. If the system had generated, say, 90 signals over 90 days, we would have a much richer picture. The fact that only 30 closed signals exist across what should be a 90-day observation window means either holding periods are long, signal generation criteria are restrictive, or both. A lower-frequency system can look worse in short retrospectives simply because it has less data to average over. We do not know which dynamic is driving the identical window results, and that uncertainty is material.

Third, we do not have drawdown data, holding period lengths, or position sizing information in this dataset. A -0.74% average return per trade could reflect very different portfolio-level outcomes depending on how large each position was and how long it was held. A system that holds positions for two weeks and averages -0.74% per trade is meaningfully different from one that holds positions for two days and generates the same per-trade average. This report cannot adjudicate between those scenarios with the data available.
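To see why holding period changes the portfolio-level meaning of the same per-trade number, consider a deliberately simplified compounding sketch. The one-position-at-a-time, full-capital assumption is ours for illustration; actual sizing and overlap are unknown:

```python
AVG_RETURN = -0.0074    # the cohort's -0.74% average per closed trade
TRADING_DAYS = 252

def annual_wealth_multiple(holding_days: float) -> float:
    """Terminal wealth multiple after one year of back-to-back trades at
    AVG_RETURN, assuming one position at a time with full capital recycled
    into each trade (an illustrative assumption, not the desk's sizing)."""
    trades_per_year = TRADING_DAYS / holding_days
    return (1 + AVG_RETURN) ** trades_per_year

print(annual_wealth_multiple(2))   # ~0.39: roughly a 61% annual drag
print(annual_wealth_multiple(10))  # ~0.83: roughly a 17% annual drag
```

Under these stylized assumptions, the same -0.74% per trade implies a dramatically worse annual outcome at a two-day hold than at a two-week hold, which is exactly why the missing holding-period data matters.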

What the data does support, plainly, is this: over the most recent measurable window, the system has not generated positive expected value. That is the honest read. Everything else requires more data.

What Traders Should Actually Do With This

Analytical honesty only earns its keep if it translates into practical guidance. Here is what we think the data supports, and what it doesn't, in terms of actionable takeaways.

  • Do not size aggressively into current signals. A 33.3% win rate with a negative average return is not a baseline from which to run full-sized positions. Until the win rate or the win/loss magnitude ratio improves, position sizing should be conservative. The NFLX loss at -10.6% is a reminder of what happens when a high-conviction BUY signal in a liquid large-cap fails — it fails hard.
  • Track the ZM signal archetype separately. The +7.82% return on ZM suggests the system can identify genuine mispricings in volatile, sentiment-driven names. If there is a coherent signal logic behind the ZM trade — and we believe there is — it may be worth isolating that signal subtype for further analysis. Not all 30 signals in this cohort are necessarily generated by the same mechanism.
  • Watch the BUY/SELL signal balance going forward. Both the best and worst trades in this window were BUY signals. If the system is predominantly long-biased in a regime that has punished long exposure, rebalancing toward more neutral or short signals — where the model has conviction — may improve aggregate performance. We do not have the data to confirm this hypothesis yet, but it is worth monitoring actively.
  • Treat this window as a stress test, not a verdict. Thirty trades is a stress test of the model's behavior in one specific market environment, not a verdict on its long-run validity. The appropriate response is increased scrutiny, not abandonment. Systematic strategies routinely underperform for periods of this duration before reverting — but they also sometimes underperform because they are structurally broken. Distinguishing between those two cases requires more data and more time than we currently have.
  • Do not average down on the model itself. The temptation when a system underperforms is to add more capital on the assumption that mean reversion will bail you out. That logic applies to individual trades under specific conditions. It does not automatically apply to signal systems. If the next 30 trades show a similar win rate and average return, that will be a different and more serious conversation.
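One way to put numbers behind the sizing caution is the Kelly criterion. This is our illustrative framing, not the desk's sizing rule: at a one-third win rate, the Kelly-optimal fraction is zero or negative unless the average win exceeds twice the average loss, which echoes the breakeven arithmetic earlier in this report.

```python
def kelly_fraction(win_rate: float, payoff_ratio: float) -> float:
    """Kelly-optimal bet fraction: f* = p - (1 - p) / b,
    where b is the payoff ratio avg_win / avg_loss.
    A negative f* means the edge does not justify betting at all."""
    return win_rate - (1 - win_rate) / payoff_ratio

p = 1 / 3  # the cohort's win rate
# f* is negative at b = 1.5, zero at b = 2.0, positive only above 2:1
for b in (1.5, 2.0, 3.0):
    print(f"payoff ratio {b}: f* = {kelly_fraction(p, b):+.3f}")
```

Until the win/loss magnitude ratio is known to clear the 2:1 threshold, this framework agrees with the bullet above: conservative sizing is the defensible default.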

Methodology: What We Measured and What We Couldn't

The figures in this report derive from our proprietary backtest and live signal system, capturing 30 closed signals over the trailing window ending April 18, 2026. The 30-day and 90-day lookback periods are structurally identical in this release — both contain the same 30 trades — which limits our ability to draw trend comparisons across time horizons. Win rate is defined as the percentage of closed signals with a positive return; average return is the arithmetic mean of all closed-signal returns, including both wins and losses, unweighted by position size. We do not have access to holding period data, drawdown metrics, or signal subtype breakdowns for this publication. The sample size of 30 is below the threshold we would require to assert statistical significance on any of the figures presented. All conclusions in this report should be treated as directional and preliminary. We will update this retrospective as additional closed-signal data accumulates.

Edited by Koutaibah Al Aboud
Content Strategist & Market Editor at Stocks365. Specializes in clear, actionable market commentary and conversion-focused financial content that makes institutional insights accessible.
LinkedIn → · Editorial Standards →
