# Validation & Testing
Signal.Engine employs a multi-layered validation strategy to ensure the reliability of the AI's market reasoning, API stability, and trading execution. The testing suite is designed to verify that the "Hybrid Brain" maintains high accuracy while adhering to quantitative risk parameters.
## 1. Test Suite Overview
The project uses pytest for automated testing. The test suite covers API health, scanning triggers, and data retrieval integrity.
To run the full test suite, execute:

```bash
pytest tests/
```
## 2. API Integrity Testing
The FastAPI backend is validated using TestClient to ensure endpoints return expected schemas and status codes. This prevents regressions in the communication layer between the AI engine and the frontend.
| Endpoint | Test Case | Purpose |
| :--- | :--- | :--- |
| `GET /` | `test_home_endpoint` | Verifies API heartbeat and online status. |
| `GET /api/scan` | `test_scan_trigger` | Ensures background scanning tasks initiate correctly. |
| `GET /api/results` | `test_results_endpoint` | Validates that scan data, simulation state, and logs are structured correctly. |
Example: Running a specific API test
```bash
pytest tests/test_api.py::test_results_endpoint
```
## 3. Model Performance Validation
The `LSTMPredictor` uses PyTorch Lightning's validation hooks to monitor model health during and after training. This ensures the agent isn't just memorizing data but generalizing to new market conditions.
- Validation Steps: During training, the system processes a distinct validation split to calculate `val_loss` and `val_acc`.
- Metrics: The system logs Cross-Entropy Loss and Accuracy. A "Golden Dataset" (ZigZag-labeled) is used as the benchmark for "Common Sense" trading (target: >75% accuracy).
- Stability: Batch Normalization and Dropout layers are validated to ensure gradient stability and prevent overfitting.
## 4. Backtesting & Simulation Engine
Before deploying to live paper trading, the system runs through a Simulation Engine that mimics real-world exchange constraints.
- Logic Validation: The simulation processes the Brain's decisions (Buy/Sell/Hold) against historical or real-time data ticks.
- Portfolio Tracking: The engine tracks Win Rate, Expected Value (EV), and Value at Risk (VaR 95%).
- Triggering a Backtest: You can trigger a validation backtest via the API to generate performance charts:

```bash
curl -X GET http://localhost:8000/backtest
```
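The three portfolio metrics named above have standard definitions, sketched below from a list of per-trade P&L values. The sample trades are fabricated for illustration; the real engine computes these from simulated fills.

```python
# Sketch of the portfolio metrics the simulation engine tracks.
# The `trades` list is fabricated example data, not real results.
import numpy as np

def win_rate(pnls):
    """Fraction of closed trades with positive P&L."""
    pnls = np.asarray(pnls, dtype=float)
    return float((pnls > 0).mean())

def expected_value(pnls):
    """Average P&L per trade (EV)."""
    return float(np.mean(pnls))

def value_at_risk_95(pnls):
    """Historical VaR 95%: loss level not exceeded in 95% of trades."""
    return float(-np.percentile(pnls, 5))

trades = [120.0, -40.0, 65.0, -80.0, 30.0, 15.0, -25.0, 90.0]
print(f"Win rate: {win_rate(trades):.2%}")
print(f"EV/trade: {expected_value(trades):.2f}")
print(f"VaR 95%:  {value_at_risk_95(trades):.2f}")
```

Note that a high win rate with a negative EV is still a losing strategy, which is why the engine reports both alongside the tail-risk (VaR) figure.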
## 5. Heuristic & Risk Validation
The "Hybrid" nature of Signal.Engine means it combines RL (Reinforcement Learning) with Heuristic Experts. The validation layer checks for:
- Regime Detection: Validates that the system correctly distinguishes "High Volatility" from "Calm" markets.
- Confidence Thresholds: High-confidence signals (>= 85%) are subjected to a secondary `QuantRisk` check before being marked for execution.
- Notification Alerts: The `NotificationService` acts as a final sanity check, alerting the user to high-confidence signals that meet specific volatility and rational criteria.
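The gating flow above can be sketched as a simple pipeline: only signals at or above the confidence threshold reach the risk check, and only approved signals produce an alert. The class names mirror the document (`QuantRisk`, `NotificationService`), but their interfaces here are assumptions for illustration.

```python
# Hedged sketch of confidence-gated risk validation. The interfaces
# of QuantRisk and NotificationService are assumed, not taken from
# the actual codebase.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # high-confidence cutoff from the docs

@dataclass
class Signal:
    symbol: str
    action: str        # "BUY" / "SELL" / "HOLD"
    confidence: float  # model confidence in [0, 1]

class QuantRisk:
    """Secondary risk check applied only to high-confidence signals."""
    def approve(self, signal: Signal) -> bool:
        # Placeholder rule: never flag HOLD for execution
        return signal.action != "HOLD"

class NotificationService:
    """Final sanity layer: alert the user before anything executes."""
    def alert(self, signal: Signal) -> str:
        return f"ALERT {signal.symbol}: {signal.action} @ {signal.confidence:.0%}"

def route(signal: Signal, risk: QuantRisk, notifier: NotificationService):
    if signal.confidence >= CONFIDENCE_THRESHOLD and risk.approve(signal):
        return notifier.alert(signal)
    return None  # below threshold or rejected: nothing is flagged

print(route(Signal("RELIANCE.NS", "BUY", 0.91), QuantRisk(), NotificationService()))
```

Keeping the threshold and the risk check as separate stages means either can be tightened independently without retraining the model.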
## 6. Manual Verification (Dry Run)
For developers, the `trader_alpaca.py` script includes a "Dry Run" mode. This allows you to validate the entire pipeline (from data fetching to model inference) without sending actual orders to Alpaca.
```bash
# Verify the Brain's logic for a specific symbol without executing trades
python -m src.trader_alpaca --symbol RELIANCE.NS --qty 1
```
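A dry-run mode like this usually comes down to a single flag that gates the final order call while everything upstream still runs. The sketch below shows that pattern; the `--dry-run` flag, argument names, and `submit_order` helper are assumptions for illustration, not the actual interface of `trader_alpaca.py`.

```python
# Illustrative sketch of dry-run gating via argparse. The flag names
# and submit_order helper are hypothetical; only the pattern (skip
# the final order call when dry-running) reflects the docs.
import argparse

def submit_order(symbol: str, qty: int, dry_run: bool) -> str:
    if dry_run:
        # Data fetching and inference would already have run by this
        # point; only the exchange call itself is skipped.
        return f"[DRY RUN] would BUY {qty} {symbol}"
    return f"submitted BUY {qty} {symbol}"

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--symbol", required=True)
    parser.add_argument("--qty", type=int, default=1)
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args(argv)
    return submit_order(args.symbol, args.qty, args.dry_run)

print(main(["--symbol", "RELIANCE.NS", "--qty", "1", "--dry-run"]))
```

Gating only the order submission keeps the dry run faithful to the live path: every earlier stage exercises the same code the paper trader uses.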