
Overview

The Anomaly Detection Tools provide statistical analysis capabilities to identify unusual patterns in portfolio metrics, transactions, and validation breaches. Using industry-standard methods (z-score and Interquartile Range), these tools detect outliers that may indicate risk, fraud, or data quality issues. Two detection methods work in tandem:
  • Z-Score: Measures standard deviations from the mean, effective for normally-distributed data
  • IQR (Interquartile Range): Robust quartile-based method, resistant to extreme outliers
Anomalies trigger automatically when either method detects a deviation, with severity ratings ranging from low to critical.

Quick Reference

| Tool | Purpose | Key Parameters |
| --- | --- | --- |
| detect_anomalies | Comprehensive anomaly detection across risk metrics, transactions, and breaches | portfolio_id, detection_type, sensitivity |
| get_anomaly_alerts | Retrieve recent anomaly alerts with filtering capabilities | portfolio_id, severity, limit |

Statistical Methods

Z-Score Detection

The z-score measures how many standard deviations a value deviates from the mean:
z-score = |current_value - mean| / standard_deviation
Sensitivity Levels (configured via sensitivity parameter):
  • Low: Triggers at 3.0 sigma (99.7% confidence)
  • Medium: Triggers at 2.5 sigma (98.8% confidence)
  • High: Triggers at 2.0 sigma (95.4% confidence)
Higher sensitivity increases false positives; use medium for balanced detection.
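A minimal sketch of this check in Python, assuming the historical values are available as a plain list (the function and constant names here are illustrative, not part of the tool API):
import statistics

# Thresholds mirror the sensitivity levels documented above.
SENSITIVITY_THRESHOLDS = {"low": 3.0, "medium": 2.5, "high": 2.0}

def zscore_is_anomalous(current_value, history, sensitivity="medium"):
    """Return (is_anomalous, zscore) for a value against its history."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    if std == 0:
        return False, 0.0  # flat history: no meaningful deviation
    zscore = abs(current_value - mean) / std
    return zscore >= SENSITIVITY_THRESHOLDS[sensitivity], zscore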

IQR (Interquartile Range) Detection

The IQR method identifies outliers based on quartile boundaries:
Q1 = 25th percentile value
Q3 = 75th percentile value
IQR = Q3 - Q1
Lower Bound = Q1 - (1.5 × IQR)
Upper Bound = Q3 + (1.5 × IQR)
A value is flagged as an outlier if it falls outside these bounds. The IQR score represents the normalized distance from bounds. Advantages:
  • Non-parametric (doesn’t assume normal distribution)
  • Robust against extreme outliers
  • Better for skewed data
Anomalies are flagged when either z-score or IQR method detects deviation.
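The bounds check can be sketched the same way; the exact quantile method used internally by the tools is not documented, so treat statistics.quantiles here as an approximation:
import statistics

def iqr_is_outlier(current_value, history):
    """Flag a value outside Q1 - 1.5*IQR or Q3 + 1.5*IQR."""
    q1, _, q3 = statistics.quantiles(history, n=4)  # quartiles of the baseline window
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return current_value < lower or current_value > upper
A value is then reported as an anomaly if either zscore_is_anomalous(...) or iqr_is_outlier(...) returns true, matching the either-method rule above.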

Tools

detect_anomalies

Detect statistical anomalies across portfolio risk metrics, transaction patterns, and validation breaches. Signature:
detect_anomalies(
    portfolio_id: str,
    detection_type: str = "all",
    lookback_days: int = 90,
    sensitivity: str = "medium",
    include_recommendations: bool = True
) -> Dict[str, Any]
Parameters:
  • portfolio_id (string, required): The portfolio ID to analyze. Example: "port_123abc"
  • detection_type (string, default: "all"): Type of anomalies to detect
    • "risk_metrics": VaR, volatility, beta, Sharpe ratio, max drawdown
    • "transactions": Daily volume patterns and individual transaction sizes
    • "breaches": Validation breach frequency and patterns
    • "all": Run all detection types
  • lookback_days (integer, default: 90): Historical window for baseline calculation. Range: 10-365 days
  • sensitivity (string, default: "medium"): Detection threshold
    • "low": 3.0 sigma (fewest alerts)
    • "medium": 2.5 sigma (balanced)
    • "high": 2.0 sigma (most sensitive)
  • include_recommendations (boolean, default: true): Generate action recommendations for anomalies
Returns: Success response structure:
{
  "success": true,
  "portfolio_id": "port_123abc",
  "detection_type": "all",
  "lookback_days": 90,
  "sensitivity": "medium",
  "summary": {
    "total_anomalies": 5,
    "critical": 1,
    "high": 2,
    "medium": 2,
    "low": 0,
    "by_type": {
      "risk_metric": 2,
      "transaction_volume": 1,
      "large_transaction": 1,
      "breach_frequency": 1
    }
  },
  "anomalies": [
    {
      "type": "risk_metric",
      "metric_name": "volatility",
      "current_value": 45.3,
      "historical_mean": 28.5,
      "historical_std": 5.2,
      "zscore": 3.21,
      "iqr_outlier": true,
      "severity": "critical",
      "detected_at": "2025-10-29",
      "description": "volatility shows significant deviation from historical norms"
    }
  ],
  "recommendations": [
    {
      "action": "Investigate volatility spike",
      "priority": "critical",
      "details": "Check for recent market events or position changes."
    }
  ],
  "detected_at": "2025-10-29T14:32:00.000000"
}
Anomaly Types:
| Type | Description | Metrics | Minimum Data |
| --- | --- | --- | --- |
| risk_metric | Statistical deviation in portfolio risk measures | VaR (95/99), volatility, beta, Sharpe, drawdown | 10 historical records |
| transaction_volume | Unusual daily trading volume | Sum of daily transaction amounts | 5 daily periods |
| large_transaction | Individual transaction exceeds 50% of average daily volume | Transaction size | 5 daily periods |
| breach_frequency | Increasing rate of validation breaches | Breaches per week | 3+ total breaches, 40%+ in last 7 days |
Examples: Detect all anomalies with default settings:
result = detect_anomalies(
    portfolio_id="port_12345"
)
High-sensitivity risk metric detection over 60 days:
result = detect_anomalies(
    portfolio_id="port_12345",
    detection_type="risk_metrics",
    lookback_days=60,
    sensitivity="high",
    include_recommendations=True
)

if result["success"]:
    for anomaly in result["anomalies"]:
        print(f"{anomaly['severity']}: {anomaly['description']}")
Transaction-only detection (volume spikes and large trades):
result = detect_anomalies(
    portfolio_id="port_12345",
    detection_type="transactions",
    sensitivity="low"  # Fewer false positives on trading activity
)

get_anomaly_alerts

Retrieve recent anomaly alerts with optional filtering by portfolio and severity. Signature:
get_anomaly_alerts(
    portfolio_id: Optional[str] = None,
    severity: Optional[str] = None,
    limit: int = 50
) -> Dict[str, Any]
Parameters:
  • portfolio_id (string, optional): Filter alerts to a specific portfolio. If omitted, scans the first 10 active portfolios
  • severity (string, optional): Filter by severity level
    • "critical": Highest priority, requires immediate attention
    • "high": Significant deviation, review recommended
    • "medium": Notable variance, standard review
    • "low": Minor anomalies, informational
  • limit (integer, default: 50): Maximum alerts to return. Range: 1-1000
Returns: Single portfolio response:
{
  "success": true,
  "portfolio_id": "port_123abc",
  "severity_filter": "critical",
  "total_alerts": 3,
  "alerts": [
    {
      "type": "risk_metric",
      "metric_name": "max_drawdown",
      "current_value": -35.2,
      "historical_mean": -18.5,
      "severity": "critical",
      "detected_at": "2025-10-29",
      "description": "max_drawdown shows significant deviation from historical norms"
    }
  ],
  "recommendations": [
    {
      "action": "Assess downside risk",
      "priority": "critical",
      "details": "Maximum drawdown exceeded historical norms."
    }
  ]
}
Multi-portfolio response (when portfolio_id not specified):
{
  "success": true,
  "severity_filter": "high",
  "total_alerts": 8,
  "alerts": [
    {
      "portfolio_id": "port_123abc",
      "portfolio_name": "Growth Fund A",
      "type": "transaction_volume",
      "severity": "high",
      "detected_at": "2025-10-29"
    }
  ]
}
Examples: Retrieve all critical alerts for a portfolio:
result = get_anomaly_alerts(
    portfolio_id="port_12345",
    severity="critical"
)

for alert in result["alerts"]:
    print(f"[{alert['severity']}] {alert['description']}")
Monitor all portfolios for high-severity issues (last 20 alerts):
result = get_anomaly_alerts(
    severity="high",
    limit=20
)

if result["success"]:
    print(f"Found {result['total_alerts']} high-severity alerts across portfolios")
Get the most recent alerts for a specific portfolio:
result = get_anomaly_alerts(
    portfolio_id="port_12345",
    limit=10
)

Severity Ratings

Severity is determined by the z-score deviation:
| Severity | Z-Score Range | Confidence | Meaning |
| --- | --- | --- | --- |
| Critical | >= 3.0 | 99.7% | Immediate investigation required |
| High | 2.5 - 2.99 | 98.8% | Significant deviation, review recommended |
| Medium | 2.0 - 2.49 | 95.4% | Notable variance, standard monitoring |
| Low | < 2.0 | < 95% | Minor anomalies, informational only |
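An illustrative mapping from z-score to severity label, following the table above (this helper is hypothetical, not part of the tool API):
def severity_from_zscore(zscore: float) -> str:
    """Map an absolute z-score to a severity label per the table above."""
    if zscore >= 3.0:
        return "critical"
    if zscore >= 2.5:
        return "high"
    if zscore >= 2.0:
        return "medium"
    return "low"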

Data Requirements

| Detection Type | Minimum Records | Minimum Duration | Notes |
| --- | --- | --- | --- |
| Risk Metrics | 10 | 10 days | Requires historical portfolio_risk_metrics |
| Transactions | 5 | 5 days | Daily aggregation of transaction amounts |
| Breaches | 3 | Any | Evaluates frequency over lookback period |
If data is insufficient, the tools return an empty anomalies array rather than an error.
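Because sparse history yields an empty array rather than an error, callers may want to distinguish "no anomalies" from "not enough data". One hedged approach, assuming the response shape shown earlier, is to retry with a wider window:
result = detect_anomalies(portfolio_id="port_12345", lookback_days=30)

if result["success"] and not result["anomalies"]:
    # Could be a clean result or simply too little history in 30 days;
    # widening the lookback window helps tell the two apart.
    result = detect_anomalies(portfolio_id="port_12345", lookback_days=90)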

Recommendations Engine

Recommended actions are auto-generated based on anomaly type and metric:
| Anomaly Type | Condition | Recommendation |
| --- | --- | --- |
| VaR Anomaly | var_95 or var_99 spike | Review portfolio risk exposure; consider rebalancing or hedging |
| Volatility Spike | volatility deviation | Investigate market events or position changes |
| Max Drawdown | drawdown increases | Assess downside risk; maximum drawdown exceeded norms |
| Transaction Volume | Daily volume spike | Review trading activity; verify with fund manager |
| Large Transaction | Single txn > 50% daily avg | Verify transaction validity; check with operations |
| Breach Frequency | 40%+ recent breaches | Address recurring breaches; root cause analysis required |
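If recommendations are post-processed (for example, routed to an on-call channel), a minimal sketch assuming the detect_anomalies response shape shown earlier:
from collections import defaultdict

result = detect_anomalies(portfolio_id="port_12345")

# Group recommended actions by priority so critical items can be escalated first.
by_priority = defaultdict(list)
for rec in result.get("recommendations", []):
    by_priority[rec["priority"]].append(rec["action"])

for action in by_priority.get("critical", []):
    print(f"URGENT: {action}")  # e.g., forward to an alerting channel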

Best Practices

Sensitivity Configuration:
  • Use "low" for stable portfolios or when false positives impact operations
  • Use "medium" for standard monitoring (recommended default)
  • Use "high" for active management or fraud detection
Lookback Period:
  • Use 90 days for standard baseline (captures quarterly patterns)
  • Use 30 days to focus on recent activity
  • Use 365 days for year-over-year comparisons
Detection Types:
  • Run periodic "all" scans for comprehensive reviews
  • Use specific types (e.g., "transactions") for targeted investigations
  • Combine with include_recommendations=true for actionable insights
Alert Monitoring:
  • Check critical alerts daily or enable real-time subscription
  • Review high-severity alerts during trading hours
  • Archive or investigate resolved anomalies to improve baselines
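A sketch of a daily review routine that combines these practices (the portfolio list and schedule are assumptions, not part of the tool API):
PORTFOLIOS = ["port_12345", "port_67890"]  # hypothetical portfolio IDs

def daily_anomaly_review():
    """Run a comprehensive scan per portfolio, then surface critical alerts."""
    for pid in PORTFOLIOS:
        detect_anomalies(
            portfolio_id=pid,
            detection_type="all",   # comprehensive periodic scan
            lookback_days=90,       # standard quarterly baseline
            sensitivity="medium",   # balanced default
        )
    alerts = get_anomaly_alerts(severity="critical", limit=100)
    if alerts["success"] and alerts["total_alerts"] > 0:
        print(f"{alerts['total_alerts']} critical alerts need review today")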

Error Handling

Both tools return "success": false on errors:
{
  "success": false,
  "error": "Portfolio not found or contains no data",
  "portfolio_id": "port_invalid"
}
Common Errors:
  • Portfolio not found or no historical data
  • Insufficient records for statistical analysis (< minimum thresholds)
  • Database connectivity issues
  • Invalid sensitivity or detection_type values
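A defensive calling pattern, assuming only the error shape shown above:
result = detect_anomalies(portfolio_id="port_12345")

if not result.get("success", False):
    # Surface the failure instead of treating it as "no anomalies found".
    raise RuntimeError(f"Anomaly detection failed: {result.get('error', 'unknown error')}")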

Performance Considerations

Computation Time:
  • Single portfolio detection: 1-5 seconds typical
  • Multi-portfolio scan (10 portfolios): 10-30 seconds
  • Lookback window affects calculation complexity linearly
Data Points Analyzed:
  • Risk metrics: 6 metrics × lookback records (540 calculations at the default 90-day window)
  • Transactions: Variable based on trading frequency
  • Breaches: Depends on validation rule triggers
Optimization Tips:
  • Use specific detection_type instead of "all" for faster results
  • Reduce lookback_days for real-time monitoring
  • Cache results from get_anomaly_alerts for dashboards
  • Run comprehensive scans during off-hours
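A minimal caching sketch for dashboards, assuming a short TTL is acceptable (the TTL and in-memory cache here are illustrative, not prescribed by the tools):
import time

_cache = {}
CACHE_TTL_SECONDS = 300  # refresh dashboard data every 5 minutes

def cached_alerts(portfolio_id=None, severity=None, limit=50):
    """Return get_anomaly_alerts results, reusing any cached copy within the TTL."""
    key = (portfolio_id, severity, limit)
    entry = _cache.get(key)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]
    result = get_anomaly_alerts(portfolio_id=portfolio_id, severity=severity, limit=limit)
    _cache[key] = (time.time(), result)
    return result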
