RADAR (Risk-aware Anomaly Detection-based Automated Response) is a security orchestration subsystem within IDPS-ESCAPE that integrates anomaly detection with automated response capabilities. The architecture combines signature-based detection with machine learning-based anomaly detection using Random Cut Forest (RCF) algorithms, orchestrating automated security responses through SOAR playbooks.
RADAR is designed with a modular architecture where each scenario can be independently developed, deployed, and maintained. New scenarios can be added by creating the necessary components (decoders, rules, active responses, detectors) without modifying the core framework.
Through Ansible automation, RADAR ensures that deployments are idempotent—running the same playbook multiple times produces the same result. This allows for safe re-deployment and updates without side effects.
The architecture separates:
This separation enables independent scaling, testing, and evolution of each layer.
RADAR implements a defense-in-depth approach combining:
- Signature-based detection (Wazuh decoders, rules, and active responses)
- Behavior-based anomaly detection (OpenSearch RCF detectors and monitors)

This hybrid approach provides resilience against adversarial interference and reduces false positives.
All scenario specifications (index patterns, features, thresholds, detector parameters) are centralized in config.yaml, enabling declarative infrastructure and reducing manual configuration errors.
RADAR consists of six primary components working together in an orchestrated pipeline:
| Component | Role | Key Functions |
|---|---|---|
| Wazuh Agents | Endpoint monitoring | Log collection, enrichment, response execution |
| RADAR Helper | Data enrichment | Scenario-specific log enhancement |
| Wazuh Manager | Central control | Log processing, rule evaluation, response coordination |
| OpenSearch/Wazuh Indexer | Data platform | Storage, indexing, anomaly detection (RCF models) |
| Webhook Endpoint | Alert routing | Notification handling, log writing |
| RADAR Controller | Orchestration | Deployment automation, scenario management |

Figure 1: RADAR component architecture showing the six primary components and their interactions
The Wazuh Agent module runs on monitored endpoints and serves as the data collection and response execution layer.
The RADAR Helper is a scenario-specific enrichment component that augments raw logs with contextual information before transmission to the manager.
Purpose: Enrich logs with scenario-relevant data that aids detection
Functions:
Implementation: Python script deployed to agent endpoints
The Wazuh Manager is Wazuh's central control plane; it processes incoming logs and orchestrates responses.
Purpose: Parse and normalize log data into structured fields
Format: XML-based decoder definitions
Function: Extract relevant fields from enriched logs for rule matching
Purpose: Define detection logic for security events
Function: Evaluate decoded logs, match conditions, and trigger alerts
Purpose: Act on incidents and anomalies
Function:
Purpose: Continuously evaluate anomaly detector outputs and trigger webhooks
Configuration: Threshold-based evaluation (anomaly score > threshold and confidence > threshold)
Function:
Purpose: Send anomaly alerts to webhook endpoint for processing
Protocol: HTTP POST with JSON payload
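For illustration, the notification path can be exercised end to end by posting a JSON payload to the webhook endpoint directly. The snippet below is only a test sketch: the payload fields are hypothetical, since the real body is produced by the monitor's message template.

```python
import requests

WEBHOOK_URL = "http://WEBHOOK_IP:8080/notify"   # from .env (WEBHOOK_URL)

# Hypothetical alert payload; field names are illustrative only.
payload = {
    "monitor": "radar-suspicious-login-monitor",
    "detector_id": "example-detector-id",
    "anomaly_grade": 0.82,
    "confidence": 0.91,
}

resp = requests.post(WEBHOOK_URL, json=payload, timeout=5)
print(resp.status_code)
```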
OpenSearch provides the data platform for storage, search, and ML-based anomaly detection.
Purpose: Centralized data storage and retrieval
Technology: OpenSearch with Wazuh indexer integration
Key Functions:
Index Patterns: Scenario-specific (e.g., wazuh-alerts-4.x-*, custom-suspicious-login-*)
Purpose: Machine learning-based behavioral anomaly detection
Algorithm: Robust Random Cut Forest (RRCF)
Key Features:
Detection Process:
The Webhook Endpoint receives anomaly notifications and bridges OpenSearch monitors to Wazuh's rule engine.
Purpose: Receive and process anomaly alerts from OpenSearch monitors
Implementation: Custom HTTP service (Python/Flask)
Endpoint: POST /notify
Function:
- Writes received alerts to a log file (/var/log/ad_alerts.log)

The orchestration layer automates deployment, configuration, and lifecycle management.
build-radar.sh
- Builds the radar-cli Docker container.
- Usage: ./build-radar.sh <scenario> --agent <local|remote> --manager <local|remote> --manager_exists <true|false>

run-radar.sh
- Usage: ./run-radar.sh <scenario>

stop-radar.sh
- Stops the radar-helper service.
- Usage: ./stop-radar.sh --manager <local|remote> --agent <local|remote> [--purge] [--disable-wazuh-agent]

The anomaly_detector/ folder contains Python modules that interface with the OpenSearch Anomaly Detection plugin API. These scripts are executed inside the radar-cli Docker container during run-radar.sh.
detector.py
Creates and starts OpenSearch anomaly detectors based on scenario configuration.
Key functions:
- find_detector_id(): Searches for an existing detector by name
- detector_spec(): Builds the detector specification from config.yaml scenario parameters
- create_detector(): Creates a new detector via the OpenSearch AD API
- start_detector(): Starts the detector to begin anomaly detection

Detector specification includes:
- indices: Index pattern to monitor (e.g., wazuh-ad-log-volume-*)
- feature_attributes: Aggregation queries defining what to measure (from config.yaml)
- category_field: Field for high-cardinality detection (e.g., per-user baselines)
- shingle_size: Window size for the RCF algorithm
- detection_interval: How often detection runs

Usage: python detector.py <scenario_name> → outputs the detector ID
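As an illustration, the sketch below shows how a detector specification of this shape could be assembled and submitted, assuming the standard OpenSearch Anomaly Detection REST endpoints and hypothetical scenario field names mirroring the parameters above; the real detector.py may structure this differently.

```python
import requests

OS_URL = "https://localhost:9200"     # from .env (OS_URL)
AUTH = ("admin", "SecretPassword")    # from .env (OS_USER / OS_PASS)

def detector_spec(scn: dict) -> dict:
    """Build a detector body from a scenario dict (field names are illustrative)."""
    spec = {
        "name": scn["detector_name"],
        "description": f"RADAR detector for {scn['detector_name']}",
        "time_field": "@timestamp",
        "indices": [scn["indices"]],                      # e.g. "wazuh-ad-log-volume-*"
        "feature_attributes": scn["feature_attributes"],  # aggregation queries from config.yaml
        "shingle_size": scn.get("shingle_size", 8),
        "detection_interval": {
            "period": {"interval": scn.get("detector_interval", 5), "unit": "Minutes"}
        },
        "window_delay": {"period": {"interval": 1, "unit": "Minutes"}},
    }
    cat = scn.get("category_field")
    if cat:  # optional per-entity baselines (high-cardinality detection)
        spec["category_field"] = cat if isinstance(cat, list) else [cat]
    return spec

def create_and_start(scn: dict) -> str:
    """Create the detector via the AD plugin API, start it, and return its ID."""
    r = requests.post(f"{OS_URL}/_plugins/_anomaly_detection/detectors",
                      json=detector_spec(scn), auth=AUTH, verify=False)
    r.raise_for_status()
    detector_id = r.json()["_id"]
    requests.post(f"{OS_URL}/_plugins/_anomaly_detection/detectors/{detector_id}/_start",
                  auth=AUTH, verify=False).raise_for_status()
    return detector_id
```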
monitor.py
Creates OpenSearch monitors that evaluate detector results and trigger webhooks when anomalies exceed thresholds.
Key functions:
- find_monitor_id(): Searches for an existing monitor by name
- monitor_payload(): Builds the monitor specification with trigger conditions
- create_monitor(): Creates the monitor via the OpenSearch Alerting API

Monitor configuration includes:
- Schedule interval (falls back to detector_interval if monitor_interval is not specified in config.yaml)
- Trigger condition: anomaly_grade > threshold AND confidence > threshold

Note: The monitor_interval parameter is optional in scenario configurations. If not specified, the monitor uses the same interval as the detector (detector_interval). See the monitor.py implementation: int(scn.get("monitor_interval", scn.get("detector_interval", 5))).
Usage: python monitor.py <scenario_name> <detector_id> → outputs the monitor ID
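A simplified sketch of the kind of payload monitor_payload() might produce is shown below, assuming the OpenSearch Alerting API query-level monitor format and a hypothetical anomaly_threshold config key; the trigger script mirrors the grade/confidence condition above, but the payload actually generated by monitor.py may differ.

```python
def monitor_payload(scn: dict, detector_id: str, webhook_id: str) -> dict:
    """Query-level monitor over the AD results index with a grade/confidence trigger."""
    interval = int(scn.get("monitor_interval", scn.get("detector_interval", 5)))
    threshold = scn.get("anomaly_threshold", 0.7)   # hypothetical config key
    return {
        "type": "monitor",
        "name": f"radar-{scn['detector_name']}-monitor",
        "monitor_type": "query_level_monitor",
        "enabled": True,
        "schedule": {"period": {"interval": interval, "unit": "MINUTES"}},
        "inputs": [{
            "search": {
                "indices": [".opendistro-anomaly-results*"],
                "query": {
                    "size": 1,
                    "sort": [{"execution_end_time": {"order": "desc"}}],
                    "query": {"term": {"detector_id": detector_id}},
                },
            }
        }],
        "triggers": [{
            "name": "anomaly-trigger",
            "severity": "1",
            "condition": {"script": {
                "lang": "painless",
                "source": (
                    "ctx.results[0].hits.total.value > 0 && "
                    f"ctx.results[0].hits.hits[0]._source.anomaly_grade > {threshold} && "
                    f"ctx.results[0].hits.hits[0]._source.confidence > {threshold}"
                ),
            }},
            "actions": [{
                "name": "notify-webhook",
                "destination_id": webhook_id,   # from webhook.py / ensure_webhook()
                "subject_template": {"source": "RADAR anomaly"},
                "message_template": {"source": "{{ctx.monitor.name}}: anomaly detected"},
            }],
        }],
    }
```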
webhook.py
Manages OpenSearch notification destinations (webhooks) for monitor alerts.
Key functions:
- notif_find_id(): Searches for an existing webhook destination
- notif_create(): Creates a new webhook notification config
- ensure_webhook(): Idempotently ensures the webhook exists

Usage: Called internally by monitor.py to ensure the webhook destination exists before creating the monitor.
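The find-or-create pattern behind ensure_webhook() can be sketched as follows, assuming the OpenSearch Notifications API (_plugins/_notifications/configs); webhook.py may target a different endpoint (for example, legacy Alerting destinations), so the URLs and response fields here are illustrative only.

```python
import requests

def ensure_webhook(os_url: str, auth, name: str, webhook_url: str, verify=False) -> str:
    """Return the ID of a webhook notification config, creating it only if missing."""
    # 1. Look for an existing config with the same name (idempotency).
    r = requests.get(f"{os_url}/_plugins/_notifications/configs", auth=auth, verify=verify)
    r.raise_for_status()
    for item in r.json().get("config_list", []):
        if item["config"]["name"] == name:
            return item["config_id"]

    # 2. Not found: create a new webhook destination.
    body = {"config": {
        "name": name,
        "description": "RADAR anomaly alert webhook",
        "config_type": "webhook",
        "is_enabled": True,
        "webhook": {"url": webhook_url},   # e.g. http://WEBHOOK_IP:8080/notify
    }}
    r = requests.post(f"{os_url}/_plugins/_notifications/configs",
                      json=body, auth=auth, verify=verify)
    r.raise_for_status()
    return r.json()["config_id"]
```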
The RADAR Helper is a log enrichment service that runs on Wazuh agents. It tails authentication logs, enriches them with geographic and behavioral context, and writes to a separate log file for Wazuh to ingest.
radar-helper.py
A multi-threaded Python daemon that processes authentication logs in real-time.
Core classes:
- BaseLogWatcher: Abstract base class implementing tail -F style log following with rotation handling
- AuthLogWatcher: Processes /var/log/auth.log and enriches SSH events with RADAR fields
- UserState: Per-user state tracking (last location, ASN history, timestamps)

Enrichment fields added to each log line:
| Field | Description |
|---|---|
| outcome | success or failure |
| asn | Autonomous System Number from MaxMind |
| asn_placeholder_flag | true if ASN lookup failed |
| country | ISO country code |
| region | State/province name |
| city | City name |
| geo_velocity_kmh | Calculated travel speed since last login |
| country_change_i | 1 if country changed from previous login |
| asn_novelty_i | 1 if this ASN is new for this user (90-day window) |
Key algorithms (see the sketch after this list):
- haversine_km(): Calculates great-circle distance between two coordinates
- geo_lookup(): Queries MaxMind GeoLite2 databases for IP geolocation
- drop_old(): Maintains a 90-day sliding window for ASN novelty detection
- Geo-velocity: distance / time_delta, capped at 2000 km/h

Dependencies:
- MaxMind databases: /usr/share/GeoIP/GeoLite2-City.mmdb and GeoLite2-ASN.mmdb
- maxminddb library

Output: Writes enriched logs to /var/log/suspicious_login.log
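As a concrete illustration of the distance and velocity logic, here is a minimal sketch of haversine_km() and the capped geo-velocity calculation; the 2000 km/h cap and field semantics follow the description above, while function signatures and the helper geo_velocity_kmh() are illustrative rather than the actual radar-helper.py code.

```python
import math
from datetime import datetime

EARTH_RADIUS_KM = 6371.0
MAX_VELOCITY_KMH = 2000.0  # cap described above

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def geo_velocity_kmh(prev_loc, prev_time, cur_loc, cur_time):
    """Travel speed implied by two consecutive logins, capped at MAX_VELOCITY_KMH."""
    hours = max((cur_time - prev_time).total_seconds() / 3600.0, 1e-6)  # avoid division by zero
    speed = haversine_km(*prev_loc, *cur_loc) / hours
    return min(speed, MAX_VELOCITY_KMH)

# Example: logins from Luxembourg and Tokyo 30 minutes apart -> impossible travel, capped.
lux, tokyo = (49.61, 6.13), (35.68, 139.69)
t1, t2 = datetime(2025, 1, 1, 12, 0), datetime(2025, 1, 1, 12, 30)
print(round(geo_velocity_kmh(lux, t1, tokyo, t2)))  # 2000
```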
Configuration files:
- config.yaml
- scenarios/active_responses/ar.yaml (path referenced via the AR_RISK_CONFIG environment variable in .env)
- inventory.yaml (remote host groups wazuh_manager_ssh and wazuh_agents_ssh)
- .env
- volumes.yml
RADAR uses multiple Docker Compose files to provide modular, composable container orchestration. These files can be combined using Docker Compose’s -f flag to build different deployment configurations.
docker-compose.core.yml
Defines the core Wazuh stack:
| Service | Image | Ports | Purpose |
|---|---|---|---|
| wazuh.indexer | wazuh/wazuh-indexer:4.14.1 | 9200 | OpenSearch-based data storage and anomaly detection engine |
| wazuh.manager | wazuh/wazuh-manager:4.14.1 | 1514, 1515, 514/udp, 55000 | Central log processing, rule evaluation, agent management |
| wazuh.dashboard | Custom build | 443→5601 | Web UI for Wazuh and OpenSearch (includes AD plugin UI) |
Key configuration:
- SSL certificates: ./config/wazuh_indexer_ssl_certs/

This stack is not built if your own Wazuh deployment already exists (--manager-exists true).
docker-compose.agents.yml
Defines containerized Wazuh agents for local/demo deployments:
| Service | Container | Purpose |
|---|---|---|
| agent.insider | agent.insider | Insider threat scenario agent |
| agent.ddos | agent.ddos | DDoS detection agent (exposes port 8800) |
| agent.malcom | agent.malcom | Malware communication detection agent |
| agent.suspicious | agent.suspicious | Suspicious login agent (Keycloak on port 8080) |
| agent.geoip | agent.geoip | GeoIP detection agent |
| agent.logvolume | agent.logvolume | Log volume monitoring agent |
Each agent:
- Is built from a scenario-specific Dockerfile (e.g., scenarios/dockerfiles/Dockerfile.suspicious_login_agent)

These agent containers are intended for lab use. They are not built if your own agents already exist, i.e., when agents are defined in inventory.yaml under wazuh_agents_ssh and build-radar.sh is run with --agent remote.
docker-compose.webhook.yml
Defines the webhook endpoint service for ML-based detection:
| Service | Container | Ports | Purpose |
|---|---|---|---|
| webhook | ad-webhook | 8080 | Receives anomaly alerts from OpenSearch monitors |
Key configuration:
- Environment variables from .env: WAZUH_AGENT_VERSION, WAZUH_MANAGER_ADDRESS
- Shared log volume (wazuh-webhook-logs)

Dockerfile.radar-cli
Defines the radar-cli container used by run-radar.sh:
- Base image: python:3.12-slim
- Copies detector.py, monitor.py, webhook.py, config.yaml (and wazuh_ingest.py)
- Installs Python dependencies: requests, PyYAML

radar-cli
- detector.py, monitor.py, webhook.py, wazuh_ingest.py

Docker Compose
Ansible
Flowintel
- pyflowintel client
- FLOWINTEL_* environment variables in .env

```
Wazuh Agent (Logs) → RADAR Helper (Enrichment) → Wazuh Manager (Decode) →
                                ↓
[Signature-Based Path]
Wazuh Manager (Rules) → Active Response

[Behavior-Based Path]
Wazuh Indexer → Anomaly Detector → Monitor → Webhook →
Wazuh Manager (Rules) → Active Response
```
```mermaid
sequenceDiagram
    participant Agent as Wazuh Agent
    participant Helper as RADAR Helper
    participant Manager as Wazuh Manager
    participant Rules as Rules Engine
    participant AR as Active Response
    Agent->>Helper: Raw log event
    Helper->>Helper: Enrich with GeoIP, ASN, velocity
    Helper->>Manager: Enriched log
    Manager->>Manager: Decode log fields
    Manager->>Rules: Evaluate rules
    alt Rule matches (e.g., non-whitelist country)
        Rules->>AR: Trigger alert
        AR->>Agent: Execute response
        AR->>AR: Log action
    else No match
        Rules->>Rules: Continue monitoring
    end
```
Step 1: Log Collection and Enrichment
Step 2: Parsing
Step 3: Rule Matching
Step 4: Active Response (the response action is executed and logged to active-responses.log)

```mermaid
sequenceDiagram
    participant Agent as Wazuh Agent
    participant Helper as RADAR Helper
    participant Manager as Wazuh Manager
    participant Index as OpenSearch Index
    participant Detector as Anomaly Detector
    participant Monitor as Monitor
    participant Webhook as Webhook
    participant AR as Active Response
    Agent->>Helper: Raw log event
    Helper->>Helper: Enrich with features
    Helper->>Manager: Enriched log
    Manager->>Manager: Decode log
    Manager->>Index: Store in scenario index
    loop Every detection_interval
        Detector->>Index: Query feature data
        Detector->>Detector: Compute RCF anomaly score
        Detector->>Detector: Store results
    end
    loop Every monitor_interval
        Monitor->>Detector: Check anomaly scores
        alt Score > threshold AND confidence > threshold
            Monitor->>Webhook: HTTP POST alert
            Webhook->>Webhook: Write to log file
            Webhook->>Manager: Log entry detected
            Manager->>AR: Trigger response
        end
    end
```
Step 1: Log Collection and Enrichment
Step 2: Indexing
Step 3: Anomaly Detection
Step 4: Monitoring and Alerting
Step 5: Webhook Processing
Step 6: Rule Triggering
Step 7: Active Response
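To make Steps 5 and 6 concrete, here is a minimal sketch of the webhook endpoint's role, assuming Flask, the POST /notify route, and the /var/log/ad_alerts.log path described earlier; the actual service adds more validation and structure than shown.

```python
import json
from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)
ALERT_LOG = "/var/log/ad_alerts.log"   # file monitored by the Wazuh manager (Step 6)

@app.route("/notify", methods=["POST"])
def notify():
    """Receive an anomaly alert from an OpenSearch monitor and persist it as one log line."""
    alert = request.get_json(force=True, silent=True) or {"raw": request.data.decode(errors="replace")}
    entry = {"received_at": datetime.now(timezone.utc).isoformat(), "alert": alert}
    with open(ALERT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")   # one JSON object per line for easy decoding
    return {"status": "ok"}, 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```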
The config.yaml file serves as the single source of truth for all scenario configurations.
Key Sections:
The ar.yaml file configures the risk-based active response system for each scenario. This file is located at the path specified by the AR_RISK_CONFIG environment variable in .env (default: ar.yaml in the Wazuh manager’s active response directory).
Key Configuration Parameters:
| Parameter | Description | Example Values |
|---|---|---|
| ad.rule_ids | Rule IDs from ML-based anomaly detection | ["100021", "100309"] |
| signature.rule_ids | Rule IDs from signature-based detection | ["210012", "210013"] |
| w_ad | Weight for anomaly detection in risk score (0.0-1.0) | 0.3 for 30% weight |
| w_sig | Weight for signature-based detection in risk score (0.0-1.0) | 0.4 for 40% weight |
| w_cti | Weight for cyber threat intelligence in risk score (0.0-1.0) | 0.3 for 30% weight |
| delta_ad_minutes | Time window for AD alert correlation (minutes) | 10 |
| delta_signature_minutes | Time window for signature alert correlation (minutes) | 1 |
| signature_impact | Impact score for signature detections (0.0-1.0) | 0.7 |
| signature_likelihood | Likelihood score for signature detections (0.0-1.0) or weighted rules | 0.8 or rule-specific weights |
| risk_threshold | Minimum risk score to trigger response (0.0-1.0) | 0.51 |
| tiers.tier1_max | Maximum risk score for Tier 1 (low risk) | 0.33 |
| tiers.tier2_max | Maximum risk score for Tier 2 (medium risk) | 0.66 |
| mitigations | List of active response actions to execute | ["firewall-drop", "lock_user_linux"] |
| create_case | Whether to create a case in FlowIntel | true or false |
| allow_mitigation | Whether to execute automated mitigations | true or false |
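The weights and thresholds above suggest a weighted risk combination. The sketch below shows one plausible reading, a linear weighted sum of anomaly-detection, signature, and CTI scores compared against risk_threshold and the tier boundaries; it is an illustration only, and the authoritative logic lives in the active response script that consumes ar.yaml.

```python
def risk_score(ad_score, sig_score, cti_score, cfg):
    """Combine per-source scores (each 0.0-1.0) using the ar.yaml weights (assumed linear)."""
    return (cfg["w_ad"] * ad_score
            + cfg["w_sig"] * sig_score
            + cfg["w_cti"] * cti_score)

def classify(score, cfg):
    """Map a risk score to a tier and decide whether a response is triggered."""
    if score <= cfg["tiers"]["tier1_max"]:
        tier = 1
    elif score <= cfg["tiers"]["tier2_max"]:
        tier = 2
    else:
        tier = 3
    return tier, score >= cfg["risk_threshold"]

cfg = {"w_ad": 0.3, "w_sig": 0.4, "w_cti": 0.3,
       "risk_threshold": 0.51, "tiers": {"tier1_max": 0.33, "tier2_max": 0.66}}

score = risk_score(ad_score=0.9, sig_score=0.8, cti_score=0.0, cfg=cfg)
print(round(score, 2), classify(score, cfg))   # 0.59 (2, True) -> medium tier, respond
```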
Connection details and credentials are stored in the .env file:
```
OS_URL=https://OS_IP:9200
OS_USER=admin
OS_PASS=SecretPassword
WAZUH_API_URL=https://WAZUH_IP:55000
WAZUH_AGENT_VERSION=4.14.1-1
WEBHOOK_URL=http://WEBHOOK_IP:8080/notify
AR_RISK_CONFIG=/var/ossec/active-response/bin/ar.yaml
AR_LOG_FILE=/var/ossec/logs/active-responses.log
FLOWINTEL_BASE_URL=http://FLOWINTEL_IP:7006/api
FLOWINTEL_API_KEY=API_KEY
```
Remote host definitions for distributed deployments are declared in inventory.yaml:
```yaml
all:
  children:
    wazuh_manager_ssh:
      hosts:
        manager-node:
          ansible_host: 192.168.5.10
          ansible_user: linuxuser
    wazuh_agents_ssh:
      hosts:
        agent-node-1:
          ansible_host: 192.168.5.20
          ansible_user: linuxuser
        agent-node-2:
          ansible_host: 192.168.5.21
          ansible_user: linuxuser
```
Encrypted credentials for remote access:
```
# Create vault for host
ansible-vault create host_vars/edge.vm.yml

# Content (encrypted):
ansible_become_password: sudo_password_here
```
RADAR provides pre-configured scenarios for common security use cases:
Production-Ready (v0.5+)
Demo/Development (Requires Adaptation)
These are currently located under the radar/archives folder.