run-radar.sh is an orchestration script that automates the deployment of anomaly detection scenarios in a Wazuh environment. It executes a three-stage pipeline: data ingestion, detector creation/configuration, and monitoring setup.
```
run-radar.sh <scenario>
        ↓
[1] Data Ingestion → wazuh_ingest.py
        ↓
[2] Detector Setup → detector.py → returns DETECTOR_ID
        ↓
[3] Monitor Setup  → monitor.py  → returns MONITOR_ID
```
Environment variables:

- `OS_URL`: Wazuh Indexer endpoint
- `OS_USER`: Wazuh Indexer authentication username
- `OS_PASS`: Wazuh Indexer authentication password
- `OS_VERIFY_SSL`: Wazuh Indexer TLS certificate verification
- `DASHBOARD_URL`: Wazuh Dashboard endpoint
- `DASHBOARD_USER`: Wazuh Dashboard authentication username
- `DASHBOARD_PASS`: Wazuh Dashboard authentication password
- `DASHBOARD_VERIFY_SSL`: Wazuh Dashboard TLS certificate verification
- `WEBHOOK_URL`: Webhook endpoint (usually located on the Wazuh Manager)
- `WEBHOOK_NAME`: Webhook name
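The Python helpers consume these variables when talking to the Indexer. As a minimal, hypothetical sketch of how a client session might be built from them (the real scripts' internals may differ):

```python
import os

import requests

# Read connection settings from the environment (loaded from .env).
OS_URL = os.environ["OS_URL"]            # e.g. https://wazuh-indexer:9200
OS_USER = os.environ["OS_USER"]
OS_PASS = os.environ["OS_PASS"]
# Treat anything other than "true" as "do not verify TLS certificates".
OS_VERIFY_SSL = os.environ.get("OS_VERIFY_SSL", "true").lower() == "true"

session = requests.Session()
session.auth = (OS_USER, OS_PASS)
session.verify = OS_VERIFY_SSL

# Quick connectivity check against the Wazuh Indexer (OpenSearch) root endpoint.
resp = session.get(OS_URL)
resp.raise_for_status()
print(resp.json()["version"]["number"])
```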
The script accepts these scenario names:

- `suspicious_login`
- `log_volume`

For example, `run-radar.sh log_volume` deploys the log-volume scenario end to end.

Note on archived scenarios: The scenarios `insider_threat`, `ddos_detection`, and `malware_communication` are archived demo scenarios located in `/radar/archives/`. They require adapting indices, field mappings, and datasets to your environment and are not production-ready. See the main RADAR README for scenario status details.
Note on GeoIP detection: The `geoip_detection` scenario is not listed here because it uses signature-based detection only (Wazuh rules and decoders, no OpenSearch AD pipeline). It is deployed via Ansible during `build-radar.sh` and does not require the `run-radar.sh` ingestion/detector/monitor setup. See the GeoIP Detection Guide for manual setup instructions.
Each supported scenario has its own:
- `wazuh_ingest.py` (in `/radar/scenarios/ingest_scripts/<scenario>/`)
- section in `config.yaml`

All stages read their scenario settings from `config.yaml` and run inside the `radar-cli:latest` container; the image is built during `build-radar.sh`.

[1] Data Ingestion (`wazuh_ingest.py`)

Generates and ingests synthetic time-series log data into OpenSearch indices for anomaly detection training.
Configuration:
- Agent: `edge.vm` (ID: `001`)
- Log path: `/var/log`
- Target index pattern: `wazuh-ad-log-volume-*`

Data Generation Algorithm:

- Number of points: (240 minutes × 60) / 20-second interval = 720 points
- Value at each point: `start_value + (delta × point_index)`
- Documents are sent to the `_bulk` API

Document format:

```json
{
  "@timestamp": "ISO8601_timestamp",
  "agent": {
    "name": "edge.vm",
    "id": "001"
  },
  "data": {
    "log_path": "/var/log",
    "log_bytes": <value>
  },
  "predecoder": {
    "program_name": "log_volume_metric"
  }
}
```
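A minimal sketch of this generation loop, assuming hypothetical values for `start_value` and `delta` (the real ones come from `config.yaml`) and an illustrative index name:

```python
import json
import os
from datetime import datetime, timedelta, timezone

import requests

# Connection settings, as in the environment-variable sketch above.
OS_URL = os.environ["OS_URL"]
AUTH = (os.environ["OS_USER"], os.environ["OS_PASS"])
VERIFY = os.environ.get("OS_VERIFY_SSL", "true").lower() == "true"

# Hypothetical series parameters; the real values live in config.yaml.
START_VALUE = 1_000                       # initial log_bytes reading
DELTA = 50                                # linear growth per point
DURATION_MIN = 240                        # window covered by the series
STEP_SEC = 20                             # one point every 20 seconds
POINTS = DURATION_MIN * 60 // STEP_SEC    # (240 * 60) / 20 = 720

start_ts = datetime.now(timezone.utc) - timedelta(minutes=DURATION_MIN)

# The _bulk API expects NDJSON: one action line, then one document line, per item.
lines = []
for i in range(POINTS):
    ts = start_ts + timedelta(seconds=i * STEP_SEC)
    doc = {
        "@timestamp": ts.isoformat(),
        "agent": {"name": "edge.vm", "id": "001"},
        "data": {"log_path": "/var/log", "log_bytes": START_VALUE + DELTA * i},
        "predecoder": {"program_name": "log_volume_metric"},
    }
    # Index name is illustrative; it only has to match wazuh-ad-log-volume-*.
    lines.append(json.dumps({"index": {"_index": "wazuh-ad-log-volume-demo"}}))
    lines.append(json.dumps(doc))

resp = requests.post(
    f"{OS_URL}/_bulk",
    data="\n".join(lines) + "\n",
    headers={"Content-Type": "application/x-ndjson"},
    auth=AUTH,
    verify=VERIFY,
)
resp.raise_for_status()
```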
[2] Detector Setup (`detector.py`)

Creates, or retrieves the ID of, an OpenSearch Anomaly Detection detector configured for the specified scenario.
- Reads `config.yaml` for scenario definitions
- Stores the detector ID in `.env` as `{SCENARIO}_DETECTOR`
- Looks up an existing detector by name with the query `{"term": {"name.keyword": "<detector_name>"}}`

Example `log_volume` scenario configuration:

```yaml
log_volume:
  index_prefix: wazuh-ad-log-volume-*
  result_index: opensearch-ad-plugin-result-log-volume
  log_index_pattern: wazuh-ad-log-volume-*
  time_field: "@timestamp"
  categorical_field: "agent.name"
  detector_interval: 5
  delay_minutes: 1
  monitor_name: "LogVolume-Monitor"
  trigger_name: "LogVolume-Growth-Detected"
  anomaly_grade_threshold: 0.3
  confidence_threshold: 0.3
  features:
    - feature_name: log_volume_max
      feature_enabled: true
      aggregation_query:
        log_volume_max:
          max:
            field: data.log_bytes
```
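A sketch of the lookup-or-create flow against the OpenSearch Anomaly Detection REST API. The detector name is hypothetical and the create body is abbreviated; `detector.py` presumably builds the full body from the YAML above:

```python
import os

import requests

OS_URL = os.environ["OS_URL"]
AUTH = (os.environ["OS_USER"], os.environ["OS_PASS"])
VERIFY = os.environ.get("OS_VERIFY_SSL", "true").lower() == "true"

DETECTOR_NAME = "LogVolume-Detector"   # hypothetical name for illustration

# 1. Look for an existing detector with this exact name.
search = requests.post(
    f"{OS_URL}/_plugins/_anomaly_detection/detectors/_search",
    json={"query": {"term": {"name.keyword": DETECTOR_NAME}}},
    auth=AUTH, verify=VERIFY,
)
search.raise_for_status()
hits = search.json()["hits"]["hits"]

if hits:
    detector_id = hits[0]["_id"]
else:
    # 2. Not found: create it from the scenario settings (abbreviated body).
    body = {
        "name": DETECTOR_NAME,
        "time_field": "@timestamp",
        "indices": ["wazuh-ad-log-volume-*"],
        "category_field": ["agent.name"],
        "detection_interval": {"period": {"interval": 5, "unit": "Minutes"}},
        "window_delay": {"period": {"interval": 1, "unit": "Minutes"}},
        "feature_attributes": [{
            "feature_name": "log_volume_max",
            "feature_enabled": True,
            "aggregation_query": {
                "log_volume_max": {"max": {"field": "data.log_bytes"}}
            },
        }],
    }
    created = requests.post(
        f"{OS_URL}/_plugins/_anomaly_detection/detectors",
        json=body, auth=AUTH, verify=VERIFY,
    )
    created.raise_for_status()
    detector_id = created.json()["_id"]

print(detector_id)   # the DETECTOR_ID that run-radar.sh records
```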
[3] Monitor Setup (`monitor.py`)

Creates alerting monitors that trigger notifications when anomalies exceed the configured thresholds.
- Ensures the webhook destination exists via `ensure_webhook()` from `webhook.py`
- Trigger condition (Painless script, with the thresholds substituted from `config.yaml`):

```
ctx.results[0].aggregations.max_anomaly_grade.value > {anomaly_grade_threshold} &&
ctx.results[0].hits.hits[0]._source.confidence > {confidence_threshold}
```
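For reference, a sketch of how such a monitor could be registered via the OpenSearch Alerting API. The search input and action are trimmed down, the thresholds are hard-coded at 0.3 as in the example config, and the destination ID is assumed to come from `ensure_webhook()`; `monitor.py` presumably assembles all of this from the scenario config:

```python
import os

import requests

OS_URL = os.environ["OS_URL"]
AUTH = (os.environ["OS_USER"], os.environ["OS_PASS"])
VERIFY = os.environ.get("OS_VERIFY_SSL", "true").lower() == "true"

destination_id = "..."   # returned by ensure_webhook() in the real script

monitor = {
    "type": "monitor",
    "name": "LogVolume-Monitor",
    "enabled": True,
    "schedule": {"period": {"interval": 5, "unit": "MINUTES"}},
    "inputs": [{
        "search": {
            # Query the detector's custom result index for recent anomalies.
            "indices": ["opensearch-ad-plugin-result-log-volume*"],
            "query": {
                "size": 1,
                "sort": [{"anomaly_grade": "desc"}],
                "query": {"range": {"execution_end_time": {"gte": "now-10m"}}},
                "aggs": {"max_anomaly_grade": {"max": {"field": "anomaly_grade"}}},
            },
        }
    }],
    "triggers": [{
        "name": "LogVolume-Growth-Detected",
        "severity": "1",
        "condition": {"script": {
            "source": (
                "ctx.results[0].aggregations.max_anomaly_grade.value > 0.3 && "
                "ctx.results[0].hits.hits[0]._source.confidence > 0.3"
            ),
            "lang": "painless",
        }},
        "actions": [{
            "name": "notify-webhook",
            "destination_id": destination_id,
            "message_template": {"source": "Anomaly detected: {{ctx.monitor.name}}"},
        }],
    }],
}

resp = requests.post(
    f"{OS_URL}/_plugins/_alerting/monitors",
    json=monitor, auth=AUTH, verify=VERIFY,
)
resp.raise_for_status()
print(resp.json()["_id"])   # the MONITOR_ID that run-radar.sh records
```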