RADAR orchestrates risk-aware anomaly detection (OpenSearch AD) and automated response across several components of the IDPS-ESCAPE architecture (see HARC-003). Setting up and running RADAR is enabled by two entrypoint scripts:
- build-radar.sh – prepares the environment (Wazuh core stack + agents + RADAR dependencies), runs the Ansible pipelines for the chosen scenario, and builds various Docker containers.
- run-radar.sh – ingests a scenario dataset, ensures/starts an AD detector, and ensures a monitor with a webhook (prints DET_ID/MON_ID).

Below, we explain the pre-requisites and steps for bringing a scenario to life. At the end, you will find a script for testing via a lightweight Docker runner.
For a detailed breakdown of the Ansible playbook that automates deployment and setup of the Wazuh manager, see our dedicated page on the automated manager deployment.
For a detailed description of the run-radar.sh workflow, refer to the dedicated documentation page.
The design of RADAR allows for flexibility in endpoint placement, i.e., where the RADAR components and monitoring elements/agents are deployed. The following deployment modes are supported:
| Wazuh manager | Wazuh agents |
|---|---|
| Local | Local |
| Local | Remote |
| Remote | Remote |
local and remote are relative to where the runners and bootstrapping scripts are executed. For example, in a GNU/Linux virtual machine (VM) denoted vm-1, you run build-radar.sh with --manager local and --agent remote, and the inventory.yaml file specifies the coordinates (IP, sudo user, SSH key file path, etc.) of the other VMs on which agents are to be deployed, e.g., edge-vm-2 and edge-vm-3. The Wazuh manager then gets deployed on vm-1 and the agents on edge-vm-2 and edge-vm-3.
Alternatively, if --manager remote is set and build-radar.sh is run from vm-1, the runner looks up the coordinates of another VM on which the manager is to be deployed, e.g., vm-2, defined in inventory.yaml as the remote Docker host.
The host node/endpoint is where the RADAR controller is located. The Wazuh manager and agents can be deployed either on the same node (local) or on a different one (remote).
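As an illustration, the three supported combinations correspond to flag combinations of build-radar.sh (the flags are detailed in the Usage section below; the scenario name is only an example):

```bash
# Manager and agents on the node running the scripts
./build-radar.sh log_volume --agent local --manager local --manager_exists false

# Local manager, agents on remote endpoints listed in inventory.yaml
./build-radar.sh log_volume --agent remote --manager local --manager_exists false

# Manager and agents both on remote endpoints
./build-radar.sh log_volume --agent remote --manager remote --manager_exists false
```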
The deployment of RADAR also supports targeting an already-existing Wazuh manager as well as already-provisioned (non-containerized) agents. See the Usage section for details.
Verify your environment meets the requirements:
docker --version # Should be 20.10+
ansible --version # Should be 2.15+
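Depending on your setup, it may also be worth confirming the Compose v2 plugin and the Ansible playbook CLI are on the PATH (an optional sanity check on our part):

```bash
docker compose version      # Compose v2 plugin
ansible-playbook --version  # installed alongside ansible
```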
Prerequisites (before running build-radar.sh):

- Ansible 2.15+: if pipx is available, follow the instructions in the Ansible Installation Documentation for installing via pipx.
- Wazuh 4.14.1 (automatically handled by our deployment artifacts).
- FLOWINTEL_* variables set in .env (if the FlowIntel integration is used).
- OS_URL, OS_USER, OS_PASS, DASHBOARD_URL, DASHBOARD_USER, DASHBOARD_PASS and SMTP credentials are available to the tester.
- The tester has network access (SSH) from the test controller node to the controlled endpoints.
- If either the manager or the agents are remote: on remote nodes, the Wazuh agents must be installed on the monitored endpoints following the official documentation.

RADAR (v0.7) has been tested with Wazuh v4.14.1.

Create a .env file at idps-escape/radar/ with endpoint URLs, credentials, and SSL flags. This file is read by detector.py and monitor.py inside the radar-cli container.
Essential configuration (modify IPs and credentials):
# OpenSearch Configuration
OS_URL=https://192.168.0.28:9200
OS_USER=admin
OS_PASS=SecretPassword
OS_VERIFY_SSL="/app/config/wazuh_indexer_ssl_certs/root-ca.pem"
# Wazuh Configuration
WAZUH_API_URL=https://192.168.0.28:55000
WAZUH_AUTH_USER=wazuh-wui
WAZUH_AUTH_PASS=MyS3cr37P450r.*-
WAZUH_MANAGER_ADDRESS=192.168.0.28
# Webhook
WEBHOOK_URL=http://192.168.0.28:8080/notify
# SMTP (for email alerts)
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=user@example.com
SMTP_PASS=password
EMAIL_TO=recipient@example.com
FLOWINTEL_BASE_URL=http://192.168.0.28:7006/api # Change IP accordingly
FLOWINTEL_API_KEY=API_KEY # Change accordingly
FLOWINTEL_VERIFY_SSL=VERIFY_SSL # Change accordingly
PYFLOWINTEL_PATH=/var/ossec/active-response/bin/pyflowintel
See the complete .env template for all available configuration options including logging, FLOWINTEL, and advanced settings.
If either the manager or the agents are set to be remote:
(i) Edit the inventory.yaml file with the corresponding endpoint information.
For a remote manager, update:
wazuh_manager_ssh:
hosts:
For remote agents, update:
wazuh_agents_ssh:
hosts:
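As a sketch, a populated inventory for the example VMs above might look as follows (IPs, users, and key paths are placeholders; ansible_host, ansible_user, and ansible_ssh_private_key_file are standard Ansible inventory variables):

```yaml
wazuh_manager_ssh:
  hosts:
    vm-2:
      ansible_host: 192.168.0.30
      ansible_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
wazuh_agents_ssh:
  hosts:
    edge-vm-2:
      ansible_host: 192.168.0.31
      ansible_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
    edge-vm-3:
      ansible_host: 192.168.0.32
      ansible_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
```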
(ii) Add valid endpoint login credentials into the encrypted Ansible vault:
ansible-vault create host_vars/HOST_NAME.yml
In this step:

- HOST_NAME must match the host name defined in inventory.yaml in (i).
- The vault file must contain: ansible_become_password: <sudo-password>
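For example, for a host named edge-vm-2 (the host name is illustrative; these are standard ansible-vault subcommands):

```bash
ansible-vault create host_vars/edge-vm-2.yml   # opens an editor; add the ansible_become_password line
ansible-vault edit host_vars/edge-vm-2.yml     # revise the vault later
ansible-vault view host_vars/edge-vm-2.yml     # inspect; prompts for the vault password
```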
The volumes.yml file defines bind-mount mappings between host directories and Wazuh manager container paths. This file is critical for RADAR’s Ansible playbook to locate and modify Wazuh configuration files on the host filesystem.
Default volumes.yml:
version: "3.7"
services:
wazuh.manager:
volumes:
- /srv/wazuh/manager/api/configuration:/var/ossec/api/configuration
- /srv/wazuh/manager/etc:/var/ossec/etc
- /srv/wazuh/manager/logs:/var/ossec/logs
- /srv/wazuh/manager/queue:/var/ossec/queue
- /srv/wazuh/manager/var/multigroups:/var/ossec/var/multigroups
- /srv/wazuh/manager/integrations:/var/ossec/integrations
- /srv/wazuh/manager/active-response/bin:/var/ossec/active-response/bin
- /srv/wazuh/manager/filebeat/etc:/etc/filebeat
- /srv/wazuh/manager/filebeat/var:/var/lib/filebeat
When to update volumes.yml:
Existing Wazuh installation: If you have an existing Wazuh manager with different volume paths, update volumes.yml to match your current bind-mount configuration. The Ansible playbook reads this file to determine where to deploy decoders, rules, active responses, and configuration files on the host.
Custom deployment paths: If you prefer different host directories (e.g., /opt/wazuh/ instead of /srv/wazuh/), modify the left side of each mapping accordingly.
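For instance, moving the layout to /opt/wazuh/ only changes the host-side (left) paths; the container-side (right) paths must stay as they are:

```yaml
services:
  wazuh.manager:
    volumes:
      - /opt/wazuh/manager/etc:/var/ossec/etc
      - /opt/wazuh/manager/active-response/bin:/var/ossec/active-response/bin
      # ...remaining mappings adjusted the same way
```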
How to find your existing Wazuh volumes:
If Wazuh is already running, inspect the current volume mappings (replace MANAGER_CONTAINER_NAME with your container's name):

docker inspect MANAGER_CONTAINER_NAME --format '{{ range .Mounts }}{{ .Source }}:{{ .Destination }}{{ "\n" }}{{ end }}'
Then update volumes.yml to match the output.
Required mappings:
The Ansible playbook validates that these container paths have corresponding bind-mounts:
- /var/ossec/etc — for ossec.conf, decoders, rules, lists
- /var/ossec/active-response/bin — for active response scripts
- /etc/filebeat — for Filebeat configuration (log volume scenario)

If any required mapping is missing, the playbook will fail with a validation error.
The config.yaml file is essential for ML-based (behavior-based) anomaly detection scenarios. It defines how OpenSearch anomaly detectors and monitors are configured for each scenario. If you are using signature-based detection only (e.g., GeoIP detection), this file requires no changes. For ML-based scenarios (log volume, suspicious login, insider threat, DDoS detection, malware communication), however, proper configuration is critical.
Key parameters explained:
| Parameter | Description | Impact |
|---|---|---|
| `categorical_field` | Field used for high-cardinality detection (e.g., per-user, per-agent baselines) | Enables UEBA-style detection where each entity has its own baseline |
| `shingle_size` | Number of consecutive data points the RCF algorithm considers | Higher values detect longer-term patterns; lower values are more sensitive to sudden changes |
| `detector_interval` | How often the detector runs (in minutes) | Affects detection latency and resource usage |
| `anomaly_grade_threshold` | Minimum anomaly score to trigger an alert (0.0-1.0) | Lower = more sensitive (more alerts); higher = fewer false positives |
| `confidence_threshold` | Minimum confidence level required (0.0-1.0) | Higher values require more certainty before alerting |
| `features` | OpenSearch aggregation queries defining what metrics to analyze | Determines what behavioral patterns the detector learns |
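Putting these knobs together, a scenario entry in config.yaml might look roughly like the following (an illustrative sketch; the authoritative schema is the config.yaml shipped with each scenario):

```yaml
categorical_field: agent.name     # per-entity (UEBA-style) baselines
shingle_size: 8                   # consecutive points the RCF model considers
detector_interval: 10             # minutes between detector runs
anomaly_grade_threshold: 0.7      # minimum anomaly grade before alerting
confidence_threshold: 0.8         # minimum model confidence before alerting
```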
Feature aggregation types:
Features use OpenSearch aggregation queries to extract metrics:
- value_count: Count of documents/events
- sum: Total of a numeric field
- avg: Average of a numeric field
- max: Maximum value of a numeric field
- cardinality: Count of unique values (useful for detecting anomalous diversity)
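For instance, a sum feature over a numeric field would be expressed roughly as follows (field and feature names are placeholders; the feature_name/feature_enabled/aggregation_query structure follows the OpenSearch Anomaly Detection API):

```yaml
features:
  - feature_name: total_bytes
    feature_enabled: true
    aggregation_query:
      total_bytes:
        sum:
          field: network.bytes
```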
Tuning recommendations:

- Adjust anomaly_grade_threshold and confidence_threshold to balance sensitivity against false positives
- Tune detector_interval to trade detection latency against resource usage
- Make sure categorical_field points to the correct entity identifier (user, host, IP)
- Increase shingle_size to capture longer behavioral cycles

The scenarios/active_responses/ar.yaml file configures the risk-aware active response system for each scenario. This file is located at the path specified by AR_RISK_CONFIG in your .env file and should be customized according to your operational requirements and risk tolerance.
Key parameters explained:
| Parameter | Description | Example Values |
|---|---|---|
| `ad.rule_ids` | Rule IDs from ML-based anomaly detection | `["100021", "100309"]` |
| `signature.rule_ids` | Rule IDs from signature-based detection | `["210012", "210013"]` |
| `w_ad` | Weight for anomaly detection in risk score (0.0-1.0) | `0.3` for 30% weight |
| `w_sig` | Weight for signature-based detection in risk score (0.0-1.0) | `0.4` for 40% weight |
| `w_cti` | Weight for cyber threat intelligence in risk score (0.0-1.0) | `0.3` for 30% weight |
| `delta_ad_minutes` | Time window for AD alerts correlation (minutes) | `10` |
| `delta_signature_minutes` | Time window for signature alerts correlation (minutes) | `1` |
| `signature_impact` | Impact score for signature detections (0.0-1.0) | `0.7` |
| `signature_likelihood` | Likelihood score for signature detections (0.0-1.0) or weighted rules | `0.8` or rule-specific weights |
| `risk_threshold` | Minimum risk score to trigger response (0.0-1.0) | `0.51` |
| `tiers.tier1_max` | Maximum risk score for Tier 1 (low risk) | `0.33` |
| `tiers.tier2_max` | Maximum risk score for Tier 2 (medium risk) | `0.66` |
| `mitigations` | List of active response actions to execute | `["firewall-drop", "lock_user_linux"]` |
| `create_case` | Whether to create a case in FlowIntel | `true` or `false` |
| `allow_mitigation` | Whether to execute automated mitigations | `true` or `false` |
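Assembled from the parameters above, a scenario entry in ar.yaml could look roughly like this (values are the examples from the table, not recommendations; the authoritative schema is the ar.yaml shipped with each scenario):

```yaml
ad:
  rule_ids: ["100021", "100309"]
signature:
  rule_ids: ["210012", "210013"]
w_ad: 0.3
w_sig: 0.4
w_cti: 0.3                  # weights must sum to 1.0
delta_ad_minutes: 10
delta_signature_minutes: 1
signature_impact: 0.7
signature_likelihood: 0.8
risk_threshold: 0.51
tiers:
  tier1_max: 0.33
  tier2_max: 0.66
mitigations: ["firewall-drop", "lock_user_linux"]
create_case: true
allow_mitigation: true
```

The weights suggest a weighted-sum combination (our reading, not a statement of the exact implementation): risk = w_ad*s_ad + w_sig*s_sig + w_cti*s_cti. With the example weights above and hypothetical component scores s_ad = 0.8, s_sig = 0.7, s_cti = 0.5, this gives 0.3*0.8 + 0.4*0.7 + 0.3*0.5 = 0.67, which exceeds risk_threshold = 0.51 and lies above tiers.tier2_max = 0.66, i.e., the event would fall into the highest risk tier.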
Tuning recommendations:

- Adjust the weights (w_ad, w_sig, w_cti) based on confidence in each detection method - they must sum to 1.0
- Lower risk_threshold for more aggressive automated response
- Set allow_mitigation: false for alert-only mode without automated actions

RADAR can optionally create cases/tasks and push investigation context to a FlowIntel instance. To enable this, FlowIntel must be running and reachable from the Wazuh manager / active response context, and the FLOWINTEL_* variables must be configured in .env.
Future integration: SATRAP integration is planned for future releases to enhance threat intelligence correlation and automated investigation workflows.
Run build-radar.sh to bring up the core services, optionally bring up the agent containers, run the Ansible playbook limited to the manager and agent groups for the selected scenario, and build the Docker containers.
Usage:
build-radar.sh <scenario> --agent <local|remote> --manager <local|remote>
--manager_exists <true|false> [--ssh-key </path/to/private_key>]
Scenarios:
suspicious_login | insider_threat | ddos_detection | malware_communication | geoip_detection | log_volume
Flags:
--agent Where agents live: local (docker-compose.agents.yml) | remote (SSH endpoints)
--manager Where manager lives: local (docker-compose.core.yml) | remote (SSH host)
--manager_exists Whether the manager already exists at that location:
- true : do not bootstrap a manager
- false : bootstrap (local: docker compose up; remote: let Ansible bootstrap)
--ssh-key Optional: path to the SSH private key used for remote manager/agent access.
If not provided, defaults to: $HOME/.ssh/id_ed25519
Examples:
./build-radar.sh suspicious_login --agent remote --manager local --manager_exists false
./build-radar.sh geoip_detection --agent remote --manager remote --manager_exists false --ssh-key "$HOME/.ssh/mykeys/id_ed25519"
Local manager: when --manager local is set, build-radar.sh should be run with sudo to avoid permission conflicts on volume mounts.

Existing (Ansible) hosts supported: Ansible can target an already-running Wazuh manager and agents (agents not running in containers). Configure the inventory.yaml file with information about your host and edge node groups (e.g., wazuh_manager_ssh and wazuh_agents_ssh), ensure SSH connectivity, and use --agent remote or --manager remote with build-radar.sh. You must have an appropriately authorized user with sudo privileges on each endpoint where a Wazuh agent is already installed.
./run-radar.sh log_volume
This script ingests the scenario dataset, then ensures/starts an AD detector (prints DET_ID), and finally sets up a monitor with a webhook (prints MON_ID).
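As an optional sanity check (our suggestion), the printed IDs can be queried back through the OpenSearch Anomaly Detection and Alerting REST APIs using the credentials from .env (drop -k if SSL verification is enabled):

```bash
curl -k -u "$OS_USER:$OS_PASS" "$OS_URL/_plugins/_anomaly_detection/detectors/$DET_ID"
curl -k -u "$OS_USER:$OS_PASS" "$OS_URL/_plugins/_alerting/monitors/$MON_ID"
```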