idps-escape

Getting started

RADAR orchestrates risk-aware anomaly detection (OpenSearch AD) and automated response across several components of the IDPS-ESCAPE architecture (see HARC-003). Setting up and running RADAR is enabled by two entrypoint scripts:

- build-radar.sh, which deploys the RADAR infrastructure; and
- run-radar.sh, which runs a selected scenario end-to-end.

Below, we explain the prerequisites and the steps for bringing a scenario to life. At the end, you will find a script for testing via a lightweight Docker runner.

For a detailed breakdown of the Ansible playbook that automates deploying and configuring the Wazuh manager, see our dedicated page on the automated manager deployment.

For a detailed description of the run-radar.sh workflow, refer to the dedicated documentation page.


Modes of deployment

The design of RADAR allows for flexibility in endpoint placement, i.e., where the RADAR components and monitoring elements/agents can be deployed. The following deployment modes are supported:

| Wazuh manager | Wazuh agents |
|---------------|--------------|
| Local         | Local        |
| Local         | Remote       |
| Remote        | Remote       |

local and remote are relative to where the runner and bootstrapping scripts are executed. For example, in a GNU/Linux virtual machine (VM) denoted vm-1, you run build-radar.sh with --manager local and --agent remote, and the inventory.yaml file specifies the coordinates (IP, sudo user, SSH key file path, etc.) of the other VMs on which agents are to be deployed, e.g., edge-vm-2 and edge-vm-3. The Wazuh manager is then deployed on vm-1 and the agents on edge-vm-2 and edge-vm-3.

Alternatively, if --manager remote is set and build-radar.sh is run from vm-1, the runner looks up the coordinates of the VM on which the manager is to be deployed, e.g., vm-2, defined in inventory.yaml as the remote Docker host.

The host node/endpoint is where the RADAR controller is located. The Wazuh manager and agents can be deployed either in the same node (local) or in a different one (remote).

The deployment of RADAR also supports:

See the Usage section for details.

Prerequisites

Verify your environment meets the requirements:

docker --version     # Should be 20.10+
ansible --version    # Should be 2.15+

Setup

1. Set up connection and authorization variables

Create a .env file at idps-escape/radar/ with endpoint URLs, credentials, and SSL flags. This file is read by detector.py and monitor.py inside the radar-cli container.

Essential configuration (modify IPs and credentials):

# OpenSearch Configuration
OS_URL=https://192.168.0.28:9200
OS_USER=admin
OS_PASS=SecretPassword
OS_VERIFY_SSL="/app/config/wazuh_indexer_ssl_certs/root-ca.pem"

# Wazuh Configuration
WAZUH_API_URL=https://192.168.0.28:55000
WAZUH_AUTH_USER=wazuh-wui
WAZUH_AUTH_PASS=MyS3cr37P450r.*-
WAZUH_MANAGER_ADDRESS=192.168.0.28

# Webhook
WEBHOOK_URL=http://192.168.0.28:8080/notify

# SMTP (for email alerts)
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=user@example.com
SMTP_PASS=password
EMAIL_TO=recipient@example.com

FLOWINTEL_BASE_URL=http://192.168.0.28:7006/api  # Change IP accordingly
FLOWINTEL_API_KEY=API_KEY                        # Change accordingly
FLOWINTEL_VERIFY_SSL=VERIFY_SSL                  # Change accordingly
PYFLOWINTEL_PATH=/var/ossec/active-response/bin/pyflowintel

See the complete .env template for all available configuration options, including logging, FlowIntel, and advanced settings.

2. Configure users in remote endpoints

If either the manager or the agents are set to be remote:

(i) Edit the inventory.yaml file with the corresponding endpoint information. For a remote manager, update:

wazuh_manager_ssh:
    hosts:

For remote agents, update:

wazuh_agents_ssh:
    hosts:
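Filled in, an inventory entry might look as follows (a hypothetical sketch: the host names and addresses are placeholders, and the variables follow standard Ansible inventory conventions, which may differ from RADAR's exact schema):

```yaml
wazuh_agents_ssh:
  hosts:
    edge-vm-2:                                    # placeholder host name
      ansible_host: 192.168.0.31                  # placeholder IP of the agent endpoint
      ansible_user: ubuntu                        # sudo-capable login user
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
```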

(ii) Add valid endpoint login credentials into the encrypted Ansible vault:

ansible-vault create host_vars/HOST_NAME.yml
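The vault file typically carries the login secrets for that host. A minimal sketch using standard Ansible connection variables (the exact variable set RADAR expects may differ):

```yaml
# host_vars/HOST_NAME.yml (encrypted with ansible-vault)
ansible_user: ubuntu                  # sudo-capable user on the endpoint
ansible_password: "CHANGE_ME"         # SSH password, if key-based auth is not used
ansible_become_password: "CHANGE_ME"  # sudo password
```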

In this step:

3. Configure volume mappings

The volumes.yml file defines bind-mount mappings between host directories and Wazuh manager container paths. This file is critical for RADAR’s Ansible playbook to locate and modify Wazuh configuration files on the host filesystem.

Default volumes.yml:

version: "3.7"

services:
  wazuh.manager:
    volumes:
      - /srv/wazuh/manager/api/configuration:/var/ossec/api/configuration
      - /srv/wazuh/manager/etc:/var/ossec/etc
      - /srv/wazuh/manager/logs:/var/ossec/logs
      - /srv/wazuh/manager/queue:/var/ossec/queue
      - /srv/wazuh/manager/var/multigroups:/var/ossec/var/multigroups
      - /srv/wazuh/manager/integrations:/var/ossec/integrations
      - /srv/wazuh/manager/active-response/bin:/var/ossec/active-response/bin
      - /srv/wazuh/manager/filebeat/etc:/etc/filebeat
      - /srv/wazuh/manager/filebeat/var:/var/lib/filebeat

When to update volumes.yml:

How to find your existing Wazuh volumes:

If Wazuh is already running, inspect the current volume mappings (replace MANAGER_CONTAINER_NAME with the actual container name):

docker inspect MANAGER_CONTAINER_NAME --format '{{range .Mounts}}{{.Source}}:{{.Destination}}{{"\n"}}{{end}}'

Then update volumes.yml to match the output.

Required mappings:

The Ansible playbook validates that these container paths have corresponding bind-mounts:

If any required mapping is missing, the playbook will fail with a validation error.

4. Configure scenario specifications (for ML-based detection)

The config.yaml file is essential for ML-based (behavior-based) anomaly detection scenarios: it defines how OpenSearch anomaly detectors and monitors are configured for each scenario. If you are using signature-based detection only (e.g., GeoIP detection), this file requires no changes. For ML-based scenarios (log volume, suspicious login behavior, insider threat, DDoS, malware communication), however, proper configuration is critical.

Key parameters explained:

| Parameter | Description | Impact |
|-----------|-------------|--------|
| categorical_field | Field used for high-cardinality detection (e.g., per-user, per-agent baselines) | Enables UEBA-style detection where each entity has its own baseline |
| shingle_size | Number of consecutive data points the RCF algorithm considers | Higher values detect longer-term patterns; lower values are more sensitive to sudden changes |
| detector_interval | How often the detector runs (in minutes) | Affects detection latency and resource usage |
| anomaly_grade_threshold | Minimum anomaly score to trigger an alert (0.0-1.0) | Lower = more sensitive (more alerts); higher = fewer false positives |
| confidence_threshold | Minimum confidence level required (0.0-1.0) | Higher values require more certainty before alerting |
| features | OpenSearch aggregation queries defining what metrics to analyze | Determines what behavioral patterns the detector learns |
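Putting these parameters together, a scenario entry might look roughly like this (a hypothetical sketch: the exact schema and nesting of RADAR's config.yaml may differ, and the values shown are purely illustrative):

```yaml
log_volume:
  detector:
    categorical_field: agent.name    # per-agent baselines (UEBA-style)
    shingle_size: 8                  # consecutive points fed to RCF
    detector_interval: 5             # run every 5 minutes
    anomaly_grade_threshold: 0.7     # alert only on strong anomalies
    confidence_threshold: 0.8        # require high model confidence
    features: []                     # OpenSearch aggregation queries
```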

Feature aggregation types:

Features use OpenSearch aggregation queries to extract metrics:
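As an illustration, a single feature counting events per interval could be defined with an OpenSearch value_count aggregation, following the feature format of the OpenSearch Anomaly Detection API (the field name rule.id is a placeholder for whatever your index actually contains):

```yaml
features:
  - feature_name: event_count
    feature_enabled: true
    aggregation_query:
      event_count:
        value_count:
          field: rule.id    # placeholder field; counts matching events per interval
```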

Tuning recommendations:

5. Configure active response parameters

The scenarios/active_responses/ar.yaml file configures the risk-aware active response system for each scenario. This file is located at the path specified by AR_RISK_CONFIG in your .env file and should be customized according to your operational requirements and risk tolerance.

Key parameters explained:

| Parameter | Description | Example values |
|-----------|-------------|----------------|
| ad.rule_ids | Rule IDs from ML-based anomaly detection | ["100021", "100309"] |
| signature.rule_ids | Rule IDs from signature-based detection | ["210012", "210013"] |
| w_ad | Weight for anomaly detection in risk score (0.0-1.0) | 0.3 for 30% weight |
| w_sig | Weight for signature-based detection in risk score (0.0-1.0) | 0.4 for 40% weight |
| w_cti | Weight for cyber threat intelligence in risk score (0.0-1.0) | 0.3 for 30% weight |
| delta_ad_minutes | Time window for AD alert correlation (minutes) | 10 |
| delta_signature_minutes | Time window for signature alert correlation (minutes) | 1 |
| signature_impact | Impact score for signature detections (0.0-1.0) | 0.7 |
| signature_likelihood | Likelihood score for signature detections (0.0-1.0) or weighted rules | 0.8 or rule-specific weights |
| risk_threshold | Minimum risk score to trigger a response (0.0-1.0) | 0.51 |
| tiers.tier1_max | Maximum risk score for Tier 1 (low risk) | 0.33 |
| tiers.tier2_max | Maximum risk score for Tier 2 (medium risk) | 0.66 |
| mitigations | List of active response actions to execute | ["firewall-drop", "lock_user_linux"] |
| create_case | Whether to create a case in FlowIntel | true or false |
| allow_mitigation | Whether to execute automated mitigations | true or false |
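The weights and thresholds above suggest a weighted-sum risk model; the sketch below illustrates the arithmetic only (function names are ours, and the actual scoring logic inside RADAR may differ):

```python
def risk_score(ad, sig, cti, w_ad=0.3, w_sig=0.4, w_cti=0.3):
    """Combine detection signals (each in [0, 1]) into one risk score."""
    return w_ad * ad + w_sig * sig + w_cti * cti

def response_tier(score, tier1_max=0.33, tier2_max=0.66):
    """Map a risk score to a response tier (1 = low, 3 = high risk)."""
    if score <= tier1_max:
        return 1
    return 2 if score <= tier2_max else 3

# With the example weights, a strong signature hit alone stays below
# the 0.51 risk_threshold, so no automated mitigation would fire:
print(risk_score(0.0, 1.0, 0.0))                 # 0.4
print(response_tier(risk_score(0.9, 0.8, 0.7)))  # 3
```

This also shows why the weights matter: raising w_sig lets signature detections cross risk_threshold on their own, while the defaults require corroboration from AD or CTI signals.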

Tuning recommendations:

6. Configure FlowIntel

RADAR can optionally create cases/tasks and push investigation context to a FlowIntel instance. To enable this, FlowIntel must be running and reachable from the Wazuh manager / active response context, and the FLOWINTEL_* variables must be configured in .env.

Future integration: SATRAP integration is planned for future releases to enhance threat intelligence correlation and automated investigation workflows.

Usage

1) Deploy the RADAR infrastructure

Run build-radar.sh to bring up the core services (and, optionally, the agent containers), run the Ansible playbook limited to the manager and agent group for the selected scenario, and build the Docker containers.

Usage:

build-radar.sh <scenario> --agent <local|remote> --manager <local|remote> 
                          --manager_exists <true|false> [--ssh-key </path/to/private_key>]

Scenarios:
  suspicious_login | insider_threat | ddos_detection | malware_communication | geoip_detection | log_volume

Flags:
  --agent           Where agents live:      local (docker-compose.agents.yml) | remote (SSH endpoints)
  --manager         Where manager lives:    local (docker-compose.core.yml)   | remote (SSH host)
  --manager_exists  Whether the manager already exists at that location:
                      - true  : do not bootstrap a manager
                      - false : bootstrap (local: docker compose up; remote: let Ansible bootstrap)
  --ssh-key         Optional: path to the SSH private key used for remote manager/agent access.
                    If not provided, defaults to: $HOME/.ssh/id_ed25519

Examples:
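For instance (illustrative invocations assembled from the flags above; adapt scenario names and key paths to your setup):

```shell
# Fully local lab: manager and agents in containers on this host
./build-radar.sh log_volume --agent local --manager local --manager_exists false

# Local manager already running; agents rolled out to remote endpoints
./build-radar.sh ddos_detection --agent remote --manager local --manager_exists true \
                 --ssh-key ~/.ssh/id_ed25519
```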

2) Run a scenario end-to-end

./run-radar.sh log_volume

This script ingests the scenario dataset, then ensures that an AD detector exists and is running (printing DET_ID), and finally sets up a monitor with a webhook (printing MON_ID).
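To verify the detector afterwards, you can query the OpenSearch Anomaly Detection REST API directly with the credentials from .env (DET_ID stands for the identifier printed by run-radar.sh; the route is the standard OpenSearch AD endpoint):

```shell
# Fetch the detector configuration and its realtime task state
curl -sS -u "$OS_USER:$OS_PASS" --cacert "$OS_VERIFY_SSL" \
  "$OS_URL/_plugins/_anomaly_detection/detectors/DET_ID?task=true"
```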