1.0 ADBox test case specifications

1.1 Deploy ADBox via Docker and shell scripts TST-001

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • 26 GB storage on the machine
  • git version 2.44.0
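The tool preconditions above can be checked with a small hedged shell sketch; the helper name require_cmd is illustrative, and the exact version pins from the list are printed for manual comparison rather than enforced:

```shell
#!/bin/sh
# Illustrative precondition check: confirm the required tools exist and
# print their versions for comparison against the list above.
require_cmd() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

for tool in git docker; do
  if require_cmd "$tool"; then "$tool" --version; fi
done
```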

Test steps

  1. Clone the ADBox repository: git clone https://github.com/AbstractionsLab/idps-escape.git

  2. Change the working directory to the cloned folder containing all the files along with the Dockerfile.

cd siem-mtad-gat

  3. Build the image.

3a. Make the script executable: chmod +x build-adbox.sh

3b. Execute it as follows:

./build-adbox.sh

  4. Run the container by executing the bash file containing the run commands.

4a. Make it executable: chmod +x adbox.sh

4b. Execute it as follows:

./adbox.sh -h

Expected outcome

Step 1. A copy of the siem-mtad-gat folder in the local working directory.

Step 3. The list of Docker images (docker images) should include:

REPOSITORY                      TAG       IMAGE ID       CREATED         SIZE
siem-mtad-gat                   v0.1.4    ...   ...            ...
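The table check above can be scripted; a hedged sketch with an illustrative helper name (image_listed) that matches the repository and tag columns of `docker images` output:

```shell
#!/bin/sh
# Illustrative check: does a REPOSITORY/TAG pair appear in `docker images` output?
# Reads the tabular output on stdin, so the matching logic also works on canned text.
image_listed() {  # usage: docker images | image_listed REPO TAG
  awk -v repo="$1" -v tag="$2" '$1 == repo && $2 == tag { found = 1 } END { exit !found }'
}

# Example on canned output (in practice: docker images | image_listed siem-mtad-gat v0.1.4)
printf 'REPOSITORY TAG IMAGE_ID\nsiem-mtad-gat v0.1.4 abc123\n' \
  | image_listed siem-mtad-gat v0.1.4 && echo "image present"
```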

Step 4b. The ADBox help message is displayed:

usage: driver.py [-h] [-i] [-u USECASE] [-c] [-s]

IDPS-ESCAPE ADBox, an open-source anomaly detection toolbox, developed in project CyFORT.

options:
  -h, --help            show this help message and exit
  -i, --interactive     run the interactive console for training and prediction
  -u USECASE, --usecase USECASE
                        specify a configuration scenario/use-case file for training and prediction
  -c, --connection      check connection with Wazuh
  -s, --shipping        enable data shipping to Wazuh

Parent links: SRS-046 Cross-platform SONAR deployment

Child links: TRP-001 TCER: Deploy ADBox via Docker and shell scripts, TRP-023 TCER: ADBox deployment

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 1
test_data see referenced files
version 0.2

1.3 Install ADBox as dev container TST-003

Preconditions and setup actions

  • Docker engine version 26.1.3
  • Docker desktop version 1.0.35
  • Visual Studio Code version 1.83.1 (system setup)
  • Dev Containers extension for VS Code by Microsoft version v0.315.1
  • at least 26 GB of persistent/disk storage

Test steps

  1. Clone this repository: git clone https://github.com/AbstractionsLab/idps-escape.git
  2. Start Docker Desktop if not already running.
  3. Open the project folder in VS Code.
  4. Select the "Reopen in Container" option in the notification that pops up in VS Code or run it via the command palette.
  5. Open a terminal in VS Code and run poetry install in the container to install all dependencies.
  6. Run ADBox using its entrypoint.
  poetry run adbox

Expected outcome

Step 1. A local copy of the repository is created.

Step 4. Container created and terminal open.

Step 5. poetry install terminates with Installing the current project: siem-mtad-gat (0.1.4).

Step 6. ADBox starts in default mode:

IDPS-ESCAPE ADBox running in default mode
Are you sure you wish to run the default ADBox in default mode? (y/n):

Parent links: SRS-044 Platform-Independent Deployment

Child links: TRP-003 TCER: Install ADBox as dev container, TRP-004 TCER: Install ADBox as dev container, TRP-020 TCER: Install ADBox as dev container, TRP-024 TCER: ADBox in dev container

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.4 Run ADBox console TST-004

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
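The "no existing container" precondition above can be verified with a hedged sketch; the helper name container_listed is illustrative, and the docker invocations shown in the comments are the usual CLI forms:

```shell
#!/bin/sh
# Illustrative precondition check: is a container with the given name present?
# Reads container names on stdin, so the matching logic works on any text source.
container_listed() {  # usage: docker ps -a --format '{{.Names}}' | container_listed NAME
  grep -qx "$1"
}

# In practice, remove a leftover container before running the test case:
#   docker ps -a --format '{{.Names}}' | container_listed siem-mtad-gat-container \
#     && docker rm -f siem-mtad-gat-container
```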

Test steps

  1. Run the container “siem-mtad-gat-container” by executing the adbox script with the interactive flag. ./adbox.sh -i

Expected outcome

ADBox starts with

IDPS-ESCAPE ADBox driver running in interactive console mode.

Enter a number and press enter to select an ADBox action to perform:
1. Train an anomaly detector.
2. Predict anomalies using one of the available detectors.
3. Select an existing anomaly detector for prediction.
4. Exit
Enter a number (1-4):

Parent links: SRS-047 Interactive Use Case Builder

Child links: TRP-005 TCER: Run ADBox console, TRP-025 TCER: ADBox console

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.5 Run ADBox in default mode with a Wazuh connection TST-005

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.
  • Wazuh indexer RESTful API should be listening on port 9200.
  • TST-034 and TST-035 should succeed

Test steps

  1. Run the container “siem-mtad-gat-container” by executing the adbox script without any parameters. ./adbox.sh
  2. Input y after Are you sure you wish to run the default ADBox in default mode? (y/n):

Expected outcome

  • ADBox starts with
IDPS-ESCAPE ADBox running in default mode
Are you sure you wish to run the default ADBox in default mode? (y/n): y
No input use-case: a detection with default config will be created.
Start training pipeline.
Init Data managers.
JSON file 'detector_input_parameters.json' saved at /home/alab/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/input/detector_input_parameters.json.
Data ingestion.
...
  • A detector {detector_id} is trained according to the default parameters specified in siem_mtad_gat/assets/default_configs/default_detector_input_config.json.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the following folder: /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}.
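The artifact check in the last bullet can be sketched in shell; the path follows the bullet above, and the detector_id argument is a placeholder for the ID printed during training:

```shell
#!/bin/sh
# Illustrative artifact check: assert that the detector output folder exists
# and is non-empty. The first argument is the detector ID (placeholder default).
detector_dir="/home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/${1:-detector_id}"
if [ -d "$detector_dir" ] && [ -n "$(ls -A "$detector_dir" 2>/dev/null)" ]; then
  echo "artifacts present in $detector_dir"
else
  echo "no artifacts found in $detector_dir" >&2
fi
```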

Parent links: SRS-048 Default Detector Training

Child links: TRP-006 TCER: Run ADBox in default mode with a Wazuh connection, TRP-026 TCER: ADBox in default mode with Wazuh

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 3
test_data see referenced files
version 0.2

1.6 Run ADBox in default mode without a Wazuh connection TST-006

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.0
  • No existing container named “siem-mtad-gat-container” should be running.
  • No instance of Wazuh distribution running.

Test steps

  1. Run the container “siem-mtad-gat-container” by executing the adbox script without any parameters. ./adbox.sh
  2. Input y after Are you sure you wish to run the default ADBox in default mode? (y/n):

Expected outcome

  • ADBox starts with
IDPS-ESCAPE ADBox running in default mode
Are you sure you wish to run the default ADBox in default mode? (y/n): y
No input use-case: a detection with default config will be created.
Start training pipeline.
Init Data managers.
JSON file 'detector_input_parameters.json' saved at /home/alab/siem-mtad-gat/siem_mtad_gat/assets/detector_models/fb5faf1c-7913-4e90-880f-51f3a178a053/input/detector_input_parameters.json.
Data ingestion.
Wazuh data ingestor establishing connection to Wazuh...
Could not establish a connection with OpenSearch.
More details see logs.
...
  • A detector {detector_id} is trained according to the default parameters specified in siem_mtad_gat/assets/default_configs/default_detector_input_config.json and using default stored data
...
The file '/home/alab/siem-mtad-gat/siem_mtad_gat/assets/data/train/sample-alerts-train-2024-11.json' does not exist, returning all default data.
...
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.

Parent links: SRS-048 Default Detector Training

Child links: TRP-007 TCER: Run ADBox in default mode without a Wazuh connection, TRP-019 TCER: Run ADBox in default mode without a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.7 ADBox use case 1 with a Wazuh connection TST-007

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.
  • Wazuh indexer RESTful API should be listening on port 9200.
  • Wazuh configured to monitor Linux resource utilization (see Monitoring Linux resource usage with Wazuh).
  • 2024-07-* index not empty.
  • TST-034 and TST-035 should succeed

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 1 parameters. ./adbox.sh -u 1

Expected outcome

  • ADBox starts with IDPS-ESCAPE ADBox driver running use-case scenario configuration uc_1.yaml.
  • The training should run for 10 epochs.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with the message Predicting in real-time mode with interval 1 (min).
  • A prediction response should be seen in the output console after every 1 minute.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-027 ML-Based Anomaly Detection

Child links: TRP-008 TCER: ADBox use case 1 with a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.8 ADBox use case 1 without a Wazuh connection TST-008

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • No instance of Wazuh distribution running.

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 1 parameters.
  ./adbox.sh -u 1

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_1.yaml.
  • Output screen should show the message Could not establish a connection with OpenSearch.
  • And collect training data from the default file with message Returning data from file /home/root/siem-mtad-gat/siem_mtad_gat/assets/data/train/sample-alerts-train-2024-07.json.
  • The training should run for 10 epochs.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in real-time mode with interval 1 (min).
  • Output screen should show the following messages:
    • Could not establish a connection with OpenSearch.
    • Prediction in run_mode.REALTIME requires a connection with OpenSearch.
    • No data found for given input.
  • The application should then exit.

Parent links: SRS-027 ML-Based Anomaly Detection

Child links: TRP-009 TCER: ADBox use case 1 without a Wazuh connection, TRP-018 TCER: ADBox use case 1 without a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.1

1.9 ADBox use case 2 with a Wazuh connection TST-009

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.
  • Wazuh indexer RESTful API should be listening on port 9200.
  • 2024-07-* index not empty.
  • TST-034 and TST-035 should succeed

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 2 parameters. ./adbox.sh -u 2

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_2.yaml.
  • The training should run for 10 epochs.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in historical mode.
  • Prediction response should be seen in the output console.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-035 Offline Anomaly Detection

Child links: TRP-010 TCER: ADBox use case 2 with a Wazuh connection, TRP-027 TCER: ADBox UC scenario 2 with Wazuh

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.10 ADBox use case 2 without a Wazuh connection TST-010

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • No instance of Wazuh distribution running.

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 2 parameters. ./adbox.sh -u 2

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_2.yaml.
  • Output screen should show the message Could not establish a connection with OpenSearch.
  • And collect training data from the default file with message Returning data from file /home/root/siem-mtad-gat/siem_mtad_gat/assets/data/train/sample-alerts-train-2024-07.json.
  • The training should run for 10 epochs.
  • Training response should be seen on the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in historical mode.
  • Output screen should show message Could not establish a connection with OpenSearch.
  • And fall back to the default prediction data with message The file '/home/root/siem-mtad-gat/siem_mtad_gat/assets/data/predict/sample-alerts-predict-2024-07-26' does not exist, returning all default data. (the exact filename depends on the run date).
  • Prediction response should be seen in the output console.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-035 Offline Anomaly Detection

Child links: TRP-017 TCER: ADBox use case 2 without a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.11 ADBox use case 3 with a Wazuh connection TST-011

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.
  • Wazuh indexer RESTful API should be listening on port 9200.
  • 2024-07-* index not empty.
  • TST-034 and TST-035 should succeed

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 3 parameters.
  ./adbox.sh -u 3

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_3.yaml.
  • The training should run for 10 epochs.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in batch mode with batch interval 5 (min).
  • Prediction response should be seen on the output console after every 5 minutes.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-027 ML-Based Anomaly Detection

Child links: TRP-011 TCER: ADBox use case 3 with a Wazuh connection, TRP-028 TCER: ADBox UC scenario 3 with Wazuh

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.12 ADBox use case 3 without a Wazuh connection TST-012

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • No instance of Wazuh distribution running.

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 3 parameters.
  ./adbox.sh -u 3

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_3.yaml.
  • Output screen should show the message Could not establish a connection with OpenSearch.
  • And collect training data from the default file with the message Returning data from file /home/root/siem-mtad-gat/siem_mtad_gat/assets/data/train/sample-alerts-train-2024-07.json.
  • The training should run for 10 epochs.
  • Train response should be seen on the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in batch mode with batch interval 5 (min).
  • Output screen should show the following messages:
    • Could not establish a connection with OpenSearch.
    • Prediction in run_mode.BATCH requires a connection with OpenSearch.
    • No data found for given input.
  • The application should then exit.

Parent links: SRS-027 ML-Based Anomaly Detection

Child links: TRP-016 TCER: ADBox use case 3 without a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.1

1.13 ADBox use case 4 with a Wazuh connection TST-013

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.0
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.
  • Wazuh indexer RESTful API should be listening on port 9200.
  • Wazuh agent configured to read Suricata logs.
  • "2024-03-*" not empty.
  • TST-034 and TST-035 should succeed

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 4 parameters.
./adbox.sh -u 4

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_4.yaml.
  • The training should run for 10 epochs.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in historical mode.
  • Prediction response should be seen in the output console.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-038 Joint Host-Network Training

Child links: TRP-012 TCER: ADBox use case 4 with a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.14 ADBox use case 4 without a Wazuh connection TST-014

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • No instance of Wazuh distribution running.

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 4 parameters.
./adbox.sh -u 4

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_4.yaml.
  • Output screen should show message Could not establish a connection with OpenSearch.
  • And collect training data from default file with message Returning data from file /home/root/siem-mtad-gat/siem_mtad_gat/assets/data/train/sample-alerts-train-2024-03.json.
  • The training should run for 10 epochs.
  • Training response should be seen on the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in historical mode.
  • Output screen should show message Could not establish a connection with OpenSearch.
  • And fall back to the default prediction data with message The file '/home/root/siem-mtad-gat/siem_mtad_gat/assets/data/predict/wazuh-alerts-*.*-2024.07.22.json' does not exist, returning all default data. (the exact filename depends on the run date).
  • Prediction response should be seen on the output console.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-038 Joint Host-Network Training

Child links: TRP-015 TCER: ADBox use case 4 without a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.15 ADBox use case 5 with a Wazuh connection TST-015

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.
  • Wazuh indexer RESTful API should be listening on port 9200.
  • Wazuh agent configured to read Suricata logs.
  • TST-034 and TST-035 should succeed

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 5 parameters. ./adbox.sh -u 5

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_5.yaml.
  • The training should run for 10 epochs.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in historical mode.
  • Prediction response should be seen on the output console.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-037 Anomaly-Based NIDS

Child links: TRP-013 TCER: ADBox use case 5 with a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.16 ADBox use case 5 without a Wazuh connection TST-016

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • No instance of Wazuh distribution running.

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with use case 5 parameters.
./adbox.sh -u 5

Expected outcome

  • ADBox starts with Running AD driver with user configuration uc_5.yaml.
  • Output screen should show message Could not establish a connection with OpenSearch.
  • And collect training data from default file with message Returning data from file /home/root/siem-mtad-gat/siem_mtad_gat/assets/data/train/sample-alerts-train-2024-03.json.
  • The training should run for 10 epochs.
  • Training response should be seen in the console.
  • Training outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.
  • Prediction starts after training with message Predicting in historical mode.
  • Output screen should show message Could not establish a connection with OpenSearch.
  • And fall back to the default prediction data with message The file '/home/root/siem-mtad-gat/siem_mtad_gat/assets/data/predict/wazuh-alerts-*.*-2024.07.22.json' does not exist, returning all default data. (the exact filename depends on the run date).
  • Prediction response should be seen in the output console.
  • Prediction outputs and artifacts should be available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id}/prediction folder.

Parent links: SRS-037 Anomaly-Based NIDS

Child links: TRP-014 TCER: ADBox use case 5 without a Wazuh connection

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.17 ADBox shipping install TST-017

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox script with shipping flag.
  ./adbox.sh -s

Expected outcome

CLI:

IDPS-ESCAPE ADBox shipping on
ADBox Shipper establishing connection to Wazuh...
ADBox shipper connected to Wazuh
Template for mtad_gat initialized
Exit shipper installation. Check ADBox templates and policy correct installation from Wazuh Dashboard!

Wazuh Dashboard:

  • Indexer Management>Index Management>Templates: check for the presence of the base templates adbox_stream_template and adbox_stream_template_mtad_gat;
  • Indexer Management>Index Management>Templates>Component templates: check for the presence of the component template component_template_mtad_gat;
  • Indexer Management>Index Management>State management policies: check for the presence of the policy adbox_detectors_rollover.

Parent links: SRS-049 Anomaly Shipping to Indexer

Child links: TRP-022 TCER: ADBox shipping install

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 2
test_data see referenced files
version 0.2

1.18 ADBox Create detector data stream TST-018

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.
  • An instance of a Wazuh distribution should be running.

Test steps

  1. Run the container siem-mtad-gat-container by executing the adbox use case 12 with shipping flag.
  ./adbox.sh -u 12 -s

Expected outcome

CLI:

  • Training the model for 2 epochs:
  IDPS-ESCAPE ADBox shipping on
  ADBox Shipper establishing connection to Wazuh...
  IDPS-ESCAPE ADBox driver running use case scenario configuration uc_12.yaml.
  Start training pipeline.
  Init Data managers.
  JSON file 'detector_input_parameters.json' saved at /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/1bda6dd3-0947-43e3-9ca2-bd3c711ae69f/input/detector_input_parameters.json.
  Data ingestion.
  Wazuh data ingestor establishing connection to Wazuh...
  • Historical prediction: the start time shall be the present day at '00:06:00Z' (since the window size is 6 and the granularity 1 min), and the end time shall be the present timestamp rounded to the closest time unit.
Start prediction pipeline. Detector: 1bda6dd3-0947-43e3-9ca2-bd3c711ae69f
Init Data managers.
Spot object 0 loaded.
Spot object 1 loaded.
...
Prediction response:
{'run_mode': 'HISTORICAL', 'detector_id': '1bda6dd3-0947-43e3-9ca2-bd3c711ae69f', 'start_time': '2024-11-25T00:06:00Z', 'end_time': '2024-11-25T16:58:00Z', 'results': []}
Prediction ended
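The start-time rule in the first bullet (window size times granularity past midnight UTC) can be sketched as follows; this illustrates the stated rule for the uc_12 parameters, and is not ADBox code:

```shell
#!/bin/sh
# Illustrative start-time computation: with window size 6 and granularity 1 min,
# the historical prediction window opens at 00:06:00Z of the current day.
# Valid as written only while the offset stays under 60 minutes.
window_size=6
granularity_min=1
offset=$((window_size * granularity_min))
printf '%sT00:%02d:00Z\n' "$(date -u +%Y-%m-%d)" "$offset"
```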

Local storage:

  • Training outputs and artifacts available in the /home/root/siem-mtad-gat/siem_mtad_gat/assets/detector_models/{detector_id} folder.

Wazuh Dashboard:

  • When verifying the output using the dashboard visualizations, remember that a time shift due to the time zone may apply.
  • Indexer Management>Index Management>Templates: check that the template adbox_detector_mtad_gat_{detector_id} is present;
  • Indexer Management>Index Management>Templates>Component templates: check that the template component_template_{detector_id} is present;
  • Indexer Management>Data streams: check that adbox_detector_mtad_gat_{detector_id} is present and contains at least one document (screenshot: TST-18-detector-stream.png).

Parent links: SRS-042 Prediction Shipping Feature

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 2
test_data see referenced files
version 0.2

1.19 ADBox Wazuh integration Dashboard TST-033

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • An instance of CyFORT-Wazuh distribution should be running.
  • A detector data stream adbox_detector_mtad_gat_{detector_id} available in Wazuh indexer

Test steps

A detailed description of the integration procedure can be found at docs/manual/dashboard_tutorial.md

  1. Open Dashboard Management.
  2. Select Dashboard Management>Index patterns and create a new index pattern.
  3. Add the data stream pattern.
  4. Select timestamp as the time field.
  5. Confirm that the pattern is created; the field names correspond to the prediction outcome fields.

Expected outcome

  1. The pattern is created. The field names correspond to the prediction outcome fields.
  2. The data can be navigated using Discover dashboard.

Parent links: SRS-043 AD Data Visualization

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 4
test_data see referenced files
version 0.2

1.20 ADBox set up indexer host address TST-034

Preconditions and setup actions

  • ADBox repository https://github.com/AbstractionsLab/idps-escape.git cloned on the host.
  • Wazuh central component deployed.
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.

Test steps

  1. Open the cloned ADBox repository.
  2. Open the /siem_mtad_gat/assets/secrets/wazuh_credentials.json file.
  3. Set "host" to the indexer's host address.
  4. Set "port" to the indexer's port.
  5. Run
./adbox.sh -c
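Steps 2-4 amount to editing the credentials file; a hedged sketch that writes it from scratch (all values are placeholders, and whether the file carries additional keys beyond those named in the test steps is an assumption):

```shell
#!/bin/sh
# Illustrative: write wazuh_credentials.json with the indexer coordinates.
# Host, port, username, and password values below are placeholders;
# adjust them to the deployed indexer before running ./adbox.sh -c.
cred_file="siem_mtad_gat/assets/secrets/wazuh_credentials.json"
mkdir -p "$(dirname "$cred_file")"
cat > "$cred_file" <<'EOF'
{
  "host": "192.0.2.10",
  "port": 9200,
  "username": "admin",
  "password": "changeme"
}
EOF
```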

Expected outcome

  1. Successful connection
IDPS-ESCAPE ADBox checking connection with Wazuh/OpenSearch...
Wazuh data ingestor establishing connection to Wazuh...
Connection with Wazuh established successfully!

Parent links: SRS-017 Custom Data Source

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.2

1.21 ADBox change indexer credentials TST-035

Preconditions and setup actions

  • ADBox repository https://github.com/AbstractionsLab/idps-escape.git cloned on the host.
  • Wazuh central component deployed.
  • User with root privileges
  • Built image siem-mtad-gat:v0.1.4
  • No existing container named siem-mtad-gat-container should be running.

Test steps

  1. Open the cloned ADBox repository.
  2. Open the /siem_mtad_gat/assets/secrets/wazuh_credentials.json file.
  3. Update "username" with the indexer's username.
  4. Update "password" with the indexer's password.
  5. Run
./adbox.sh -c

Expected outcome

  1. Successful connection
IDPS-ESCAPE ADBox checking connection with Wazuh/OpenSearch...
Wazuh data ingestor establishing connection to Wazuh...
Connection with Wazuh established successfully!

Parent links: SRS-022 Indexer Credentials Update

Attribute Value
platform MacOS, Windows, GNU/Linux
execution_type M
verification_method T
complexity 2
test_data see referenced files
version 0.2

1.22 Open prediction file of training data TST-037

Preconditions and setup actions

  • ADBox dev container running.
  • Wazuh central component deployed.
  • User with root privileges
  • At least one trained detector.

Test steps

  1. Open the cloned ADBox repository.
  2. Choose one of the detectors available in siem_mtad_gat/assets/detector_models, e.g., siem_mtad_gat/assets/detector_models/9a447100-39d1-4e00-83fc-8c618444edf7.
  3. Open siem_mtad_gat/frontend/viznotebook/result_visualizer.ipynb within the dev container.
  4. Insert the detector ID under Variables and path to be modified.
  5. Run the notebook.
  6. Go to Running ADBox with a use-case > Training > Training output table.

Expected outcome

  1. A table showing predictions over the training data.

Parent links: SRS-030 AD Results Visualization

Attribute Value
platform MacOS, Windows, GNU/Linux
execution_type M
verification_method I
release alpha
complexity 2
test_data see referenced files
version 0.2

1.23 Visualize train losses TST-038

Preconditions and setup actions

  • Wazuh central component deployed.
  • ADBox deployed
  • User with root privileges
  • At least one trained detector.

Test steps

  1. Open the cloned ADBox repository.
  2. Choose one of the detectors available in siem_mtad_gat/assets/detector_models.
  3. Open siem_mtad_gat/assets/detector_models/uuid/training/train_losses.png using the system default software.

Expected outcome

  1. A graph displaying the training losses.

Parent links: SRS-031 Training Loss Visualization

Attribute Value
platform MacOS, Windows, GNU/Linux
execution_type M
verification_method I
release alpha
complexity 1
test_data see referenced files
version 0.2

1.24 Open prediction raw outcome TST-039

Preconditions and setup actions

  • Wazuh central component deployed.
  • ADBox deployed
  • User with root privileges
  • At least one trained detector that has run prediction at least once (i.e., stored prediction data).

Test steps

  1. Open the cloned ADBox repository.
  2. Choose one of the detectors available in siem_mtad_gat/assets/detector_models.
  3. Open siem_mtad_gat/assets/detector_models/uuid/prediction.
  4. Open one of the uc-x_predicted_anomalies_data-*.json prediction files using the system default software.

Expected outcome

  1. Prediction information stored in JSON format, e.g.:

    {
      "run_mode": "REALTIME",
      "detector_id": "563394be-5079-424d-b2ba-cfda0812cf88",
      "start_time": "2024-10-25T08:51:00Z",
      "end_time": "2024-10-25T08:52:00Z",
      "results": [
        {
          "timestamp": "2024-10-25T08:51:00Z",
          "is_anomaly": false,
          "anomaly_score": 0.029846280813217163,
          "threshold": 0.12186630221790373,
          "Forecast_data.5mins_loadAverage_average": 0.1334819793701172,
          "Recon_data.5mins_loadAverage_average": 0.11187170445919037,
          "True_data.5mins_loadAverage_average": 0.1077537015080452,
          "A_Score_data.5mins_loadAverage_average": 0.029846280813217163,
          "Thresh_data.5mins_loadAverage_average": 0.12186630221790373,
          "A_Pred_data.5mins_loadAverage_average": 0.0
        }
      ]
    }
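Such a file can also be inspected programmatically; the sketch below filters the anomalous entries, assuming the field names shown above (summarize_predictions is a hypothetical helper, not part of ADBox):

```python
import json

# Hypothetical helper (not part of ADBox): list anomalous entries from a
# prediction file such as uc-x_predicted_anomalies_data-*.json.
def summarize_predictions(pred: dict) -> list:
    """Return (timestamp, anomaly_score) pairs for entries flagged anomalous."""
    return [(r["timestamp"], r["anomaly_score"])
            for r in pred.get("results", [])
            if r.get("is_anomaly")]

# In practice: pred = json.load(open(path)); here a trimmed inline sample.
pred = json.loads("""
{"run_mode": "REALTIME",
 "results": [
   {"timestamp": "2024-10-25T08:51:00Z", "is_anomaly": false,
    "anomaly_score": 0.0298, "threshold": 0.1219},
   {"timestamp": "2024-10-25T08:52:00Z", "is_anomaly": true,
    "anomaly_score": 0.2500, "threshold": 0.1219}]}
""")
print(summarize_predictions(pred))  # [('2024-10-25T08:52:00Z', 0.25)]
```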

Parent links: SRS-032 Predicted Anomalies Visualization

Attribute Value
platform MacOS, Windows, GNU/Linux
execution_type Automated/Manual
verification_method I
release alpha
test_data see referenced files
version 0.2

2.0 IDPS-ESCAPE foundation test cases

2.1 Suricata installation in a containerized environment TST-019

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges

Test steps

  1. Create a new directory for the Suricata Docker deployment.
  2. In the directory, download the suricata.yaml configuration file deployment/suricata/suricata.yaml.
  3. Make the required changes in the configuration file as per the network settings of the respective system, as explained in deployment/suricata/suricata_installation.md#suricata-configuration-file.
  4. In the directory, download the Dockerfile deployment/suricata/Dockerfile.
  5. Pull the Ubuntu Docker base image. sudo docker pull ubuntu
  6. Build the Docker image.
sudo docker build -t suricata-container .
  7. Run the Docker container.
sudo docker run --network=host --hostname=suricata-instance --name=suricata-instance -it suricata-container

Expected outcome

  • Container named suricata-instance is running. Verify it by listing the Docker containers with this command. sudo docker ps -a
  • Upon running the ps aux command inside the running container's bash, Suricata services can be seen as a running process. The container bash can be accessed by the following command. sudo docker exec -it suricata-instance bash
  • In the container's bash run the following command to read Suricata log files. tail /var/log/suricata/suricata.log

Parent links: SRS-008 Dockerized NIDS Deployment

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 3
test_data see referenced files
version 0.2

2.2 Wazuh installation in a containerized environment TST-020

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0

Test steps

  1. Clone the Wazuh Docker repository.
git clone https://github.com/wazuh/wazuh-docker.git -b v4.8.1
  2. Navigate to the single-node deployment directory.
cd wazuh-docker/single-node
  3. Generate the group of certificates for secure communication. sudo docker-compose -f generate-indexer-certs.yml run --rm generator

  4. Deploy Wazuh as a single node using docker-compose. sudo docker-compose up -d This could take some time since all the images are pulled.

Expected outcome

  • The following output should be visible on the screen after deployment in the background (with the -d option)
 ✔ Container single-node-wazuh.indexer-1    Started
 ✔ Container single-node-wazuh.manager-1    Started
 ✔ Container single-node-wazuh.dashboard-1  Started
  • Run sudo docker ps -a; three containers should be listed as running, with the images wazuh/wazuh-dashboard:4.8.1, wazuh/wazuh-manager:4.8.1 and wazuh/wazuh-indexer:4.8.1.

  • The Wazuh dashboard is accessible through any browser, on the IP address of the system that it is deployed on.

Parent links: SRS-001 Centralized C&C Deployment

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 3
test_data see referenced files
version 0.2

2.3 Wazuh agent installation and enrollment: the local machine TST-021

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • Deployment of Wazuh Dashboard version 4.8.1 on the local monitoring machine inside a container
  • Deployment of Wazuh Indexer version 4.8.1 on the local monitoring machine inside a container
  • Deployment of Wazuh Manager version 4.8.1 on the local monitoring machine inside a container
  • GNU bash, version 5.1.16
  • No prior agent should already be running on the local monitoring machine

Test steps

  1. On the Wazuh Dashboard, go to Endpoint Summary, and click on Deploy new agent.
  2. Select the package to download and install on the respective system i.e., Linux DEB amd64.
  3. Provide a server address and an optional name for the agent. The server address should be the IP address of the Docker host in the case of a Docker installation on the local monitoring machine. The Docker host address can be found by:
sudo docker network inspect bridge | grep Gateway
  4. Install the agent by running the command that should be provided on the dashboard after entering the above information. For example, download and install:
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.8.1-1_arm64.deb && sudo WAZUH_MANAGER='172.17.0.1' WAZUH_AGENT_GROUP='default' WAZUH_AGENT_NAME='idps-escape-1' dpkg -i ./wazuh-agent_4.8.1-1_arm64.deb
  5. Start the agent by running the three commands that should also be provided on the dashboard, i.e.,
sudo systemctl daemon-reload
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent

Expected outcome

  • The status of the agent should be “Started Wazuh agent”, which can be seen by running the command: sudo systemctl status wazuh-agent.
  • On the Wazuh Dashboard, the agent should appear as active on the summary page.

Parent links: SRS-003 HIDS Agent Deployment

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 3
test_data see referenced files
version 0.2

2.4 Wazuh agent installation and enrollment: remote machine TST-022

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • Deployment of Wazuh Dashboard version 4.8.1 on the local monitoring machine inside a container
  • Deployment of Wazuh Indexer version 4.8.1 on the local monitoring machine inside a container
  • Deployment of Wazuh Manager version 4.8.1 on the local monitoring machine inside a container
  • GNU bash, version 5.1.16
  • No prior agent should already be running on a REMOTE monitoring machine

Test steps

  1. On the Wazuh Dashboard, go to Endpoint Summary, and click on Deploy new agent.
  2. Select the package to download and install on the REMOTE MACHINE system i.e., Linux DEB amd64.
  3. Provide a server address (this should be the machine hosting the manager) and an optional name for the agent. The Docker host address can be found by:
sudo docker network inspect bridge | grep Gateway
  4. Install the agent by running, on the REMOTE HOST, the command that should be provided on the dashboard after entering the above information. For example, download and install:
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.8.1-1_arm64.deb && sudo WAZUH_MANAGER='172.17.0.1' WAZUH_AGENT_GROUP='default' WAZUH_AGENT_NAME='idps-escape-1' dpkg -i ./wazuh-agent_4.8.1-1_arm64.deb
  5. Start the agent by running, on the REMOTE HOST, the three commands that should also be provided on the dashboard, i.e.,
sudo systemctl daemon-reload
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent

Expected outcome

  • The status of the agent should be “Started Wazuh agent”, which can be seen by running the command: sudo systemctl status wazuh-agent.
  • On the Wazuh Dashboard, the agent should appear as active on the summary page.

Parent links: SRS-023 Agent Registration Process

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 3
test_data see referenced files
version 0.2

2.5 Wazuh agent deletion and uninstallation TST-023

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • curl version 7.81.0
  • Wazuh agent installed and enrolled with the manager
  • Wazuh API Token
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)

Test steps

  1. Using the Wazuh API token, send a DELETE request through the Wazuh API to the Wazuh manager. The request should also contain the IDs of the agents to be deleted.
curl -k -X DELETE "https://[Wazuh Server Address]:55000/agents?pretty=true&older_than=0s&agents_list=[Comma separated agent IDs]&status=all" -H "Authorization: Bearer $TOKEN"
  2. Uninstall the agent by running the following commands.
sudo apt-get remove --purge wazuh-agent
sudo systemctl disable wazuh-agent
sudo systemctl daemon-reload

Expected outcome

  • After deleting the agent, the agent should not be visible in the list of agents on the Wazuh Dashboard.
  • The agent should also not be present in the list of available agents on the machine, which can be checked by the following command. sudo /var/ossec/bin/manage_agents -l
  • After uninstalling the agent, no command regarding the agent should work.
  • The agent removal can also be verified from the Wazuh manager.
  1. Access the bash of the container which is running the Wazuh Manager. sudo docker exec -it [Container name or ID of Wazuh Manager] bash
  2. Run the command to access the Agent Manager. /var/ossec/bin/manage_agents
  3. The removal can be confirmed by entering L and pressing ENTER to verify its absence from the Available Agents listing.

Parent links: SRS-004 HIDS Agent Management

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 3
test_data see referenced files
version 0.2

2.6 Wazuh agent unenrollment TST-024

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • curl version 7.81.0
  • Wazuh agent installed and enrolled with the manager
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)

Test steps

  1. Access the bash of the container which is running the Wazuh Manager.
sudo docker exec -it [Container name or ID of Wazuh Manager] bash
  2. Run the command to access the Agent Manager. /var/ossec/bin/manage_agents
  3. Press <R>, the option for removing an agent, and press Enter.
  4. Enter the ID of the agent to be removed and unenrolled from the manager.
  5. Confirm the removal action by pressing <Y> and Enter.

Expected outcome

  • After confirming, the output should be a confirmation message, "Agent ID removed".
  • After deleting the agent, the agent should not be visible in the list of agents on the Wazuh Dashboard.
  • The agent should also not be present in the list of available agents on the machine, which can be checked by the following command. sudo /var/ossec/bin/manage_agents -l

Parent links: SRS-004 HIDS Agent Management

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 2
test_data see referenced files
version 0.2

2.7 Suricata and Wazuh Integration TST-025

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • Wazuh agent installed and enrolled with the manager
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)
  • Deployment of Suricata version 7.0.3 on the local monitoring machine inside a container
  • GNU nano, version 6.2
  • curl version 7.81.0

Test steps

  1. Edit the configuration file of the Wazuh agent to make it able to read Suricata logs. sudo nano /var/ossec/etc/ossec.conf
  2. Add the Suricata EVE log location at the end of the file, before the closing </ossec_config> tag.
<ossec_config>
...
  <localfile>
    <log_format>json</log_format>
    <location>/var/log/suricata/eve.json</location>
  </localfile>
</ossec_config>

  3. Remove the Suricata container. sudo docker rm suricata-instance
  4. Run the Suricata container again, adding arguments to create Docker volumes.
sudo docker run -v /var/log/suricata:/var/log/suricata --network=host --hostname=suricata-instance --name=suricata-instance -d suricata-container
  5. Restart the Wazuh agent.
sudo systemctl restart wazuh-agent
  6. Generate some malicious network traffic by sending a curl request to the following tool.
curl -sSL https://raw.githubusercontent.com/3CORESec/testmynids.org/master/tmNIDS -o /tmp/tmNIDS && chmod +x /tmp/tmNIDS && /tmp/tmNIDS
  7. Enter any number from 1 to 11 to generate the respective traffic.
  8. On the Wazuh dashboard, filter the alerts to see the events particularly generated by Suricata. This is done via Threat Hunting > Add Filter. To create the filter, in the Field dropdown select rule.groups, in the operator dropdown select is, and in the Value field write suricata.

Expected outcome

  • The alert generated by Suricata for the malicious traffic should be seen on the Wazuh Dashboard under Threat Hunting.
  • After applying the filter, all alerts generated by Suricata should be visible.

Parent links: SRS-010 Centralized Threat Management

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 3
test_data see referenced files
version 0.2

2.8 Port mirroring for remote machines TST-026

Preconditions and setup actions

  • Ubuntu 22.04.4 LTS (Machine 1, central monitoring host)
  • Ubuntu 22.04.4 LTS (Machine 2, target host or the remote machine)
  • Prerequisites for Machine 1:

    • tcpdump version 4.99.1
    • User with root privileges
  • Prerequisites for Machine 2:

    • User with root privileges
    • ping from iputils 20211215
    • GNU bash, version 5.1.16
    • No existing tunnel is configured from machine 2 to machine 1

Test steps

  1. Create a GRE tunnel from machine 2 to machine 1 and mirror all the ingress and egress (incoming and outgoing) network traffic generated on machine 2 to machine 1, using the procedure described in deployment/remote_monitoring/remote_monitoring.md.
  2. From machine 2, send some ICMP echo request packets to machine 1.
ping -c3 [IP of the capture interface of machine 1]
  3. On machine 1, use tcpdump to capture network traffic on the default interface.
sudo tcpdump -n -i [Capture interface of machine 1]

Expected outcome

tcpdump output should show that machine 1 receives both the ingress and egress ICMP packets from machine 2, encapsulated within GRE packets.

Parent links: SRS-007 Raw Traffic Capture

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method A
release alpha
complexity 4
test_data see referenced files
version 0.2

2.9 Traffic monitoring on Wazuh (local) TST-027

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • Wazuh agent installed and enrolled with the manager
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)
  • Deployment of Suricata version 7.0.3 on the local monitoring machine inside a container
  • GNU nano, version 6.2
  • curl version 7.81.0
  • Wazuh agent configured to integrate with Suricata

Test steps

  1. Generate some malicious network traffic on the monitored machine by sending a curl request to the following tool.
curl -sSL https://raw.githubusercontent.com/3CORESec/testmynids.org/master/tmNIDS -o /tmp/tmNIDS && chmod +x /tmp/tmNIDS && /tmp/tmNIDS
  2. Enter any number from 1 to 11 to generate the respective traffic on the console.

Expected outcome

The alert generated by Suricata for the malicious traffic should be seen on the Wazuh Dashboard.

Parent links: SRS-011 Network Event Visualization

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 4
test_data see referenced files
version 0.2

2.10 Traffic monitoring on Wazuh (remote) TST-028

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • Wazuh agent installed and enrolled with the manager in remote machine
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)
  • Deployment of Suricata version 7.0.3 on the remote monitoring machine inside a container
  • GNU nano, version 6.2
  • curl version 7.81.0
  • Wazuh agent configured to integrate with Suricata
  • GRE tunnel created from the remote machine to the local monitoring machine which mirrors all the ingress and egress network traffic from the remote machine to the local monitoring machine

Test steps

  1. Generate some malicious network traffic on the remote monitored machine by sending a curl request to the following tool. curl -sSL https://raw.githubusercontent.com/3CORESec/testmynids.org/master/tmNIDS -o /tmp/tmNIDS && chmod +x /tmp/tmNIDS && /tmp/tmNIDS

  2. Enter any number from 1 to 11 to generate the respective traffic on the console.

  3. On the Wazuh dashboard, filter the alerts to see the events particularly related to the remote machine. This is done by adding a filter: in the Field dropdown select data.tunnel.src_ip, in the operator dropdown select is, and in the Value field write the IP of the capture interface of the remote machine where the GRE tunnel was created.

Note that there can be a delay of a few minutes between the creation of the tunnel and the display of the events on the Wazuh dashboard.

Expected outcome

Step 2.

The alert generated by Suricata for the malicious traffic should be seen on the Wazuh Dashboard.

Step 3.

  • The filter application should show the Suricata alert for the test traffic that was run.

Parent links: SRS-011 Network Event Visualization

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 4
test_data see referenced files
version 0.2

2.11 Changing password for Wazuh indexer users TST-029

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • curl version 7.81.0
  • Wazuh agent installed and enrolled with the manager
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)
  • Deployment of Suricata version 7.0.3 on the local monitoring machine inside a container
  • GNU nano, version 6.2

Test steps

The password change procedure is described in more detail in deployment/wazuh/change_passwords.md.

Part 1: Generate hash

  1. Stop the deployment stack if it is running. From the Wazuh repository:
sudo docker-compose down
  2. Run the following command to generate the hash of the new password. Once the container launches, input the new password and press Enter.
sudo docker run --rm -ti wazuh/wazuh-indexer:4.8.1 bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
  3. Copy the generated hash.
  4. Open the config/wazuh_indexer/internal_users.yml file. Locate the block for the user whose password is being changed.
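The user block located in step 4 typically has the following shape (an illustrative sketch following the OpenSearch Security internal_users.yml layout; the hash value is a placeholder to be replaced with the one generated in step 2):

```yaml
# config/wazuh_indexer/internal_users.yml (excerpt, illustrative)
admin:
  hash: "<paste the hash generated in step 2 here>"
  reserved: true
  backend_roles:
    - "admin"
  description: "Admin user"
```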

Part 2: Set the new password.

  1. In the docker-compose.yml file, replace all instances of the old password with the new one.

Part 3: Apply changes

  1. Start the deployment stack.
sudo docker-compose up -d
  2. Enter the container bash.
sudo docker exec -it [Name or ID of the Wazuh Indexer container] bash
  3. Set the following variables:
export INSTALLATION_DIR=/usr/share/wazuh-indexer
CACERT=$INSTALLATION_DIR/certs/root-ca.pem
KEY=$INSTALLATION_DIR/certs/admin-key.pem
CERT=$INSTALLATION_DIR/certs/admin.pem
export JAVA_HOME=/usr/share/wazuh-indexer/jdk
  4. Wait for the Wazuh indexer to initialize properly. The waiting time can vary from two to five minutes, depending on the size of the cluster, the assigned resources, and the speed of the network. Then, run the securityadmin.sh script to apply all changes.
bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/wazuh-indexer/opensearch-security/ -nhnv -cacert $CACERT -cert $CERT -key $KEY -p 9200 -icl
  5. Exit the Wazuh indexer container and log in with the new credentials on the Wazuh dashboard.


Expected outcome

  • The Wazuh dashboard should show an "Invalid username or password. Please try again." error when trying to log in with the old password.
  • The Wazuh dashboard should be accessible with the new password.

Parent links: SRS-016 Indexer Credential Management

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type Automated/Manual
verification_method I
release alpha
complexity 4
test_data see referenced files
version 0.2

2.12 Changing password for Wazuh API users TST-030

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • curl version 7.81.0
  • Wazuh agent installed and enrolled with the manager
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)
  • Deployment of Suricata version 7.0.3 on the local monitoring machine inside a container
  • GNU nano, version 6.2

Test steps

The password change procedure is described in more detail in deployment/wazuh/change_passwords.md.

Part 1: Generate hash

  1. Stop the deployment stack if it is running. From the Wazuh repository:
sudo docker-compose down
  2. Run the following command to generate the hash of the new password. Once the container launches, input the new password and press Enter.
sudo docker run --rm -ti wazuh/wazuh-indexer:4.8.1 bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
  3. Copy the generated hash.
  4. Open the config/wazuh_indexer/internal_users.yml file. Locate the block for the user whose password is being changed.

Part 2: Set the new password.

  1. In the docker-compose.yml file, replace the old password with the new one at every API_PASSWORD occurrence.

Part 3: Apply changes

  1. Start the deployment stack.
sudo docker-compose up -d

Expected outcome

  1. Trying to obtain a Wazuh API token using the old password should fail. This can be done in the following way (make sure that the password is within quotes). TOKEN=$(curl -u wazuh-wui:"MyS3cr37P450r.*-" -k -X POST "https://172.17.0.1:55000/security/user/authenticate?raw=true"); echo $TOKEN This should return the following error. {"title": "Unauthorized", "detail": "Invalid credentials"}
  2. The token can be successfully obtained using the new password.

Parent links: SRS-016 Indexer Credential Management

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I/T
release alpha
complexity 4
test_data see referenced files
version 0.2

2.13 Wazuh filters using the RESTful API TST-031

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • Wazuh agent installed and enrolled with the manager in remote machine
  • Wazuh agent installed and enrolled with the manager in local machine
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)
  • Deployment of Suricata version 7.0.3 on the remote monitoring machine inside a container
  • GNU nano, version 6.2
  • curl version 7.81.0
  • Wazuh agent configured to integrate with Suricata
  • GRE tunnel created from the remote machine to the local monitoring machine which mirrors all the ingress and egress network traffic from the remote machine to the local monitoring machine

Test steps

  1. Get the Wazuh API token.
  2. Run the following query to apply the filter: curl -k -X GET "https://[Wazuh Manager IP address]:55000/manager/logs?limit=500&pretty=true&q=data.tunnel.src_ip=[IP of the tunnel source interface]" -H "Authorization: Bearer $TOKEN"
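The query in step 2 combines standard paging parameters with a q filter; the sketch below shows how such a URL can be assembled (the IP values are placeholders, not from a real deployment):

```python
from urllib.parse import urlencode

# Placeholder addresses; substitute the real manager IP and tunnel source IP.
manager_ip = "172.17.0.1"
params = {
    "limit": 500,                        # cap the number of returned log lines
    "pretty": "true",                    # human-readable JSON
    "q": "data.tunnel.src_ip=10.0.0.2",  # filter on the GRE tunnel source
}
url = f"https://{manager_ip}:55000/manager/logs?{urlencode(params)}"
print(url)
```

The request itself is then sent with the Authorization: Bearer header, as in the curl command in step 2.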

Expected outcome

  1. The output of the query should show JSON-decoded logs, and at the end there should be a success message. "message: Logs were successfully read, error: 0"

Parent links: SRS-024 Event Querying Capability

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 4
test_data see referenced files
version 0.2

2.14 Wazuh filters using the Wazuh Dashboard TST-032

Preconditions and setup actions

  • Docker version 26.0.0
  • User with root privileges
  • Docker Compose version v2.25.0
  • curl version 7.81.0
  • Wazuh agent installed and enrolled with the manager in remote machine
  • Wazuh agent installed and enrolled with the manager in local machine
  • Deployment of Wazuh Dashboard on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Indexer on the local monitoring machine inside a container (4.8.1)
  • Deployment of Wazuh Manager on the local monitoring machine inside a container (4.8.1)
  • Wazuh agent configured to integrate with Suricata
  • GRE tunnel created from the remote machine to the local monitoring machine which mirrors all the ingress and egress network traffic from the remote machine to the local monitoring machine

Test steps

  1. On the Wazuh dashboard navigate to Threat Hunting and filter by data.tunnel.src_ip:[IP remote machine].

Expected outcome

  1. The Threat Hunting view should show only the events matching the filter, i.e., alerts whose data.tunnel.src_ip equals the remote machine's IP.

Parent links: SRS-011 Network Event Visualization

Attribute Value
platform Ubuntu 22.04.4 LTS
execution_type M
verification_method I
release alpha
complexity 2
test_data see referenced files
version 0.2

2.15 Map a detected event to MITRE ATT&CK TST-036

Preconditions and setup actions

  • Wazuh central component deployed.
  • Presence of alerts collected in Wazuh indexer.

Test steps

  1. Open Wazuh Dashboard
  2. Go to Threat Hunting > Events > Add Filter and choose rule.mitre.id exists.
  3. Inspect filtered output (possibly extend time interval for more results).

Expected outcome

  1. In the JSON output, the rule.mitre.id key should include the matching MITRE IDs, e.g., T1078, T1021:
"mitre": {
      "technique": [
        "Valid Accounts",
        "Remote Services"
      ],
      "id": [
        "T1078",
        "T1021"
      ],
      "tactic": [
        "Defense Evasion",
        "Persistence",
        "Privilege Escalation",
        "Initial Access",
        "Lateral Movement"
      ]
    },
    "id": "5715",
    "gpg13": [
      "7.1",
      "7.2"
    ]
  }
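In the excerpt above, the id and technique arrays are parallel; a small sketch pairing them (the mitre dict is a trimmed copy of the sample alert, and the mapping names are the standard ATT&CK ones):

```python
# Trimmed copy of the rule.mitre block from the sample alert above.
mitre = {
    "technique": ["Valid Accounts", "Remote Services"],
    "id": ["T1078", "T1021"],
}

# Pair each MITRE ATT&CK technique ID with its human-readable name.
pairs = dict(zip(mitre["id"], mitre["technique"]))
print(pairs)  # {'T1078': 'Valid Accounts', 'T1021': 'Remote Services'}
```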

Parent links: SRS-025 MITRE ATT&CK Mapping

Attribute Value
platform MacOS, Windows, GNU/Linux
execution_type Automated/Manual
verification_method I
release alpha
complexity 2
test_data see referenced files
version 0.2

2.16 Visualize IDPS-ESCAPE high level architecture TST-040

Preconditions and setup actions

  • A copy of ADBox project repository

Test steps

  1. Open the docs/traceability/HARC.html file using the browser

Expected outcome

  1. Web page displaying the IDPS-ESCAPE architecture diagrams.

Parent links: SRS-045 High-Level Architecture Overview

Attribute Value
platform MacOS, Windows, GNU/Linux
execution_type M
verification_method I
release alpha
complexity 1
test_data see referenced files
version 0.2

3.0 RADAR test case specifications

3.1 Setup RADAR foundation TST-041

Preconditions and setup actions

  • A test controller host/node is available:
    • A Linux machine/VM (the Ansible control node) is available to the tester.
    • OS: recent GNU/Linux distribution, preferably an Ubuntu distribution (22.04+).
  • A second test node (accessible via the controller node) is available, running a GNU/Linux distribution, e.g. Debian GNU/Linux 13 (trixie).
  • Ansible is installed on the controller host under the root account.
  • Targeted Wazuh manager and agent versions: 4.14.1
  • Environment values such as OS_URL, OS_USER, OS_PASS, DASHBOARD_URL, DASHBOARD_USER, DASHBOARD_PASS and SMTP credentials are available to the tester. The tester has network access (SSH) from the test controller node to controlled endpoints.
  • If either the Wazuh agent (aka agent) or the Wazuh manager (aka manager) is chosen to be remote:
    • the remote agent/manager host needs to have Docker and Docker Compose installed, following the official Docker and Docker Compose documentation.
    • a user with sudo access is available on the remote agent/manager host, and the tester has SSH access to that user from the test controller host.
  • If the agents are to be deployed on remote nodes, the Wazuh agents must be installed on the monitored endpoints following the official documentation.
  • The agent must be registered with the Wazuh manager following the official documentation.
  • The DECIPHER Stack (DECIPHER, FlowIntel and MISP) is deployed and reachable from the Wazuh manager. The validated version is available in the DECIPHER folder of the SATRAP-DL repository, tested against v0.4 of SATRAP-DL. Once deployed, configure the DECIPHER_BASE_URL variable in .env to point to the running DECIPHER instance.

Test steps

  1. On the controller host, change the working directory to radar:
cd radar
  2. Create the .env file with the required variables:
# =======================
# OpenSearch Configuration
# =======================
OS_URL=https://192.168.5.4:9200  # Change accordingly
OS_USER=admin
OS_PASS=SecretPassword
OS_VERIFY_SSL="/app/config/wazuh_indexer_ssl_certs/root-ca.pem"

DASHBOARD_URL=https://192.168.5.4  # Change accordingly
DASHBOARD_USER=admin
DASHBOARD_PASS=SecretPassword
DASHBOARD_VERIFY_SSL="/app/config/wazuh_indexer_ssl_certs/root-ca.pem"

WAZUH_API_URL=https://192.168.5.4:55000  # Change accordingly
WAZUH_AUTH_USER=wazuh-wui
WAZUH_AUTH_PASS=MyS3cr37P450r.*-

WAZUH_AGENT_VERSION=4.14.1-1
WAZUH_MANAGER_ADDRESS=192.168.5.4  # Change accordingly

WEBHOOK_NAME="RADAR_Webhook"
WEBHOOK_URL=http://192.168.5.4:8080/notify  # Change accordingly

# =======================
# Logging
# =======================

AR_LOG_FILE=/var/ossec/logs/active-responses.log
AR_RISK_CONFIG=/var/ossec/active-response/bin/ar.yaml

# =======================
# SMTP
# =======================

SMTP_HOST=SMTP_HOST # Change accordingly
SMTP_PORT=SMTP_PORT # Change accordingly
SMTP_USER=SMTP_USER # Change accordingly
SMTP_PASS=SMTP_PASS # Change accordingly
EMAIL_TO=RECEIVER_EMAIL@example.com # Change accordingly
SMTP_STARTTLS=yes

# =======================
# DECIPHER
# =======================

DECIPHER_BASE_URL=http://localhost:8000  # Change IP accordingly
DECIPHER_VERIFY_SSL=VERIFY_SSL           # Change accordingly
DECIPHER_TIMEOUT_SEC=30                  # Change accordingly
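Before continuing, it can help to verify that .env actually defines everything the later steps read. A minimal sketch, checking against an inline sample; the variable list is taken from the sample above and should be extended to match your scenario:

```shell
#!/bin/sh
# Sketch: check that a .env file defines the required variables.
# For stand-alone illustration a small sample file is generated; point
# ENV_FILE at the real radar/.env when using this during the test.
ENV_FILE=$(mktemp)
cat > "$ENV_FILE" <<'EOF'
OS_URL=https://192.168.5.4:9200
OS_USER=admin
OS_PASS=SecretPassword
EOF

MISSING=""
for var in OS_URL OS_USER OS_PASS DASHBOARD_URL WAZUH_API_URL SMTP_HOST; do
    grep -q "^${var}=" "$ENV_FILE" || MISSING="$MISSING $var"
done
echo "missing:$MISSING"
```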

  3. In case either the agent or the manager is remote:

    3.1. Change inventory.yaml based on remote agent/manager data, user credentials and docker container names. Change is needed:

    • for remote agents, the endpoint part wazuh_agents_ssh must be configured.
    • for a remote manager, the endpoint part wazuh_manager_ssh must be configured.
    • in each of the above two, you may need to define an ansible_ssh_private_key_file: <path-to-ssh-key> attribute (to enable access from the control node to the controlled node)

    3.2. Create a vault for each host in inventory.yaml (IMPORTANT: the vault name and host name must match):

    • for remote agents: ansible-vault create host_vars/edge.vm.yml
    • for remote manager: ansible-vault create host_vars/mgr.remote.yml

    3.3. Assign a vault password and set sudo password: ansible_become_password: <selected-user-sudo-password>

  4. Check whether a Wazuh manager already exists on the targeted endpoint (local/remote); this is signaled via the manager_exists flag used by build-radar.sh:

    4.1. In case the manager does not exist, certificates must be issued matching the MANAGER_IP and INDEXER_IP according to the documentation in config/wazuh_indexer_ssl_certs/README.md; in short, once you have adapted config/certs.yml according to your endpoints (IP address and container name), from within the config folder, run

    docker run --rm -it -v "$(pwd)/certs.yml:/config/certs.yml:ro" -v "$(pwd)/wazuh_indexer_ssl_certs:/certificates" wazuh/wazuh-certs-generator:0.0.2

    4.2. If the manager is already in place, certificates must be copied into config/wazuh_indexer_ssl_certs.
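Whichever branch of step 4 applies, the presence of the certificates can be checked with a short loop. The file names below (root-ca.pem plus admin certificates) are assumptions based on the wazuh-certs-generator defaults; adapt them to the names in your certs.yml.

```shell
#!/bin/sh
# Sketch: verify expected certificate files exist before starting the stack.
CERT_DIR="${CERT_DIR:-config/wazuh_indexer_ssl_certs}"
for f in root-ca.pem admin.pem admin-key.pem; do
    if [ -f "$CERT_DIR/$f" ]; then
        echo "found:   $f"
    else
        echo "MISSING: $f"
    fi
done
```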

Expected outcome

  1. On the controller host, the vaults are created and SSH access is set up correctly:

    • if the agent is chosen to be remote:

    3.1 The vault is created and exists in host_vars/: ansible-vault view host_vars/edge.vm.yml

    3.2 Ansible reaches the SSH group: ansible -i inventory.yaml wazuh_agents_ssh -m ping --ask-vault-pass

    • if the manager is chosen to be remote:

    3.1 The vault is created and exists in host_vars/: ansible-vault view host_vars/mgr.remote.yml

    3.2 Ansible reaches the SSH group: ansible -i inventory.yaml wazuh_manager_ssh -m ping --ask-vault-pass

  2. The certificates are available under config/wazuh_indexer_ssl_certs/:

ls config/wazuh_indexer_ssl_certs/

Child links: TRP-030 TCER: Setup RADAR foundation

Attribute Value
platform GNU/Linux (Ubuntu+Debian)
execution_type Manual
verification_method T
release alpha
complexity 3
test_data see referenced files
version 0.8

3.2 Build suspicious login TST-042

Preconditions and setup actions

  • RADAR is set up according to TST-041

Test steps

  1. On the controller host, change the working directory to radar:
cd radar
  2. Build RADAR with SCENARIO_NAME suspicious_login; if the manager is local, the command should be run with sudo:
./build-radar.sh suspicious_login --agent remote/local --manager remote/local --manager_exists true/false

In this step, please choose options according to your setup:

  • --agent Where agents live: local (docker-compose.agents.yml) | remote (SSH endpoints)

  • --manager Where manager lives: local (docker-compose.core.yml) | remote (SSH host)

  • --manager_exists Whether the manager already exists at that location:

    • true : do not bootstrap a manager
    • false : bootstrap a manager
  • --ssh-key Optional: path to the SSH private key used for remote manager/agent access. If not provided, defaults to: $HOME/.ssh/id_ed25519

  3. To clean up the test environment only after a successful test execution AND a successful execution of TST-046:
  • In case the manager is chosen to be local, in the controller node/host, run these commands:
./stop-radar.sh --manager local
  • In case the manager is chosen to be remote and bootstrapped, in the controller node/host, run these commands:
./stop-radar.sh --manager remote
  • In case the agent is chosen to be local, in the controller node/host, run this command:
./stop-radar.sh --agent local
  • In case the agent is chosen to be remote, in the controller node/host, run this command:
./stop-radar.sh --agent remote

Expected outcome

2. Check the setup

In Wazuh Manager

Connect to the Wazuh manager to be able to perform the following verifications:

docker exec -it wazuh.manager bash
  • 2.1 Custom 0310 SSH decoder deployed; verify this by running
ls -l /var/ossec/etc/decoders/0310-ssh.xml

to ensure that 0310-ssh.xml exists and is owned by root:wazuh with mode 0640.

  • 2.2 Custom 0310 SSH decoder should contain the decoders sshd-*-with-radar (e.g., including RADAR fields like ASN, country, geo_velocity)
tail -n20 /var/ossec/etc/decoders/0310-ssh.xml

to ensure that RADAR enrichment fields will be parsed.

  • 2.3 Default 0310 decoder is excluded in ossec.conf; verify via
grep '<decoder_exclude>0310-ssh_decoders.xml</decoder_exclude>' /var/ossec/etc/ossec.conf

to ensure that a <decoder_exclude>0310-ssh_decoders.xml</decoder_exclude> line appears (ideally once) under <ruleset>.

  • 2.4 RADAR rules for suspicious_login are in /var/ossec/etc/rules/; verify via
ls -al /var/ossec/etc/rules/ | grep "suspicious-login"

to ensure the rules for suspicious login exist.

  • 2.5 Suspicious login rules are bound to active responses; verify via
grep -A5 'radar_ar' /var/ossec/etc/ossec.conf

to ensure that the suspicious login rule ID(s) reference the intended <group>, <options> or <active-response> entries that trigger radar_ar.py.

  • 2.6 Agent is registered with the manager; verify via
/var/ossec/bin/agent_control -l

to ensure that an entry exists for each agent.

  • 2.7 Agent configurations are in place; verify via
cat /var/ossec/etc/shared/default/agent.conf | grep "suspicious_login"

to ensure that agent configurations for suspicious login exist.

  • 2.8 Wazuh Manager is running after restart; verify via
/var/ossec/bin/wazuh-control status

to ensure that status output indicates the manager is running with no error messages.

In endpoints running agents

  • 2.9 RADAR helper directory and script are installed on the SSH agent host; verify via
ls -ld /opt/radar
ls -l /opt/radar/radar-helper.py

to ensure /opt/radar exists with mode 0755 and owner root:root; radar-helper.py exists with mode 0750 and owner root:root.

  • 2.10 radar-helper.service systemd unit is installed, enabled, and running; verify via
systemctl status radar-helper.service

to ensure the service is active.
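The manager-side checks 2.1–2.8 above lend themselves to a single pass/fail script run inside the wazuh.manager container. A sketch covering a subset of the checks; paths and markers are those used in the steps above:

```shell
#!/bin/sh
# Sketch: consolidated manager-side verification for the suspicious_login
# build. Each check prints PASS or FAIL instead of relying on manual review.
check() {  # usage: check <label> <command...>
    label="$1"; shift
    if "$@" >/dev/null 2>&1; then echo "PASS: $label"; else echo "FAIL: $label"; fi
}

check "custom 0310 decoder present"   test -f /var/ossec/etc/decoders/0310-ssh.xml
check "default 0310 decoder excluded" grep -q '<decoder_exclude>0310-ssh_decoders.xml</decoder_exclude>' /var/ossec/etc/ossec.conf
check "suspicious-login rules present" sh -c 'ls /var/ossec/etc/rules/ | grep -q suspicious-login'
check "radar_ar bound in ossec.conf"  grep -q 'radar_ar' /var/ossec/etc/ossec.conf
check "shared agent config present"   grep -q 'suspicious_login' /var/ossec/etc/shared/default/agent.conf
```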

Post-test cleanup

  • 3. Check the cleanup after successful validations; the following command should list no running containers:
docker ps

Parent links: SRS-051 RADAR scenario: suspicious login, SRS-061 RADAR: Tiered active response logic

Child links: TRP-031 TCER: Build suspicious login

Attribute Value
platform GNU/Linux (Ubuntu+Debian)
execution_type Manual
verification_method T
release alpha
complexity 3
test_data see referenced files
version 0.8

3.3 Build non-whitelist GeoIP detection TST-043

Preconditions and setup actions

  • RADAR is set up according to TST-041

Test steps

  1. On the controller host, change the working directory to radar:
cd radar
  2. Build RADAR with SCENARIO_NAME geoip_detection; if the manager is local, the command should be run with sudo:
./build-radar.sh geoip_detection --agent remote/local --manager remote/local --manager_exists true/false

In this step, please choose options according to your setup:

  • --agent Where agents live: local (docker-compose.agents.yml) | remote (SSH endpoints)

  • --manager Where manager lives: local (docker-compose.core.yml) | remote (SSH host)

  • --manager_exists Whether the manager already exists at that location:

    • true : do not bootstrap a manager
    • false : bootstrap a manager
  • --ssh-key Optional: path to the SSH private key used for remote manager/agent access. If not provided, defaults to: $HOME/.ssh/id_ed25519

  3. To clean up the test environment only after a successful test execution AND a successful execution of TST-047:
  • In case the manager is chosen to be local, in the controller node/host, run these commands:
./stop-radar.sh --manager local
  • In case the manager is chosen to be remote and bootstrapped, in the controller node/host, run these commands:
./stop-radar.sh --manager remote
  • In case the agent is chosen to be local, in the controller node/host, run this command:
./stop-radar.sh --agent local
  • In case the agent is chosen to be remote, in the controller node/host, run this command:
./stop-radar.sh --agent remote

Expected outcome

2. Check the setup

In Wazuh Manager

Connect to the Wazuh manager to be able to perform the following verifications:

docker exec -it wazuh.manager bash
  • 2.1 Whitelist file exists
ls -l /var/ossec/etc/lists/whitelist_countries

to ensure that /var/ossec/etc/lists/whitelist_countries exists, its contents match the scenario’s whitelist_countries, and its owner is root:wazuh, with mode 0664.

  • 2.2 ossec.conf has <list> entry for whitelist_countries
grep -n 'whitelist_countries' /var/ossec/etc/ossec.conf

to ensure under <ruleset>, there is a <list>...whitelist_countries</list> entry with the RADAR marker for geoip_detection.

  • 2.3 Custom 0310 SSH decoder deployed; verify this by running
ls -l /var/ossec/etc/decoders/0310-ssh.xml

to ensure that 0310-ssh.xml exists and is owned by root:wazuh with mode 0640.

  • 2.4 Default 0310 decoder is excluded in ossec.conf; verify via
grep '<decoder_exclude>0310-ssh_decoders.xml</decoder_exclude>' /var/ossec/etc/ossec.conf

to ensure that a <decoder_exclude>0310-ssh_decoders.xml</decoder_exclude> line appears (ideally once) under <ruleset>.

  • 2.5 RADAR rules for geoip_detection exist
ls -al /var/ossec/etc/rules/ | grep "geoip-detection"

to ensure the rules for geoip detection exist.

  • 2.6 GeoIP non-whitelist rules are bound to active responses
grep -A10 'radar_ar' /var/ossec/etc/ossec.conf

to check GeoIP non-whitelist rule ID(s) reference the correct <active-response>.

  • 2.7 Active response config files are in place
ls -al /var/ossec/active-response/bin/ | grep "ar.yaml"
ls -al /var/ossec/active-response/bin/ | grep "pyflowintel"

to check active response config files are copied.

  • 2.8 Agent configurations are in place; verify via
cat /var/ossec/etc/shared/default/agent.conf | grep "geoip_detection"

to ensure that agent configurations for geoip detection exist.

In endpoints running agents

  • 2.9 RADAR helper directory and script are installed on the SSH agent host.
ls -ld /opt/radar
ls -l /opt/radar/radar-helper.py

to check /opt/radar exists with mode 0755 and owner root:root; radar-helper.py exists with mode 0750 and owner root:root.

  • 2.10 radar-helper.service systemd unit is installed, enabled, and running.
systemctl status radar-helper.service

to check that the service is active.

Post-test cleanup

  • 3. Check the cleanup after successful validations; the following command should list no running containers:
docker ps

Parent links: SRS-055 RADAR scenario: Geo-IP AC via whitelisting, SRS-061 RADAR: Tiered active response logic

Child links: TRP-032 TCER: Build non-whitelist GeoIP detection

Attribute Value
platform GNU/Linux (Ubuntu+Debian)
execution_type Manual
verification_method T
release alpha
complexity 3
test_data see referenced files
version 0.8

3.4 Build log volume abnormal growth TST-044

Preconditions and setup actions

  • RADAR is set up according to TST-041

Test steps

  1. On the controller host, change the working directory to radar:
cd radar
  2. Build RADAR with SCENARIO_NAME log_volume; if the manager is local, the command should be run with sudo:
./build-radar.sh log_volume --agent remote/local --manager remote/local --manager_exists true/false

In this step, please choose options according to your setup:

  • --agent Where agents live: local (docker-compose.agents.yml) | remote (SSH endpoints)

  • --manager Where manager lives: local (docker-compose.core.yml) | remote (SSH host)

  • --manager_exists Whether the manager already exists at that location:

    • true : do not bootstrap a manager
    • false : bootstrap a manager
  • --ssh-key Optional: path to the SSH private key used for remote manager/agent access. If not provided, defaults to: $HOME/.ssh/id_ed25519

  3. To clean up the test environment only after a successful test execution AND a successful execution of TST-048:
  • In case the manager is chosen to be local, in the controller node/host, run these commands:
./stop-radar.sh --manager local
  • In case the manager is chosen to be remote and bootstrapped, in the controller node/host, run these commands:
./stop-radar.sh --manager remote
  • In case the agent is chosen to be local, in the controller node/host, run this command:
./stop-radar.sh --agent local
  • In case the agent is chosen to be remote, in the controller node/host, run this command:
./stop-radar.sh --agent remote

Expected outcome

2. Check the setup

In Wazuh Manager

Connect to the Wazuh manager to be able to perform the following verifications:

docker exec -it wazuh.manager bash
  • 2.1 RADAR snippet for log_volume is present in ossec.conf
grep -n 'RADAR: log_volume' /var/ossec/etc/ossec.conf

to check ossec.conf contains a RADAR: log_volume BEGIN/END block with the log_volume-related settings.

  • 2.2 RADAR decoders for log_volume exist
ls -al /var/ossec/etc/decoders/ | grep "log-volume"

to check decoders for log volume are in place.

  • 2.3 RADAR rules for log_volume exist
ls -al /var/ossec/etc/rules/ | grep "log-volume"

to check rules for log volume are in place.

  • 2.4 OpenSearch template radar-log-volume is present
curl -u "$OS_USER:$OS_PASS" -k "$OS_URL/_index_template/radar-log-volume?pretty"

to ensure HTTP status is 200 and the template JSON exists with mappings for log_bytes as a numeric type.

  • 2.5 log_volume rules are bound to active responses
grep -A5 'radar_ar' /var/ossec/etc/ossec.conf

to check Log volume anomaly rule ID(s) reference radar_ar.py as intended.

  • 2.6 Agent configurations are in place; verify via
cat /var/ossec/etc/shared/default/agent.conf | grep "log_volume_metric"

to ensure that agent configurations for log volume exist.
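Check 2.4 above can be tightened into a scripted assertion on the template mapping. The JSON below is a minimal stand-in for the template response, and the expectation that log_bytes is mapped as long is an assumption (any numeric type satisfies the check); with a live indexer, replace BODY with the curl output.

```shell
#!/bin/sh
# Sketch: confirm the radar-log-volume template maps log_bytes numerically.
# Live variant (credentials from .env):
#   BODY=$(curl -sk -u "$OS_USER:$OS_PASS" "$OS_URL/_index_template/radar-log-volume")

# Offline stand-in for illustration:
BODY='{"index_templates":[{"name":"radar-log-volume","index_template":{"template":{"mappings":{"properties":{"log_bytes":{"type":"long"}}}}}}]}'

case "$BODY" in
    *'"log_bytes":{"type":"long"'*) echo "log_bytes mapping OK" ;;
    *) echo "log_bytes mapping MISSING or non-numeric" ;;
esac
```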

In endpoints running agents

  • 2.7 RADAR helper directory and script are installed on the SSH agent host.
ls -ld /opt/radar
ls -l /opt/radar/radar-helper.py

to check /opt/radar exists with mode 0755 and owner root:root; radar-helper.py exists with mode 0750 and owner root:root.

  • 2.8 radar-helper.service systemd unit is installed, enabled, and running.
systemctl status radar-helper.service

to check that the service is active.

Post-test cleanup

  • 3. Check the cleanup after successful validations; the following command should list no running containers:
docker ps

Parent links: SRS-056 RADAR scenario: log size change, SRS-061 RADAR: Tiered active response logic

Child links: TRP-033 TCER: Build log volume abnormal growth

Attribute Value
platform GNU/Linux (Ubuntu+Debian)
execution_type Manual
verification_method T
release alpha
complexity 2
test_data see referenced files
version 0.8

3.5 Run RADAR for log volume abnormal growth TST-045

Preconditions and setup actions

  • RADAR is built according to the scenario TST-044.

Test steps

  1. On the controller host, change the working directory to radar:
cd radar
  2. After building the scenario, wait about 3 minutes for index creation and correct mapping. Then run RADAR with SCENARIO_NAME log_volume:
./run-radar.sh log_volume

Expected outcome

  2. In the Wazuh Dashboard:

    2.1. The detector LOG_VOLUME_DETECTOR is created with the correct configuration; it should be visible in the Anomaly Detection menu. The expected configuration can be found in config.yaml under the log_volume key.

    2.2. The monitor is created with a trigger pointing to the webhook; in the Alerting menu, the created monitor should be listed under the Monitors tab.
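The two dashboard checks can also be performed over the indexer REST API, using the OpenSearch anomaly detection and alerting plugin search endpoints. The endpoint paths, and the assumption that the detector can be found by its name, should be verified against your OpenSearch version:

```shell
#!/bin/sh
# Sketch: look up the created detector and monitor via the plugin APIs.
Q='{"query":{"match":{"name":"LOG_VOLUME_DETECTOR"}}}'
echo "$Q"

# Uncomment to run against a live indexer (credentials from .env):
# curl -sk -u "$OS_USER:$OS_PASS" -H 'Content-Type: application/json' \
#   "$OS_URL/_plugins/_anomaly_detection/detectors/_search" -d "$Q"
# curl -sk -u "$OS_USER:$OS_PASS" -H 'Content-Type: application/json' \
#   "$OS_URL/_plugins/_alerting/monitors/_search" -d '{"query":{"match_all":{}}}'
```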

Parent links: SRS-056 RADAR scenario: log size change, SRS-061 RADAR: Tiered active response logic

Child links: TRP-034 TCER: Run RADAR for log volume abnormal growth

Attribute Value
platform GNU/Linux (Ubuntu+Debian)
execution_type Manual
verification_method T
release alpha
complexity 3
test_data see referenced files
version 0.8

3.6 DECIPHER-RADAR detection validation for Suspicious login TST-046

Preconditions and setup actions

  • RADAR is set up according to TST-042

Test steps

  1. Configure the common and suspicious_login parts in /radar/radar-test-framework/simulate/scenarios/config.yaml.
  2. From the radar/ directory, run: bash ./simulate-radar.sh suspicious_login --agent remote

Expected outcome

2.1. The Ansible playbook executes successfully.

2.2. In the Wazuh Dashboard Discovery page, at least one of rule IDs 210012, 210013, 210020, or 210021 is triggered for the remote agent.

2.3. In Wazuh manager check alert tier:

grep "Risk computed" /var/ossec/logs/active-responses.log | tail -1
  • Expected risk_score range is between 0.0 and 1.0.
  • If tier is:
    • Tier 0:
      • No email is received at EMAIL_TO.
      • No FlowIntel case is created.
      • No mitigation command appears in /var/ossec/logs/active-responses.log.
    • Tier 1:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification, containing the risk score, tier, and FlowIntel case URL.
      • A FlowIntel case is created and can be accessed at the FlowIntel URL included in the email notification.
      • The FlowIntel case contains relevant information about the suspicious login event, related events found in MISP, and a breakdown of the risk score including the assigned tier.
      • Open the FlowIntel case at the URL in the email. Verify that misp_available: true appears in the case description, confirming that DECIPHER's analysis was enriched with MISP data.
      • The FlowIntel case is tagged with low priority.
      • No mitigation command appears in /var/ossec/logs/active-responses.log.
    • Tier 2:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification, containing the risk score, tier, and FlowIntel case URL.
      • A FlowIntel case is created and can be accessed at the FlowIntel URL included in the email notification.
      • The FlowIntel case contains relevant information about the suspicious login event, related events found in MISP, and a breakdown of the risk score including the assigned tier.
      • Open the FlowIntel case at the URL in the email. Verify that misp_available: true appears in the case description, confirming that DECIPHER's analysis was enriched with MISP data.
      • The FlowIntel case is tagged with medium priority.
      • Mitigation commands appear in /var/ossec/logs/active-responses.log (check mitigation commands in ar.yaml field mitigations_tier2 for suspicious_login).
    • Tier 3:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification, containing the risk score, tier, and FlowIntel case URL.
      • A FlowIntel case is created and can be accessed at the FlowIntel URL included in the email notification.
      • The FlowIntel case contains relevant information about the suspicious login event, related events found in MISP, and a breakdown of the risk score including the assigned tier.
      • Open the FlowIntel case at the URL in the email. Verify that misp_available: true appears in the case description, confirming that DECIPHER's analysis was enriched with MISP data.
      • The FlowIntel case is tagged with high priority.
      • Mitigation commands appear in /var/ossec/logs/active-responses.log (check mitigation commands in ar.yaml field mitigations_tier3 for suspicious_login). Considering the mitigation check:
        • firewall-drop: check the IP addresses are blocked in the agent
        • lock_user_linux.sh: check the user is locked in the agent
        • terminate_service.sh: check the service is terminated in the agent
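Step 2.3 can be scripted: extract risk_score and tier from the last "Risk computed" line and assert the documented [0.0, 1.0] range. The exact line format below is an assumption; adapt the sed patterns to the real active-responses.log output.

```shell
#!/bin/sh
# Sketch: parse risk_score and tier from a "Risk computed" log line.
# Sample line for illustration; in practice use:
#   LINE=$(grep "Risk computed" /var/ossec/logs/active-responses.log | tail -1)
LINE='radar_ar: Risk computed: risk_score=0.72 tier=2'

SCORE=$(echo "$LINE" | sed -n 's/.*risk_score=\([0-9.]*\).*/\1/p')
TIER=$(echo "$LINE" | sed -n 's/.*tier=\([0-3]\).*/\1/p')
echo "score=$SCORE tier=$TIER"

# Range check for the score (awk handles the float comparison):
if awk -v s="$SCORE" 'BEGIN { exit !(s >= 0.0 && s <= 1.0) }'; then
    echo "risk_score in range"
else
    echo "risk_score OUT OF RANGE"
fi
```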

Parent links: SRS-059 RADAR Scenario Simulation Framework

Child links: TRP-035 TCER: DECIPHER-RADAR detection validation for suspicious login

Attribute Value
platform GNU/Linux (Dockerized C5-DEC deployment environment)
execution_type Automated
verification_method T
release alpha
complexity 1
test_data see referenced files
version 0.8

3.7 Detection validation for GeoIP detection TST-047

Preconditions and setup actions

  • RADAR is set up according to TST-043

Test steps

  1. Configure the common and geoip_detection parts in /radar/radar-test-framework/simulate/scenarios/config.yaml.
  2. From the radar/ directory, run: bash ./simulate-radar.sh geoip_detection --agent remote

Expected outcome

2.1. The Ansible playbook executes successfully.

2.2. In the Wazuh Dashboard Discovery page, a rule with ID matching 10090* is triggered for the remote agent.

2.3. In Wazuh manager check alert tier:

grep "Risk computed" /var/ossec/logs/active-responses.log | tail -1
  • Expected risk_score range is between 0.0 and 1.0.
  • If tier is:
    • Tier 0:
      • No email is received at EMAIL_TO.
      • No mitigation command appears in /var/ossec/logs/active-responses.log.
    • Tier 1:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification containing the risk score and tier.
      • No mitigation command appears in /var/ossec/logs/active-responses.log.
    • Tier 2:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification containing the risk score and tier.
      • Mitigation commands appear in /var/ossec/logs/active-responses.log (check mitigation commands in ar.yaml field mitigations_tier2 for geoip_detection).
    • Tier 3:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification containing the risk score and tier.
      • Mitigation commands appear in /var/ossec/logs/active-responses.log (check mitigation commands in ar.yaml field mitigations_tier3 for geoip_detection). Considering the mitigation check:
        • firewall-drop: check the IP addresses are blocked in the agent
        • lock_user_linux.sh: check the user is locked in the agent
        • terminate_service.sh: check the service is terminated in the agent

Parent links: SRS-059 RADAR Scenario Simulation Framework

Child links: TRP-036 TCER: Detection validation for GeoIP detection

Attribute Value
platform GNU/Linux (Dockerized C5-DEC deployment environment)
execution_type Automated
verification_method T
release alpha
complexity 1
test_data see referenced files
version 0.8

3.8 Detection validation for Log volume abnormal growth TST-048

Preconditions and setup actions

  • RADAR is set up according to TST-045

Test steps

  1. Configure the common and log_volume parts in /radar/radar-test-framework/simulate/scenarios/config.yaml. The values for abnormal growth should be adapted to the detector configuration: e.g., if the suppression rule threshold is 10%, the simulated values must be large enough to exceed it.
  2. From the radar/ directory, run:
./simulate-radar.sh log_volume --agent remote
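The sizing note in step 1 can be made concrete: given the current log volume and the suppression threshold, compute the minimum spike that will pass the rule. Baseline and threshold below are illustrative numbers, not values from the detector configuration.

```shell
#!/bin/sh
# Sketch: minimum spike size (bytes) needed to exceed a relative-growth
# suppression threshold. With a 10% rule, growth must exceed 10% of the
# current volume of the watched directory.
BASELINE=1048576      # current size in bytes (illustrative: 1 MiB)
THRESHOLD_PCT=10      # suppression rule threshold in percent

SPIKE=$(awk -v b="$BASELINE" -v p="$THRESHOLD_PCT" \
    'BEGIN { printf "%d", b * p / 100 + 1 }')
echo "spike must exceed: $SPIKE bytes"
```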

Expected outcome

2.1. The Ansible playbook executes successfully.

2.2. In the Wazuh Dashboard Discovery page, rule ID 100309 is triggered for the remote agent.

2.3. Verify that the spike has happened on the agent:

echo /var/log $(du -sb /var/log 2>/dev/null | awk '{print $1}')

2.4. In Wazuh manager check alert tier:

grep "Risk computed" /var/ossec/logs/active-responses.log | tail -1
  • Expected risk_score range is between 0.0 and 1.0.
  • If tier is:
    • Tier 0:
      • No email is received at EMAIL_TO.
      • No mitigation command appears in /var/ossec/logs/active-responses.log.
    • Tier 1:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification containing the risk score and tier.
      • No mitigation command appears in /var/ossec/logs/active-responses.log.
    • Tier 2:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification containing the risk score and tier.
      • Mitigation commands appear in /var/ossec/logs/active-responses.log (check mitigation commands in ar.yaml field mitigations_tier2 for log_volume).
    • Tier 3:
      • The email address specified in EMAIL_TO (from .env) receives a RADAR alert notification containing the risk score and tier.
      • Mitigation commands appear in /var/ossec/logs/active-responses.log (check mitigation commands in ar.yaml field mitigations_tier3 for log_volume). Considering the mitigation check:
        • firewall-drop: check the IP addresses are blocked in the agent
        • lock_user_linux.sh: check the user is locked in the agent
        • terminate_service.sh: check the service is terminated in the agent 2.5. Verify that the spike file (log_volume.target_dir / log_volume.spike_filename) is removed after cleanup_minutes have elapsed (if cleanup_minutes > 0), or remove it manually if cleanup_minutes is set to 0.

Parent links: SRS-059 RADAR Scenario Simulation Framework

Child links: TRP-037 TCER: Detection validation for log volume abnormal growth

Attribute Value
platform GNU/Linux (Dockerized C5-DEC deployment environment)
execution_type Automated
verification_method T
release alpha
complexity 1
test_data see referenced files
version 0.8

3.9 RADAR deployment integrity verification TST-049

Preconditions and setup actions

  1. Wazuh host (vm-cyfort-radar):
  • A Wazuh multi-node Docker stack (master + worker + indexer + dashboard + nginx) is deployed from the Wazuh docker repository. All containers are running (Up status in docker ps).
  • The docker-compose.yml (or its override) uses host bind-mount directories for all Wazuh container volumes with separate volumes for master and worker nodes. The bind-mount root used in this test run is /srv/docker-data/wazuh/. All bind-mount paths must exist on the host filesystem before the containers are started.
  • An NFS snapshot of historical Wazuh data is accessible and mountable at /mnt/snapshots inside the indexer container. The snapshot data is restored into the multi-node Wazuh deployment and the OpenSearch cluster is green.
  • The user has sudo privileges on the Wazuh host.
  2. Controller host (vm-cyfort-1):
  • The RADAR v0.8.2 repository is checked out.
  • The following three configuration files in the RADAR repository root are correctly configured before running RADAR:

  • inventory.yaml: defines the SSH connection details for the Wazuh master and worker nodes (under wazuh_manager_ssh) and for at least one remote agent endpoint (under wazuh_agents_ssh). Other required variables are set:

        mgr.remote:
          ansible_host: WAZUH_HOST_IP
          ansible_user: linuxuser
          manager_mode: "docker_remote"
          manager_address: "WAZUH_HOST_IP"
          manager_address_public: "WAZUH_HOST_IP"
          manager_service_name: "wazuh.master"
          manager_container_name: "multi-node-wazuh.master-1"
          ansible_python_interpreter: /usr/bin/python3
          ansible_become: true
          ansible_become_method: sudo
          ansible_ssh_common_args: "-o StrictHostKeyChecking=no -o ConnectTimeout=10"
        mgr.remote1:
          ansible_host: WAZUH_HOST_IP
          ansible_user: linuxuser
          manager_mode: "docker_remote"
          manager_address: "WAZUH_HOST_IP"
          manager_address_public: "WAZUH_HOST_IP"
          manager_service_name: "wazuh.worker"
          manager_container_name: "multi-node-wazuh.worker-1"
          ansible_python_interpreter: /usr/bin/python3
          ansible_become: true
          ansible_become_method: sudo
          ansible_ssh_common_args: "-o StrictHostKeyChecking=no -o ConnectTimeout=10"
  • volumes.yml: maps each Wazuh container path to its corresponding host bind-mount path on the Wazuh host. The RADAR Ansible playbook reads this file to locate configuration files (ossec.conf, decoders, rules, lists, filebeat.yml) on the Wazuh host filesystem. Every entry must reflect the actual host paths configured in the Wazuh host docker-compose.yml. If volumes.yml does not match the real bind-mounts, the playbook will fail or deploy files to the wrong location.
services:
  wazuh.master:
    volumes:
      - /srv/docker-data/wazuh/manager/var/ossec/api/configuration:/var/ossec/api/configuration
      - /srv/docker-data/wazuh/manager/var/ossec/etc:/var/ossec/etc
      - /srv/docker-data/wazuh/manager/var/ossec/logs:/var/ossec/logs
      - /srv/docker-data/wazuh/manager/var/ossec/queue:/var/ossec/queue
      - /srv/docker-data/wazuh/manager/var/ossec/var/multigroups:/var/ossec/var/multigroups
      - /srv/docker-data/wazuh/manager/var/ossec/integrations:/var/ossec/integrations
      - /srv/docker-data/wazuh/manager/var/ossec/active-response/bin:/var/ossec/active-response/bin
      - /srv/docker-data/wazuh/manager/etc/filebeat:/etc/filebeat
      - /srv/docker-data/wazuh/manager/var/lib/filebeat:/var/lib/filebeat

  wazuh.worker:
    volumes:
      - /srv/docker-data/wazuh/worker/var/ossec/api/configuration:/var/ossec/api/configuration
      - /srv/docker-data/wazuh/worker/var/ossec/etc:/var/ossec/etc
      - /srv/docker-data/wazuh/worker/var/ossec/logs:/var/ossec/logs
      - /srv/docker-data/wazuh/worker/var/ossec/queue:/var/ossec/queue
      - /srv/docker-data/wazuh/worker/var/ossec/var/multigroups:/var/ossec/var/multigroups
      - /srv/docker-data/wazuh/worker/var/ossec/integrations:/var/ossec/integrations
      - /srv/docker-data/wazuh/worker/var/ossec/active-response/bin:/var/ossec/active-response/bin
      - /srv/docker-data/wazuh/worker/etc/filebeat:/etc/filebeat
      - /srv/docker-data/wazuh/worker/var/lib/filebeat:/var/lib/filebeat
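Because path drift between volumes.yml and the live bind-mounts is the dominant failure mode here, the comparison is worth scripting before the playbook runs. The following self-contained sketch uses hypothetical sample data: in practice the first list would be parsed from volumes.yml and the second from docker inspect on the Wazuh host.

```shell
# Sketch: flag drift between declared volumes.yml host paths and live mounts.
# Both lists below are hypothetical sample data (host_path:container_path).
DECLARED='/srv/docker-data/wazuh/manager/var/ossec/etc:/var/ossec/etc
/srv/docker-data/wazuh/manager/etc/filebeat:/etc/filebeat'

LIVE='/srv/docker-data/wazuh/manager/var/ossec/etc:/var/ossec/etc
/root/docker-data/wazuh/manager/etc/filebeat:/etc/filebeat'

DECL_FILE=$(mktemp); LIVE_FILE=$(mktemp)
echo "$DECLARED" | sort > "$DECL_FILE"
echo "$LIVE" | sort > "$LIVE_FILE"

# Any line unique to either side indicates a mismatched bind-mount.
DRIFT=$(diff "$DECL_FILE" "$LIVE_FILE" | grep "^[<>]" || true)
rm -f "$DECL_FILE" "$LIVE_FILE"

[ -z "$DRIFT" ] && echo "MOUNTS OK" || { echo "MOUNT DRIFT:"; echo "$DRIFT"; }
```

Here the filebeat entry drifts between the two lists, so the check reports it; with identical lists it prints MOUNTS OK.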
  • .env: contains all runtime secrets and configuration values consumed by the Ansible playbook and by active response scripts at runtime, including OpenSearch credentials (OS_URL, OS_USER, OS_PASS), Wazuh API credentials (WAZUH_API_URL, WAZUH_AUTH_USER, WAZUH_AUTH_PASS), SMTP settings for email notifications, the webhook address (WAZUH_MANAGER_ADDRESS), the active response log path (AR_LOG_FILE), and the risk config path (AR_RISK_CONFIG).
# =======================
# OpenSearch Configuration
# =======================
OS_URL=https://192.168.5.4:9200  # Change accordingly
OS_USER=admin
OS_PASS=SecretPassword
OS_VERIFY_SSL="/app/config/wazuh_indexer_ssl_certs/root-ca.pem"

DASHBOARD_URL=https://192.168.5.4  # Change accordingly
DASHBOARD_USER=admin
DASHBOARD_PASS=SecretPassword
DASHBOARD_VERIFY_SSL="/app/config/wazuh_indexer_ssl_certs/root-ca.pem"

WAZUH_API_URL=https://192.168.5.4:55000  # Change accordingly
WAZUH_AUTH_USER=wazuh-wui
WAZUH_AUTH_PASS=MyS3cr37P450r.*-

WAZUH_AGENT_VERSION=4.14.1-1
WAZUH_MANAGER_ADDRESS=192.168.5.4  # Change accordingly

WEBHOOK_NAME="RADAR_Webhook"
WEBHOOK_URL=http://192.168.5.4:8080/notify  # Change accordingly

# =======================
# Logging
# =======================

AR_LOG_FILE=/var/ossec/logs/active-responses.log
AR_RISK_CONFIG=/var/ossec/active-response/bin/ar.yaml

# =======================
# SMTP
# =======================

SMTP_HOST=SMTP_HOST # Change accordingly
SMTP_PORT=SMTP_PORT # Change accordingly
SMTP_USER=SMTP_USER # Change accordingly
SMTP_PASS=SMTP_PASS # Change accordingly
EMAIL_TO=RECEIVER_EMAIL@example.com # Change accordingly
SMTP_STARTTLS=yes
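A key missing from .env typically surfaces only at runtime, deep inside the playbook or an active response script, so a pre-flight completeness check is cheap insurance. The sketch below validates a throwaway sample file; in practice, point ENV_FILE at the real .env and extend REQUIRED as needed.

```shell
# Sketch: confirm every key consumed at runtime is defined in the env file.
# ENV_FILE is a throwaway sample here; use the real .env in practice.
ENV_FILE=$(mktemp)
printf 'OS_URL=https://192.168.5.4:9200\nOS_USER=admin\n' > "$ENV_FILE"

REQUIRED="OS_URL OS_USER OS_PASS WAZUH_API_URL WAZUH_AUTH_USER WAZUH_AUTH_PASS \
WAZUH_MANAGER_ADDRESS AR_LOG_FILE AR_RISK_CONFIG"

MISSING=""
for key in $REQUIRED; do
  grep -q "^${key}=" "$ENV_FILE" || MISSING="$MISSING $key"
done
rm -f "$ENV_FILE"

[ -z "$MISSING" ] && echo "ENV OK" || echo "ENV MISSING:$MISSING"
```

The sample file defines only two of the required keys, so the check lists the rest; a complete .env prints ENV OK.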

Test steps


1. Set session variables (Wazuh host)

Set these variables in your shell before running any pre-deployment or post-deployment check commands. They must remain set for the entire session.

SNAPSHOT_DIR=/srv/radar-integrity-snapshots
sudo mkdir -p $SNAPSHOT_DIR
sudo chown $USER:$USER $SNAPSHOT_DIR
DATE=$(date +%Y%m%d_%H%M%S)
INDEX_PATTERN="wazuh-alerts-4.x-2026.*,wazuh-archives-4.x-2026.*"
# Replace WAZUH_AUTH_USER:WAZUH_AUTH_PASS with the actual credentials from .env
TOKEN=$(curl -sk -u "WAZUH_AUTH_USER:WAZUH_AUTH_PASS" \
  -X POST "https://localhost:55000/security/user/authenticate?raw=true")

Choose a value for FINGERPRINT_CUTOFF. This timestamp defines the upper time boundary for all OpenSearch aggregation queries used in the index integrity check (Pre-05 and Post-06). It must be a UTC timestamp in the format YYYY-MM-DDTHH:MM:SS.000Z.

The value must fall within the restored snapshot data range, i.e., before the most recent document in the restored snapshot, so that the query window contains only frozen historical data. Choosing a time before any live agent activity begins guarantees that alerts written after the restore cannot enter the query window and produce a false mismatch between the pre-deployment and post-deployment fingerprints. Pass the same value, unchanged, to both Pre-05 and Post-06:

FINGERPRINT_CUTOFF="<chosen_cutoff>"   # e.g. 2026-03-12T23:59:59.000Z
echo "Session: DATE=$DATE  CUTOFF=$FINGERPRINT_CUTOFF"
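The guarantee this buys can be demonstrated offline: any document stamped after the cutoff never enters the query window, so counts taken before and after live ingestion resumes agree. A minimal sketch with hypothetical timestamps (ISO-8601 UTC strings sort correctly as plain strings):

```shell
# Sketch: documents stamped after the cutoff never enter the query window.
# Timestamps are hypothetical sample data.
CUTOFF_DEMO="2026-03-12T23:59:59.000Z"
SNAPSHOT_TS='2026-03-10T08:00:00.000Z
2026-03-11T14:30:00.000Z
2026-03-12T09:15:00.000Z'
# After the restore, live agents append newer documents...
LIVE_TS="$SNAPSHOT_TS
2026-03-13T10:00:00.000Z
2026-03-14T11:00:00.000Z"

# Count documents inside the frozen query window (timestamp <= cutoff).
PRE_COUNT=$(echo "$SNAPSHOT_TS" | awk -v c="$CUTOFF_DEMO" '$0 <= c' | wc -l)
POST_COUNT=$(echo "$LIVE_TS" | awk -v c="$CUTOFF_DEMO" '$0 <= c' | wc -l)

echo "pre=$PRE_COUNT post=$POST_COUNT"
[ "$PRE_COUNT" -eq "$POST_COUNT" ] && echo "window stable" || echo "window leaked"
```

A cutoff placed after live activity begins would make POST_COUNT grow and the fingerprints diverge, which is exactly the false mismatch described above.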

2. Pre-deployment snapshot (Wazuh host)

Run each check in order. Do not run any build-radar.sh command until all pre-deployment checks are complete.

Pre-01: Wazuh configuration backup

sudo cp /srv/docker-data/wazuh/manager/var/ossec/etc/ossec.conf \
  $SNAPSHOT_DIR/manager_ossec.conf_${DATE}.bak
sudo cp /srv/docker-data/wazuh/worker/var/ossec/etc/ossec.conf \
  $SNAPSHOT_DIR/worker_ossec.conf_${DATE}.bak
ls -lh $SNAPSHOT_DIR/*ossec.conf*${DATE}*

Pre-02: Active response scripts backup

sudo find /srv/docker-data/wazuh/manager/var/ossec/active-response/bin -type f \
  | sort | sudo xargs sha256sum \
  > $SNAPSHOT_DIR/manager_ar_hashes_${DATE}.sha256
echo "AR baseline files: $(wc -l < $SNAPSHOT_DIR/manager_ar_hashes_${DATE}.sha256)"

Pre-03: Wazuh config file hashes

sudo find /srv/docker-data/wazuh/manager/var/ossec/etc -type f \
  | sort | sudo xargs sha256sum \
  > $SNAPSHOT_DIR/manager_hashes_${DATE}.sha256

sudo find /srv/docker-data/wazuh/worker/var/ossec/etc -type f \
  | sort | sudo xargs sha256sum \
  > $SNAPSHOT_DIR/worker_hashes_${DATE}.sha256

echo "Manager: $(wc -l < $SNAPSHOT_DIR/manager_hashes_${DATE}.sha256) files"
echo "Worker:  $(wc -l < $SNAPSHOT_DIR/worker_hashes_${DATE}.sha256) files"

Pre-04: File permissions snapshot

sudo find /srv/docker-data/wazuh/manager/var/ossec/etc -type f \
  -printf "%m %u %g %p\n" | sort \
  > $SNAPSHOT_DIR/manager_permissions_${DATE}.txt

sudo find /srv/docker-data/wazuh/worker/var/ossec/etc -type f \
  -printf "%m %u %g %p\n" | sort \
  > $SNAPSHOT_DIR/worker_permissions_${DATE}.txt

echo "Permission entries: $(wc -l < $SNAPSHOT_DIR/manager_permissions_${DATE}.txt)"

Pre-05: Indexer document fingerprint

FINGERPRINT_FILE=$SNAPSHOT_DIR/index_fingerprint_${DATE}.txt
echo "Time range: 2026-01-01T00:00:00Z to ${FINGERPRINT_CUTOFF}" > $FINGERPRINT_FILE
echo "---" >> $FINGERPRINT_FILE

for INDEX in $(curl -sk -u admin:SecretPassword \
  "https://localhost:9200/_cat/indices/${INDEX_PATTERN}?h=index" | sort); do

  echo "=== $INDEX ===" >> $FINGERPRINT_FILE

  curl -sk -u admin:SecretPassword \
    "https://localhost:9200/$INDEX/_search" \
    -H "Content-Type: application/json" \
    -d "{
      \"size\": 0,
      \"query\": {\"range\": {\"timestamp\": {
        \"gte\": \"2026-01-01T00:00:00.000Z\",
        \"lte\": \"${FINGERPRINT_CUTOFF}\"
      }}},
      \"aggs\": {
        \"total_docs\": {\"value_count\": {\"field\": \"timestamp\"}},
        \"min_timestamp\": {\"min\": {\"field\": \"timestamp\"}},
        \"max_timestamp\": {\"max\": {\"field\": \"timestamp\"}},
        \"rule_id_distribution\": {
          \"terms\": {\"field\": \"rule.id\", \"size\": 2000, \"order\": {\"_key\": \"asc\"}},
          \"aggs\": {\"count\": {\"value_count\": {\"field\": \"timestamp\"}}}
        }
      }
    }" | python3 -c "
import json, sys
data = json.load(sys.stdin)
aggs = data.get('aggregations', {})
total = aggs.get('total_docs', {}).get('value', 0)
min_ts = aggs.get('min_timestamp', {}).get('value_as_string', 'N/A')
max_ts = aggs.get('max_timestamp', {}).get('value_as_string', 'N/A')
print(f'  total_docs: {total}')
print(f'  min_timestamp: {min_ts}')
print(f'  max_timestamp: {max_ts}')
buckets = aggs.get('rule_id_distribution', {}).get('buckets', [])
print(f'  rule_id_count: {len(buckets)}')
for b in buckets:
    print(f'  rule:{b[\"key\"]} count:{b[\"doc_count\"]}')
" >> $FINGERPRINT_FILE
  echo "---" >> $FINGERPRINT_FILE
done

echo "Fingerprint file: $(wc -l < $FINGERPRINT_FILE) lines written"

FILTER_PY=$(mktemp /tmp/radar_filter_XXXXXX.py)
cat > "$FILTER_PY" <<'EOF'
import sys, hashlib

def filter_file(path):
    """Drop the 'Time range:' header and any index block with total_docs 0."""
    with open(path) as f:
        lines = f.read().splitlines()
    out = []
    i = 0
    while i < len(lines):
        line = lines[i]
        if line.startswith("Time range:"):
            i += 1
            continue
        if line.startswith("=== ") and line.endswith(" ==="):
            block = [line]
            j = i + 1
            while j < len(lines) and not (lines[j].startswith("=== ") and lines[j].endswith(" ===")):
                block.append(lines[j])
                j += 1
            total_line = next((l for l in block if "total_docs:" in l), "")
            total = total_line.split(":")[-1].strip() if total_line else "?"
            if total != "0":
                out.extend(block)
            i = j
        else:
            out.append(line)
            i += 1
    return "\n".join(out)

text = filter_file(sys.argv[1])
print(hashlib.sha256(text.encode()).hexdigest())
EOF
python3 "$FILTER_PY" "$FINGERPRINT_FILE" > $SNAPSHOT_DIR/index_fingerprint_${DATE}.sha256
rm -f "$FILTER_PY"
echo "Fingerprint hash: $(cat $SNAPSHOT_DIR/index_fingerprint_${DATE}.sha256)"
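The filtering step has a precise contract: the "Time range:" header and every index block whose total_docs is 0 are removed before hashing, so an empty index appearing or disappearing between the pre and post runs cannot change the fingerprint. A self-contained sketch of that behavior on hypothetical sample input:

```shell
# Sketch: demonstrate the fingerprint filter contract on hypothetical input.
# The header line and the zero-document idx-2 block differ between samples,
# yet the filtered hashes agree.
SAMPLE_A='Time range: run A
=== idx-1 ===
  total_docs: 5
---
=== idx-2 ===
  total_docs: 0
---'
SAMPLE_B='Time range: run B
=== idx-1 ===
  total_docs: 5
---'

filter_fingerprint() {
  python3 -c "
import sys
lines = sys.stdin.read().splitlines()
out, i = [], 0
while i < len(lines):
    line = lines[i]
    if line.startswith('Time range:'):
        i += 1; continue
    if line.startswith('=== ') and line.endswith(' ==='):
        j = i + 1
        while j < len(lines) and not (lines[j].startswith('=== ') and lines[j].endswith(' ===')):
            j += 1
        block = lines[i:j]
        total = next((l for l in block if 'total_docs:' in l), '')
        if total.split(':')[-1].strip() != '0':
            out.extend(block)
        i = j
    else:
        out.append(line); i += 1
print('\n'.join(out))
"
}

HASH_A=$(echo "$SAMPLE_A" | filter_fingerprint | sha256sum | awk '{print $1}')
HASH_B=$(echo "$SAMPLE_B" | filter_fingerprint | sha256sum | awk '{print $1}')
[ "$HASH_A" = "$HASH_B" ] && echo "fingerprints identical" || echo "fingerprints differ"
```

Conversely, a change to any non-empty block (a new rule ID, a shifted count) survives the filter and flips the hash, which is what Post-06 detects.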

Pre-06: Cluster health

curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/cluster/nodes" > /tmp/cluster_raw.json

python3 -c "
import json
with open('/tmp/cluster_raw.json') as f:
    data = json.load(f)
for n in data.get('data', {}).get('affected_items', []):
    print(n.get('name','?'), n.get('type','?'), n.get('status','?'))
" | sort > $SNAPSHOT_DIR/cluster_${DATE}.txt
cat $SNAPSHOT_DIR/cluster_${DATE}.txt

Pre-07: Agent connectivity

curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/agents?limit=100" \
  | python3 -c "
import json, sys
data = json.load(sys.stdin)
for a in data['data']['affected_items']:
    print(f\"{a['id']} {a['name']} {a['status']}\")
" | sort > $SNAPSHOT_DIR/agents_${DATE}.txt
cat $SNAPSHOT_DIR/agents_${DATE}.txt

Pre-08: Wazuh API health

curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/manager/info" \
  | python3 -c "
import json, sys
data = json.load(sys.stdin)
info = data['data']['affected_items'][0]
print(f\"version: {info['version']}\")
print(f\"uuid: {info['uuid']}\")
print(f\"type: {info['type']}\")
" > $SNAPSHOT_DIR/api_info_${DATE}.txt
cat $SNAPSHOT_DIR/api_info_${DATE}.txt

Pre-09: Existing detection test

Run a known log line through wazuh-logtest on both master and worker before any RADAR deployment. This records the baseline decoder and rule match to compare against after deployment.

docker exec -it multi-node-wazuh.master-1 bash -c \
  'echo "2026-03-26T13:43:53.668837+01:00 vm-cyfort-testing-2 sudo: pam_unix(sudo:session): session closed for user root" \
  | /var/ossec/bin/wazuh-logtest 2>&1 \
  | grep -E "Phase|name:|id:|level:|description:|groups:"'

docker exec -it multi-node-wazuh.worker-1 bash -c \
  'echo "2026-03-26T13:43:53.668837+01:00 vm-cyfort-testing-2 sudo: pam_unix(sudo:session): session closed for user root" \
  | /var/ossec/bin/wazuh-logtest 2>&1 \
  | grep -E "Phase|name:|id:|level:|description:|groups:"'

Pre-10: Filebeat pipeline state

curl -sk -u admin:SecretPassword \
  "https://localhost:9200/_ingest/pipeline/filebeat-7.10.2-wazuh-archives-pipeline" \
  > $SNAPSHOT_DIR/filebeat_pipeline_${DATE}.json
echo "Pipeline captured: $(wc -c < $SNAPSHOT_DIR/filebeat_pipeline_${DATE}.json) bytes"

Pre-11: Wazuh manager service state

docker exec multi-node-wazuh.master-1 \
  /var/ossec/bin/wazuh-control status \
  > $SNAPSHOT_DIR/wazuh_status_${DATE}.txt
cat $SNAPSHOT_DIR/wazuh_status_${DATE}.txt

Save snapshot date:

echo "PRE_DATE=$DATE" > $SNAPSHOT_DIR/latest.env
echo "Pre-deployment snapshot complete: $DATE"
ls -lh $SNAPSHOT_DIR/

3. RADAR deployment (controller host)

Before running build-radar.sh, verify that the three configuration files described in the preconditions are correct and up to date:

  • inventory.yaml — SSH coordinates for the Wazuh manager host and agent endpoints are correct and reachable.
  • volumes.yml — bind-mount paths match the actual Docker volume configuration on the Wazuh host.
  • .env — all credentials, addresses, and paths are set correctly for this environment.

Deploy all three scenarios sequentially:

cd ~/radar

./build-radar.sh geoip_detection \
  --manager remote --agent remote --manager_exists true

./build-radar.sh suspicious_login \
  --manager remote --agent remote --manager_exists true

./build-radar.sh log_volume \
  --manager remote --agent remote --manager_exists true

Verify that each run completes with an Ansible play recap showing failed=0 for all hosts.
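That recap check can be mechanized rather than eyeballed. The sketch below scans a hypothetical recap excerpt for any nonzero failed= counter; in practice, feed it the tail of the real ansible-playbook output:

```shell
# Sketch: scan recap lines for any nonzero failed= counter.
# RECAP is a hypothetical excerpt; pipe the real ansible-playbook output instead.
RECAP='mgr.remote1 : ok=42 changed=7 unreachable=0 failed=0 skipped=3
agt.remote1 : ok=18 changed=2 unreachable=0 failed=1 skipped=0'

BAD=$(echo "$RECAP" | grep -o 'failed=[0-9]*' | grep -v '^failed=0$' || true)
[ -z "$BAD" ] && echo "RECAP OK" || echo "RECAP FAIL: $BAD"
```

With the sample above the check fails on agt.remote1's failed=1; a clean recap across all hosts prints RECAP OK.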


4. Post-deployment checks (Wazuh host)

Load baseline and refresh token. Use the same FINGERPRINT_CUTOFF value set in step 1:

source /srv/radar-integrity-snapshots/latest.env
SNAPSHOT_DIR=/srv/radar-integrity-snapshots
FINGERPRINT_CUTOFF="<same value as step 1>"
INDEX_PATTERN="wazuh-alerts-4.x-2026.*,wazuh-archives-4.x-2026.*"
TOKEN=$(curl -sk -u "WAZUH_AUTH_USER:WAZUH_AUTH_PASS" \
  -X POST "https://localhost:55000/security/user/authenticate?raw=true")

echo "PRE_DATE=$PRE_DATE"

If any of the following commands fails with an authentication or file-not-found error, re-set these variables before retrying.

Post-01: Wazuh config file hashes

sudo find /srv/docker-data/wazuh/manager/var/ossec/etc -type f \
  | sort | sudo xargs sha256sum > /tmp/manager_hashes_now.sha256

MODIFIED=$(diff $SNAPSHOT_DIR/manager_hashes_${PRE_DATE}.sha256 \
  /tmp/manager_hashes_now.sha256 \
  | grep "^<" | awk '{print $NF}' \
  | grep -v "decoders\|rules\|lists\|active-response\|ossec.conf\|shared/\|client.keys\|sslmanager")
[ -z "$MODIFIED" ] \
  && echo "PASS - No unexpected file modifications (manager)" \
  || { echo "FAIL - Modified files outside RADAR scope (manager):"; echo "$MODIFIED"; }
echo "New files (manager):"
diff $SNAPSHOT_DIR/manager_hashes_${PRE_DATE}.sha256 /tmp/manager_hashes_now.sha256 \
  | grep "^>" | awk '{print $NF}'

sudo find /srv/docker-data/wazuh/worker/var/ossec/etc -type f \
  | sort | sudo xargs sha256sum > /tmp/worker_hashes_now.sha256

MODIFIED=$(diff $SNAPSHOT_DIR/worker_hashes_${PRE_DATE}.sha256 \
  /tmp/worker_hashes_now.sha256 \
  | grep "^<" | awk '{print $NF}' \
  | grep -v "decoders\|rules\|lists\|active-response\|ossec.conf\|shared/\|client.keys\|sslmanager")
[ -z "$MODIFIED" ] \
  && echo "PASS - No unexpected file modifications (worker)" \
  || { echo "FAIL - Modified files outside RADAR scope (worker):"; echo "$MODIFIED"; }
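The grep -v chain defines the RADAR scope: paths under decoders, rules, lists, and active-response, plus ossec.conf, shared/, client.keys, and sslmanager, are expected to change, and anything else is flagged. Its behavior can be sanity-checked on synthetic paths (hypothetical file names):

```shell
# Sketch: the Post-01 exclusion chain applied to hypothetical sample paths.
# In-scope paths (expected to change) are filtered out; anything left is flagged.
SAMPLE_PATHS='/srv/docker-data/wazuh/manager/var/ossec/etc/rules/a2-geoip-detection.xml
/srv/docker-data/wazuh/manager/var/ossec/etc/ossec.conf
/srv/docker-data/wazuh/manager/var/ossec/etc/internal_options.conf'

FLAGGED=$(echo "$SAMPLE_PATHS" \
  | grep -v "decoders\|rules\|lists\|active-response\|ossec.conf\|shared/\|client.keys\|sslmanager" \
  || true)
echo "Flagged as unexpected: $FLAGGED"
```

The rules file and ossec.conf are absorbed by the scope filter; internal_options.conf is not in scope, so a modification to it would be reported as FAIL.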

Post-02: File permissions

sudo find /srv/docker-data/wazuh/manager/var/ossec/etc -type f \
  -printf "%m %u %g %p\n" | sort > /tmp/manager_permissions_now.txt

PERM_DIFF=$(diff $SNAPSHOT_DIR/manager_permissions_${PRE_DATE}.txt \
  /tmp/manager_permissions_now.txt \
  | grep "^<" \
  | grep -v "decoders\|rules\|lists\|active-response\|ossec.conf\|shared/\|client.keys\|sslmanager")
[ -z "$PERM_DIFF" ] \
  && echo "PASS - No unexpected permission changes" \
  || { echo "FAIL - Permission changes outside RADAR scope:"; echo "$PERM_DIFF"; }

Post-03: Existing detection test

Re-run the identical log line through wazuh-logtest on both master and worker after RADAR deployment. The decoder name, rule ID, level, description, and groups must be identical to the pre-deployment baseline recorded in Pre-09:

docker exec -it multi-node-wazuh.master-1 bash -c \
  'echo "2026-03-26T13:43:53.668837+01:00 vm-cyfort-testing-2 sudo: pam_unix(sudo:session): session closed for user root" \
  | /var/ossec/bin/wazuh-logtest 2>&1 \
  | grep -E "Phase|name:|id:|level:|description:|groups:"'

docker exec -it multi-node-wazuh.worker-1 bash -c \
  'echo "2026-03-26T13:43:53.668837+01:00 vm-cyfort-testing-2 sudo: pam_unix(sudo:session): session closed for user root" \
  | /var/ossec/bin/wazuh-logtest 2>&1 \
  | grep -E "Phase|name:|id:|level:|description:|groups:"'

Post-04: Agent connectivity

curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/agents?limit=100" \
  | python3 -c "
import json, sys
data = json.load(sys.stdin)
for a in data['data']['affected_items']:
    print(f\"{a['id']} {a['name']} {a['status']}\")
" | sort > /tmp/agents_now.txt

PRE_AGENTS=$(cat $SNAPSHOT_DIR/agents_${PRE_DATE}.txt)
POST_AGENTS=$(cat /tmp/agents_now.txt)
[ "$PRE_AGENTS" = "$POST_AGENTS" ] \
  && { echo "PASS - All agents unchanged"; cat /tmp/agents_now.txt; } \
  || { echo "CHECK - Agent changes (webhook enrollment expected):"; \
       diff <(echo "$PRE_AGENTS") <(echo "$POST_AGENTS") || true; }

Post-05: Cluster health

curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/cluster/nodes" > /tmp/cluster_raw_now.json

python3 -c "
import json
with open('/tmp/cluster_raw_now.json') as f:
    data = json.load(f)
for n in data.get('data', {}).get('affected_items', []):
    print(n.get('name','?'), n.get('type','?'), n.get('status','?'))
" | sort > /tmp/cluster_now.txt

[ "$(cat $SNAPSHOT_DIR/cluster_${PRE_DATE}.txt)" = "$(cat /tmp/cluster_now.txt)" ] \
  && { echo "PASS - Cluster nodes unchanged"; cat /tmp/cluster_now.txt; } \
  || { echo "CHECK - Cluster changes:"; \
       diff $SNAPSHOT_DIR/cluster_${PRE_DATE}.txt /tmp/cluster_now.txt || true; }

Post-06: Indexer document fingerprint

DATE_POST=$(date +%Y%m%d_%H%M%S)
FINGERPRINT_POST=/tmp/index_fingerprint_post_${DATE_POST}.txt
echo "Time range: 2026-01-01T00:00:00Z to ${FINGERPRINT_CUTOFF}" > $FINGERPRINT_POST
echo "---" >> $FINGERPRINT_POST

for INDEX in $(curl -sk -u admin:SecretPassword \
  "https://localhost:9200/_cat/indices/${INDEX_PATTERN}?h=index" | sort); do

  echo "=== $INDEX ===" >> $FINGERPRINT_POST

  curl -sk -u admin:SecretPassword \
    "https://localhost:9200/$INDEX/_search" \
    -H "Content-Type: application/json" \
    -d "{
      \"size\": 0,
      \"query\": {\"range\": {\"timestamp\": {
        \"gte\": \"2026-01-01T00:00:00.000Z\",
        \"lte\": \"${FINGERPRINT_CUTOFF}\"
      }}},
      \"aggs\": {
        \"total_docs\": {\"value_count\": {\"field\": \"timestamp\"}},
        \"min_timestamp\": {\"min\": {\"field\": \"timestamp\"}},
        \"max_timestamp\": {\"max\": {\"field\": \"timestamp\"}},
        \"rule_id_distribution\": {
          \"terms\": {\"field\": \"rule.id\", \"size\": 2000, \"order\": {\"_key\": \"asc\"}},
          \"aggs\": {\"count\": {\"value_count\": {\"field\": \"timestamp\"}}}
        }
      }
    }" | python3 -c "
import json, sys
data = json.load(sys.stdin)
aggs = data.get('aggregations', {})
total = aggs.get('total_docs', {}).get('value', 0)
min_ts = aggs.get('min_timestamp', {}).get('value_as_string', 'N/A')
max_ts = aggs.get('max_timestamp', {}).get('value_as_string', 'N/A')
print(f'  total_docs: {total}')
print(f'  min_timestamp: {min_ts}')
print(f'  max_timestamp: {max_ts}')
buckets = aggs.get('rule_id_distribution', {}).get('buckets', [])
print(f'  rule_id_count: {len(buckets)}')
for b in buckets:
    print(f'  rule:{b[\"key\"]} count:{b[\"doc_count\"]}')
" >> $FINGERPRINT_POST
  echo "---" >> $FINGERPRINT_POST
done

echo "Fingerprint file: $(wc -l < $FINGERPRINT_POST) lines written"

FILTER_PY=$(mktemp /tmp/radar_filter_XXXXXX.py)
cat > "$FILTER_PY" <<'EOF'
import sys

def filter_file(path):
    """Drop the 'Time range:' header and any index block with total_docs 0."""
    with open(path) as f:
        lines = f.read().splitlines()
    out = []
    i = 0
    while i < len(lines):
        line = lines[i]
        if line.startswith("Time range:"):
            i += 1
            continue
        if line.startswith("=== ") and line.endswith(" ==="):
            block = [line]
            j = i + 1
            while j < len(lines) and not (lines[j].startswith("=== ") and lines[j].endswith(" ===")):
                block.append(lines[j])
                j += 1
            total_line = next((l for l in block if "total_docs:" in l), "")
            total = total_line.split(":")[-1].strip() if total_line else "?"
            if total != "0":
                out.extend(block)
            i = j
        else:
            out.append(line)
            i += 1
    return "\n".join(out)

print(filter_file(sys.argv[1]))
EOF

PRE_HASH=$(python3 "$FILTER_PY" "$SNAPSHOT_DIR/index_fingerprint_${PRE_DATE}.txt" \
  | sha256sum | awk '{print $1}')
POST_HASH=$(python3 "$FILTER_PY" "$FINGERPRINT_POST" \
  | sha256sum | awk '{print $1}')
rm -f "$FILTER_PY"

echo "Pre-RADAR:  $PRE_HASH"
echo "Post-RADAR: $POST_HASH"
[ "$PRE_HASH" = "$POST_HASH" ] \
  && echo "PASS - Index data fingerprint identical" \
  || { echo "FAIL - Fingerprint changed:"; \
       diff <(grep -v "^Time range:" $SNAPSHOT_DIR/index_fingerprint_${PRE_DATE}.txt) \
            <(grep -v "^Time range:" $FINGERPRINT_POST) \
            | grep "^[<>]" | head -30 || true; }

Post-07: Wazuh API

curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/manager/info" \
  | python3 -c "
import json, sys
data = json.load(sys.stdin)
info = data['data']['affected_items'][0]
print(f\"version: {info['version']}\")
print(f\"uuid: {info['uuid']}\")
print(f\"type: {info['type']}\")
" > /tmp/api_info_now.txt

[ "$(cat $SNAPSHOT_DIR/api_info_${PRE_DATE}.txt)" = "$(cat /tmp/api_info_now.txt)" ] \
  && { echo "PASS - API info unchanged"; cat /tmp/api_info_now.txt; } \
  || { echo "CHECK - API info changed:"; \
       diff $SNAPSHOT_DIR/api_info_${PRE_DATE}.txt /tmp/api_info_now.txt || true; }

Post-08: No new errors in ossec.log

docker exec multi-node-wazuh.master-1 bash -c \
  'grep -c "ERROR\|CRITICAL" /var/ossec/logs/ossec.log 2>/dev/null || true'

echo "--- Last 10 ERROR/CRITICAL entries ---"
docker exec multi-node-wazuh.master-1 bash -c \
  'grep "ERROR\|CRITICAL" /var/ossec/logs/ossec.log 2>/dev/null | tail -10 || echo "none"'

Post-09: Webhook service

WEBHOOK_STATUS=$(docker inspect ad-webhook --format '{{.State.Status}}' 2>/dev/null)
[ "$WEBHOOK_STATUS" = "running" ] \
  && echo "Webhook container: running" \
  || echo "FAIL - Webhook container not found or not running"

curl -sf http://localhost:8080/health \
  && echo "PASS - Webhook HTTP endpoint responding" \
  || echo "FAIL - Webhook not responding on port 8080"

Post-10: Active response wiring

echo "--- radar_ar_geoip ---"
docker exec multi-node-wazuh.master-1 bash -c \
  'grep -c "radar_ar_geoip" /var/ossec/etc/ossec.conf' \
  && echo "PASS - radar_ar_geoip present" || echo "FAIL"

echo "--- radar_ar_log_volume ---"
docker exec multi-node-wazuh.master-1 bash -c \
  'grep -c "radar_ar_log_volume" /var/ossec/etc/ossec.conf' \
  && echo "PASS - radar_ar_log_volume present" || echo "FAIL"

echo "--- suspicious_login AR scripts ---"
docker exec multi-node-wazuh.master-1 bash -c \
  'grep -c "terminate_service\|lock_user_linux" /var/ossec/etc/ossec.conf' \
  && echo "PASS - suspicious_login AR blocks present" || echo "FAIL"

Post-11: Decoder and rule deployment

echo "--- Decoders ---"
docker exec multi-node-wazuh.master-1 bash -c \
  'for f in 0310-ssh.xml 0375-web-accesslog.xml 0001-ad-common.xml 0001-log-volume.xml; do
     path="/var/ossec/etc/decoders/$f"
     if [ -f "$path" ]; then
       stat -c "PASS %n owner=%U:%G mode=%a" "$path"
     else
       echo "FAIL $f not found"
     fi
   done'

echo "--- Rules ---"
docker exec multi-node-wazuh.master-1 bash -c \
  'for f in a2-geoip-detection.xml a3-suspicious-login.xml a1-log-volume.xml; do
     path="/var/ossec/etc/rules/$f"
     if [ -f "$path" ]; then
       stat -c "PASS %n owner=%U:%G mode=%a" "$path"
     else
       echo "FAIL $f not found"
     fi
   done'

Post-12: Python dependencies

docker exec multi-node-wazuh.master-1 python3 -c \
  "import yaml, requests; print('PASS - pyyaml and requests importable')" \
  || echo "FAIL - Python dependency missing in container"

Post-13: GeoIP databases on agents (run on the agent host)

for db in GeoLite2-City.mmdb GeoLite2-ASN.mmdb; do
  path="/usr/share/GeoIP/$db"
  [ -f "$path" ] \
    && stat -c "PASS %n size=%s bytes" "$path" \
    || echo "FAIL $db not found"
done

Post-14: Existing rules unmodified

CHANGED=$(diff $SNAPSHOT_DIR/manager_hashes_${PRE_DATE}.sha256 \
  /tmp/manager_hashes_now.sha256 \
  | grep "^<" | awk '{print $NF}' \
  | grep -v "decoders\|rules\|lists\|active-response\|ossec.conf\|shared/\|client.keys\|sslmanager")

[ -z "$CHANGED" ] \
  && echo "PASS - All pre-existing files byte-for-byte unmodified" \
  || { echo "FAIL - Pre-existing files changed outside RADAR scope:"; echo "$CHANGED"; }

Post-15: Filebeat pipeline integrity

curl -sk -u admin:SecretPassword \
  "https://localhost:9200/_ingest/pipeline/filebeat-7.10.2-wazuh-archives-pipeline" \
  | python3 -c "
import json, sys
content = sys.stdin.read()
print('PASS - log_volume patch in OpenSearch pipeline' if 'log_volume_metric' in content \
  else 'FAIL - patch not in OpenSearch pipeline')
"

Post-16: OpenSearch index mappings

echo "--- Existing wazuh-alerts-* template ---"
curl -sk -u admin:SecretPassword \
  "https://localhost:9200/_index_template/wazuh" \
  | python3 -c "
import json, sys
data = json.load(sys.stdin)
for t in data.get('index_templates', []):
    print(t.get('name'), '->', t.get('index_template', {}).get('index_patterns'))
"

echo "--- RADAR-specific templates (log_volume) ---"
curl -sk -u admin:SecretPassword \
  "https://localhost:9200/_index_template/radar*" \
  | python3 -c "
import json, sys
data = json.load(sys.stdin)
templates = data.get('index_templates', [])
if templates:
    for t in templates:
        print('PRESENT:', t.get('name'), t.get('index_template', {}).get('index_patterns'))
else:
    print('No RADAR templates (expected unless log_volume deployed)')
"

5. Cleanup

cd ~/wazuh-docker/multi-node
docker compose down -v
docker stop ad-webhook 2>/dev/null || true
sudo rm -rf /srv/docker-data/wazuh/
rm -f /tmp/manager_hashes_now.sha256 /tmp/worker_hashes_now.sha256 \
  /tmp/manager_permissions_now.txt /tmp/agents_now.txt \
  /tmp/cluster_raw.json /tmp/cluster_raw_now.json /tmp/cluster_now.txt \
  /tmp/api_info_now.txt /tmp/index_fingerprint_post_*.txt

Expected outcome

Step 2 — Pre-deployment: All baseline files are created, and all commands complete without errors or permission issues.

Step 3 — Deployment: All three build-radar.sh runs complete, each with an Ansible play recap showing failed=0 for all hosts.

Step 4 — Post-deployment:

  • Post-01: PASS - No unexpected file modifications (manager) and (worker).
  • Post-02: PASS - No unexpected permission changes.
  • Post-03: On both master and worker: decoder "pam", rule 5502, level 3, description "PAM: Login session closed.", groups "pam, syslog". Output must be identical before and after deployment.
  • Post-04: Existing agents unchanged. Webhook agent enrollment expected and labelled.
  • Post-05: PASS - Cluster nodes unchanged.
  • Post-06: PASS - Index data fingerprint identical.
  • Post-07: PASS - API info unchanged.
  • Post-08: No ERROR/CRITICAL entries attributable to RADAR in ossec.log.
  • Post-09: Webhook container running and HTTP endpoint responding.
  • Post-10: All three scenario AR command blocks confirmed present in ossec.conf.
  • Post-11: All seven decoder and rule files present with root:wazuh ownership.
  • Post-12: PASS - pyyaml and requests importable.
  • Post-13: Both GeoLite2 .mmdb files present on remote agent.
  • Post-14: PASS - All pre-existing files byte-for-byte unmodified.
  • Post-15: log_volume_metric patch present in the OpenSearch pipeline filebeat-7.10.2-wazuh-archives-pipeline.
  • Post-16: No changes to the wazuh-alerts-* template. A RADAR-specific template (reported by the radar* template query, covering the log_volume indices) present after log_volume deployment.

Parent links: SRS-051 RADAR scenario: suspicious login, SRS-055 RADAR scenario: Geo-IP AC via whitelisting, SRS-056 RADAR scenario: log size change

Child links: TRP-038 TCER: RADAR deployment integrity verification

Attribute Value
platform GNU/Linux (Dockerized C5-DEC deployment environment)
execution_type M
verification_method T
release alpha
complexity 4
test_data see referenced files
version 0.8.2