1 SATRAP SRS-SATRAP

System-level requirements for the SATRAP CTI analysis platform.

1.1 Data modelling language SRS-001

The data model of SATRAP SHALL be specified using a data modelling language based either on description logics, such as OWL, or on type theory, such as TypeQL.

Rationale

To enforce a rigorous logical model specification close to the conceptual model where the semantics of information are captured.

Acceptance criteria

See validation test case specification

Parent links: MRS-001 Semantic data model, MRS-015 Semantic relations preservation

Child links: ARC-001 SATRAP: System structure overview, TST-001 Verify data modelling artifacts, ARC-002 Logical view of SATRAP, ARC-004 ETL components

Attribute Value
type A
urgency 5
vm R
release Alpha

1.2 Database paradigm SRS-002

SATRAP SHALL rely on a database paradigm that allows for knowledge representation based on the semantics of the information, as opposed to its structure. Possible candidates are the PERA model, implemented by TypeDB, and the graph model, implemented by, e.g., Neo4j.

Rationale

To enable intrinsic semantic search capabilities and automated reasoning over the data model.

Acceptance criteria

See validation test case specification

Parent links: MRS-002 CTI knowledge base, MRS-015 Semantic relations preservation

Child links: ARC-001 SATRAP: System structure overview, TST-001 Verify data modelling artifacts, ARC-002 Logical view of SATRAP, ARC-004 ETL components

Attribute Value
type A
urgency 5
vm R
release Alpha

1.3 Semantic search SRS-003

SATRAP SHALL support querying the CTI SKB based on semantic criteria.

Rationale

To enable users to perform meaningful searches and data manipulation based on semantics rather than just data structure.

Acceptance criteria

See validation test case specification

Parent links: MRS-002 CTI knowledge base

Child links: TST-001 Verify data modelling artifacts

Attribute Value
type A
urgency 5
vm R
release Alpha

1.4 Extensibility of the data model SRS-004

The data model of the CTI SKB SHALL be extensible to accommodate the integration of new information (e.g., facts, entities, or relationships) without requiring a complete redesign.

Rationale

Extensibility of the data model allows for gradual enrichment of the CTI SKB by combining multiple threat frameworks, as CTI might not be expressible in a single one.

Acceptance criteria

See validation test case specification

Parent links: MRS-003 CTI SKB extensibility

Child links: TST-001 Verify data modelling artifacts

Attribute Value
type A
urgency 5
vm R
release Alpha

1.5 NoSQL data model SRS-005

The data model of the CTI SKB SHALL rely on either a NoSQL graph-based or a document-based database solution or a type-theoretic polymorphic entity-relation-attribute (PERA) data model to allow for the addition of new entities and relationships without requiring a schema migration.

Rationale

Flexibility enables further customization for specific domains, such as healthcare or military-related ones.

Acceptance criteria

See validation test case specification

Parent links: MRS-004 SKB data model flexibility

Child links: TST-001 Verify data modelling artifacts, ARC-004 ETL components

Attribute Value
type A
urgency 5
vm R
release Alpha

1.6 Integration of common CTI SRS-006

As an administrator of SATRAP, I want to run an ETL pipeline that retrieves MITRE ATT&CK datasets published in STIX 2.1, transforms the content into an adequate format and loads this content into the CTI SKB, so that I have up-to-date, reliable and curated data available for automated reasoning and analysis.

For this purpose:

  1. I have a way to configure the MITRE ATT&CK STIX 2.1 source URL or local file
  2. I can run the integration process from the CLI of SATRAP
  3. SATRAP runs the ETL pipeline, which downloads or reads the STIX 2.1 payload, transforms it into internal TypeQL load statements, and inserts them into the CTI SKB
  4. I get notified of API authentication failures, network errors, and invalid STIX payloads with clear error messages and appropriate exit codes
  5. I get a summary at the end of the ETL process and can see the log for detailed information.
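For illustration, the steps above can be sketched as a minimal ETL driver. All function names, exit codes, and the TypeQL statement format shown here are assumptions for the sketch, not the actual SATRAP implementation; the download case and real database loading are omitted.

```python
import json
import sys

# Illustrative exit codes (assumed, not defined by SATRAP).
EXIT_OK, EXIT_NETWORK, EXIT_INVALID_STIX = 0, 2, 3

def extract(source):
    """Read a STIX 2.1 bundle from a local file path (the URL case is omitted)."""
    with open(source, encoding="utf-8") as fh:
        bundle = json.load(fh)
    if bundle.get("type") != "bundle":
        raise ValueError("not a STIX 2.1 bundle")
    return bundle.get("objects", [])

def transform(objects):
    """Map each STIX object to a (hypothetical) TypeQL insert statement."""
    return [f'insert $x isa {o["type"]}, has stix-id "{o["id"]}";'
            for o in objects if "id" in o and "type" in o]

def load(statements):
    """Placeholder: a real implementation would execute these against the CTI SKB."""
    return len(statements)

def run_etl(source):
    try:
        objects = extract(source)
    except (OSError, ValueError) as err:
        # Clear error message and a dedicated exit code (item 4).
        print(f"ERROR: {err}", file=sys.stderr)
        return EXIT_INVALID_STIX, 0
    loaded = load(transform(objects))
    print(f"ETL summary: {loaded} statements loaded")  # end-of-run summary (item 5)
    return EXIT_OK, loaded
```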

Rationale

For scenarios that deal with CTI generation and operation, we consider that a knowledge base requires information from at least three categories, integrating both public and internal threat landscapes. This requirement addresses the integration of common cybersecurity knowledge.

Acceptance criteria

See validation test case specification

Parent links: MRS-005 CTI SKB content: public CTI knowledge

Child links: ARC-003 ETL high-level design, ARC-004 ETL components, TST-008 Test setup + MITRE ATT&CK ingestion

Attribute Value
type F
urgency 5
vm T
release Alpha

1.7 Semantic data integrity SRS-007

The data model SHALL enforce semantic integrity, ensuring that relationships and constraints adhere to the intended meaning. Semantic data integrity can be enforced by measures such as data validation with respect to schemas and relationship constraints, automated checks for data redundancy, and inference powered by a reasoning engine.
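One such measure, an automated pre-insertion check for duplicate and contradictory relations, can be sketched as follows. The triple representation and the table of incompatible relation pairs are illustrative assumptions, not part of the CTI SKB schema:

```python
# Assumed example of incompatible relation pairs between the same subject/object.
CONTRADICTORY = {("mitigates", "uses")}

def check_integrity(existing, candidate):
    """Check a candidate (subject, relation, object) triple against stored triples.

    Returns "duplicate", a contradiction message, or "ok".
    """
    if candidate in existing:
        return "duplicate"
    subj, rel, obj = candidate
    for (s, r, o) in existing:
        if s == subj and o == obj and (r, rel) in CONTRADICTORY:
            return f"contradicts existing relation '{r}'"
    return "ok"
```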

Rationale

To ensure consistency, accuracy and reliability of data, preventing among others contradictory and repeated data to be stored.

Acceptance criteria

See validation test case specification

Parent links: MRS-008 CTI SKB data integrity

Child links: TST-020 Verify enforcement of semantic data integrity

Attribute Value
type S
urgency 5
vm R
release Alpha

1.8 ETL subsystem SRS-008

SATRAP SHALL contain a component in charge of integrating data from external sources into the CTI SKB. This component, referred to as the ETL (extract-transform-load) subsystem, is responsible for extracting datasets in STIX 2.1 from diverse sources, transforming them into the representation language of the CTI SKB, and loading the transformed content into the CTI SKB.

Rationale

To provide a single means of data ingestion regardless of the data source, enforcing separation of duties and modularity in the design.

Acceptance criteria

See validation test case specification

Parent links: MRS-011 Ingestion of CTI in a standard format

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP, ARC-003 ETL high-level design, ARC-004 ETL components, TST-009 Verify ETL architecture

Attribute Value
type A
urgency 5
vm R
release Alpha

1.9 ETL Transformer SRS-009

The ETL subsystem SHALL have a component in charge of transforming data in STIX 2.1 format into the representation language of the CTI SKB schema.
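For illustration, the core of such a transformer can be sketched as a mapping from a STIX 2.1 object to an insert statement in the target language. The TypeQL vocabulary (type and attribute names) used here is an assumption, not the actual CTI SKB schema:

```python
def stix_to_typeql(obj):
    """Render a STIX 2.1 domain object as a (hypothetical) TypeQL insert statement.

    obj is a dict parsed from a STIX 2.1 bundle, e.g. {"type": ..., "id": ...}.
    """
    attrs = [f'has stix-id "{obj["id"]}"']
    if "name" in obj:
        attrs.append(f'has name "{obj["name"]}"')
    return f'insert $x isa {obj["type"]}, ' + ", ".join(attrs) + ";"
```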

Rationale

To address data parsing while enforcing separation of duties and modularity.

Acceptance criteria

See validation test case specification

Parent links: MRS-011 Ingestion of CTI in a standard format

Child links: ARC-003 ETL high-level design, ARC-004 ETL components, TST-009 Verify ETL architecture

Attribute Value
type A
urgency 5
vm R
release Alpha

1.10 Database manager SRS-010

The system SHALL have a component in charge of managing database operations and connections.

Rationale

To handle database management while enforcing separation of duties and modularity.

Acceptance criteria

See validation test case specification

Parent links: MRS-011 Ingestion of CTI in a standard format

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP, ARC-003 ETL high-level design, ARC-004 ETL components, TST-009 Verify ETL architecture

Attribute Value
type A
urgency 5
vm R
release Alpha

1.11 Ingestion of organizational CTI SRS-011

As a CTI analyst or SATRAP administrator, I want to have a configurable way to integrate CTI from MISP or OpenCTI into the CTI SKB so that our organization's internally-tracked threat intelligence is available for automated reasoning and analysis with other data in SATRAP.

For this purpose:

  1. I have a way to configure my TIP API credentials and endpoint URL.
  2. I can run the integration process from the CLI of SATRAP
  3. SATRAP programmatically extracts STIX 2.1 bundles from either platform (e.g., via PyMISP or the OpenCTI Python SDK) and inserts them into the CTI SKB via the ETL subsystem.
  4. I get notified of API authentication failures, network errors, and invalid STIX payloads with clear error messages and appropriate exit codes.
  5. I get a summary at the end of the ETL process and can see the log for detailed information.

Rationale

For scenarios that deal with CTI generation and operation, we consider that a knowledge base requires information from at least three categories, integrating both public and internal threat landscapes. This requirement addresses the integration of organizational CTI (internal and shared with the organization).

Acceptance criteria

See validation test case specification

Parent links: MRS-040 CTI SKB content: organizational CTI

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP, ARC-003 ETL high-level design

Attribute Value
type F
importance 4
urgency 4
vm T
release Beta

1.12 Inference rules SRS-012

SATRAP SHALL implement inference rules that allow for the automated derivation of knowledge over existing relations in the CTI SKB.
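In the CTI SKB, such rules would be expressed natively (e.g., as TypeQL rules). The derivation mechanism itself can be illustrated with a simple forward-chaining sketch over (subject, relation, object) triples; the single rule shown (a group that uses a malware also uses the techniques that malware uses) is an illustrative example, not an actual SATRAP rule:

```python
def derive(facts):
    """Apply one illustrative rule to a fixpoint:
    (group, uses, malware) & (malware, uses, technique) => (group, uses, technique).

    Returns only the newly derived triples.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (c, r2, d) in list(derived):
                if r1 == r2 == "uses" and b == c and (a, "uses", d) not in derived:
                    derived.add((a, "uses", d))
                    changed = True
    return derived - set(facts)
```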

Rationale

To address one of the major challenges for incident responders, namely, manual data correlation and contextualization of collected IoCs.

Acceptance criteria

See validation test case specification

Parent links: MRS-014 Automated CTI enrichment

Child links: ARC-001 SATRAP: System structure overview, TST-010 Verify CTI SKB inference rules

Attribute Value
type F
urgency 4
vm A
release Alpha

1.13 STIX 2.1 data model SRS-013

The data model of SATRAP SHALL be aligned with the data model of STIX 2.1.

Rationale

Such a design enables a direct mapping of the imported data into the concepts in the database and allows for the use of the integrity checks defined over the database model.

Acceptance criteria

See validation test case specification

Parent links: MRS-015 Semantic relations preservation

Child links: ARC-003 ETL high-level design, ARC-004 ETL components, TST-004 Verify STIX 2.1-based data model

Attribute Value
type A
urgency 5
vm A
release Alpha

1.14 Native reasoning engine SRS-014

SATRAP SHALL use a DBMS technology that integrates or has compatibility with a reasoning engine. The preferred solution is TypeDB.

Rationale

A native implementation of the KB and reasoning engine in one platform typically optimizes performance as it allows for the implementation of efficient data management strategies.

Acceptance criteria

See validation test case specification

Parent links: MRS-018 Automated reasoning

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP, TST-003 Verify STIX and reasoning engine

Attribute Value
type A
urgency 5
vm R, I
release Alpha

1.15 Jupyter Notebook frontend SRS-015

SATRAP SHALL implement an analysis frontend in the form of a set of Jupyter notebooks that make use of the CTI analysis toolbox SDK. This frontend SHALL showcase the usage of the functions in the SDK providing reusable blocks of code and playbooks for CTI investigations.

Rationale

For interoperability with the ecosystem, to enable the automation of the CTI lifecycle through the integration of multiple complementary solutions.

Acceptance criteria

See validation test case specification

Parent links: MRS-020 Interactive frontend, MRS-022 Storage of CTI investigations, MRS-023 Query parameterization, MRS-025 SATRAP-DL service, MRS-026 Query result viewer, MRS-027 Frontend query status, MRS-029 Frontend design, MRS-034 Frontend cross-platform support

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP, TST-011 Test Jupyter notebook frontend, TST-019 Verify layered architecture of SATRAP

Attribute Value
type F
urgency 3
vm T
release Alpha

1.16 Integration of behavioral data SRS-017

SATRAP SHALL implement a mechanism for retrieving data sourced by IDPS-ESCAPE from the CyFORT CTI repository (handled by an open-source TIP), via programmatic API access. Other SATRAP components can then adequately process and insert the information into the CTI SKB.

Rationale

For scenarios that deal with CTI generation and operation, we consider that a knowledge base requires information from at least three categories. This requirement addresses the integration of behavioral data.

Acceptance criteria

See validation test case specification

Parent links: MRS-039 CTI SKB content: from IDPS-ESCAPE

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP

Attribute Value
type F
importance 4
urgency 3
vm T
release Beta

1.17 Automated CTI analysis SRS-018

SATRAP SHALL provide an automated mechanism for running a knowledge derivation workflow:

  1. Data retrieval from the CyFORT CTI repository.
  2. Execution of predefined inference queries.

Such a mechanism SHALL support customizable settings, e.g., in a configuration file.

Rationale

To support automation of CTI maintenance processes, e.g., via a cron job that runs this script/program.

Acceptance criteria

See validation test case specification

Parent links: MRS-042 CyFORT CTI continuous analysis

Attribute Value
type F
importance 4
urgency 2
vm T
release Beta

1.18 CTI export to STIX 2.1 SRS-019

SATRAP SHALL provide a feature for obtaining the results of CTI analysis toolbox inference functions in STIX 2.1 format.

Rationale

For persistence of the analysis results in a standard human-readable format, and for enabling transfer of newly derived CTI into other tools (e.g., the CyFORT CTI repository).

Acceptance criteria

See validation test case specification

Parent links: MRS-019 Exporting inferred CTI

Attribute Value
type F
importance 5
urgency 4
vm T
release Beta

1.19 System configuration file SRS-020

SATRAP SHALL allow for customization of system parameters (e.g., logging severity: debug, info, warn, error; db connections; file paths) in a dedicated configuration file.
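As an illustration, such a configuration file could be an INI-style settings file read at startup; the section and key names below are assumptions for the sketch, not the actual SATRAP configuration schema:

```python
import configparser

# Hypothetical settings file content; sections and keys are illustrative.
SAMPLE = """
[logging]
level = info

[database]
host = localhost
port = 1729

[etl]
source_url = https://example.org/attack-stix.json
"""

def load_settings(text):
    """Parse system parameters (logging severity, db connection, file paths/URLs)."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return {
        "log_level": cfg.get("logging", "level", fallback="info"),
        "db": (cfg.get("database", "host"), cfg.getint("database", "port")),
        "etl_source": cfg.get("etl", "source_url"),
    }
```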

Rationale

In agreement with clean code and best practices for software development, to promote code maintainability.

Acceptance criteria

See validation test case specification

Parent links: MRS-044 Modular architecture

Child links: ARC-003 ETL high-level design, ARC-004 ETL components, TST-005 Verify centralized management

Attribute Value
type Q
urgency 3
vm I
release Alpha

1.20 CTI representation in STIX 2.1 SRS-023

SATRAP SHALL use STIX 2.1 as the default standard format for CTI representation.

Rationale

For interoperability

Acceptance criteria

See validation test case specification

Parent links: MRS-045 STIX compliance

Child links: ARC-003 ETL high-level design, TST-003 Verify STIX and reasoning engine, ARC-004 ETL components

Attribute Value
type C
urgency 5
vm R, I
release Alpha

1.21 Functional ETL events logging SRS-033

SATRAP SHALL log at least one timestamped event with an associated log level recording the ETL execution status (success/failure) for each phase, i.e., extraction, transformation, and loading.

Rationale

To provide information for security investigations.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Child links: TST-012 Test ETL logging

Attribute Value
type F
urgency 3
vm T
release Alpha

1.22 Detailed event logging SRS-034

SATRAP SHALL log events in a detailed manner according to well-established logging levels, capturing relevant information for each event. Specifically, the levels to be used are:

  • DEBUG level for detailed diagnostic information during development and troubleshooting
  • INFO level for general informational messages about system operations
  • WARNING level for potentially problematic situations
  • ERROR level for error conditions and failures

Rationale

To provide information for debugging purposes.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Attribute Value
type S
urgency 3
vm T
release Beta

1.23 Consistent logging format SRS-035

SATRAP SHALL ensure that all logs generated by the system follow a consistent format, including standardized fields such as timestamp, log level, source component, and message content.
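A consistent format with these standardized fields can be sketched with Python's standard logging module; the separator and field order shown are illustrative choices, not a SATRAP-mandated layout:

```python
import logging

def make_logger(component, stream):
    """Build a logger emitting the standardized fields:
    timestamp | log level | source component | message content."""
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s | %(levelname)s | %(name)s | %(message)s"))
    logger = logging.getLogger(component)
    logger.handlers = [handler]
    logger.setLevel(logging.DEBUG)
    logger.propagate = False
    return logger
```

The same logger could record the per-phase ETL status events required by SRS-033, e.g., `make_logger("etl.load", stream).info("loading: success")`.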

Rationale

To generate human-readable and informative logs.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Attribute Value
type C
urgency 1
vm T
release FID

1.24 TypeQL to STIX 2.1 transformer SRS-039

SATRAP SHALL have the capability of representing data expressed in the language of the CTI SKB (TypeQL) in terms of the STIX 2.1 data model.

Rationale

To provide the low-level components supporting SRS-019.

Acceptance criteria

None

Parent links: MRS-019 Exporting inferred CTI

Attribute Value
type F
importance 5
urgency 4
vm T,I
release Beta

1.25 Configuration management mechanism SRS-041

The system SHALL provide a configuration management mechanism with a set of predefined values for the user to adjust various system settings.

Rationale

To enforce a centralized user-configurable mechanism for managing system settings.

Acceptance criteria

See validation test case specification

Parent links: MRS-016 Configuration management, MRS-017 Conformance with user settings

Child links: TST-013 Inspect settings for CM

Attribute Value
type F
urgency 2
vm I
release Alpha

1.26 Command line interface (CLI) SRS-042

As a SATRAP administrator/CTI analyst, I want to interact with SATRAP via a command line interface so that I can set up the backend CTI SKB and launch the ETL pipeline without the need for a graphical user interface (GUI). Thus, I want the CLI to provide at least the following commands:

  • setup: Initialize the CTI SKB.
  • etl: Launch the ETL pipeline.
  • help: Display a list of available commands and their descriptions.
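A command structure like the one above can be sketched with argparse; the program name and the `--source` option are assumptions for the sketch, not the actual SATRAP CLI:

```python
import argparse

def build_cli():
    """Sketch of the CLI described above; subcommand wiring is illustrative."""
    parser = argparse.ArgumentParser(
        prog="satrap", description="SATRAP command line interface")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("setup", help="Initialize the CTI SKB")
    etl = sub.add_parser("etl", help="Launch the ETL pipeline")
    etl.add_argument("--source",
                     help="STIX 2.1 source URL or local file (assumed option)")
    return parser
```

Note that argparse generates the command listing automatically via `-h`/`--help`; a dedicated `help` subcommand could simply print `parser.format_help()`.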

Rationale

To provide easy-to-use and efficient access to core data processing functionality for users who prefer command line tools.

Acceptance criteria

See validation test case specification

Parent links: MRS-020 Interactive frontend, MRS-033 File-based SKB update, MRS-034 Frontend cross-platform support

Child links: TST-014 Test command line interface (CLI)

Attribute Value
type F
urgency 5
vm T
release Alpha

1.27 TypeDB Studio SRS-043

The SATRAP-DL system SHALL adopt TypeDB Studio as a Graphical User Interface (GUI) for users to interact with the SATRAP CTI SKB using the native TypeQL query language.

Rationale

To provide the means to the user to execute queries in the native query language of the CTI SKB.

Acceptance criteria

See validation test case specification

Parent links: MRS-020 Interactive frontend, MRS-028 Native query execution, MRS-029 Frontend design, MRS-032 User-controlled CTI curation

Child links: TST-008 Test setup + MITRE ATT&CK ingestion

Attribute Value
type F
urgency 5
vm T
release Alpha

1.28 Open-source TIP integration SRS-044

SATRAP-DL SHALL adopt both MISP and OpenCTI as open-source TIPs (Threat Intelligence Platform) for central storage and management of threat intelligence data.

Rationale

To host and manage CTI data, by relying on stable and mature solutions, without reinventing the wheel. The choice of integrating a well-established open-source TIP would

  • provide a solution capable of ingesting, storing, and distributing threat intelligence data from various sources, including open-source feeds, commercial feeds, and internal sources.
  • provide a user-friendly interface for analysts to search, filter, and visualize threat intelligence data.
  • support integration with other security tools and platforms, such as SIEM (Security Information and Event Management) systems, SOAR (Security Orchestration, Automation, and Response) platforms, and threat intelligence sharing platforms.
  • support standardized formats for threat intelligence data, such as STIX (Structured Threat Information Expression) and TAXII (Trusted Automated Exchange of Indicator Information), to facilitate interoperability with other systems.
  • support the ability to create and manage threat intelligence feeds, including the ability to schedule updates and manage data retention policies.
  • support the ability to create and manage threat intelligence reports, including the ability to generate reports in various formats, such as PDF, HTML, and CSV.
  • support the ability to create and manage threat intelligence dashboards, including the ability to customize the layout and content of the dashboards.
  • support the ability to create and manage threat intelligence alerts, including the ability to configure alert thresholds and notification mechanisms.

Acceptance criteria

See validation test case specification

Parent links: MRS-012 CyFORT CTI repository, MRS-039 CTI SKB content: from IDPS-ESCAPE

Child links: TST-017 Verify open-source TIP integration

Attribute Value
type A
urgency 5
vm R
release Alpha

1.29 CTI analysis engine SRS-045

SATRAP SHALL have a component in charge of the CTI analysis operational logic, mediating the interaction between the service layer and the data layer.

Rationale

For separation of duties in SATRAP and to support automation of CTI analysis tasks.

Acceptance criteria

None

Parent links: MRS-024 CTI analysis

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP, TST-019 Verify layered architecture of SATRAP

Attribute Value
type A
importance 5
urgency 5
vm R,I
release Alpha

1.30 CTI analysis toolbox SRS-046

SATRAP SHALL have a component providing end-user functionality accessible programmatically via the frontend. This functionality consists primarily of parametrizable high-level CTI queries.

Rationale

For separation of duties in SATRAP and to support automation of CTI analysis tasks.

Acceptance criteria

None

Parent links: MRS-023 Query parameterization, MRS-025 SATRAP-DL service, MRS-037 SATRAP as software library

Child links: ARC-001 SATRAP: System structure overview, ARC-002 Logical view of SATRAP, TST-019 Verify layered architecture of SATRAP

Attribute Value
type F,A
importance 5
urgency 5
vm T,R
release Alpha

1.31 OSINT feeds configuration and catalog SRS-047

The CyFORT CTI repository instantiated by MISP and/or OpenCTI SHOULD ingest a pre-configured set of OSINT feeds, informed by a catalog documenting relevant feeds per application domain (e.g. energy sector, space sector, IT company, etc.).

Rationale

To integrate up-to-date, relevant CTI in SATRAP according to the domain of interest. The catalog acts as a reusable knowledge base that supports the setup of new SATRAP deployments in diverse application domains.

Acceptance criteria

None

Parent links: MRS-013 OSINT ingestion

Attribute Value
type F
importance 4
urgency 2
vm R,I
release FID

2 DECIPHER SRS-DECIPHER

System-level requirements for DECIPHER.

2.1 DECIPHER infrastructure stack: deployment SRS-048

As a DECIPHER operator, I want to deploy the full DECIPHER infrastructure stack using a containerized approach with a single entrypoint script, so that I can bring up, configure, and tear down the required services reproducibly across environments.

I want the deployment component to:

  1. Include the following services comprising DECIPHER's infrastructure stack:
    • the threat intelligence platform MISP,
    • the case management system Flowintel, and
    • the DECIPHER REST analysis service, which provides a platform-independent, human and programmatic access point to CTI analysis
  2. Provide a single entrypoint script for selective deployment and startup of the stack components, e.g., deploying the DECIPHER service and Flowintel but not MISP, or deploying the entire stack.
  3. Provide a complementary teardown script analogous to the one defined above, with an additional option to remove associated data volumes.
  4. Use a configuration file to set environment variables for all services.
  5. Support addition of services (e.g. SATRAP for deeper CTI investigations) without modifying the deployment of existing services.

Rationale

A containerized, single-entrypoint deployment reduces operational friction when setting up and tearing down the DECIPHER incident handling pipeline. Selective service startup allows operators to deploy only the components desired in their environment without modifying the deployment artifacts.

Acceptance criteria

  1. Invoking the startup script with the full-stack flag starts the CTI platform, the case management system, and the analysis service containers and makes each service reachable on its configured port.
  2. Invoking the teardown script with the full-stack and volume-purge flags stops all running containers and removes their associated data volumes.
  3. Invoking the startup script with the analysis-service-only flag starts only the analysis service container; the CTI platform and case management system containers are not started.
  4. Copying the provided configuration template and setting the required environment variables is sufficient to configure all services before startup.

Parent links: MRS-035 Integration with open-source tools for incident handling

Child links: ARC-005 DECIPHER context diagram, TST-021 Test containerized deployment, ARC-006 DECIPHER infrastructure deployment diagram

Attribute Value
type F
importance 5
urgency 5
vm T
release Beta

2.2 DECIPHER REST service and API SRS-049

As a security engineer, I want to have programmatic access to a service that supports my automated incident handling workflow, so that I can efficiently analyze alerts concerning diverse threat scenarios with real-time CTI enrichment and create prioritized cases based on threat severity.

I want this service to:

  1. Be accessible through a REST API.
  2. Expose a dedicated endpoint to post alert data for a named threat scenario. As part of the response, I want to receive a severity score and the analysis report. This is called the analysis endpoint.
  3. Expose a dedicated endpoint to post a score and case data for a named threat scenario. I want this input to be used for creating a prioritized incident case in the case management platform. This is called the incidents endpoint.
  4. Expose a discovery endpoint that lists all registered alert types together with their input schemas. This is called the analyzers endpoint.
  5. Expose a health endpoint returning the service status and the number of loaded analyzers. This is called the health endpoint.
  6. Show information about the service version and documentation references in the home endpoint.
  7. Return structured, machine-readable results in JSON format following a predefined schema depending on the endpoint.
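To make item 7 concrete, the payload shapes of two of the endpoints could look as follows. The field names are illustrative assumptions, not the normative DECIPHER schemas:

```python
import json

def health_response(analyzer_count, log_level="info"):
    """Illustrative health endpoint payload: service status and loaded analyzers."""
    return {"status": "ok", "analyzers": analyzer_count, "log_level": log_level}

def home_response():
    """Illustrative home endpoint payload: version and documentation references."""
    return {"service": "DECIPHER", "version": "0.0.0-example",
            "docs": "see project documentation"}
```

Each payload serializes directly to JSON, as required for machine-readable results, e.g., `json.dumps(health_response(3))`.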

Rationale

A REST API with schema-validated endpoints enables integration in larger automated workflows with SIEM and orchestration platforms and supports operational monitoring. Structured results make it straightforward for orchestration systems and analysts to consume the output without additional parsing.

Dedicated analysis and incidents endpoints enable security teams to automate incident handling workflows and ensure that critical threats are escalated for human investigation, minimizing manual effort.

A standardized health check endpoint allows orchestration systems to automate service monitoring and proceed with service usage upon healthy status validation. A home endpoint with version and documentation information supports operational visibility and helps users access relevant resources.

Acceptance criteria

  1. An analysis endpoint that accepts a POST request with alert data for a named threat scenario is exposed. The endpoint returns a JSON response containing the scenario identifier, severity score, and analysis report.
  2. An incidents endpoint that accepts a POST request with a score and case data for a named threat scenario is exposed. The endpoint returns a JSON response enabling case creation in the case management platform.
  3. An analyzers endpoint that accepts GET requests and returns a JSON response listing all registered alert types with their input schemas is exposed.
  4. A health endpoint that accepts GET requests and returns a JSON response with the service status, count of loaded analyzers, and log level is exposed.
  5. A home endpoint that accepts GET requests and returns service version and documentation references in JSON format is exposed.
  6. All endpoints return structured, machine-readable results in JSON format with a predefined schema for each endpoint.

Parent links: MRS-035 Integration with open-source tools for incident handling, MRS-038 Platform-independent API, MRS-058 Automated alert triage and case escalation guided by CTI, MRS-059 Standalone incident escalation from an externally-computed triage score

Child links: ARC-005 DECIPHER context diagram, TST-022 Test DECIPHER REST service and API, ARC-008 DECIPHER microservice container diagram

Attribute Value
type F
importance 5
urgency 5
vm T
release Beta

2.3 DECIPHER service: analysis endpoint SRS-050

As a security engineer operating an alert management system and/or SIEM, I want to automatically submit security alerts to the analysis endpoint of the DECIPHER REST service, so that I can analyze diverse threat scenarios using real-time CTI and create prioritized cases without manual investigation.

I want this endpoint to:

  1. Accept a POST request with the identifier of one of the supported threat scenarios (a.k.a. alert type) as a parameter.
  2. Accept alert data for the selected threat scenario in the request body, validate it against the corresponding schema, and return structured error details in case of validation failure.
  3. Search for information on the alert IOCs in the integrated CTI platform (MISP).
  4. Compute a severity score using the retrieved threat intelligence, taking into account at least the following MISP concepts: event threat level, analysis stage, admiralty-scale tags, sighting counts, and presence of relevant threat identification tags (e.g., MITRE ATT&CK tags).
  5. Optionally (set through a configuration file), create a prioritized case in Flowintel pre-populated with the alert data, score breakdown, and a priority tag assigned according to configurable thresholds. If a template is available for the specific threat scenario, I want the case to be aligned with the corresponding case template.
  6. For each successfully processed request, return a structured, machine-readable analysis result containing the identifier of the analyzed scenario, a severity score in [0, 1] calculated from CTI aspects, the analysis report, and the ID and URL of the created case (or 0 and an empty string if no case was created). The analysis report contains at least a breakdown of the score calculation, the IDs of the related events found in MISP, and a summary of errors or relevant log messages.
  7. Support additional threat scenarios without modification to the code of the existing ones.
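A scoring function over the MISP concepts listed in item 4 can be sketched as a weighted combination normalized to [0, 1]. The weights and per-factor normalizations below are assumptions for illustration, not DECIPHER's actual formula:

```python
def severity_score(threat_level, analysis_stage, admiralty_credibility,
                   true_sightings, false_sightings, has_attack_tags):
    """Illustrative severity score in [0, 1].

    threat_level: MISP event threat level, 1 (high) .. 4 (undefined).
    analysis_stage: MISP analysis stage, 0 (initial) .. 2 (complete).
    admiralty_credibility: admiralty-scale credibility, 1 (confirmed) .. 6.
    Weights below are assumed, not normative.
    """
    level = {1: 1.0, 2: 0.66, 3: 0.33, 4: 0.0}.get(threat_level, 0.0)
    stage = analysis_stage / 2
    cred = max(0.0, (6 - admiralty_credibility) / 5)
    total = true_sightings + false_sightings
    sight = true_sightings / total if total else 0.0
    attack = 1.0 if has_attack_tags else 0.0
    score = 0.3 * level + 0.1 * stage + 0.2 * cred + 0.2 * sight + 0.2 * attack
    return round(score, 3)
```

The score breakdown (the individual factor values) would be included in the analysis report required by item 6.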

Rationale

A fully automated CTI analysis leveraging programmatic access to MISP and Flowintel provides a first-stage investigation based on commonly checked severity and confidence aspects, allowing analysts to focus on further aspects that benefit from human attention. The report provides a clear view of the aspects considered and the analysis findings. The optional case creation feature allows DECIPHER to be used either as a standalone analysis, scoring, and case handling service, or as part of a larger automated workflow that incorporates the CTI score into an external scoring and triage system.

Acceptance criteria

Assuming that MISP and Flowintel instances are correctly configured and enabled in the configuration files, and that at least one analyzer is registered in the DECIPHER service:

  1. A POST request to the analysis endpoint with valid alert data returns HTTP 200 with a JSON body containing the analyzed scenario, severity score, report, and created case information.
  2. A POST request with structurally invalid alert data returns HTTP 422 with field-level validation error details.
  3. A POST request referencing an unregistered alert type returns HTTP 404.
  4. A POST request that triggers an internal analysis failure returns HTTP 500.
  5. When case creation is enabled and the analysis completes successfully, a case is created in Flowintel and its ID appears in the created case information of the analysis result.
  6. When case creation is disabled, the analysis endpoint still returns a complete analysis response; however, no case is created in Flowintel and the created case information shows the default values id=0 and url="".
  7. When MISP search is disabled, the analysis returns a successful response with a severity score of 0 indicating in the report that MISP search was disabled.
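
The structured analysis result described in item 6 can be sketched as a small data model. This is a minimal sketch: the field names are illustrative assumptions, except the created case defaults id=0 and url="", which are stated in the acceptance criteria.

```python
from dataclasses import dataclass, field

@dataclass
class CreatedCase:
    # id=0 and url="" are the documented defaults when no case is created
    id: int = 0
    url: str = ""

@dataclass
class AnalysisResult:
    scenario: str                  # identifier of the analyzed threat scenario
    severity_score: float          # CTI-based score in [0, 1]
    report: list = field(default_factory=list)   # score breakdown, MISP event IDs, log messages
    created_case: CreatedCase = field(default_factory=CreatedCase)

# Example: analysis completed with MISP search disabled, no case created
result = AnalysisResult(scenario="suspicious_login", severity_score=0.0,
                        report=["MISP search disabled"])
```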

Parent links: MRS-035 Integration with open-source tools for incident handling, MRS-058 Automated alert triage and case escalation guided by CTI

Child links: ARC-007 RADAR-DECIPHER pipeline overview, TST-028 Test analysis endpoint core behavior

Attribute Value
type F
importance 5
urgency 5
vm T
release Beta

2.4 Analysis endpoint: IOC search in MISP for CTI enrichment SRS-052

As a security analyst, I want DECIPHER to query the integrated MISP instance for indicators of compromise (IOCs) extracted from an alert, so that the automated analysis is informed with available threat intelligence rather than relying solely on the raw alert data.

I want the enrichment to:

  1. Accept lists of values mapped to specific MISP attribute types (e.g., ip-src, ip-dst, target-user) and search each value against MISP.
  2. Exclude attributes identified in MISP warning lists from the results.
  3. Group the returned MISP attributes by event and convert them into an internal representation that carries threat level, analysis stage, admiralty-scale tags, identified attributes with a count of the true positive and false positive sightings, and whether threat identification tags are present.
  4. Deduplicate results across multiple IOC searches so that each MISP event appears only once.
  5. Return an empty result set without error when no MISP events are found for the submitted IOCs.
  6. Proceed without MISP enrichment and record the unavailability in the analysis report when the MISP client is not reachable or not configured.
  7. Allow configuring the time window and the result limit of the MISP search.
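
Items 1, 4, and 5 amount to a merge step over per-IOC search results. A minimal sketch, assuming simplified event dicts keyed by "id" (an illustrative shape, not the actual PyMISP response):

```python
def deduplicate_events(search_results):
    """Merge per-IOC MISP search results so that each event appears only
    once (item 4). `search_results` is a list of per-IOC result lists;
    an empty input yields an empty result without error (item 5)."""
    seen = {}
    for per_ioc in search_results:
        for event in per_ioc:
            seen.setdefault(event["id"], event)   # first occurrence wins
    return list(seen.values())

# Event 42 matched two different IOC searches but is kept once
merged = deduplicate_events([
    [{"id": 42, "threat_level": "high"}],
    [{"id": 42, "threat_level": "high"}, {"id": 7, "threat_level": "low"}],
])
```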

Rationale

IOC enrichment is the primary mechanism by which DECIPHER grounds its severity assessment in real threat intelligence. Deduplication prevents artificially inflating scores when several searched values point to the same MISP event. Skipping CTI enrichment when MISP is unavailable prevents blocking pipelines by allowing the analysis to continue and return a result.

Configurability of the search time window and result limit allows tuning the MISP search for different use cases and data volumes.

Acceptance criteria

  1. Submitting alert data containing an IOC that is present as an attribute (alone or as part of a MISP object) in a MISP event causes that event id to appear in the analysis report response.
  2. Submitting alert data where none of the IOCs is present in the MISP instance returns an analysis result with an empty list of events found and a severity of 0.
  3. A single MISP event matching multiple searched IOCs appears only once in the enrichment results.
  4. The time window and result limit for the MISP search can be configured and are respected in the search results.
  5. When MISP is unreachable, the analysis result informs of this situation and the severity is 0.

Parent links: MRS-035 Integration with open-source tools for incident handling, MRS-058 Automated alert triage and case escalation guided by CTI

Child links: TST-023 Test full workflow of analysis endpoint, ARC-009 REST service: analysis endpoint interaction, TST-026 Test analysis endpoint graceful degradation

Attribute Value
type F
importance 5
urgency 5
vm T
release Beta

2.5 Analysis endpoint: CTI-driven scoring engine for MISP SRS-053

As a security analyst, I want DECIPHER to compute a quantitative severity score from MISP threat intelligence events, so that I can prioritize incident response effort on the basis of an objective, reproducible assessment quantifying both the intrinsic danger of a threat and the degree of confidence in the trustworthiness of the assessment.

I want the scoring engine to:

  1. Compute a CTI-based score considering a severity component and a confidence component, both bounded in [0, 1].
  2. Derive the severity component from the threat level in MISP events and tags associating the event with a known threat.
  3. Derive the confidence component from evidence-based signals such as MISP sightings (true-positive and false-positive counts) and Admiralty-scale source reliability and information credibility tags.
  4. Return a score of 0 when no MISP events are provided.
  5. Output a detailed breakdown of the score so analysts can understand how it was computed.
  6. Allow all scoring weights and thresholds to be overridden through a YAML configuration file without requiring code changes, to allow calibration against historical analyst assessments.
  7. Reload the scoring configuration at runtime without service restart.

Requirements for the CTI severity score computation

Dual-axis design. The score must separately capture severity (how dangerous the threat is) and confidence (how much that assessment should be trusted). These factors are multiplied to produce the final score, so both must be strong simultaneously for the score to be high.

Score in [0,1]. The score ranges from 0 to 1, where 0 means no actionable input (no information, unconfirmed, or unreliable) while 1 means a maximally severe threat that has been fully confirmed by independent, reliable sources.

Zero by default. With no available information, the score must be exactly 0. Confidence must be earned, not assumed.

Score of 1 requires full confirmation. A score approaching 1 must require high threat level, completed analysis, strong empirical sighting support, and reliable Admiralty-scale provenance simultaneously.

Threat identification tags as a severity amplifier. The presence of relevant tags identifying the event or attribute as a potential threat (e.g., mitre-attack-pattern and mitre-intrusion-set) must boost severity multiplicatively, not additively. The presence of such tags should never manufacture severity where none exists.

Sightings as empirical confirmation. True positive and false positive sightings must be incorporated using a statistically principled method that rewards confirmed sightings and penalizes false positives. The absence of any sightings must contribute zero confidence, not neutral uncertainty.

Admiralty scale as provenance signal. Both admiralty-scale:source-reliability and admiralty-scale:information-credibility tags must be used. Attribute-level tags take precedence over event-level tags. Absent tags must contribute zero confidence, not a neutral default.

Graceful degradation on missing data. The formula must not collapse to zero when one confidence stream is missing. Missing data should degrade the score but not eliminate it, provided the remaining streams are strong.

Aggregation across attributes and events. Individual attribute confidence scores must aggregate to an event confidence score, and event scores must aggregate to a final score, using a method that compounds independent evidence correctly.

Scope fidelity. Each concept contributing to the calculation must be aggregated at the level where it is generated. Event-level concepts must not be embedded in per-attribute calculations, and attribute-level concepts must not bypass per-attribute aggregation.

Score computation

Step 1 — Severity (event-level)

Severity captures the intrinsic danger of the threat based on event metadata.

Threat level. The threat level of an event is mapped to a base severity score according to a configurable mapping; use defaults high = 1.0, medium = 0.5, low = 0.25. An undefined threat level maps to 0 in alignment with the zero-by-default requirement.

Threat tags multiplier. The occurrence of tags identifying the event or attribute as a potential threat (e.g., mitre-attack-pattern and mitre-intrusion-set) is checked at the event level and across all attributes. Presence anywhere in the event counts as 1; absence counts as 0.

tags_multiplier = 1.0 + 0.5 × tags_present
                = 1.5  if any threat tag is present
                = 1.0  otherwise

This factor is applied multiplicatively so that threat tag presence amplifies existing severity but cannot manufacture severity from nothing (a threat level of 0 remains 0 regardless).

Severity formula:

Severity = min(1.0, threat_level × tags_multiplier)

The min(1.0, ...) keeps the result bounded. Examples:

  • High + tags: min(1.0, 1.0 × 1.5) = 1.0

  • Medium + tags: min(1.0, 0.5 × 1.5) = 0.75

  • Medium, no tags: min(1.0, 0.5 × 1.0) = 0.50

  • Undefined + tags: min(1.0, 0.0 × 1.5) = 0.0
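
The severity step above can be sketched directly from the formula; the threat-level score is the output of the configurable mapping (defaults: high = 1.0, medium = 0.5, low = 0.25, undefined = 0.0):

```python
def severity(threat_level_score: float, tags_present: bool,
             tag_boost: float = 0.5) -> float:
    """Step 1: event-level severity. Tag presence amplifies multiplicatively
    (default boost 0.5), so a threat level of 0 remains 0 regardless."""
    multiplier = 1.0 + tag_boost * (1.0 if tags_present else 0.0)
    return min(1.0, threat_level_score * multiplier)
```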

Step 2 — Confidence

Confidence is structured across two tiers corresponding to the scope at which each signal is generated. Attribute-level evidence-based signals are aggregated first; the result is then combined with the event-level analyst judgment.

Tier 1 — Attribute-level confidence

Each attribute contributes an empirical confidence score from its own sightings and Admiralty tags.

2a — Sightings (Beta Posterior)

Sightings use a Beta distribution posterior to estimate the empirical confirmation rate from true positives (TP) and false positives (FP). The raw posterior is then transformed so that zero sightings contributes zero confidence rather than neutral uncertainty (0.5):

P_sightings = (TP + 1) / (TP + FP + 2)          # Beta posterior mean
C_sightings = max(0,  2 × P_sightings − 1)       # Rescaled to [0,1], clipped
            = max(0,  2 × (TP + 1) / (TP + FP + 2) − 1)

Expected behaviour:

  • No sightings (TP=0, FP=0): 2 × 0.5 − 1 = 0.0
  • All false positives: clips to 0.0
  • Balanced TP and FP: stays near 0.0
  • Strong true positives: approaches 1.0
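
A sketch of the sightings stream, following the rescaled Beta posterior above:

```python
def sightings_confidence(tp: int, fp: int) -> float:
    """2a: Beta-posterior mean of the confirmation rate, rescaled so that
    zero sightings yields 0 confidence rather than the neutral 0.5."""
    posterior_mean = (tp + 1) / (tp + fp + 2)
    return max(0.0, 2.0 * posterior_mean - 1.0)
```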

2b — Admiralty Scale

Resolve the Admiralty tags per attribute, with attribute-level tags taking precedence over event-level tags. Then map source reliability and information credibility from the Admiralty scale to [0, 1] using configurable mappings.

Source reliability "G — Deliberately deceptive", information credibility "6 — Truth cannot be judged", and the absence of any Admiralty tag at both event and attribute level all map to 0.

Both tags are combined using a geometric mean to produce the Admiralty confidence score:

C_admiralty = sqrt(source_reliability × info_credibility)

Unlike the arithmetic mean, the geometric mean ensures that a weak value in either dimension suppresses the result: if either source reliability or information credibility is zero, the combined score is zero regardless of the other dimension's strength. This reflects Admiralty scale semantics, where both axes are necessary conditions for trustworthiness, not interchangeable components.

2c — Attribute confidence weighted combination

The two attribute-scoped streams are combined as a weighted sum, where the weights are configurable:

C_attr = w_sightings × C_sightings + w_admiralty × C_admiralty

Default weights and rationale:

  • Sightings = 0.6: empirical confirmation from the community is the stronger attribute-level signal as it represents direct observation of the indicator in the wild.

  • Admiralty = 0.4: provenance quality is a supporting signal that modulates the reliability of the sighting evidence but does not substitute it.
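
Steps 2b and 2c combined in one sketch; the two Admiralty inputs are the configurable [0, 1] mappings of the resolved tags:

```python
import math

def attribute_confidence(c_sightings: float, source_reliability: float,
                         info_credibility: float,
                         w_sightings: float = 0.6,
                         w_admiralty: float = 0.4) -> float:
    """2b: geometric mean of the two Admiralty dimensions, so a zero in
    either one suppresses the Admiralty stream entirely; 2c: weighted
    combination with the sighting confidence (configurable defaults)."""
    c_admiralty = math.sqrt(source_reliability * info_credibility)
    return w_sightings * c_sightings + w_admiralty * c_admiralty
```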

2d — Attribute confidence aggregation (Noisy-OR)

Attribute confidence scores are aggregated via Noisy-OR, correctly modelling the intuition that independent converging indicators compound; each additional confirmed attribute pushes the evidence-based confidence upward:

C_evidence = 1 − ∏(1 − C_attr_i)

When all individual scores are 0, C_evidence is 0 and when a single score approaches 1, C_evidence is near 1.

The independence assumption underlying Noisy-OR is satisfied here because sightings and Admiralty tags are genuinely scoped to distinct attribute instances. MISP creates a separate attribute instance per event and maintains only a correlation link between instances of the same indicator across events, so each C_attr_i is an independent observation.
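
The Noisy-OR aggregation used here (and again in step 4) reduces to a short loop:

```python
def noisy_or(scores) -> float:
    """2d / Step 4: Noisy-OR aggregation — independent converging evidence
    compounds, and an empty input yields exactly 0."""
    remaining = 1.0
    for s in scores:
        remaining *= (1.0 - s)
    return 1.0 - remaining
```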

Tier 2 — Event-level confidence

2e — Analysis Stage

Map analysis stages to confidence scores according to a configurable mapping, with defaults:

C_analysis:  completed = 1.00
             ongoing = 0.5
             initial = 0.2

The initial stage maps to 0.2 rather than 0 because the event exists and has been looked at, but conclusions are not yet to be trusted.

2f — Event confidence combination

The maturity of the analysis and the aggregated evidence-based signal are combined as a weighted sum at event level:

Confidence = 0.5 × C_analysis + 0.5 × C_evidence

Rationale for weights (configurable):

  • Analysis (0.5): the analyst's direct assessment of the event is the primary epistemic signal. It reflects structured investigation and contextual judgment that cannot be reduced to indicator-level statistics.

  • Evidence (0.5): the aggregated attribute-level evidence is a co-equal primary signal. Strong empirical confirmation without analyst sign-off, and analyst sign-off without empirical grounding, should both produce the same partial confidence ceiling.

The weighted sum rather than a product ensures graceful degradation: if one stream is entirely absent, the other can still contribute meaningfully to the final score.

Note: all weights in both tiers are calibrated starting points. They should be validated against a labelled historical dataset and treated as tuneable parameters per deployment context.

Step 3 — Final score for an event

Score_event = Severity × Confidence

Expanding fully:

Score_event = min(1.0, threat_level × tags_multiplier)
      × ( 0.5 × C_analysis
        + 0.5 × ( 1 − ∏(1 − (0.6 × C_sightings_i + 0.4 × C_admiralty_i)) ) )

Step 4 — Aggregation across events

Finally, use Noisy-OR to aggregate multiple event scores into a final score:

Final score = 1 − ∏(1 − Score_event_i)
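
Putting steps 1–4 together, a compact end-to-end sketch with the default weights; attributes are passed as (C_sightings, C_admiralty) pairs already mapped to [0, 1], and the helper names are illustrative:

```python
def event_score(threat_level: float, tags_present: bool, c_analysis: float,
                attributes) -> float:
    """Score_event = Severity × Confidence for a single MISP event.
    `attributes` is a list of (c_sightings, c_admiralty) pairs from
    steps 2a and 2b."""
    severity = min(1.0, threat_level * (1.5 if tags_present else 1.0))
    remaining = 1.0
    for c_sightings, c_admiralty in attributes:
        c_attr = 0.6 * c_sightings + 0.4 * c_admiralty   # 2c: weighted sum
        remaining *= (1.0 - c_attr)
    c_evidence = 1.0 - remaining                         # 2d: Noisy-OR
    confidence = 0.5 * c_analysis + 0.5 * c_evidence     # 2f: event tier
    return severity * confidence

def final_score(event_scores) -> float:
    """Step 4: Noisy-OR across events; no events scores exactly 0."""
    remaining = 1.0
    for s in event_scores:
        remaining *= (1.0 - s)
    return 1.0 - remaining
```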

Rationale

A transparent, formula-driven score that incorporates threat level, analytic confidence, empirical sightings, and intelligence credibility produces assessments that analysts can inspect and calibrate over time. The Noisy-OR aggregation correctly reflects the increasing probability of a genuine threat as independent evidence accumulates, without allowing a single weak signal to dominate.

After initial deployment, analysts collect verdicts on a sample of scored events and use them to empirically validate or adjust the weights used throughout the computation. Configurable parameters allow the scoring computation to be tuned by the user as per these observations.

Acceptance criteria

The following scenarios are verified, constrained to the default mappings and weights:

Scenario | Severity | Confidence | Score
No information at all | 0.0 | 0.0 | 0.0
High threat, no analysis, no sightings, no Admiralty | 1.0 | 0.5×0 + 0.5×0 = 0.0 | 0.0
High + tags, completed, many TPs, A/1 Admiralty | 1.0 | 0.5×1.0 + 0.5×(→1.0) → 1.0 | 1.0
Medium, completed, many TPs, A/1 Admiralty | 0.5 | → 1.0 | 0.5
High + tags, completed, many TPs, no Admiralty | 1.0 | 0.5×1.0 + 0.5×0.6×(→1.0) ≈ 0.80 | ≈ 0.80
High + tags, completed, no sightings, A/1 Admiralty | 1.0 | 0.5×1.0 + 0.5×0.4×1.0 = 0.70 | 0.70
High + tags, ongoing, some TPs, B/2 Admiralty | 1.0 | 0.5×0.5 + 0.5×~0.6 ≈ 0.55 | ≈ 0.55

The fifth row illustrates graceful degradation: a highly confirmed, high-severity event with no Admiralty tags still scores approximately 0.80 rather than collapsing, because the sightings stream carries sufficient weight within the attribute tier.

The sixth row illustrates the symmetric case: strong Admiralty with no sightings yields 0.70, reflecting that provenance alone is a meaningful but incomplete confirmation signal.

Parent links: MRS-058 Automated alert triage and case escalation guided by CTI

Child links: TST-024 Test runtime-configurable DECIPHER features, ARC-010 Analysis endpoint: scoring data flow diagram

Attribute Value
type F
importance 5
urgency 5
vm T
release Beta

2.6 Analysis endpoint: optional creation of prioritized case SRS-054

As a security analyst, I want the DECIPHER analysis endpoint to offer an option for the automatic creation of a prioritized incident case in Flowintel for each executed analysis, so that incident responders have an auditable trace of the analysis with context and a suitable priority level.

I want the case creation service to:

  1. Map the computed severity score to a priority tag (e.g. low, medium, high, severe) using configurable score thresholds.
  2. Populate the case with a title that includes the threat scenario identifier and a creation timestamp.
  3. Populate the case description with the alert data, severity score, and a human-readable score breakdown.
  4. Create the case using a scenario-specific template when one is defined in the template catalog, and fall back to a template-free creation otherwise.
  5. Return the Flowintel case ID and a direct URL to the created case as part of the analysis result.
  6. Skip case creation and record the reason in the analysis report when case creation is disabled in the runtime configuration.
  7. Record the failure in the analysis report without interrupting the analysis result when Flowintel is unreachable or returns an error.
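
Item 1 is a simple threshold lookup. A sketch with illustrative default boundaries; the normative values come from the runtime configuration:

```python
def priority_tag(score: float, thresholds=None) -> str:
    """Map a severity score in [0, 1] to a priority tag using configurable
    (boundary, tag) pairs ordered from highest to lowest boundary.
    The default boundaries below are illustrative assumptions."""
    thresholds = thresholds or [(0.85, "severe"), (0.6, "high"),
                                (0.3, "medium"), (0.0, "low")]
    for boundary, tag in thresholds:
        if score >= boundary:
            return tag
    return "low"
```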

Rationale

Automatic case creation with pre-populated context and analysis results eliminates the manual step between alert triage and case management, reducing response latency. Configurable priority thresholds allow organizations to tune escalation to their operational constraints.

Acceptance criteria

  1. When case creation is enabled for the analysis in the configuration file, a case is created in Flowintel after a successful analysis. The ID of the created case appears in the created_case.id field of the analysis result and the URL to access the case in Flowintel appears in the created_case.url field.
  2. The priority tag assigned to a case corresponds to the level defined in the configuration file for the resulting score.
  3. The case description contains the alert data, the computed severity score, and a breakdown of the score in human-readable format.
  4. When a template is defined for the analyzed threat scenario, the created case is aligned with that template. When no template is defined for the analyzed threat scenario, the case contains only a description with the report on the analysis.
  5. When case creation is disabled in the configuration file, the analysis result is returned with created_case.id equal to 0 and the report lists the skip reason.
  6. When Flowintel is unreachable, the analysis result is still returned with a valid severity score; the report contains an entry indicating case creation failure.

Parent links: MRS-035 Integration with open-source tools for incident handling, MRS-058 Automated alert triage and case escalation guided by CTI

Child links: TST-023 Test full workflow of analysis endpoint, ARC-009 REST service: analysis endpoint interaction, TST-026 Test analysis endpoint graceful degradation

Attribute Value
type F
importance 4
urgency 3
vm T
release Beta

2.7 DECIPHER service: incidents endpoint SRS-055

As a security engineer, I want to use the DECIPHER service incidents endpoint for creating a case in Flowintel by providing a score that determines a priority assignment and data for informing the case, so that I can automatically escalate incidents as part of my incident handling workflow.

I want the incident creation endpoint to:

  1. Accept a POST request with the identifier of one of the supported threat scenarios (a.k.a. alert type) as a parameter.
  2. Accept a severity score in [0, 1], an optional case title, and arbitrary additional fields defining the Flowintel case in JSON format, all provided in a single request body.
  3. Map the provided score to a priority tag using configurable thresholds.
  4. Use the alert type endpoint parameter to select a case template for the specific scenario when available.
  5. Format non-score metadata fields into the case description.
  6. Return the created case ID and a direct URL to the case in Flowintel.
  7. Reject requests with a score outside [0, 1] with a structured validation error.
  8. Reject requests with an unrecognized alert type with an appropriate error.

Rationale

Decoupling case creation from the internal analysis workflow allows external tools (e.g. SIEM active responses, SATRAP notebooks, third-party SOAR playbooks) to integrate DECIPHER's incident creation as part of existing workflows. This broadens the interoperability of the SATRAP-DL ecosystem.

Acceptance criteria

  1. A POST request to the incident endpoint with a valid score and a registered alert type returns HTTP 200 with a JSON body containing id (non-zero integer) and link (string starting with the configured Flowintel base URL).
  2. A POST request to the incident endpoint with a score of 1.5 returns HTTP 422 with a validation error message.
  3. A POST request to the incident endpoint with an unrecognized alert type returns HTTP 404.
  4. The priority tag of the created case corresponds to the tier matching the provided score according to the configured thresholds.
  5. Additional metadata fields provided in the request body appear in the case description in Flowintel.

Parent links: MRS-035 Integration with open-source tools for incident handling, MRS-038 Platform-independent API, MRS-059 Standalone incident escalation from an externally-computed triage score

Child links: TST-025 Test incidents endpoint, ARC-011 REST service: incident endpoint interaction

Attribute Value
type F
importance 4
urgency 5
vm T
release Beta

2.8 Runtime-configurable DECIPHER features SRS-057

As a DECIPHER operator, I want to enable or disable DECIPHER features for the analysis and incident creation, and tune their parameters at runtime without restarting or recompiling the service, so that I can adapt the system to the deployment environment and to evolving operational requirements.

I want the runtime configuration to control:

  1. Whether the MISP IOC search step is executed during alert analysis (enabled by default).
  2. Whether an incident case is automatically created in Flowintel after a successful analysis (disabled by default).
  3. Additional search parameters forwarded to MISP (e.g. time window, result limit).
  4. The score thresholds that map a severity score to a priority tier (low, medium, high, severe).

The configuration SHALL be loaded from a YAML file at analysis and incident creation time so that changes take effect for the next analysis request without restarting the service.
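
A runtime configuration file covering the four items above could look as follows; priority_thresholds matches the key named in the acceptance criteria, while the remaining key names are illustrative assumptions:

```yaml
# decipher_runtime.yaml — re-read at each analysis/incident request
misp_search:
  enabled: true          # item 1: run the MISP IOC search during analysis
  timeframe: "365d"      # item 3: search time window (assumed key name)
  limit: 50              # item 3: result limit (assumed key name)
case_creation:
  enabled: false         # item 2: disabled by default
priority_thresholds:     # item 4: lower score bound per priority tier
  severe: 0.85
  high: 0.6
  medium: 0.3
  low: 0.0
```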

Rationale

Operational environments differ in which components are available and what escalation policies apply. Runtime flags and threshold configuration avoid hard-coded coupling between DECIPHER and specific tool availability, and allow operators to tune prioritization without code changes.

Acceptance criteria

  1. Disabling MISP search in the runtime configuration causes the analysis to return a severity of 0 and include a message in the report informing that MISP search is not enabled. MISP client initialization is not attempted.
  2. Disabling case creation in the runtime configuration causes the analysis to return created_case.id equal to 0 and include a skip reason in the report, without contacting Flowintel.
  3. Changing the priority_thresholds in the runtime configuration and submitting an analysis request with a score at the boundary of the new threshold results in the updated priority tier being assigned.
  4. Changes to the runtime configuration YAML file are reflected in the next analysis request without restarting the DECIPHER service process.
  5. An invalid YAML file fails gracefully and logs detailed error messages.

Parent links: MRS-058 Automated alert triage and case escalation guided by CTI, MRS-059 Standalone incident escalation from an externally-computed triage score

Child links: TST-028 Test analysis endpoint core behavior, ARC-008 DECIPHER microservice container diagram, TST-024 Test runtime-configurable DECIPHER features

Attribute Value
type F
importance 5
urgency 4
vm T
release Beta

2.9 Supported analysis for threat scenario: suspicious login SRS-051

As a security engineer, I want to use the DECIPHER analysis endpoint to automatically obtain a CTI analysis score of alerts related to suspicious login activity, so that I can automate next steps in my incident handling workflow.

Examples of suspicious login scenarios include:

  • Failed-login burst: ≥ 5 failed login attempts with the same user within 60 seconds.

  • Impossible travel: a failed or successful login of a user from country X followed by a successful login of the same user from country Y, where the computed travel speed between X and Y exceeds 900 km/h (faster than commercial air travel).
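
The impossible-travel example reduces to a great-circle speed estimate. A sketch using the haversine formula with the 900 km/h threshold from the example; geolocation of the source IPs is assumed to happen upstream:

```python
import math

def travel_speed_kmh(lat1, lon1, t1, lat2, lon2, t2):
    """Great-circle (haversine) distance in km between two login
    locations, divided by the elapsed time in hours; t1 and t2 are
    UNIX timestamps."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    distance_km = 2 * earth_radius_km * math.asin(math.sqrt(a))
    hours = abs(t2 - t1) / 3600.0
    # Simultaneous logins from two places are trivially impossible travel
    return float("inf") if hours == 0 else distance_km / hours

def impossible_travel(lat1, lon1, t1, lat2, lon2, t2, threshold_kmh=900.0):
    return travel_speed_kmh(lat1, lon1, t1, lat2, lon2, t2) > threshold_kmh
```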

For this threat scenario, I want to:

  1. Make a POST request to the DECIPHER analysis endpoint with the suspicious_login alert/threat scenario identifier.
  2. Include in the body of my request the username, IP of the target host, source IP addresses, and timestamp of the detected activity.

I want the DECIPHER REST service to:

  1. Find MISP events containing the alert IOCs mapped to the following MISP attribute types: alert.src_ips → "ip-src", alert.target_host → "ip-dst", alert.username → "target-user".
  2. Compute a severity score as specified in SRS-050 and SRS-053.
  3. Give me the option, by means of a configuration file, to create a prioritized Flowintel case, using a template with predefined tasks for a suspicious login threat scenario.
  4. Return a structured analysis result as specified in SRS-050.

Rationale

To reduce alert fatigue by automating the triage and escalation of suspicious login incidents, ensuring that responders only handle cases supported by threat intelligence evidence and receive the context needed to act immediately.

Acceptance criteria

  1. A POST request to the DECIPHER analysis endpoint with the suspicious_login identifier and a body containing username, target host IP, source IP addresses, and timestamp triggers the analysis pipeline and returns a structured analysis result with a severity score and a score breakdown.
  2. When any IOC matches a known malicious indicator in the CTI platform, the resulting severity score is non-zero. If case creation is enabled in the runtime configuration file, a prioritized Flowintel case is created and its ID and URL are returned as part of the analysis response.
  3. The created case carries a priority tag consistent with the computed severity score and the configured escalation thresholds, and includes the alert data and score breakdown in its description.
  4. When no matching threat intelligence is found for any of the alert IOCs, the severity score is 0 and no case is created.
  5. When the MISP service is unavailable, the analysis endpoint handles the failure gracefully, returning severity 0 and including an error message in the analysis report without interrupting the response.
  6. The pipeline operates end-to-end without manual intervention, from alert submission to case creation.

Parent links: MRS-035 Integration with open-source tools for incident handling, MRS-038 Platform-independent API, MRS-058 Automated alert triage and case escalation guided by CTI, MRS-060 Multi-scenario threat coverage and extensibility

Child links: TST-023 Test full workflow of analysis endpoint, TST-026 Test analysis endpoint graceful degradation, ARC-012 Analysis endpoint: support for suspicious login

Attribute Value
type F
importance 4
urgency 4
vm T
release Beta

2.10 Extensible analyzer framework SRS-056

As a security developer, I want DECIPHER to enable the implementation of multiple threat scenario analyzers without modification to the core service logic, so that the platform can evolve to cover emerging threat types as operational needs grow.

I want the analyzer framework to:

  1. Allow new analyzers to be registered and implemented adopting architectural patterns from existing analyzers, without requiring changes to the API layer or existing analyzers.
  2. Prevent duplicate registration of the same threat scenario identifier and surface the conflict as a startup error.
  3. Enforce schema-based input validation for each analyzer independently, so that invalid data for one analyzer does not affect others.
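
A minimal sketch of such a registry, satisfying items 1 and 2; the class and method names are illustrative, not the actual DECIPHER contract:

```python
class DuplicateAnalyzerError(Exception):
    """Raised at registration time so the service fails to start (item 2)."""

class AnalyzerRegistry:
    """Analyzers self-register under a unique threat scenario identifier;
    duplicates are rejected eagerly, and lookups are isolated per scenario."""
    def __init__(self):
        self._analyzers = {}

    def register(self, scenario_id: str, analyzer):
        if scenario_id in self._analyzers:
            raise DuplicateAnalyzerError(
                f"analyzer already registered for '{scenario_id}'")
        self._analyzers[scenario_id] = analyzer

    def get(self, scenario_id: str):
        return self._analyzers.get(scenario_id)   # None → HTTP 404 upstream

    def identifiers(self):
        return sorted(self._analyzers)            # for the discovery endpoint

registry = AnalyzerRegistry()
registry.register("suspicious_login", object())
```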

Rationale

To isolate each threat scenario in a self-contained unit. Contributors can add new analyzers by following the documented class contract, without touching the API layer or the existing analyzers, which keeps the system maintainable as coverage grows.

Acceptance criteria

  1. Creating and registering a new analyzer in the codebase causes its identifier to appear in the discovery endpoint response without any changes to the REST API.
  2. Attempting to register two analyzers with the same identifier raises an error at registration time. The REST service cannot be started.
  3. Submitting a request to the analysis endpoint for one registered alert type with data valid for a different registered analyzer returns an HTTP validation error.

Parent links: MRS-060 Multi-scenario threat coverage and extensibility

Child links: ARC-008 DECIPHER microservice container diagram, ARC-009 REST service: analysis endpoint interaction, TST-027 Test extensible analyzer framework

Attribute Value
type A
importance 4
urgency 5
vm I
release Beta

3 Non-functional requirements SRS-NFun

System-wide low-level requirements on code quality, security, compliance, and open-source release, applicable to all SATRAP-DL components.

3.1 Centralized logging SRS-021

The logs of the system SHALL be handled in a central location.

Rationale

In agreement with clean code and best practices for software development, to promote code maintainability.

Acceptance criteria

See validation test case specification

Parent links: MRS-044 Modular architecture

Child links: TST-005 Verify centralized management

Attribute Value
type Q
urgency 2
vm I
release All

3.2 Centralized exception handling SRS-022

SATRAP-DL components SHALL manage exceptions in a centralized manner, providing actionable error messages and ensuring that unhandled exceptions do not propagate unexpectedly across component boundaries.

Rationale

In agreement with clean code and best practices for software development, to promote code maintainability and system robustness.

Acceptance criteria

See validation test case specification

Parent links: MRS-044 Modular architecture

Child links: TST-005 Verify centralized management

Attribute Value
type Q
urgency 5
vm I
release All

3.3 Design and implementation principles SRS-024

The design and implementation of SATRAP SHALL adhere to software engineering best practices such as naming conventions, clean code, and the SOLID principles.

Rationale

Among others, for maintainability, security, reliability and robustness of code.

Acceptance criteria

See validation test case specification

Parent links: MRS-044 Modular architecture, MRS-046 C5-DEC compliance

Child links: TST-002 Verify software engineering practices, ARC-003 ETL high-level design, ARC-004 ETL components

Attribute Value
type A, S, Q
urgency 5
vm R
release Alpha

3.4 Code readability SRS-025

The source code of SATRAP-DL SHALL be self-explanatory and well documented.

Rationale

To support maintainability, extensibility and adoption of the software.

Acceptance criteria

See validation test case specification

Parent links: MRS-046 C5-DEC compliance

Child links: TST-006 Verify code clarity

Attribute Value
type Q
urgency 5
vm I
release All

3.5 Public release SRS-026

The source code of SATRAP-DL SHALL be released in a GitHub public repository.

Rationale

Open-source releases allow contributions and usage by the community, which in turn foster adoption and a constant exchange of feedback.

Acceptance criteria

See validation test case specification

Parent links: MRS-051 Open-source releases

Child links: TST-018 Verify release and licensing

Attribute Value
type C
urgency 3
vm I
release All

3.6 Open-source licensing SRS-027

Third-party libraries used in SATRAP-DL SHALL have open-source licenses that do not restrict the privileges granted by the license selected for SATRAP-DL.

Rationale

To avoid the introduction of limitations in the distribution and use of SATRAP-DL derived from the use of third-party software.

Acceptance criteria

See validation test case specification

Parent links: MRS-051 Open-source releases

Child links: TST-018 Verify release and licensing

Attribute Value
type C
urgency 3
vm A
release All

3.7 Input validation SRS-028

SATRAP-DL components receiving external input SHALL validate the input and reject it if validation fails. The validation may include integrity checks, syntactic checks, semantic checks, parameter out-of-range checks, etc.
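As an illustration, a validator combining a syntactic check (a STIX-style identifier pattern) with a parameter range check, rejecting the input on failure. The specific fields and pattern are assumptions for the example, not SATRAP-DL's actual validation rules.

```python
import re

# Illustrative STIX-style identifier pattern: "<type>--<UUID>".
STIX_ID = re.compile(
    r"^[a-z][a-z0-9-]*--[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}"
    r"-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def validate_indicator(obj: dict) -> dict:
    """Reject external input that fails syntactic or range checks."""
    if not STIX_ID.match(obj.get("id", "")):
        raise ValueError("invalid STIX identifier")
    confidence = obj.get("confidence", 0)
    if not isinstance(confidence, int) or not 0 <= confidence <= 100:
        raise ValueError("confidence out of range [0, 100]")
    return obj
```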

Rationale

To prevent code injection.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Child links: ARC-003 ETL high-level design, ARC-004 ETL components, TST-007 Verify secure programming

Attribute Value
type S
urgency 5
vm I
release FID

3.8 Input sanitization SRS-029

SATRAP-DL components SHALL perform sanitization of input and output (data passed across a trust boundary). Sanitization may include removing, replacing, encoding, or escaping unwanted characters.
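A minimal sanitization sketch: control characters are removed and double quotes are escaped before a value crosses a trust boundary (e.g., into a query string). The exact character policy is an assumption for illustration.

```python
import re

# ASCII control characters, including CR/LF and DEL.
_CONTROL = re.compile(r"[\x00-\x1f\x7f]")

def sanitize(text: str) -> str:
    """Strip control characters and escape double quotes before the
    value is passed across a trust boundary."""
    cleaned = _CONTROL.sub("", text)
    return cleaned.replace('"', '\\"')
```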

Rationale

To prevent code injection.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Child links: ARC-003 ETL high-level design, ARC-004 ETL components, TST-007 Verify secure programming

Attribute Value
type S
urgency 5
vm I
release FID

3.9 Resource management SRS-030

Network connections and other acquired resources SHALL be properly terminated and released when no longer required.
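In Python this is typically enforced with context managers, which release a resource even on error paths. The sketch below uses a local socket pair to stand in for a real connection (e.g., to a TypeDB endpoint).

```python
import socket
from contextlib import closing

# A local socket pair stands in for a real network connection.
a, b = socket.socketpair()
with closing(a), closing(b):
    a.sendall(b"ping")
    data = b.recv(4)
# Both sockets are closed here, whether or not the body raised.
```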

Rationale

To prevent data leakage and denial-of-service (DoS) attacks.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Child links: TST-007 Verify secure programming

Attribute Value
type S
urgency 5
vm I
release All

3.10 Code static analysis SRS-031

The code of SATRAP-DL SHALL be statically analyzed using appropriate tooling to identify potential issues. The static analysis of Python code shall aim to detect issues related to:

  • error handling
  • commented-out code
  • input validation
  • code injection
  • concurrency and race conditions (if applicable)
  • canonical representation vulnerabilities
  • an excessive number of dependencies

Rationale

To detect common security vulnerabilities in an automated way.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Attribute Value
type S
importance 5
urgency 3
vm I, A
release FID

3.11 Dependencies management SRS-032

All software dependencies including third-party libraries SHALL be listed and maintained in a configuration file.

Rationale

To enforce a centralized control over external dependencies.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Child links: TST-007 Verify secure programming

Attribute Value
type S
urgency 5
vm I
release FID

3.12 Log validation SRS-036

Log strings SHALL be sanitized and validated before being written to the logs.
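A typical log-injection vector is CR/LF in attacker-controlled values, which can forge additional log records. A minimal neutralization sketch (the escaping policy and length limit are illustrative assumptions):

```python
def safe_log_value(value: str, max_len: int = 1024) -> str:
    """Escape newlines and truncate a value before it is logged,
    so it cannot forge extra log records."""
    sanitized = value.replace("\r", "\\r").replace("\n", "\\n")
    return sanitized[:max_len]
```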

Rationale

To prevent log injection attacks.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Child links: TST-007 Verify secure programming

Attribute Value
type S
urgency 4
vm I
release FID

3.13 Sensitive information SRS-037

SATRAP SHALL NOT log sensitive information such as passwords or entity identifiers.

Rationale

To avoid intended or unintended leakage of sensitive information.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Child links: TST-007 Verify secure programming

Attribute Value
type S
urgency 5
vm I
release Alpha

3.14 Software identification SRS-038

Any deployment build of SATRAP-DL SHALL provide the means to retrieve its version and other identification details.
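For a Python distribution this can be met with `importlib.metadata`. The distribution name `satrap-dl` and the fallback constant are assumptions; a fallback covers source checkouts without installed package metadata.

```python
from importlib import metadata

# Assumed fallback for builds without installed package metadata.
_FALLBACK_VERSION = "0.0.0+unknown"

def get_version(dist: str = "satrap-dl") -> str:
    """Return the installed version of the given distribution,
    or a fallback identifier when metadata is unavailable."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return _FALLBACK_VERSION
```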

Rationale

To inform the user of the specific version of the system that is being used, often required for consulting user manuals, reporting bugs, etc.

Acceptance criteria

See validation test case specification

Parent links: MRS-053 Secure programming compliance

Attribute Value
type F,S
urgency 3
vm T
release All

3.15 Authentication and authorization SRS-040

For automated ingestion from and enrichment of the CyFORT CTI repository, SATRAP SHALL rely on the TIP's built-in solution for user authentication and authorization, e.g., OpenCTI, MISP LDAP, or native user management.

Rationale

To enforce user identification and resource access authorization by building on well-established solutions.

Acceptance criteria

See validation test case specification

Parent links: MRS-056 Access control

Attribute Value
type S, A
importance 4
urgency 3
vm I
release Beta

3.16 Encrypted data transport for external service connections SRS-058

SATRAP-DL SHALL encrypt all data in transit between components crossing a trust boundary using TLS 1.2 or higher, independently of any network-layer encryption provided by underlying VPN infrastructure if in place. This includes network connections to TypeDB, MISP, and Flowintel instance endpoints.
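In Python, the minimum protocol version can be pinned on the client-side TLS context, independently of any VPN in place. This sketch uses only the standard `ssl` module; certificate material and endpoint names are deployment-specific and omitted.

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses anything below TLS 1.2
    and verifies the server certificate and hostname."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The same context can then be handed to whichever client library opens the connection (e.g., an HTTP or database driver that accepts an `ssl.SSLContext`).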

Rationale

Transport layer encryption ensures that data exchanged between SATRAP-DL and external services (knowledge base, threat intelligence platform, case management) is protected from eavesdropping and tampering when traversing untrusted network segments. VPN tunnels terminate at network gateways; application-layer TLS ensures end-to-end confidentiality for sensitive data regardless of VPN topology.

Acceptance criteria

See validation test case specification

Parent links: MRS-057 Secure channels to the CyFORT ecosystem

Child links: ARC-006 DECIPHER infrastructure deployment diagram

Attribute Value
type S
importance 4
urgency 3
vm I
release All

3.17 API based on OAS SRS-016

The API of SATRAP-DL SHALL comply with the OpenAPI Specification (OAS) standard.

Rationale

To enable automatic generation of documentation, automated API testing and validation, and a language-agnostic human and machine-readable specification.

Acceptance criteria

See validation test case specification

Parent links: MRS-038 Platform-independent API

Attribute Value
type C
urgency 2
vm R
release FID