Our core mission is to deliver solutions that reduce incident response investigation times through unparalleled forensic-level visibility, automation, speed and collaboration.
You can also view our support terms and conditions.
A brief introduction to Binalyze AIR
Automated Investigation and Response platform
Binalyze AIR is an Automated Investigation and Response platform that provides the most complete feature set for remotely collecting 680+ different evidential artifact types in minutes across multiple operating system platforms. It's lightning-fast and extremely easy to use.
AIR accelerates your investigation process via a comprehensive, integrated analyzer called DRONE. DRONE's findings for multiple assets are presented in a single pane of glass, the Investigation Hub.
AIR will perform simultaneous triage on thousands of assets using YARA, Sigma, and osquery rules.
AIR protects employee privacy with targeted collections when required. It also captures the 'forensic state' of multiple assets and presents this information in an Investigation Hub.
The Investigation Hub serves as an all-encompassing, user-friendly DFIR intelligence resource. This unifying Investigation Hub consolidates Acquisition and Triage data from all assets, presenting it in an easily digestible format. It also integrates DRONE analyzer findings through intuitive graphical visualizations, thereby identifying the most critical machines that warrant further immediate, focused investigation or remediation. The Investigation Hub streamlines the investigative process by:
Providing actionable findings to prioritize and guide investigators,
Offering comprehensive listings of all evidential artifacts,
Including a range of advanced filtering options, and
Featuring a powerful global search capability.
The AIR console is very simple to deploy, and thanks to it being Docker-based, it can easily be deployed on-premise or on a server in AWS or Azure Clouds.
The AIR platform integrates with your existing SIEM, SOAR solutions, and many EDR products. This is done via Webhooks and an open API that empowers analysts to automate the response phase of IR.
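As an illustration of how a SIEM/SOAR playbook might invoke AIR via a webhook, the sketch below posts an alert to a webhook trigger. The console address, URL path, trigger name, and token are all hypothetical placeholders, not Binalyze's documented endpoint format — consult your own console's webhook configuration for the real URL:

```shell
# Hypothetical example: fire an AIR webhook trigger from a SOAR playbook.
# CONSOLE, TRIGGER_NAME, and TOKEN are placeholders, not real values.
CONSOLE="https://air.example.com"
TRIGGER_NAME="edr-high-severity"
TOKEN="example-token"

curl -s -X POST \
  "${CONSOLE}/api/webhook/${TRIGGER_NAME}/${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"hostname": "WS-0042", "alert": "credential-dumping"}'
```

A call like this is what lets an alert in your SIEM/SOAR/EDR kick off evidence collection without analyst intervention.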
So, all forensic collections can be scheduled, automated, remote, and scalable.
With evidence hashing, AES256 encryption, and RFC3161 time-stamping, the Chain of Custody for evidence handling by AIR is completely secure.
Other key features include our patent-pending Baseline Comparison technology. This allows you to be more proactive and focused in the way you target your efforts: you can compare acquisitions against one another and easily identify additions, changes, and deletions in key system areas often exploited by attackers.
AIR helps you cut through the noise of security data with live YARA, Sigma, and osquery scanning combined with rapid keyword searching, automated post-acquisition analysis, and Event Scoring.
These features all combine to enable most digital forensics investigations to be concluded in less than 4 hours - which is a dramatic improvement over what is commonly achieved today with other solutions.
A brief overview of system architecture
Binalyze AIR is an on-premise or cloud-based, client-server solution that allows you to remotely perform various tasks on assets such as collecting forensic evidence and performing triage with YARA, Sigma, or osquery.
Management Console is a web-based application that can be viewed from any device with an up-to-date browser.
Assets are connected to the management console via a lightweight "passive" responder that can be deployed manually or via other mechanisms such as SCCM.
AIR responders:
DO NOT scan anything on the asset that may cause slowdowns (e.g. your Antivirus),
DO NOT block anything on the asset that may cause false positives (e.g. your DLP),
DO NOT create any alerts that may cause "alert fatigue".
A note on Cloud Infrastructure
All of the web services and API backends listed below are hosted on Microsoft Azure, preferably in East/West US datacenters, and are protected by Cloudflare.
UPDATE – Used by AIR Server instances to check whether a new version is available for update.
LICENSE – Used by AIR Server instances to check license information.
TIMESTAMP – Used by the AIR Server for RFC 3161 features, which require integration with a timestamp server.
UPDATE – Used by AIR Server instances to update artifacts such as MITRE ATT&CK rules, Docker Compose files, update scripts, and offline installer packages.
FIS USAGE STATS / FEATURE FLAGS / USAGE ANALYTICS – Used by AIR Server instances to:
collect case activity and Organization ID metrics for FIS license charges/billing,
enable/disable features on AIR (feature flag service), and
analyze usage statistics.
UPDATE – A container registry used by AIR Server instances to update server components such as the application server images, database images, and caching server images.
Data sent to and received from each domain:

UPDATE (version check) – Sent: N/A. Received: Version Information.
LICENSE – Sent: License Key. Received: License Status Details.
TIMESTAMP – Received: RFC-3161 Timestamp Token.
UPDATE (artifacts) – Sent: N/A. Received: Packages.
FIS USAGE STATS – Sent: Organization IDs, Case ID, License Key, CaseEventType, CaseEventTime, Endpoint ID, Task ID. Example: "logId": 764149386100000, "type": "endpointTaskAddedToCaseEvent", "publishedDate": "2022-06-03T10:22:18.610Z", "data": { "caseId": "C-2022-0028", "endpointId": "2b2ea7b0-be61-445c-b735-ac1a9a39e448", "taskAssignmentId": "2b1d5b2c-72ac-4828-9a82-b3510ce9fd5a" }, "license": "LICENSE-KEY"
FEATURE FLAGS – Sent: License Key. Received: Feature flag states.
USAGE ANALYTICS – Sent: Amplitude event structure. Received: N/A.
UPDATE (container registry) – Sent: N/A. Received: Binary Packages.
How do assets communicate with the console?
All routine communication between assets and the AIR console is initiated by the assets—they do not receive incoming requests from external sources. Communication occurs through various protocols and channels:
HTTPS (TCP 443) – The main communication channel from assets to the console (e.g., yourcompany.binalyze.io).
WebSocket over HTTPS (TCP 443) – Used for interACT features.
NATS (TCP 4222) (Optional) – Supports real-time task pushes to assets. If this port is unavailable, AIR defaults to HTTP(S) polling for task retrieval.
DNS (UDP/TCP 53) – Required for name resolution services.
HTTPS to responder.cdn.binalyze.com – Used for responder updates and installation packages. If the CDN is unavailable, the AIR console acts as a fallback source.
Cloud Storage: HTTPS communication to services like Amazon S3 and Azure.
Traditional Storage: Supported via SFTP, FTPS, or SMB.
If a proxy is configured in your environment, assets can communicate using:
HTTP
HTTPS
SOCKS5
The console installer automatically adds inbound allow rules for the required ports in the Windows Firewall.
The responder installer does not modify firewall settings. You must ensure that enterprise firewall policies allow assets to communicate with the console over the required ports.
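To verify that enterprise firewall policies actually permit this traffic, a quick reachability check from an asset can be sketched as follows. The console hostname is the example address used above; replace it with your own console's address:

```shell
# Check asset-to-console reachability on the TCP ports AIR uses.
# 443 carries HTTPS/WebSocket traffic; 4222 is the optional NATS port
# (if blocked, AIR falls back to HTTP(S) polling for tasks).
CONSOLE_HOST="yourcompany.binalyze.io"

for port in 443 4222; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/${CONSOLE_HOST}/${port}" 2>/dev/null; then
    echo "TCP ${port}: reachable"
  else
    echo "TCP ${port}: blocked or unavailable"
  fi
done
```

This uses bash's built-in /dev/tcp redirection, so it works even on assets without netcat installed.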
The AIR user interface (UI) requires access to the same update, license, timestamp, and analytics domains listed above, with the same data sent to and received from each domain.
A brief overview of AIR terminology
Acquisition Profile:
A group of evidence types, application artifacts, and custom content items. Acquisition profiles are provided 'out-of-box', but you can also create additional ones by visiting "Integrations" in the Main Menu.
Management Console:
AIR has a web-based management interface that allows you to efficiently manage assets and assign tasks. Users can customize their experience by switching between light and dark modes from the main AIR menu.
Asset:
In Binalyze AIR, an asset is any entity, whether a device or system, physical or virtual, that runs a supported operating system such as Windows, macOS, Linux, Chrome, IBM AIX, or ESXi. Assets are the foundational elements on which Binalyze AIR performs actions such as evidence collection and task execution, which are crucial for responding to and hunting cyber threats. Examples of assets include computers, servers, hosts, cloud accounts, and disk images.
Managed: The asset's responder has been successfully deployed to the device and is ready to collect tasking assignments from the console.
Unmanaged: An asset is categorized as Unmanaged under two specific conditions:
Discovery without Deployment: The asset is identified through Active Directory or Cloud Account scans but does not have the AIR responder installed.
Unreachable with No Data: The asset has been disconnected from the AIR console for over 30 days (Unreachable), and there is no stored forensic data from that asset in the AIR console.
Off-Network: An asset is classified as Off-Network under two specific scenarios:
Data Supplied: The asset has previously provided data through methods such as an Off-Network Acquisition or Triage task.
Unreachable with Stored Data: The asset holds forensic data within the console but is currently inaccessible for further data collection or task assignments.
For both scenarios, investigation of the existing data is possible, and additional data can be manually imported as required.
The Assets Summary window on the home page can also report the asset as:
Unreachable: The asset's responder is currently unreachable. If an asset's responder fails to connect to the Binalyze AIR console for over 30 days, its status changes to "Unreachable." Until then, its status is reported as online or offline.
Update Required: The responder on the asset requires an update to function correctly.
Update Advised: The responder is still functional, but for full functionality, an update is recommended.
Isolated: The asset is currently isolated from the network apart from communication with the AIR console only.
Persistent Saved Filters enable users to create and store custom asset filters, making it easier to locate and manage assets without having to reapply filter conditions in each session.
Evidence Item:
In the context of Binalyze AIR and cybersecurity generally, an evidence item refers to data extracted from various components of a computer operating system and associated system areas crucial for recording, managing, or operating the system. These items often produce digital evidence that can be analyzed to uncover details of user activity and potential security incidents or anomalies.
Artifact:
On the other hand, in Binalyze AIR, artifacts are files produced by applications during their execution. These files contain valuable information about the activities performed by the application, including logs, configuration files, temporary files, and other artifacts of potential interest for forensic analysis and investigation.
Evidence Repository:
A remote location for saving evidence collected as the result of an AIR tasking. These include:
SMB
SFTP
FTPS
Amazon S3
Azure Blob
Network Shares
To create a new repository, go to Settings in the Main Menu and select Evidence Repositories from the secondary menu. In the 'New Repository' window, complete the mandatory fields and select the type of repository you wish to add.
Organization:
In Binalyze AIR, an organization is a structural entity that allows for the separation of assets, users, and cases within a multi-tenant environment. The multi-tenancy capability of AIR enables a single console to manage multiple organizations, each with its own isolated environment. Here’s how it works:
Asset Management: An asset (e.g., a device or endpoint) can belong to only one organization, ensuring clear boundaries between different organizational environments. However, within that organization, the same asset can be assigned to multiple cases.
Case Management: Cases, which could also be called 'investigations' or 'incidents', are likewise aligned to a specific organization. Access to cases can be restricted based on user privileges within that organization.
Global and Organization-Specific Settings: Certain settings, such as policies and evidence repositories, can be configured globally across all organizations or individually for each organization. This flexibility allows administrators to enforce global standards while still providing the ability to customize configurations at the organizational level when required.
Policies and Evidence Repositories: Policies can be applied either globally or on an organization-by-organization basis. For example, evidence repositories, which store collected data, can be aligned to all organizations (global) or set up uniquely for each organization, allowing for localized data control.
This multi-tenant architecture in Binalyze AIR ensures that organizations can operate independently within the same platform, benefiting from both shared resources and isolated environments, depending on their needs.
Responder:
The AIR responder is a streamlined 40MB standalone package that brings the expertise of level 3 and 4 analysts directly to your digital assets.
Unlike 'agents' that constantly monitor systems and consume significant resources, AIR responders only activate to perform precise, user-defined DFIR tasks on demand. This approach allows for deploying thousands of virtual responders across your IT ecosystem, ready to execute proactive and reactive incident response activities such as evidence collection, threat hunting, and forensic-level analysis as needed. Binalyze's approach prioritizes efficient security enhancement, marrying minimal asset impact with maximum readiness and incident response capability.
Task:
Operations assigned to assets by the AIR console, either manually or automatically via a trigger. A task can be assigned to multiple assets, and this is managed through 'task assignments.' Each individual assignment, known as a 'task assignment,' creates a one-to-one correspondence between the task assigned by the console and the specific asset on which it is executed, ensuring precise management and tracking across all assigned tasks.
Tasks could be either:
Manual: Assigned manually by users,
Scheduled: Created by users to start in the future. Scheduled tasks could either be one-time or recurring (daily/weekly/monthly).
Triggered: Assigned to the assets in response to a trigger request sent by a SIEM/SOAR/EDR solution.
Trigger:
Triggers are the main extensibility mechanism for AIR to receive alerts from other security suites such as SIEM/SOAR/EDRs.
A trigger is the combination of a parser, an acquisition profile, and a destination for saving the collected evidence (either local or remote).
Binalyze AIR takes this to the next level by allowing the trigger to further automate post-acquisition analysis by leveraging DRONE and MITRE ATT&CK scanners. In effect, an alert from your security tools can launch AIR into collecting the relevant forensic data, analyzing that data, and delivering any DFIR findings into the Investigation Hub with no analyst intervention whatsoever.
Triage:
Searching for pieces of evidence such as a file hash, process, or malicious domain at scale. AIR provides 'out-of-box' examples for YARA, Sigma, and osquery, making it fast and easy to start sweeping your environment.
In today's dynamic digital environment, managing tasks efficiently within a software system is crucial for reliability, flexibility, and optimal performance. This guide delves into a sophisticated task management system designed to handle a wide array of operational scenarios, focusing on task retrieval, execution, prioritization, and system resilience against failures and network disruptions.
The AIR platform features an intuitive web-based console designed to orchestrate and dispatch tasks to designated remote AIR responders effectively. Serving as the nerve center for task allocation, this console guarantees that each task is accurately assigned for execution, optimizing operational efficiency. Within this ecosystem, the assignment of a specific task to a particular asset is termed a 'task assignment,' ensuring a clear, one-to-one correspondence between tasks and assets for precise management and tracking.
To accommodate diverse operational needs and customer network policies, the system employs two primary mechanisms for task checking:
Regular Interval Checks: Tasks are checked at predefined intervals, which can be dynamically adjusted based on the system's current configuration and operational demands.
The NATS Protocol: For immediate task fetching or near-real-time communication with assets, the system incorporates a specialized protocol named NATS. This protocol is designed to bypass the standard checking intervals, allowing urgent tasks to be retrieved and executed with minimal delay.
Task Checking Intervals
Task-checking intervals are not static; they vary dynamically from seconds to hours, influenced by the system's configuration. This flexibility ensures the system can adapt to changing workloads and priorities efficiently.
Certain tasks, such as "cancel tasks," receive priority in the execution queue. This prioritization is crucial to prevent delays in the cancellation process, ensuring tasks are halted promptly when required.
The system adopts a first-in-first-out (FIFO) queue model for task execution. This model guarantees that tasks are processed in the order received, with special considerations for tasks that might block or delay subsequent operations unnecessarily.
If a Tasking Assignment has been collected by the Responder but is interrupted before the completion of collection, triage, or analysis, the task will not resume where it left off. Instead, this interruption will result in a task failure. Such failures are automatically recorded within the console's tasking details.
When this occurs, the status of the task in the AIR console will reflect the failure, and it will be necessary to manually restart or initiate a new task to ensure that the intended data collection and analysis are completed. This approach ensures clarity and accuracy in the management of tasking assignments, even in cases of unexpected interruptions.
For tasks that require file uploads, such as uploading to an evidence repository, the system includes built-in retry mechanisms. These mechanisms are activated to re-attempt uploads if network issues interrupt the process. The number of retries and the specific procedures for handling these retries vary depending on the task type and the destination of the file.
Additionally, if "direct collection" is enabled for an acquisition task and there is a failure, the user must restart the acquisition process from the beginning. This ensures that all necessary data is properly collected without partial or corrupt files.
A specialized "purge local" task type exists for the efficient cleanup of local data related to completed or failed tasks. This function is integral to maintaining optimal disk space usage and system resource allocation.
This guide underscores the necessity of a flexible system capable of adapting to varied customer policies, including specific network configurations and security requirements. The choice of protocols and mechanisms for task management is influenced by these diverse operational needs.
Continuous improvement is a cornerstone of system development. The commitment to updating documentation reflects ongoing efforts to refine task management processes and system functionalities based on operational insights and technical advancements.
The guide provides an in-depth look at the technical underpinnings of the system, including the use of the "NATS" protocol, dynamic adjustment of task-checking intervals, and the logic behind task prioritization and queue management. These details offer a comprehensive understanding of the system's operational logic and its capability to handle various scenarios efficiently.
Efficient task management is pivotal in ensuring the reliability and performance of software systems. Through innovative mechanisms like the AIR console and NATS protocol, alongside dynamic task-checking intervals and a robust FIFO queue model, the system outlined in this guide represents a state-of-the-art solution for managing tasks in complex software environments. The emphasis on flexibility, resilience, and continuous improvement underscores the system's readiness to meet the evolving demands of modern digital operations.
Investigators and analysts can use the Binalyze AIR DFIR Platform to perform DFIR activities on machines located in cloud platforms. The platform supports cloud virtual machines just as it does on-premise and off-network devices: investigators and analysts can install Binalyze AIR responders on virtual machines in cloud infrastructure for investigation and analysis. Amazon Web Services and Microsoft Azure are supported.
With Binalyze AIR, investigators and analysts can easily and quickly deploy responders to their cloud assets and immediately start investigations, compromise assessments, and hunting activities. By taking advantage of cloud platform automation, users can deploy many responders using only one authorized cloud platform account.
After the authorized account is added to the Binalyze AIR Console, the console enumerates the cloud platform to discover and list assets. Investigators and analysts can then deploy responders to cloud assets individually, or to multiple assets with one click.
Since different cloud platforms use different identity and access management infrastructures and working mechanisms, their requirements may differ; ultimately, however, all that is needed is an authorized account with list and control permissions on cloud assets.
Investigators and analysts can add a cloud account to the Binalyze AIR Console by using the Assets page:
From the Main menu select Assets
Click Add New and click Cloud Account
Then, in the Cloud Platforms window that appears, click the Add Account button for the cloud platform you want to add
The configuration required for each cloud platform is listed below.
Either of the two ways mentioned above will redirect investigators and analysts to similar pages, allowing them to enter account details. They can either enter their existing account details, as given below, or use the CloudFormation link provided by Binalyze AIR to create a new account with sufficient permissions.
The cloud account needs the following permissions to deploy the Binalyze AIR responder to virtual machines.
The flow for creating an AWS account with sufficient permissions is explained below.
Click on the URL and Create an Account
Open AWS Console -> IAM -> Users
Select the User -> Security Credentials -> Create Access Key
Fill out the Account Details Form
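The console steps above can also be performed from the AWS CLI. This sketch assumes an IAM user named binalyze-air already exists and that your CLI session has IAM permissions; the user name is an assumption, not a required value:

```shell
# Create an access key for an existing IAM user (user name is an
# illustrative assumption). The returned AccessKeyId and SecretAccessKey
# are what you enter into AIR's account details form.
aws iam create-access-key \
  --user-name binalyze-air \
  --query 'AccessKey.[AccessKeyId,SecretAccessKey]' \
  --output text
```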
Either of the two ways mentioned above will redirect investigators and analysts to similar pages, allowing them to enter account details. They can either enter their existing account details, as given below, or create a new account with sufficient permissions.
The cloud account needs the following permissions to deploy the Binalyze AIR responder to virtual machines.
The flow for creating an Azure account with sufficient permissions is explained below.
Azure portal -> App Registrations -> New Registration
Assign required roles to the new app registration for the subscription
App Registrations -> Open the created App Registration
Certificates & Secrets -> Create a new client secret
Fill out the Account Details Form
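The same registration flow can be sketched with the Azure CLI. The display name, role, and subscription ID below are illustrative assumptions; pick the role your organization requires:

```shell
# Register an app, grant it a role on the subscription, and create a
# client secret. All values here are placeholders.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

# App registration (equivalent to App Registrations -> New Registration)
APP_ID=$(az ad app create --display-name "binalyze-air" --query appId -o tsv)
az ad sp create --id "$APP_ID"

# Assign a role on the subscription (role choice is an assumption)
az role assignment create \
  --assignee "$APP_ID" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/${SUBSCRIPTION_ID}"

# Create a client secret (equivalent to Certificates & Secrets)
az ad app credential reset --id "$APP_ID" --query password -o tsv
```

The app (client) ID, tenant ID, and the secret printed by the last command are what go into the account details form.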
Coming soon
The Binalyze AIR Console starts to enumerate the cloud platform and pulls the asset list and asset details immediately after the cloud account is added. It discovers assets depending on the permissions and authorizations of the cloud account. All discovered assets will be shown under the Amazon AWS category under the associated organization.
The assets and their details are shown on the right side of the Secondary Menu in the Assets page as a list. Assets in which the Binalyze AIR responder is deployed are shown in blue, and the assets in which the Binalyze AIR responder is not deployed are shown in grey in that list.
All deployment actions are considered tasks by the Binalyze AIR Console and are listed under Tasks as Responder Deployment tasks. All responder deployment actions and their statuses can therefore be seen in the Tasks list.
The primary advantage of responder deployment on a cloud platform is automation. Analysts and investigators don't need to choose operating systems and versions; they simply assign deployment tasks to the associated devices, and the entire deployment process is performed automatically, more quickly and easily.
Investigators and analysts can deploy Binalyze AIR responders to cloud assets by using the Assets page:
From Assets in the Main Menu: All cloud assets are listed here in the Secondary Menu. Investigators and analysts can search, filter and see the details of the assets on this page.
Investigators and analysts can deploy the responders individually, with multiple selections or all of them with one click.
Individual deploy: Click the assets and then click the Deploy button
Multiple selections: Select the assets in the list by clicking the checkbox at the beginning of each asset line. The Actions button then appears at the top of the page; click Deploy Responder under the Actions menu.
Deploy to All Assets: Click the three-dot menu on the right side of the Amazon AWS category or Tag that includes the associated cloud assets, then click Deploy Responder
AIR setup instructions
This article contains instructions on how to install the Binalyze AIR console using Docker and it also covers the models of deployment.
Debian Bookworm 12 (stable)
Debian Bullseye 11 (oldstable)
Ubuntu Lunar 23.04
Ubuntu Kinetic 22.10
Ubuntu Jammy 22.04 (LTS)
Ubuntu Focal 20.04 (LTS)
Red Hat Enterprise Linux 7 on s390x (IBM Z)
Red Hat Enterprise Linux 8 on s390x (IBM Z)
Red Hat Enterprise Linux 9 on s390x (IBM Z)
CentOS 7
CentOS 8 (stream)
CentOS 9 (stream)
Fedora 37
Fedora 38
You can deploy AIR in one of two models:
Before you start
Make sure you have updated package repositories of the Operating System you are using. Please find below the commands for CentOS and Ubuntu:
For CentOS:
For Ubuntu:
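The standard repository-update commands for these distributions are:

```shell
# CentOS / RHEL: refresh package metadata and apply updates
sudo yum update -y

# Ubuntu / Debian: refresh package lists and apply updates
sudo apt-get update && sudo apt-get upgrade -y
```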
Start and enable docker service by executing the following command:
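On systemd-based distributions (all of those listed above), the usual form of this command is:

```shell
# Start the Docker daemon now and enable it to start at boot
sudo systemctl enable --now docker
```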
This deployment model installs all components into a single machine.
Run the one-liner below and wait for it to complete
Create a folder for the Binalyze AIR under /opt directory and cd into it
Download the docker-compose.yml file and save it
Create the directory for the database volume with defined access:
Create environment variables:
Run the following command to start the Binalyze AIR installation in docker:
Wait for the installation to complete. It may take several minutes
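The manual steps above can be sketched as the following sequence. The compose-file download URL, volume path, and environment variable are illustrative placeholders only — use the values provided in your installation instructions:

```shell
# Sketch of the single-machine setup; the compose-file URL, volume
# directory, and env var below are placeholders, not Binalyze's
# actual values.
sudo mkdir -p /opt/binalyze-air && cd /opt/binalyze-air

# Download the docker-compose.yml file (placeholder URL)
sudo curl -fsSL -o docker-compose.yml "https://example.com/air/docker-compose.yml"

# Create the database volume directory with restricted access
sudo mkdir -p ./data/db && sudo chmod 700 ./data/db

# Create environment variables (name and value are examples)
echo 'AIR_CONSOLE_ADDRESS=https://air.example.com' | sudo tee .env >/dev/null

# Start the Binalyze AIR installation in Docker
sudo docker compose up -d
```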
This deployment model requires you to deploy the Database Component first (Step 1) and then start the deployment of the Console Server (Step 2) by pointing to the database server's address.
Run the one-liner below and wait for it to complete (this script will deploy the database component - Step 1)
Once the database is deployed, the above script will output the commands that need to be executed on the Console Server machine (Step 2)
SSH into the Console Server machine
Run the commands provided by the above script and wait for it to complete
You should execute the commands below on the Database Server!
SSH into the Database Server
Create a folder for the Binalyze AIR DB under /opt directory and cd into it
Download the docker-compose.yml file and save it
Create the directory for the database volume with defined access:
Create environment variables
Run the following command to start the Binalyze AIR Database component in Docker:
Wait for the installation to complete. It may take several minutes.
Proceed to the installation of the Console Server
You should execute the commands below on the Console Server!
SSH into the Console Server
Create a folder for the Binalyze AIR under /opt directory and cd into it
Download the docker-compose.yml file and save it
Create the directory for the volume of the services with defined access:
Set the database URI for connecting the Console Server to the DB
IMPORTANT:
You must fill in the values of the following three variables: the passwords created on the DB server and the DB server's IP address.
You can find the passwords in the /opt/binalyze-air-db/.env file on the DB Server.
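As an illustration only — the real variable names and connection-string format come from the installer's output and the DB server's .env file, so everything below is an assumption:

```shell
# On the DB Server: read the generated passwords
grep -i password /opt/binalyze-air-db/.env

# On the Console Server: point AIR at the database instance.
# Variable names and the URI scheme are illustrative placeholders;
# substitute the values the DB deployment script printed for you.
DB_SERVER_IP="10.0.0.5"
DB_PASSWORD="copied-from-db-server-env"

export DB_URI="db://air:${DB_PASSWORD}@${DB_SERVER_IP}/air"
```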
Run the following command to start Binalyze AIR installation in docker
Wait for the installation to complete. It may take several minutes
Regardless of the deployment model you chose, you will be asked for several configurations at the end of the deployment, such as an organization name, the credentials of the first user account, etc.
Once you have completed the above steps successfully, you should:
Visit http://IP-ADDRESS to access the Console (the IP address here is the public IP address of the machine on which you deployed Binalyze AIR)
Accept EULA and provide the configuration you are asked for in each step
Complete the setup and log in using the credentials you provided
Enjoy Binalyze AIR!
To ensure your Binalyze AIR deployment is functioning correctly, regularly checking the status of your Docker containers is crucial. Here’s how you can monitor and manage the health of your containers:
Check Container Status:
Run the command sudo docker ps to list all active Docker containers. This command shows the current state of each container, helping you identify any that aren't running as expected.
Recommended Method for Restarting Containers in Binalyze AIR
When restarting your Binalyze AIR containers, we recommend using the docker compose restart command instead of the docker compose down/up method.
The docker compose restart command restarts all the containers without removing them, ensuring that important logs and state information are retained. This is crucial for effective troubleshooting, as it preserves valuable data that could provide insights into potential issues.
Additionally, docker compose restart is faster and less disruptive than docker compose down/up. The down/up approach can result in data loss from container recreation, whereas docker compose restart avoids this by maintaining the containers' state.
Using docker compose restart helps ensure logs remain intact, which can assist in resolving issues more efficiently.
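For example, from the directory containing the compose file (the install directory is an assumption; use the folder you created under /opt):

```shell
cd /opt/binalyze-air          # assumed install directory
sudo docker compose restart   # restart containers in place, preserving state and logs
sudo docker ps                # confirm all containers report "Up"
```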
Regular monitoring and proactive management of your Docker containers help maintain the stability and reliability of your Binalyze AIR deployment. By keeping an eye on the container statuses and knowing how to quickly restart services, you can ensure continuous operational performance.
What are the minimum requirements for running Binalyze AIR?
Below are the minimum hardware requirements for AIR components:
Cloud platform instance types that meet the relevant hardware requirements are outlined below:
USAGE ANALYTICS:
Read more details about
At Binalyze, we understand the unique challenges of investigating cloud-based attacks like Business Email Compromise (BEC). That’s why we have introduced the Tornado preview version, a standalone desktop application designed to simplify evidence collection from Google Workspace and Microsoft Office 365. Learn all about Tornado.
For more details about Docker installation requirements, please visit:
All platform components, such as App, Web, NATS, DB, and Redis, are installed and run on the same machine.
All components except the Database Layer are installed and run on a single instance, while the Database has its own dedicated instance.
Proceed with the
Proceed with the
Proceed with the
Proceed with the
Learn more about the for Binalyze AIR.
NB: MongoDB, a component of your AIR installation, depends on your AIR server's CPU architecture. As of AIR v3.10, both Intel and AMD processors need to be newer than 2011. If your processor is older, avoid updating MongoDB to versions distributed with AIR post v3.9. For further clarification, contact .
|      | Minimum | Suggested |
|------|---------|-----------|
| RAM  | 16 GB   | 32 GB     |
| CPU  | 8 Cores | 16 Cores  |
| DISK | 256 GB  | 512 GB    |
|      | Minimum | Suggested |
|------|---------|-----------|
| RAM  | 8 GB    | 16 GB     |
| CPU  | 4 Cores | 8 Cores   |
| DISK | 256 GB  | 256 GB    |
|      | Minimum | Suggested |
|------|---------|-----------|
| RAM  | 16 GB   | 32 GB     |
| CPU  | 8 Cores | 16 Cores  |
| DISK | 256 GB  | 512 GB    |
|         | Minimum      | Suggested     |
|---------|--------------|---------------|
| Hetzner | CPX41        | CPX51         |
| Azure   | D8ds_v4      | D16ds_v4      |
| AWS     | c5.2xlarge   | c5.4xlarge    |
| GCP     | n1-highcpu-8 | n1-highcpu-16 |
|         | Minimum      | Suggested     |
|---------|--------------|---------------|
| Hetzner | CX41         | CX51          |
| Azure   | D4ds_v4      | D8s_v4        |
| AWS     | c5.xlarge    | c5.2xlarge    |
| GCP     | n1-highcpu-8 | n2-highcpu-16 |
|         | Minimum      | Suggested     |
|---------|--------------|---------------|
| Hetzner | CX51         | CPX51         |
| Azure   | D8ds_v4      | D16ds_v4      |
| AWS     | m5d.2xlarge  | m5d.4xlarge   |
| GCP     | n1-highcpu-8 | n2-highcpu-16 |
Before you start with the setup
Assign Static IP Addresses
Ensure each server running the AIR Console and Database is assigned a static IP address to maintain a stable network connection.
Configure Ports for Initial and Ongoing Access
Port 80 (HTTP): Only enabled for initial configuration access through the user interface (UI). This allows for system setup upon installation.
Port 443 (HTTPS): After initial setup, use this port permanently for all administrative access. It provides a secure, encrypted connection for managing the AIR Console UI.
Additional Port Configuration for Responders
Port 443 (HTTPS): Keep open for secure communication and ongoing operations.
Port 4222 (NATS.io): Enable to allow inbound traffic for asset responders using NATS.io, facilitating effective communication across distributed systems.
2-Tier Deployment Specific Configuration
Allow inbound access from the AIR Console server to the MongoDB Server on:
27017 (MongoDB)
5432 (PostgreSQL)
Internet Access for Essential Domains (2-Tier Deployment on AIR Console Server Only)
Additional Optional Steps
If you're using EDR/XDR or EPP software along with Binalyze, please take a look at our exclusion/exception rules page.
(Optional) Create an SSL certificate for the provided Static IP Address or FQDN.
(Optional) Allow inbound access for alternative secure access to the web UI on the AIR Console server on:
8443 (HTTPS) inbound
(Optional) Create a password-protected network share on the server.
(Optional) Create an Active Directory user for Binalyze AIR to enumerate LDAP computers on your network. This account should have limited rights, sufficient only to enumerate computers, and not hold privileged status like a Domain Admin.
This structured approach lays out each step clearly, making it easier to implement a secure and efficient server setup.
AIR Relay Server is a specialized SOCKS5 proxy server specifically designed to facilitate communication between AIR responders and the AIR console. Its primary function is to act as an intermediary, enabling the seamless proxying of connections between the responders and the console.
With Relay Server, you can enhance the security of both responders and the console by only granting access via the Relay Server, eliminating the requirement for direct access to the AIR console from the responder environment.
This indirect access approach adds an extra layer of protection to the overall system architecture.
Considerations for Relay Server Setup:
Public IP Requirement:
The need for a public IP address for the Relay Server depends on its intended use and positioning.
If the Relay Server is set up for internet-facing properties, a public IP address or a Fully Qualified Domain Name (FQDN) might be required.
For a Relay Server intended for internal network entities, a public IP address is not necessary.
Use Case for Managed Security Service Providers (MSSPs):
In scenarios where the Relay Server is used by MSSPs for external entities, it is likely that a public IP or FQDN will be needed.
To ensure a successful installation of Relay Server, you need a Linux operating system based on either Debian or Red Hat, with a minimum kernel version of 3.9.0. This requirement guarantees compatibility and optimal performance.
The Relay Server functionality in an AIR deployment is not automatically available for all users. This feature's availability is contingent upon specific license configurations.
Users considering the installation and configuration of a Relay Server should liaise with their Binalyze installation advisor as part of the setup process.
Currently, the supported versions for Relay Server are as follows:
Debian 7 and above
RHEL (Red Hat Enterprise Linux) 7 and above
CentOS 7 and above
Fedora 21 and above
Ubuntu 14.04 and above
Pardus 17 and above
Please note that this list may be subject to updates, and you can always refer to the download page and click on "See Supported Versions" for the most up-to-date information on supported systems.
Additionally, for Relay Server to function properly, a responder must be installed and registered. The responder acts as the intermediary between the Relay Server and the AIR Console, which serves as the management interface for controlling Relay Server's operations. The seamless interaction between the responder and AIR Console facilitates efficient management of the Relay Server's functionalities.
As of now, the Relay Server’s default listening port, 1080, cannot be changed and the AIR responders will always try to connect to port 1080 if they are configured to use a relay server.
The application registration process creates an identity for your instance in Azure AD, enabling it to authenticate and access resources securely.
Go to Microsoft Entra ID Directory and select Overview. Keep the "Tenant ID" information for the field required in the Azure Integration configuration page.
Navigate to Manage > App Registrations and click New Registration.
Name the application, select the account type, and click the Register button.
In the Overview section, note the "Application (client) ID" for the field required in the Azure Integration configuration page.
Navigate to Certificates & Secrets and click New client secret.
Provide a description, select the expiration period, and click Add.
Note the value for the "Key (Client Secret)" information for the field required in the Azure Integration configuration page.
Assigning roles to the registered application ensures it has the necessary permissions to access and manage the resources within the selected Azure subscription.
Go to Subscriptions and select the subscription from the list.
In the Overview section, note the "Subscription ID" information for the field required in the Azure Integration configuration page.
Navigate to Access control (IAM), click Add, and select Add role assignment.
To add Reader roles to the registered application:
Select Reader from the job function roles list and click Next.
Select Assign access to > User, group, or service principal.
Click Select members, search for the registered application's name, and select it.
Click Review + Assign.
To add Contributor roles to the registered application:
Select Contributor from the privileged administrator roles list and click Next.
Select Assign access to > User, group, or service principal.
Click Select members, search for the registered application's name, and select it.
Click Review + Assign.
Now make sure that the Reader and Contributor roles are assigned to the application in the Role Assignment list.
This final step involves entering the collected information into the AIR Console UI, which will integrate the application with Azure, allowing it to operate within your Azure environment.
Go to the AIR Console UI and enter all the required information on the Azure Integration configuration page.
Click the Save button.
As a final task, make sure that the Account is listed in the Microsoft Azure cloud integrations list.
The AIR responder, a 40MB standalone package, acts as a virtual incident responder, delivering SOC level 3-4 expertise to your assets for unmatched cyber resilience and readiness. It interfaces with the AIR console for executing precise, user-defined tasks, providing wide-ranging coverage with minimal resource use, bypassing the need for constant monitoring.
The AIR responder maintains regular communication with the AIR Console via what in its simplest form is known as HTTP polling, and what we like to call, ‘a visit’. The visit interval is normally about 30 seconds for environments with fewer than 1000 assets. For larger environments, the interval is calculated using the following formula:
intervalSeconds = MANAGED_ENDPOINT_COUNT / 100
For instance, in a scenario with 5000 assets, the calculated visit interval would be 50 seconds.
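The calculation can be sketched in shell arithmetic (the sub-1000 default of 30 seconds is taken from the paragraph above):

```shell
# Visit-interval calculation as described above
managed_endpoint_count=5000
if [ "$managed_endpoint_count" -lt 1000 ]; then
  interval_seconds=30                                   # default for small environments
else
  interval_seconds=$(( managed_endpoint_count / 100 ))  # MANAGED_ENDPOINT_COUNT / 100
fi
echo "interval: ${interval_seconds}s"                   # prints "interval: 50s"
```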
The responder sends these visit requests to tell the AIR console that it is online and ready to receive any task assignments that are awaiting actioning.
If the responder does not make a visit at the required interval, it will be shown as offline in the AIR console.
If the responder does not make a visit for 30 days, it will be marked as unreachable. This status is cleared immediately once the asset comes back online.
If a task assignment is not collected by the responder within 30 days of its creation, it will expire and will not be actioned even when the asset reconnects and the responder visits next.
Simply put, when the AIR responder collects a task assignment from the AIR console, it carries out the task and provides a report back to the AIR console upon completion. On the other hand, when the AIR responder is in an idle state, it periodically (as discussed above) sends visit requests to the AIR console, checking if any new tasks have been assigned to it. During these visit requests, the AIR responder only checks for task assignments and does not perform any other operations.
The AIR responder is capable of executing various tasks when assigned by the AIR Console. These tasks include:
Acquisition
Triage scanning (YARA, Sigma, osquery, MITRE ATT&CK)
Isolation
interACT sessions
Auto Tagging
Disk/Volume Imaging
Investigation (Timeline)
Baseline
Log Retrieval
Certificate Authority Update
Migration
Reboot
Shutdown
Update
Uninstall
Both the Acquisition and Disk Image tasks support the ability to upload collected evidence to external repositories such as Amazon S3, Azure Blob Storage, FTPS, SFTP, and SMB. These tasks enable the AIR responder to securely transfer the acquired evidence or disk images to the designated repositories for storage and further analysis.
By utilizing the supported protocols and repositories, the AIR responder ensures that the collected evidence or disk images are safely transmitted and stored in the desired locations. This allows for efficient storage, accessibility, and collaboration, making it easier to manage and analyze the acquired data in a secure and scalable manner.
AIR has an option for the Windows, macOS, and Linux AIR responders to transmit evidential collections directly to external evidence repositories, thereby efficiently minimizing the utilization of local disk space:
The AIR responder maintains robust security by implementing a range of measures including:
Encrypted Traffic: The traffic between the AIR responder and the AIR Console, as well as between the AIR responder and any evidence repositories, is encrypted with TLS 1.2 and TLS 1.3 if available on the server. If neither of these two TLS protocols is available, the connection will not be established. This ensures that data in transit is protected against interception and unauthorized access.
Communication: The AIR Console does not initiate the sending of task assignments to the AIR responder; rather, it is the AIR responder that initiates the interaction by asking the AIR Console if it has any tasking assignments ready for it to run. This approach significantly reduces the risk of various security attacks, as it controls the communication flow and reduces the AIR responder's exposure to external threats.
Privileged Account Usage: On macOS and Linux, the AIR responder uses the root account, while on Windows, it uses the system account. This level of access control makes it difficult for other users to tamper with the application, thereby enhancing its security.
Regular Internal Penetration Testing: Before every release, our internal security team conducts thorough penetration testing. This proactive approach helps identify and mitigate potential vulnerabilities.
Secure Libraries and Third-Party Applications: We consistently use updated and vulnerability-free libraries and third-party applications. This precaution in maintaining up-to-date software components protects against known security vulnerabilities.
Supply Chain Attack Prevention: Measures are in place to protect against supply chain attacks, and these are continuously improved by our DevOps team. This is crucial to prevent threats that could compromise the software development and deployment process.
Continuous Source Code Scanning: The source code is regularly scanned by security tools. This constant monitoring helps to quickly identify and resolve any security issues that arise in the codebase.
Digital Signing: The use of digital signatures adds a layer of security, ensuring the authenticity and integrity of our software. This helps to prevent tampering and to verify that the software has not been altered after it was signed.
Blackbox Analysis: The binary undergoes Blackbox analysis, a method of testing the software’s external functioning without delving into its internal structure. This type of analysis has been performed on the AIR responder. It helps in identifying security vulnerabilities from an outsider’s perspective, providing a critical view of the system's external defenses.
Graybox Analysis: For the AIR responder project, Graybox analysis has been conducted. This testing method combines both the internal and external examination of the software, providing a more comprehensive security overview.
Functioning like a server application, the AIR responder does not rely on a database server. Instead, it saves reports as individual SQLite files, which are subsequently forwarded to the AIR Console. This approach simplifies data handling, enabling the efficient and secure storage and transfer of information.
We continuously advance our development process by implementing the SCRUM methodology, complemented by unit and integration testing. The use of both unit and integration testing is crucial for maintaining high-quality standards and ensuring that each component of our product functions seamlessly individually and as part of the whole system.
After the initial installation, it is normal to observe a small amount of memory being allocated, typically around 30MB to 40MB, with no significant CPU or disk usage during idle states. This behavior is expected and can be attributed to the necessary resources required for the AIR responder to function properly.
During idle states, the AIR responder remains in standby mode, pending its next call to the Console to collect any new tasking assignments. The allocated memory is utilized to maintain the AIR responder's core functionality and to ensure prompt responsiveness when new tasks are assigned.
When the AIR responder receives an acquisition task, the evidence collection process is carried out by a sub-process called Tactical (or Incident Response Evidence Collector on Windows). During the acquisition process, it is normal to observe increased CPU and memory usage as the Tactical sub-process actively collects and processes the evidence.
The increase in CPU and memory usage is a result of the intensive data gathering and analysis performed by the Tactical sub-process. It utilizes system resources to efficiently capture and process the required evidence, ensuring the integrity and completeness of the collected data.
The extent of CPU and memory usage during the acquisition task may vary depending on factors such as the size and complexity of the evidence being collected. Once the acquisition is completed, the CPU and memory usage will typically return to normal levels, reflecting the completion of the resource-intensive task.
A Triage task does not involve running the Tactical sub-process for evidence collection. Instead, the Triage task is executed within the AIR responder, utilizing its internal capabilities to analyze and evaluate the collected data.
While the CPU usage for a Triage task may typically be low, it is still possible to set a CPU policy for the Triage task.
The log file of the running AIR responder provides valuable information about CPU usage, memory usage, and other system resources. Here is an example of the log entries about system and service resources:
INFO 2024-01-04 18:45:25+03:00 2.31.2 triage: resmon: SysStats{GoHeapAlloc: 2.3 MB, GoHeapSys: 12 MB, NumGoroutines: 27, NumCPU: 16} file:pkg/resmon/handlers.go:16 func:resmon.(*LoggingStatsHandler).HandleSysStats
INFO 2024-01-04 18:45:26+03:00 2.31.2 triage: resmon: PidStats{PID: 9460, Name: AIR.exe, CPU: 14.7%, AvgCPU: 25.9%, Mem: 56 MB, NumFDs: 341, NumCPU: 16} file:pkg/resmon/handlers.go:21 func:resmon.(*LoggingStatsHandler).HandlePidStats
The log file for the AIR responder can be found at the following location:
C:\Program Files (x86)\Binalyze\AIR\agent\AIR.log.txt
You can navigate this path on your system to access the log file and view the relevant information about CPU usage, memory usage, and other resources as logged by the AIR responder during its operation.
Similar scenarios can be observed on macOS with the built-in Activity Monitor application. To access detailed process information, simply click on the (i) button within the Activity Monitor.
On Linux, an alternative option for resource monitoring is to use htop instead of the built-in top. htop offers enhanced capabilities and can be installed by following these steps:
Open the terminal.
Run the command sudo apt-get install htop (for Ubuntu/Debian-based distributions) or sudo yum install htop (for CentOS/Fedora-based distributions).
Once installed, type htop in the terminal and press Enter to launch the application.
Using htop provides a more comprehensive and user-friendly interface for monitoring system resources on Linux.
resmon
There is also a CLI tool named resmon, developed specifically for internal usage. It can be used to gather resource usage data related to the AIR responder and its subprocesses, storing the results in a local database.
By default, resmon monitors the AIR responder if no flags are given. However, you can monitor other processes by providing a PID flag or a process name flag. For more detailed information, a usage document for resmon can be provided upon request.
The information collected by resmon is stored in a local database containing numerous entries for the monitored process and its subprocesses. Due to the abundance of entries with comprehensive details, reading and interpreting the data can be challenging.
To address this, a script has been developed alongside resmon to visualize these outputs. It displays the CPU and memory usage of the processes (including subprocesses) monitored by resmon in graphical form.
In the following section, we will share the resmon results as it monitored various task assignments being executed by the AIR responder. Throughout the tasks, resmon continuously monitors the AIR responder and its subprocesses, generating a comprehensive local database that captures the output of resource monitoring.
For easy visualization, we will use the resmon feature designed to present CPU and memory usage in intuitive graphical representations. These visualizations provide valuable insights into the resource utilization of the AIR responder and its subprocesses from the beginning to the end of each task assignment.
Analysis of an Acquisition Task
Below, you will find two graphs illustrating the CPU and Memory usage of the AIR responder. These graphs represent the resource utilization from the moment an acquisition task is started through to its completion.
| Duration | Report Size (Zipped) | Database Size | Event Record Count | Drone   | Total Disk Space | Used Disk Space |
|----------|----------------------|---------------|--------------------|---------|------------------|-----------------|
| 06m29s   | 199KB                | 38KB          | 10091              | Enabled | 512 GB           | 176 GB          |
Analysis of an Acquisition Task (with CPU limit of 50%)
In this scenario, we will examine the CPU and Memory usage of the AIR responder while running tasks received from the AIR Console, with a specific condition: the CPU usage of the AIR responder is limited to 50%.
The visualized graphs provided below depict the resource utilization, specifically focusing on the CPU and Memory usage of the AIR responder. These graphs showcase the performance of the AIR responder, highlighting its ability to effectively manage the CPU allocation while carrying out tasks received from the AIR Console.
The script can occasionally display temporary CPU usage spikes that surpass the process's CPU limit as a result of aggregating subprocesses.
| Duration | Report Size (Zipped) | Database Size | Event Record Count | Drone   | Total Disk Space | Used Disk Space |
|----------|----------------------|---------------|--------------------|---------|------------------|-----------------|
| 06m48s   | 200KB                | 39KB          | 10102              | Enabled | 512 GB           | 176 GB          |
Analysis of a Triage Task
Let’s examine the resource usage of the AIR responder when a Triage task is received from the AIR Console.
| Duration | Triage Rule Type | Total Disk Space | Used Disk Space | CPU Limit |
|----------|------------------|------------------|-----------------|-----------|
| 19m33s   | YARA             | 512 GB           | 176 GB          | 100%      |
Analysis of a Triage Task (with CPU limit of 50%)
Similar to an acquisition task, a Triage task can also be configured with a CPU limit for executing the AIR responder. The following graphs illustrate the resource usage of a Triage task running with a CPU limit of 50%.
| Duration | Triage Rule Type | Total Disk Space | Used Disk Space | CPU Limit |
|----------|------------------|------------------|-----------------|-----------|
| 27m09s   | YARA             | 512 GB           | 176 GB          | 50%       |
Before deploying a new endpoint through Relay Server, you need to choose the IP address of the Relay Server to which the endpoints will connect. This chosen IP address will serve as the connection route for the endpoints that will be routed through this Relay Server.
You can change the address of the Relay Server by accessing the "Relay Server Details" section, which can be found in the "Organization Detail" page.
To add the proxy configuration to Relay Server, you can modify the /etc/profile file by following these steps:
Open a terminal or command line session.
Open the /etc/profile file in your favorite editor with administrative privileges:
Scroll to the end of the file and add the following lines:
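For example (the proxy host and port below are placeholders; substitute your own):

```shell
# Placeholder proxy address — replace with your proxy's host and port
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
```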
Alternatively, you can use the following method to set proxy settings, but be aware that it will impact the proxy behavior of other programs.
Save the changes and exit the text editor.
To apply the changes, restart your system:
By adding these lines to the /etc/profile file, the specified proxy settings will be exported as environment variables.
Like the responder, the Relay Server has the capability to proxy connections through a proxy server when communicating with responders or AIR Console. This provides flexibility in terms of configuring proxies for the responder, which connects through a Relay Server, or setting a proxy specifically for the Relay Server. Additionally, it is also possible to set proxies for both the responder and the Relay Server simultaneously. The diagram below illustrates all the possibilities for proxying the Relay Server and the responder.
As Relay Server operates as a service/daemon managed by systemd, you can use the following systemctl commands to start, stop, restart, and reload the service:
To start the service:
To stop the service:
To restart the service:
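For illustration, assuming the systemd unit is named binalyze-air-relay (the actual unit name may differ; verify it with systemctl list-units --type=service | grep -i relay):

```shell
sudo systemctl start binalyze-air-relay     # start the service (unit name is an assumption)
sudo systemctl stop binalyze-air-relay      # stop the service
sudo systemctl restart binalyze-air-relay   # restart the service
```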
Relay Server provides support for reloading its configuration file without the need to restart the service. You can accomplish this by executing the following systemctl command:
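For illustration, assuming the systemd unit is named binalyze-air-relay (verify the actual unit name on your system):

```shell
sudo systemctl reload binalyze-air-relay   # re-read config.yml without restarting the service
```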
By running this command, any changes made to the Relay Server's configuration file will be applied without requiring a full restart of the service. This allows you to update the configuration manually and have the new settings take effect immediately.
Relay Server is designed to facilitate communication between the AIR Console and the responder. As a result, Relay Server carefully examines all connection attempts to ensure they are directed towards AIR Console and blocks any connection requests to other destinations. This strict enforcement guarantees that only connections to AIR Console are permitted, thereby providing a secure environment and ensuring that no undesired connections to other addresses occur.
To enable connections to addresses other than the AIR Console, a configuration called "Whitelist" is utilized. By specifying addresses or IP/FQDN patterns in the whitelist, the Relay Server allows communication between clients and the whitelisted addresses. In such cases, the Relay Server acts as a proxy between the client and the whitelisted address, ensuring seamless communication while still maintaining the necessary security measures.
To add or modify the whitelist in the configuration file, you can follow these steps:
Locate the config.yml file: /opt/binalyze/air/relay/config.yml
If the Whitelist field is not present in the file, add it as a YAML array in the following format:
In the Whitelist array, you can include various elements such as IP addresses, fully qualified domain names (FQDNs), FQDNs with wildcards, CIDR notations, IP ranges, or use an asterisk (*) to allow all connections.
The Whitelist elements support the following formats:
IP address: Enter the specific IP address.
FQDN: Provide the fully qualified domain name.
FQDN with wildcard: Use an asterisk (*) as a wildcard character in the domain name.
CIDR: Specify the IP range using CIDR notation.
IP range: Indicate the range of IP addresses using a hyphen (-) between the start and end IP addresses.
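Putting these formats together, a whitelist section in config.yml might look like this (all addresses are illustrative):

```yaml
Whitelist:
  - "203.0.113.10"                 # IP address
  - "files.example.com"            # FQDN
  - "*.example.com"                # FQDN with wildcard
  - "10.0.0.0/24"                  # CIDR
  - "192.168.1.10-192.168.1.50"    # IP range
```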
FQDN addresses added to the whitelist are not resolved to IP addresses, so connection attempts that target a whitelisted FQDN's destination by its IP address will be denied. Relay Server only resolves the IP address of the Console Address in the configuration file.
By configuring the whitelist, you can specify the allowed addresses or domains that Relay Server will permit connections to.
After modifying the config file for Relay Server, it is essential to reload the configuration if Relay Server is already running. To accomplish this, you can use the following systemctl command:
For the Relay Server to operate correctly, the responder must be running to manage tasks received from the AIR Console. Modifying or removing the responder will cause the Relay Server to stop operating.
If responders attempting to connect through the Relay Server cannot establish a connection, they will fall back to a direct connection. Alternatively, if a proxy connection is configured, they will fall back to using the proxy for the connection.
If connections through the Relay Server are experiencing failures, you can try resolving the issue by restarting the service with the following systemctl command:
This command will initiate a restart of the Relay Server service, which may help in resolving any connectivity issues.
Modifying the config file manually can introduce several problems, particularly due to the YAML format. Therefore, if you intend to make changes to the config file, such as adding whitelist addresses, it is crucial to adhere to the correct YAML format.
Ensure that you follow the YAML syntax rules while making any alterations to the config file. This includes correctly indenting elements, using appropriate punctuation, and maintaining the structure specified by the YAML format.
Relay Server logs all incoming connections and other operational activities for the purpose of troubleshooting and investigating any failures. To access these logs, you can retrieve them by clicking "Log Retrieval" on the "Relay Server Details" page.
Alternatively, you can refer to the following files located in the Relay Server directory:
/opt/binalyze/air/relay/air_relay.log.txt
/opt/binalyze/air/relay/air_relay.process.log.txt
These log files contain valuable information that can assist in diagnosing issues, identifying errors, and understanding the overall behavior of the Relay Server. By reviewing the logs, you can gain insights into the operations and events occurring within the Relay Server environment.
Relay Server serves metrics in Prometheus format from a Unix socket.
To retrieve metrics from the Relay Server's Unix socket, you can use the curl command to make a request to the http://localhost/metrics endpoint on the Unix socket /var/run/Binalyze.AIR.Relay.sock. Here's an example command:
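Using the socket path and endpoint given above, the request is:

```shell
curl --unix-socket /var/run/Binalyze.AIR.Relay.sock http://localhost/metrics
```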
Executing this command will provide you with a response containing various metrics related to the Relay Server. The response will include information such as the number of active connections, request duration, request errors, reload count, and more.
Please note that you need to run this command on the same system where the Relay Server is running, as it communicates via the Unix socket.
Here's a sample response:
These metrics provide insights into the Relay Server's performance, resource utilization, and various other statistics related to its operations.
When deploying a new responder to an asset, you will encounter a new configuration where you can choose a connection route. This allows you to deploy a responder that either directly connects to the AIR Console or utilizes a connection route via a Relay Server.
By selecting "Relay Server Connection," you will be presented with a list of registered Relay Servers associated with this organization. From this list, you can select one and proceed with the configuration.
The subsequent steps remain the same when deploying a new responder, regardless of the connection route chosen (Relay Server). Once you have successfully installed a new asset using the Relay Server connection, you will observe the newly deployed asset associated with the Relay Server on the "Organization Detail" page.
After selecting the installed Relay Server from the list on the "Organization Detail" page, you can access your associated assets by clicking on the "Assets" tab. Additionally, you can view comprehensive details of your Relay Server on this page by clicking on the "Information" tab.
Furthermore, you can view and manage assets that are connected through this Relay Server. If you wish to modify the connection routing of your assets, you can do so on the "Assets" tab. Simply select the asset that you would like to view or edit.
Within the "Connection Route" setting, you have the option to choose between a direct connection to the AIR Console or selecting another Relay Server for your asset to connect to. This action will bring up the same settings page for connection routing that you encountered when deploying a new asset.
To update the connection route addresses for multiple assets in the same organization, follow these steps:
Go to the organization's page (Organization of the assets you want to update) or the Assets page.
Select the desired assets within the same organization.
Edit the connection route by selecting the icon at the end of the connection route row.
Modify the connection route addresses or choose a Direct connection.
By following these steps, you can easily update the connection route addresses for multiple assets in the same organization.
To manually update the connection route of your responder, you can run the responder with the "configure" flag. Follow these steps:
Open a Terminal.
Navigate to the directory where the responder is located.
Run the configure command as shown below:
Upon running the configuration command, the responder service will automatically restart with the updated configuration, including the new connection route that has been set. This ensures that the responder incorporates the changes and operates according to the new configuration.
Windows 7 SP1 (with latest updates)
Windows 8
Windows 8.1
Windows 10
Windows 11
Windows Server 2008 R2 (with latest updates)
Windows Server 2012
Windows Server 2012 R2
Windows Server 2016
Windows Server 2019
Windows Server 2022
Windows Server 2025
CentOS 7
CentOS 8
CentOS 9
Fedora 21
Fedora 22
Fedora 24
Fedora 26
Fedora 34
Fedora 36
Amazon Linux 1 Latest
Amazon Linux 2 Latest
Red Hat 7
Red Hat 8
Red Hat 9
Pardus 17
Pardus 21
Rocky Linux 8
Rocky Linux 9
Debian 7
Debian 8
Debian 10
Debian 11
Debian 12
Ubuntu 12.04
Ubuntu 14.04
Ubuntu 16.04
Ubuntu 18.04
Ubuntu 20.04
Ubuntu 22.10
Ubuntu 23.04
Boss Linux 7
Boss Linux 8 (failed the off-network tests and is still being worked on. It passed the triage, direct collection, and acquisition tests)
Boss Linux 9
Boss Linux 10
All Linux distributions are supported on 32-bit, 64-bit, and ARM64 architectures.
macOS 10.15
macOS 11.0
macOS 12.0
macOS 13.0
macOS 14.0
macOS 15.0
Operating Systems that can run the AIR responder
The Binalyze AIR responder can be installed on Microsoft Windows, Linux, and Apple macOS operating systems. All supported operating systems and associated versions are listed below.
Golden Image support is for customers who want to use the same operating system image to start new machines. Because AIR uses the computer name/hostname of a machine/asset as its unique identifier, customers cannot reuse an image in which the AIR responder is already installed without the newly introduced golden image support.
Golden image preparation cleans some configuration options set during registration and then disables and stops the AIR responder service before the image of the operating system is taken. To do this, use the --prepare-golden-image flag explained below. This must be done before the imaging process takes place.
After the image is prepared, and before the image is used to create a new instance, use the --init-golden-image flag, which is also explained below.
--prepare-golden-image
The user must use this flag before creating a golden image.
Windows:
"C:\Program Files (x86)\Binalyze\AIR\agent\AIR.exe" configure --prepare-golden-image
Linux/macOS:
/opt/binalyze/air/agent/air configure --prepare-golden-image
This flag does the following:
Stops the service.
Disables the service.
Cleans the RegisteredTo, SecurityToken, and EndpointID fields in the config.yml.
Uninstalls the watchdog (if tamper detection was enabled)
--init-golden-image
This flag activates the responder again after the golden image is up and after the hostname is changed.
Windows:
"C:\Program Files (x86)\Binalyze\AIR\agent\AIR.exe" configure --init-golden-image --deployment-token 769aca0ff45a433a --console-address air-qa.binalyze.com --organization-id 0
Linux/macOS:
/opt/binalyze/air/agent/air configure --init-golden-image --deployment-token 769aca0ff45a433a --console-address air-qa.binalyze.com --organization-id 0
Note: The use of --deployment-token is required because the deployment token is cleared after the AIR responder registers. The use of --console-address and --organization-id is optional; they overwrite the console address and organization ID that were set in the configuration file during the first installation, before the image was taken.
This flag does the following:
Updates the DeploymentToken, ConsoleAddress, and OrganizationID values entered as a command in the config.yml.
Starts the service.
Enables the service.
Watchdog is installed automatically after registration if it is enabled by AIR Console.
An exit code other than 0 (zero) means an error occurred while executing the commands. The error messages are printed to the terminal and written to the log file.
If something goes wrong, the first option is to re-run the same command.
If a re-run of the command doesn't succeed, the user should perform the same steps manually.
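The re-run guidance above can be wrapped in a small shell helper. This is an illustrative sketch, not part of the AIR tooling; the commented line shows the intended usage with a golden-image command from this guide:

```shell
# Run a command and, if it exits non-zero, retry it once before giving up.
run_with_retry() {
  "$@" && return 0
  echo "command failed, retrying once..." >&2
  "$@"
}

# Example usage (run on an asset with the responder installed):
# run_with_retry /opt/binalyze/air/agent/air configure --prepare-golden-image
```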
Chrome 90+
The AIR responder requires the asset on which it runs to meet the following hardware requirements:
Minimum of 2 GB of RAM.
Minimum of 2 GB of free disk space.
This page summarizes the capabilities and current limitations of Responder for Organization Units (OUs) within an Active Directory (AD) environment.
Key Points:
Current Capability:
Once Active Directory integration is complete, AIR will display the domain on the Assets page.
Users can filter assets by clicking on their Organization Unit on the Assets page. Further filtering by Managed Status set to "Managed" will show the assets where the Responder is installed.
Limitation and Requests:
As of now, AIR does not support querying or installing Responders directly at specific OU levels (e.g., SecurityTesting.Binalyze.local) beyond the root AD level (e.g., binalyze.local).
A feature request has been submitted to allow integration directly at the OU level to enhance targeted management within the domain structure.
Installation Note:
The AIR Responder will report on systems where it is installed. It does not automatically install on systems within an AD environment where it is not already installed.
When integrating AIR with Active Directory, it is important to note that the account used for this integration does not require Domain Admin permissions. The integration primarily involves LDAP searches for reading directory information. Therefore, having Domain Users permission is sufficient for LDAP integration with AIR. This ensures that the necessary operations can be performed securely without granting excessive privileges.
Conclusion: Efforts to extend AIR's integration capabilities to specific OUs are ongoing, following feedback and feature requests. This enhancement aims to provide more granular control and efficiency in managing cybersecurity operations across different organizational units.
C:\ProgramData\.binalyze-air\
or %ProgramData%\.binalyze-air\*
The Binalyze AIR Watchdog Folder (C:\ProgramData\.binalyze-air\
or %ProgramData%\.binalyze-air\
) is a critical directory used by the Binalyze AIR responder for storing internal data required to maintain and monitor the health and proper functioning of the AIR responder agent. This folder contains temporary files, logs, and configuration data that help the Watchdog component of the AIR platform ensure that the responder agent is running correctly and automatically restarts the agent if any issues arise.
Health Monitoring: The Watchdog monitors the responder agent’s status. If the agent stops unexpectedly or malfunctions, the Watchdog uses this folder to store diagnostic data and trigger the necessary actions (e.g., restarting the agent).
Temporary Storage: The folder stores temporary files used by the AIR responder during its forensic and investigative processes. These may include logs, process monitoring data, or execution-related files.
Configuration Data: The directory can also house configuration and state files that help the agent track its operational state, ensuring that it maintains continuity of processes even in the event of interruptions.
When configuring EDR (Endpoint Detection and Response) or AV (Antivirus) software, it is essential to exclude this folder from being scanned or interfered with. Failure to do so may cause unnecessary alerts or interruptions to the operations of the AIR responder, potentially halting the forensic collection process or causing data collection to fail.
Absolute Path:
C:\ProgramData\.binalyze-air\*
This is the standard path used by the Binalyze AIR Watchdog on Windows systems.
Environment Variable Path:
%ProgramData%\.binalyze-air\*
This variation uses the %ProgramData%
environment variable, which points to the C:\ProgramData\
folder. It's a more dynamic way of referencing the same location in different system configurations.
For Binalyze AIR to function seamlessly, especially during critical incident response tasks, excluding this folder from AV/EDR scans or interference is vital. The Watchdog service ensures that the responder is continuously running and can self-correct when issues arise. Blocking access to or deleting files from this folder could disrupt the AIR responder's ability to perform its monitoring tasks, leading to downtime and delayed investigations.
To ensure uninterrupted operation, follow these allow-listing rules in your security setup:
Windows AV/EDR Systems: Allow-list the folder C:\ProgramData\.binalyze-air\*
Linux/macOS Equivalents: Similar watchdog components may exist in those environments within paths like /usr/share/.binalyze-air/
or /opt/binalyze/air/agent/
(adjust based on OS).
By allowing the Watchdog folder, you ensure Binalyze AIR remains resilient and responsive, even in the event of unexpected issues.
C:\Program Files (x86)\Binalyze\AIR\agent\
C:\ProgramData\.binalyze-air
C:\Program Files (x86)\Binalyze\AIR\agent\AIR.exe
C:\Program Files (x86)\Binalyze\AIR\agent\DRONE.exe
C:\Program Files (x86)\Binalyze\AIR\agent\TACTICAL.exe
%ProgramData%\.binalyze-air\WATCHDOG.exe
C:\Program Files (x86)\Binalyze\AIR\agent\utils\curl.exe
C:\Program Files (x86)\Binalyze\AIR\agent\utils\osqueryi.exe
/opt/binalyze/air/agent/air
/opt/binalyze/air/agent/drone
/opt/binalyze/air/agent/tactical
/opt/binalyze/air/agent/utils/osqueryi
/opt/binalyze/air/agent/utils/curl
/usr/share/.binalyze-air/watchdog
/opt/binalyze/air/agent/air
/opt/binalyze/air/agent/drone
/opt/binalyze/air/agent/tactical
/opt/binalyze/air/agent/utils/osqueryi
/opt/binalyze/air/agent/utils/curl
/usr/share/.binalyze-air/watchdog
Are you encountering security warnings when accessing AIR? Let's demystify the process of changing SSL certificates in Binalyze AIR to ensure seamless and secure connections.
Understanding Self-Signed Certificates:
Many users wonder why they receive security warnings when accessing AIR. These warnings often stem from the use of self-signed certificates, which are SSL certificates created without a certificate authority. While self-signed certificates offer convenience, they lack the validation provided by certificate authorities, leading to "untrusted" status in browsers.
Navigating CSR and Certificate Authorities:
To obtain SSL/TLS certificates, organizations generate Certificate Signing Requests (CSRs) and submit them to certificate authorities (CAs) for validation. However, Binalyze AIR does not handle CSRs due to identity verification complexities and associated fees. Instead, users are encouraged to create CSRs using tools like OpenSSL and obtain certificates from trusted CAs.
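As a sketch of that approach, a CSR can be generated with OpenSSL as below. The hostname, organization, and file names are placeholders; your CA may require additional fields (e.g., subjectAltName):

```shell
# Generate a 2048-bit RSA private key and a CSR for the AIR Console.
# Replace the CN with your console's actual DNS name before submitting
# the CSR to your certificate authority.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout air-console.key -out air-console.csr \
  -subj "/CN=air.example.com/O=Example Corp"
```

Keep the generated .key file private; only the .csr is submitted to the CA.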
Importing Certificates into Binalyze AIR:
To import corporate or self-signed certificates into Binalyze AIR, users must establish the certificate chain, including the certificate itself, intermediate certificate, and root certificate. This process can be accomplished using a text editor and imported via the user interface.
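Establishing the chain amounts to concatenating the PEM files in order: leaf certificate first, then intermediate, then root. A sketch with stand-in files (the real contents come from your CA; the file names here are placeholders):

```shell
# Stand-in files so the concatenation step can be shown end to end;
# replace them with the PEM certificates issued by your CA.
printf 'LEAF CERTIFICATE\n' > server.crt
printf 'INTERMEDIATE CERTIFICATE\n' > intermediate.crt
printf 'ROOT CERTIFICATE\n' > root.crt

# Assemble the chain: leaf, intermediate, root.
cat server.crt intermediate.crt root.crt > fullchain.pem
```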
Clarifying Common Queries:
Properly posed questions, such as "How can I import my corporate certificate to AIR?" or "How can I upload my self-signed certificate to AIR?", help streamline the process. The answer lies in establishing the certificate chain and, for self-signed certificates, ensuring the certificate is included in the clients' trusted root certificates store.
Final Thoughts:
While SSL certificate management may seem complex, the tasks involved are straightforward. Support engineers need not possess extensive SSL knowledge, and consulting a system administrator is always recommended for SSL-related tasks. By understanding these fundamentals, users can navigate SSL certificate changes in Binalyze AIR with confidence.
To update the Relay Server, you can initiate an update task from the AIR Console.
Follow these steps in the AIR Console:
Locate the Relay Server on the "Organization Detail" page.
Select the Relay Server and click the "Update" button on the "Relay Server Details" page.
By running the update task from the AIR Console, both the Relay Server installed on the asset and the responder itself will be updated to the latest version.
Similarly, if you wish to uninstall the Relay Server from an asset, click the red button next to "Update" on the "Relay Server Details" page in the AIR Console. This will remove the Relay Server and the responder from the system.
Installation of the AIR Responder on your assets is managed via the Assets button in the Main Menu:
The AIR Responder installer is a zero-configuration package that contains the console address already embedded in it.
You can deploy the AIR Responder in multiple ways:
Downloading an installation package (Windows, macOS, Linux, Chrome and ESXi)
Copying a PowerShell Command (Windows)
Copying a CURL Command (macOS and Linux)
Copying a WGET Command (macOS and Linux)
Downloading a PowerShell Script (Windows)
Downloading the Asset installer (macOS and Linux)
Manual installation via Active Directory/SCCM.
Generation of a shareable Deployment Link (Windows, macOS, Linux, Chrome and ESXi)
In the sections that follow, we will look at the deployment of AIR Responders to Windows, Linux and Mac operating systems.
The AIR Responder is a 'zero-config' deployment: the file name carries all the information you need to quickly deploy a Responder.
The installer itself is a digitally signed binary, which prevents issues with security solutions; to date, not one such issue has arisen.
The file name example shown here has five main components:
2.38.7 - is the Responder version number.
air-demo.binalyze.com - is the address of the console with which the Responder will be communicating
176 - is the console's internal organization number ID.
9df51c56a73341f4 - the apparently random mixture of letters and numbers is the Deployment Token.
386 - describes the processor architecture of the machine on which the Responder will run.
There are multiple ways of deploying the Responder all of which are designed to be quick and scalable. Let's take a look at the different ways in which you can deploy the AIR Responder to your assets:
From the Main Menu select 'Assets' and then 'All Assets' from the Secondary Menu. Now you will see the page name 'Assets' and next to that is the Action Button which for the Assets page is labeled '+ Add New.'
When this '+ Add New' button is selected three deployment options are offered in a drop-down menu:
Each one of the options will present the user with a wizard that will walk through the options needed for the chosen deployment method:
Deploy New - For assets that are attached to a network that is visible to the AIR console
Cloud Account - For assets that reside in AWS EC2, and Virtual Machines in Microsoft Azure.
Off-Network - To generate triage and collection packages for assets that are not connected to a visible network.
When you choose 'Deploy New', you'll be prompted via a wizard to determine if the Responder should establish a direct connection to the AIR console or if utilizing a Relay Server connection would be more suitable for your environment. Relay Server is explained here.
The second step of the deployment wizard provides distinct deployment options for all of the currently supported, network-attached operating systems: Windows, Linux, and macOS:
The command varies based on the Organization affiliation. An example PowerShell command to copy is provided below:
This command is specific to your console address and Organization.
This script can be downloaded from your AIR Console. Ensure you select or are working in the appropriate Organization before downloading.
If you prefer, the Windows responder can be deployed using SCCM with the following command:
For a silent installation you can use the following command:
These commands are specific to your console address and Organization.
The MSI for the Windows Responder can be downloaded directly from the page, as depicted in the screenshot below:
All three operating systems support the Shareable deployment link available in the console. This method is often the most straightforward—simply share the link with your client, allowing them to download and install the Responder. An example link is shown below:
Unlike Windows, macOS and Linux do not utilize PowerShell commands or scripts. Instead, they can employ CURL or WGET commands. Alternatively, you can use the Shareable deployment page link mentioned above.
Example of CURL deployment command:
Example of WGET deployment command:
These commands are specific to your console address and Organization.
For macOS, the user/administrator has to allow Full Disk Access (FDA) to the AIR Responder for it to have full access to the disk for collections.
Open “System Settings -> Privacy & Security -> Full Disk Access”
Toggle the switch 'on' to enable Full Disk Access for the AIR Responder.
After installing Responder on macOS, users need to grant Full Disk Access permission. To guide users through this process, a popup will appear after installation:
If Full Disk Access permission is not granted when starting any Acquisition, this will be shown in the Acquisition logs:
After toggling on the FDA on this page, select the /opt/binalyze/air/agent/air file in the file manager that opens. Once this is done, our responder will appear in the list under the name 'air' ready for the user to toggle 'on'.
The AIR Responder operates as an executable binary running as a service rather than a traditional macOS application. This approach ensures consistency across platforms like Linux and macOS.
Since AIR is not packaged as a macOS app, it does not include a .plist
file, which typically contains the application icon metadata. Consequently, it cannot display a logo on the Full Disk Access page.
This design choice does not affect the functionality or performance of Binalyze AIR.
While the popup effectively guides users in manually installed scenarios, it presents challenges for enterprise environments where macOS devices are managed via Mobile Device Management (MDM). MDM allows remote application installation and security policy enforcement, including granting Full Disk Access.
Customers prefer silent installations for MDM-deployed Responder without the popup, as permissions are already set through security policies. However, our current setup cannot distinguish between user-initiated and MDM-initiated installations, causing the popup to appear in all cases.
We are actively working on a solution to address this issue for seamless enterprise deployments.
AIR Responder Operation in Windows Safe Mode
Binalyze AIR Responder is now capable of functioning in Safe Mode, allowing forensic acquisition and remote tasking on machines operating in a restricted state. However, to maintain full functionality and allow task execution via the AIR Console, specific registry modifications must be applied before entering Safe Mode.
Before booting into Safe Mode, execute the following Registry modifications to register the AIR Agent Service:
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Network\Binalyze.AIR.Agent.Service" /VE /T REG_SZ /D "Service" /F
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\Binalyze.AIR.Agent.Service" /VE /T REG_SZ /D "Service" /F
These registry changes can also be made via the Windows UI: run msconfig to open the System Configuration window, then on the Boot tab select Safe Boot with the Network option active:
These registry entries ensure the Binalyze AIR Agent Service is recognized and loaded in Safe Mode.
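To confirm the entries took effect before rebooting into Safe Mode, the keys can be checked with the standard REG QUERY command (an optional verification step, run from an elevated command prompt):

```shell
REM Optional: verify the Safe Mode service entries exist
REG QUERY "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Network\Binalyze.AIR.Agent.Service" /VE
REG QUERY "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\Binalyze.AIR.Agent.Service" /VE
```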
Safe Mode with Networking
If a machine enters Safe Mode with Networking, the AIR Agent will continue operating as expected, maintaining communication with the AIR Console.
Safe Mode (Without Networking)
The AIR Agent cannot communicate with the console if networking is unavailable unless an off-network package is used for forensic acquisitions.
Remote Task Execution
Without the registry modifications, the AIR Console cannot issue remote tasks to the endpoint in Safe Mode.
Adding the registry keys before booting into Safe Mode ensures that Responder and interACT remain functional.
If the registry modifications are not applied and the AIR Agent does not load, users can manually execute AIR.exe after entering Safe Mode to establish a temporary connection.
However, this approach is not recommended due to potential inconsistencies and administrative overhead.
By proactively applying the recommended registry changes, organizations can ensure seamless forensic investigations even when endpoints are booted in Safe Mode.
1- Click evidence collection for Chrome
The AIR responder standalone collector currently supports execution on Chrome v90+ and ChromeOS.
AIR For Chrome is the evidence collector extension for Chrome and ChromeOS. It allows investigators and analysts to capture forensically sound data with a single click at machine speed. All data is collected into a well-organized HTML report accompanied by individual CSV files.
AIR For Chrome is the fastest and easiest way of capturing forensically sound data from Google Chrome browsers. The forensically sound data collected by AIR For Chrome are listed below.
Browser History
Bookmarks
Cookies
Downloads
Extensions
Platform Keys
Privacy Settings
Proxy Settings
Sessions
Storage
Top Sites
Windows & Tabs
To address a security vulnerability involving Host header injection, we have (with AIR v4.33) implemented more stringent controls on AIR Console access.
This update enhances security protocols and provides administrators with better control over access settings:
Access Restriction: The AIR Console will now only be accessible through the specific address registered during the initial setup, ensuring that only legitimate requests are processed.
Technical Enforcement: This measure counters manipulations of the Host header that could potentially allow unauthorized access.
Configuration Flexibility: For legitimate access needs from multiple domains or IP addresses, users can specify allowable entries via the AIR_CONSOLE_ADDRESSES environment variable.
Enhanced Security: This change prevents unauthorized access and aligns with best practices for secure network management.
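A sketch of configuring the documented AIR_CONSOLE_ADDRESSES variable, assuming it is read from the compose .env file in the default install path /opt/binalyze-air. The addresses are placeholders, and the demo writes to a local file named air-demo.env so the step can be shown safely:

```shell
# Add extra allowed console addresses (comma-separated placeholders).
# On a real install, append to /opt/binalyze-air/.env instead.
printf 'AIR_CONSOLE_ADDRESSES=air.example.com,10.20.30.40\n' >> air-demo.env
grep 'AIR_CONSOLE_ADDRESSES' air-demo.env

# After editing the real .env, restart the containers:
# cd /opt/binalyze-air && docker compose down && docker compose up -d
```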
If you are unsure of your AIR Console Address, you can check the config.yml
file on one of your assets:
Windows: C:\Program Files (x86)\Binalyze\AIR\agent\config.yml
Linux or macOS: /opt/binalyze/air/agent/config.yml
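A quick way to read the value on a Linux/macOS asset is to grep config.yml. A sketch: the ConsoleAddress key name follows the config.yml fields mentioned in the golden-image section, and a stand-in file is created here so the command can be demonstrated on a machine without the responder installed:

```shell
# Check the console address recorded in the responder's config.yml.
CFG=/opt/binalyze/air/agent/config.yml
# Fall back to a demo file with a placeholder address if not on an asset.
[ -f "$CFG" ] || { CFG=./config-demo.yml; printf 'ConsoleAddress: https://air.example.com\n' > "$CFG"; }
grep -i 'consoleaddress' "$CFG"
```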
Troubleshooting Console Access Issues
If you encounter the error message "Invalid Host Header. Host must be the Console Address" when accessing Binalyze AIR, it means the system is enforcing stricter security controls to prevent unauthorized access. This typically occurs after upgrading to AIR Console v4.33 or later. To understand why this happens and how to resolve it, refer to our documentation for step-by-step instructions on configuring additional console addresses.
Login to the AIR Console Web UI using a Global Admin Account.
Navigate to the Backup Management section by selecting the Gear Button and then "Backup History" from the drop-down list.
Make a backup of the system by clicking the "Backup Now" button on the top right corner.
Download the backup file by clicking the Vertical Ellipsis Button under the "Actions" column and clicking Download from the drop-down list.
This will download a zip file with the ABF extension (AIR Backup File).
NOTE: You must stop AIR System first.
Use a terminal emulator, such as PuTTY to connect to the CLI of the AIR Server via SSH.
Navigate to the AIR folder (/opt/binalyze-air by default) by executing the following command:
cd /opt/binalyze-air
Stop containers by executing the following command:
docker compose down -v
Navigate to the AIR DB folder (/opt/binalyze-air-db by default) by executing the following command:
cd /opt/binalyze-air-db
Stop containers by executing the following command:
docker compose down -v
NOTE: You must upgrade DB first.
Use a terminal emulator, such as PuTTY to connect to the CLI of the DB Server via SSH.
Navigate to the AIR DB folder (/opt/binalyze-air-db by default) by executing the following command:
cd /opt/binalyze-air-db
Pull the latest images by executing the following command:
docker compose pull
Start containers by executing the following command:
docker compose up -d
Use a terminal emulator, such as PuTTY to connect to the CLI of the AIR Server via SSH.
Navigate to the AIR folder (/opt/binalyze-air by default) by executing the following command:
cd /opt/binalyze-air
Pull the latest images by executing the following command:
docker compose pull
Start containers by executing the following command:
docker compose up -d
Login to the AIR Console Web UI using a Global Admin Account.
Navigate to the Backup management section by selecting the Gear Button and then "Backup History" from the drop-down list.
Make a backup of the system by clicking the "Backup Now" button in the top right corner.
Download the backup file by clicking the Vertical Ellipsis Button under the "Actions" column and clicking Download from the drop-down list.
This will download a zip file with the ABF extension (AIR Backup File).
Use a terminal emulator, such as PuTTY to connect to the CLI of the AIR Console & DB Server via SSH.
Navigate to the AIR folder (/opt/binalyze-air by default) by executing the following command:
cd /opt/binalyze-air
Stop containers by executing the following command:
docker compose -p binalyze-air down -v
Pull latest images by executing the following command:
docker compose pull
Start containers by executing the following command:
docker compose -p binalyze-air up -d
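Taken together, the single-tier upgrade steps above amount to the following sequence, run on the Console & DB server after taking and downloading a backup as described:

```shell
cd /opt/binalyze-air
docker compose -p binalyze-air down -v   # stop containers
docker compose pull                      # pull the latest images
docker compose -p binalyze-air up -d     # start containers
```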
How to restore from an AIR backup (.abf) using the CLI
Follow these commands to restore your AIR backup using the CLI.
This procedure applies to both Single-tier and 2-tier systems and is performed on the Console server.
Install a fresh AIR Console (choose Single-tier or 2-tier based on your needs).
Important: After installation, do not access the UI for first-time setup until you have completed the steps below.
Use your preferred file transfer tool to transfer your backup file to the new server.
Ensure the file has the .abf extension (e.g., 23-8-14_16.43.5.216_v3.abf).
Copy the backup file from the host system to the APP container:
docker cp <backup_file.abf> binalyze-air-app-1:/binalyze-air/
If using a 2-tier system, retrieve the AIR DB connection strings by running:
cat /opt/binalyze-air/.env
These URIs will be needed for validation in the next step.
Execute the CLI tool with:
sudo docker exec -ti binalyze-air-app-1 /air-cli
In the AIR-CLI interface, select:
"1) Restore using a backup file"
When prompted, enter the directory and filename where the backup file was copied, for example:
/binalyze-air/23-8-14_16.43.5.216_v3.abf
Press Enter through the default options until you see the confirmation message: "This operation will drop your current database and restore the provided backup. Are you sure to continue?" Type 'y' and press Enter to continue.
Once the restore completes successfully, a confirmation message will appear.
Exit the container by typing: exit
Restart the containers by running:
cd /opt/binalyze-air
docker compose down && docker compose up -d
Don't hesitate to get in touch with [email protected] if you have any problems or questions about the restoration process.
The AIR standalone collector currently provides support for execution on ESXi 6.5+ systems.
VMware ESXi is a type of hypervisor, which is software that creates and runs virtual machines (VMs). It is a part of VMware's vSphere product suite and is used for enterprise-level virtualization. ESXi is popular due to its stability, performance, and extensive feature set for managing and running virtual machines.
Binalyze AIR offers a robust approach for evidence collection from ESXi platforms. DRONE is not currently supported for ESXi systems. This is achieved through a standalone ESXi collector, available for download on the Assets page of your AIR console:
Assets>Add New>Deploy New>Direct connection to AIR Console >ESXi
After running Responder using your chosen method, the collected evidence should be converted into a PPC file. This PPC file can then be imported into the AIR Console. Once imported, the asset will be displayed alongside all other assets in AIR, ensuring seamless integration and visibility within the platform.
After ingestion into AIR, the ESXi evidence is parsed and presented in the Investigation Hub in the normal way:
However, if required, you can decompress the tar.gz file to access and examine the evidence independently. Typically, the evidence will include the following:
System Info: Basic system information about the ESXi machine.
Bash History: Command history executed on the Bash shell.
Collect Bash Files: Gathering files associated with the Bash shell.
Environment Variables: Variables defined in the system environment.
Collect /etc Files: Gather files under the /etc directory.
Log Files: Collecting various log files.
SSH Config: Retrieves the configuration settings related to the SSH (Secure Shell) protocol.
SSH Authorized Keys: Collects information about authorized SSH keys, which are used for secure authentication.
SSH Known Hosts: Gathers details about known hosts in the context of SSH.
File System Enumeration: Involves enumerating and collecting information about the file system on the ESXi machine.
A full list of ESXi collected items is shown here
Once the binary has been run, progress will be displayed in the user's terminal/shell:
ID / Collector Name / Collected Files
1. History Files: .ash_history, .bash_history, .sh_history, .tsch_history, .psql_history, .sqlite_history, .mysql_history, .vsql_history, .lesshst, .viminfo
2. Files of Interest: .bashrc, .bash_logout, .bash_login, .bash_profile, .mkshrc, .pam_environment, .profile, .zshrc, authorized_keys, known_hosts, ssh_config
3. Cronjob Files: /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, /etc/cron.monthly, /etc/cron.d
4. Cronjob Related Files: If any executable file is found in crontabs, it is collected.
5. /etc Collector: All files under /etc are collected.
6. Log Files: All files under /var/log and /scratch/log are collected.
7. Spool Files: All files under /var/spool are collected.
Command collectors:
1. Process Snapshot Detailed
2. Process Snapshot Verbose
3. Open Files
4. User Info
5. Disk Usage
6. Disk Usage By User
7. Disk Usage Human Readable
8. System Hostname
9. VMware Version
10. System Info
11. Shell Aliases
12. Environment Variables
13. ESX Advanced Configuration
14. ESX FCoE Configuration
15. ESX FCoE Networking
16. ESX IPsec Configuration
17. ESX IPsec Policy
18. ESX Module List
19. ESX Module Query
20. ESX Multipathing Info
21. ESX NAS Configuration
22. ESX Network Interface Cards
23. ESX Routing Table
24. ESX Network Routes
25. ESX IPv6 Routing Table
26. ESX IPv6 Network Routes
27. ESX SCSI Devices List
28. ESX VMKnic List
29. ESX Volume List
30. ESX VSwitch List
31. ESX Configuration Info
32. List all of the CPUs on this host.
33. List USB devices and their passthrough status.
34. List the boot device order, if available, for this host.
35. Display the current hardware clock time.
36. Get information about memory.
37. List all of the PCI devices on this host.
38. Information about the platform.
39. Information about the status of trusted boot (TPM, DRTM status).
40. List active TCP/IP connections.
41. List configured IPv4 routes.
42. List configured IPv6 routes.
43. List ARP table entries.
44. List the VMkernel network interfaces currently known to the system.
45. List configured Security Associations.
46. List configured Security Policies.
47. Print a list of the DNS servers currently configured on the system, in the order in which they will be used.
48. List the rulesets in the firewall.
49. List the physical NICs currently installed and loaded on the system.
50. List the virtual switches currently on the ESXi host.
51. Hostname
52. Get Open Network Files
53. Get Unix Socket Files
54. Get the network configuration.
55. Get the DNS configuration.
56. Get the IP forwarding table.
57. Get information about virtual NICs.
58. Display information about virtual switches.
59. List the installed VIB packages.
60. Get the host acceptance level. This controls what VIBs will be allowed on a host.
61. Display the installed image profile.
62. List the VMkernel UserWorld processes currently on the host.
63. Collect the list of open files.
64. Report a snapshot of the current processes, including used time, verbose output, session ID and process group, state, and type.
65. List the NAS volumes currently known to the ESX host.
66. List the NFS v4.1 volumes currently known to the ESX host.
67. List the volumes available to the host. This includes VMFS, NAS, VFAT, and UFS partitions.
68. Display the mapping of logical volumes to physical disks.
69. List the VMkernel modules that the system knows about.
70. List the enforcement level for each domain.
71. Get the FIPS140 mode of ssh.
72. Get the FIPS140 mode of rhttpproxy.
73. List the advanced options available from the VMkernel.
74. List VMkernel kernel settings.
75. Display the date and time when this system was first installed. The value will not change on subsequent updates.
76. Show the current global syslog configuration values.
77. Show the currently configured sub-loggers.
78. Display the WBEM Agent configuration.
79. List local user accounts.
80. Display the current system clock parameters.
81. List permissions defined on the host.
82. Display the product name, version, and build information.
83. List networking information for the VMs that have active ports.
84. List the virtual machines on this system. This command currently lists only running VMs on the system.
85. Get the list of virtual machines on the host.
86. List summary status for the VM.
87. Configuration object for the VM.
88. Virtual devices for the VM.
89. Datastores for all virtual machines.
90. List of networks for all virtual machines.
91. List registered VMs.
File system collectors:
1. File Listing: all files in the system are enumerated with the following fields: File Name, File Type, Size (bytes), Access Rights, User ID, User Name, Group ID, Group Name, Number of Hard Links, Mount Point, Inode Number, Birth Time, Last Access Time, Modification Time, Change Time.
2. Executable Hashes: MD5 hashes are collected for all files in the system that have executable permission.
Jamf is a software company that supplies one of the best-known and most popular Mobile Device Management (MDM) solutions for managing Apple devices. Using Jamf, and following the steps below, you can silently grant Full Disk Access to AIR Responders remotely.
Full Disk Access (FDA) on macOS can be activated by importing a Privacy Preferences Policy Control (PPPC) config file instead of manually providing permission options via the Jamf UI.
AIR (like all other platforms) can only achieve complete macOS data acquisition if FDA is enabled. Artifacts that typically give partial or no results without FDA include:
App Usage
Bluetooth Connections
Document Revisions
Downloads
DS_Store
Notification Info
TCC
A PPPC config file in macOS manages permissions for apps to access sensitive data and system features like Full Disk Access, camera, and microphone. It's used by organizations to pre-configure these permissions, often through MDM, ensuring necessary apps run without user prompts. These files are in .mobileconfig (XML) format and help balance security with convenience by automating privacy settings for applications.
Steps to follow:
Download and open the Jamf PPPC Utility: https://github.com/jamf/PPPC-Utility/releases/tag/1.5.0
From a MacBook where Binalyze AIR is already installed, go to the path /opt/binalyze/air/agent and drag the "air" binary into PPPC Utility; you will then be able to see the identifier details.
In Properties, set "Full Disk Access" to "Allow".
At the bottom right, click "Save" and provide a Payload Name, for example "AIR".
Save AIR.mobileconfig:
Now you can import the saved config file into Jamf under Configuration Profiles.
Identifier and Identifier Type for importing the config created using PPPC utility to achieve FDA:
An entry is created in /Library/Application Support/com.apple.TCC/TCC.db for all the applications that were assigned FDA (Manual Install)
For remote deployments, an entry is created in /Library/Application Support/com.apple.TCC/MDMOverrides.plist
For practical verification, users should try to collect KnowledgeC evidence. Successful collection confirms that the responder has Full Disk Access.
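For the manual-install case, one hedged way to inspect the TCC database directly is a sqlite3 query on the asset itself. This is macOS-only, assumes the querying terminal app already has Full Disk Access, and the TCC.db schema can vary between macOS versions:

```shell
# macOS only; requires the terminal running this command to have FDA itself.
# In recent macOS versions, auth_value 2 in the access table means "allowed".
sudo sqlite3 "/Library/Application Support/com.apple.TCC/TCC.db" \
  "SELECT client, auth_value FROM access WHERE service='kTCCServiceSystemPolicyAllFiles';"
```

If the "air" binary's identifier appears with an allowed auth_value, FDA was granted via manual install; MDM-deployed grants appear in MDMOverrides.plist instead, as noted above.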
There are several ways to uninstall the AIR Responder from assets, including using the AIR Console or working on the actual asset.
It is important to understand that you should only remove the Responder if you have no intention of revisiting the asset for further investigations. If you do need to revisit it, a fresh Responder deployment will be required.
From the Assets button in the Main Menu it is possible to select one or multiple assets and then, via the Bulk Action Bar, choose to either 'Uninstall a Responder' or to 'Uninstall responder and purge console data'.
It is also possible to uninstall a Responder from an individual asset's Asset Info page by selecting the option from the Asset Actions drop-down menu:
The 'Uninstall Responder' will remove the AIR Responder application from any selected assets.
The 'Uninstall Responder and purge console data' option will remove the AIR Responder application from the selected assets and delete the data saved from those assets on the console. All associated tasks (e.g., Timeline) will also be deleted from the console. Data saved to remote storage, and locally saved data on the asset, will remain intact; interACT or normal asset-management tools can be used to remove this data.
Password Protection for AIR Responder Uninstallation
When the Uninstallation Password feature is enabled in AIR's settings, a protection password is required to uninstall the AIR Responder. This feature restricts the uninstallation process to command-line operations only, as uninstallation through the local operating system's user interface (UI) is disabled.
Here are the key points regarding this feature:
Command-Line Uninstallation: The AIR Responder must be uninstalled using shell commands. During this process, the protection password must be included as an argument. This can be executed either locally or through remote management tools like SCCM.
Local User Restrictions: Local users must have the protection password to uninstall the Responder. Without this password, uninstallation via local user interfaces is not possible.
UI and API Uninstallation: Uninstallation through the AIR UI or API does not require the protection password, allowing for more flexible management remotely.
Tamper Detection: AIR monitors and logs any tampering with the Responder. This includes actions like deletion, pausing, termination, or any interference, enhancing security and accountability.
This structured approach ensures that only authorized personnel can remove the AIR Responder, providing an additional layer of security against unauthorized tampering and ensuring compliance with security policies.
The Delete Asset button is available only for Disk Image asset types. For any other asset type, this option remains grayed out. When used, it simply removes the Disk Image of the asset from the console without affecting the asset itself.
As shown above, when attempting to delete assets in the system, certain restrictions apply based on the type of assets selected. For instance, if you select both a Windows asset and a Disk Image asset simultaneously, the "Delete Asset" option becomes unavailable (greyed out). This is because the Windows Asset is classified as non-deletable.
Key Details:
Non-Deletable Assets: Windows assets are considered non-deletable within this system due to their critical nature or specific configuration settings that prevent deletion.
Tooltip Information: When the "Delete Asset" option is greyed out, a tooltip will appear indicating that a non-deletable asset (the Windows Asset) has been selected, providing clarity on why deletion is restricted.
This design ensures that critical assets are protected from accidental deletion, enhancing the security and integrity of the system's data management.
To gracefully uninstall the Responder application from your Windows operating system, follow these steps:
Navigate to the Control Panel.
Access the "Add/Remove Programs" feature.
Locate and select the Binalyze AIR Responder application from the list.
Choose the option to uninstall.
You can also uninstall the Responder application using the command prompt with the following methods:
Using Product Code
To uninstall via the product code, execute the following steps:
Identify the product code of the Responder using PowerShell:
Copy the identified product code.
Uninstall the Responder using msiexec:
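The two methods can be sketched as follows from an elevated Windows command prompt. The GUID shown is illustrative (the same example product code used elsewhere on this page), and the MSI filename is a placeholder:

```shell
rem Step 1 - identify the product code (PowerShell; Win32_Product queries can be slow):
powershell -Command "Get-WmiObject Win32_Product | Where-Object { $_.Name -like '*Binalyze*' } | Select-Object Name, IdentifyingNumber"

rem Step 2 - uninstall silently via the product code (illustrative GUID):
msiexec /x "{84662419-2FEB-48D0-AFBF-C174D871A3CA}" /qn

rem Alternatively, with the original MSI file (placeholder filename):
msiexec /x air-responder.msi /qn
```

The /qn switch suppresses the MSI user interface; drop it if you want interactive confirmation.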
Using Original MSI File
If you possess the original MSI file of the Responder, you can proceed as follows:
In either method, you can efficiently uninstall the Responder application from your system.
To uninstall a password-protected Responder, specify your uninstall password with the UNINSTALL_PASSWORD property at the command prompt:
msiexec /x "{84662419-2FEB-48D0-AFBF-C174D871A3CA}" UNINSTALL_PASSWORD="my-password"
When uninstalling the Binalyze AIR Responder program from a computer, certain files and directories are methodically cleaned up to ensure no residual data remains. All of these files are deleted by the Responder before the service is deleted.
Utils Directory: The utils binaries located in the Responder's installation directory are removed. If the installation directory is C:\Program Files (x86)\Binalyze\AIR\agent, the folder can be found at:
C:\Program Files (x86)\Binalyze\AIR\agent\utils\
Upload Temporary Directory: The directory used for temporary storage of upload files is cleared. This can be found in one of the following paths.
C:\Users\[user]\AppData\Local\Temp\BinalyzeUploadTemp
C:\Windows\TEMP\BinalyzeUploadTemp
Update Temporary Directory: The directory used for temporary storage of update files is cleared. This directory can be found in one of the following paths.
C:\Users\[user]\AppData\Local\Temp\BinalyzeUpdateTemp
C:\Windows\TEMP\BinalyzeUpdateTemp
Update Task Download Directory: The directory used for downloading MSI binaries. If the Windows system directory is C:\, the path can be found as follows.
C:\BinalyzeUpdateTemp
Binalyze Temp Directories: If the temp location is C:\Windows\TEMP\, the paths can be found as follows.
C:\Windows\TEMP\Binalyze
C:\Windows\TEMP\BinalyzeTemp
Open a terminal window.
To uninstall the Binalyze AIR Responder package, use the following command:
sudo apt remove binalyze-air-agent
This command will uninstall the package.
Open a terminal window.
To uninstall the Binalyze AIR Responder package, run the following command:
sudo dnf remove binalyze-air-agent
This command will uninstall the package.
When uninstalling the Binalyze AIR Responder program from a computer, certain files and directories are methodically cleaned up to ensure no residual data remains.
Drone Config File: The Drone config file is located in the Responder's installation directory. If the installation directory is /opt/binalyze/air/agent, the file can be found at:
/opt/binalyze/air/agent/DRONE.Config.yml
Utils Directory: The utils binaries located in the Responder's installation directory are removed before the uninstallation of the service. If the installation directory is /opt/binalyze/air/agent, the folder can be found at:
/opt/binalyze/air/agent/utils
Upload Temporary Directory: The directory used for temporary storage of upload files is cleared. This folder can be found as follows.
/var/lib/binalyze/BinalyzeUploadTemp
Update Temporary Directory: The directory used for temporary storage of update files is cleared. This folder can be found as follows.
/var/lib/binalyze/BinalyzeUpdateTemp
Update Task Download Directory: The directory used for downloading deb or rpm binaries. If the Linux temp directory is /tmp, the folder can be found as follows.
/tmp/BinalyzeUpdateTemp
Binalyze Temp Directories: If the temp location is /tmp, the folders can be found as follows.
/tmp/Binalyze
/tmp/BinalyzeTemp
Persistent Folder: The persistent folder can be found in:
/var/lib/binalyze
Config File: The config file is located in the Responder's installation directory and is deleted after the Responder is removed. If the installation directory is /opt/binalyze/air/agent, the file can be found at:
/opt/binalyze/air/agent/config.yml
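As a quick post-uninstall check, the default paths listed above can be probed in one loop. This assumes the default install and temp locations; adjust the list if your deployment used custom paths:

```shell
# Probe the default Responder paths; anything printed as REMAINS was not cleaned up.
for p in /opt/binalyze/air/agent /var/lib/binalyze \
         /tmp/Binalyze /tmp/BinalyzeTemp /tmp/BinalyzeUpdateTemp; do
  if [ -e "$p" ]; then
    echo "REMAINS: $p"
  else
    echo "clean:   $p"
  fi
done
```

On a correctly uninstalled asset, every line should read "clean:".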
To initiate the uninstallation process for the Responder via the Terminal on macOS, execute the following command:
sudo /opt/binalyze/air/agent/air --uninstall
This command, executed within the Terminal, will seamlessly guide you through the removal of the Responder application from your macOS system.
To uninstall a password-protected Responder, specify your uninstall password with the AIR_UNINSTALL_PASSWORD environment variable in the following command:
AIR_UNINSTALL_PASSWORD="my-password" sudo -E /opt/binalyze/air/agent/air --uninstall
Uninstallation File and Directory Cleanup Process
When uninstalling the com.binalyze.air-agent program from a computer, certain files and directories are methodically cleaned up to ensure no residual data remains. All of these files are deleted by the Responder after the package info is deleted.
Utils Directory: The utils binaries located in the Responder's installation directory are removed before the uninstallation of the service. If the installation directory is /opt/binalyze/air/agent, the folder can be found at:
/opt/binalyze/air/agent/utils
Binaries: If the installation directory is /opt/binalyze/air/agent, these files are located at:
/opt/binalyze/air/agent/air
/opt/binalyze/air/agent/tactical
/opt/binalyze/air/agent/drone
Config File: This file is located in the Responder's installation directory. If the installation directory is /opt/binalyze/air/agent, the file can be found at:
/opt/binalyze/air/agent/config.yml
Drone Config File: This file is located in the Responder's installation directory. If the installation directory is /opt/binalyze/air/agent, the file can be found at:
/opt/binalyze/air/agent/DRONE.Config.yml
Service File: This file can be found in:
/Library/LaunchDaemons/com.binalyze.air-agent.plist
Upload Temporary Directory: The directory used for temporary storage of upload files is cleared. This folder can be found as follows.
/var/lib/binalyze/BinalyzeUploadTemp
Update Temporary Directory: The directory used for the temporary storage of update files is cleared. This folder can be found as follows.
/var/lib/binalyze/BinalyzeUpdateTemp
Update Task Download Directory: The directory used for downloading pkg binaries. If the Unix temp directory is /tmp, the folder can be found as follows.
/tmp/BinalyzeUpdateTemp
Binalyze Temp Directories: If the temp location is /tmp, the folders can be found as follows.
/tmp/Binalyze
/tmp/BinalyzeTemp
Persistent Folder: The persistent folder can be found in:
/var/lib/binalyze
Updating the AIR console
This page provides a comprehensive guide to configuring and managing all settings in Binalyze AIR, covering:
General Settings: Platform-wide configurations.
Assets: Managing asset inventories.
Security: Setting up security features.
Features: Customizing AIR’s core functionalities.
Users and Roles: Administering user permissions and roles.
Evidence Repositories: Configuring storage for collected evidence.
Policies: Defining evidence collection rules.
Backup and Backup History: Managing backups and retention schedules.
Each section ensures optimal setup for your AIR environment.
This section provides details on the versions of various components of the Binalyze AIR platform, helping administrators ensure that all parts of the system are up to date.
AIR: The main application version (e.g., 4.23.3). This represents the core platform's release and includes the latest features and security updates.
DB (Database): The version of the database used by AIR (e.g., 6.0.7), which stores all data related to the platform’s taskings and configuration settings.
Responder: The version of the AIR responder (e.g., 2.50.5) installed on assets for data acquisition and remote interaction.
DRONE: The version of the DRONE analysis engine (e.g., 3.11.0), which processes collected evidence, along with some live artifacts, through automated analyzers to deliver findings and insights.
TACTICAL: These versions indicate the status of various responders for different operating systems, including Linux, macOS, Windows, and the legacy version for older Windows systems. For example, the latest responders are at version 3.12.1, ensuring up-to-date compatibility with operating system environments.
MITRE ATT&CK Analyzer: This version (e.g., 7.0.0) refers to the built-in mapping against the MITRE ATT&CK framework, which helps identify adversary tactics, techniques, and procedures during investigations.
Disk Image Explorer: This component (e.g., version 1.0.0) provides functionality for exploring disk and volume images acquired during investigations.
Log Level: Determines the verbosity of logging within AIR. Adjusting the log level can help in debugging or keeping track of system activity.
Log Files: Provides access to the system's log files, which are useful for auditing, troubleshooting, and reviewing system performance and security events.
This section provides details about the current licensing status of the Binalyze AIR installation.
License Key: Displays the license key currently in use (e.g., AIR-TEST-LICENSE).
Valid Until: The expiration date of the license (e.g., 2025.09.29), which tells you how long the platform is licensed for.
Max Client: The maximum number of assets (clients) that can be managed under this license (e.g., 1,000,000 assets).
In Use: The number of assets currently being monitored by AIR (e.g., 447,908 assets).
Remaining: The number of asset slots still available (e.g., 552,092 assets). This helps ensure scalability and license compliance.
Console Address: This is the current address of the AIR Console (e.g., air-demo.binalyze.com) where asset responders are polling to check for any tasking assignments that need execution.
Important: Changing this address will trigger a migration process, which will cause all assets to connect to the new address while deregistering from the old one.
Console Proxy: Settings for configuring an internet proxy that AIR can use to connect to external services, such as updates or external evidence storage.
Address: The IP address of the proxy (e.g., 10.0.0.1).
Port: The port used for proxy communication (e.g., 0).
Username and Password: Credentials for authenticating with the proxy.
Certificate Authority (CA): If your organization uses a custom CA for SSL communication, this setting allows you to upload the certificate in the appropriate format for secure connections between assets and the AIR Console.
Displays information about the system where AIR is installed, helping monitor and optimize performance.
CPU:
Cores: The number of processor cores (e.g., 8), indicating the processing power available for handling AIR tasks.
CPU Type: Details of the CPU model (e.g., Intel Xeon Processor, Skylake architecture).
Flags: A list of supported CPU features (e.g., SSE, HT, etc.), indicating hardware capabilities relevant to performance.
Memory:
Total Memory: The total available system memory (e.g., 32.87 GB).
Used Memory: The amount of memory currently in use (e.g., 5.29 GB).
Free Memory: The remaining available memory (e.g., 27.58 GB), ensuring there are enough resources to handle future operations.
File System:
Total Storage: The total storage space available (e.g., 315.93 GB).
Used Storage: How much storage is currently used (e.g., 189.46 GB).
Partition: The partition where AIR data is stored (e.g., /dev/sdb1). Monitoring this ensures sufficient space for data storage and logging.
Manage updates for the AIR responders installed on assets.
This feature allows you to enable or disable automatic updates for responders. If enabled, the responders will automatically update to the latest version when a new release is available. This ensures that the responders are always running the most current version with all the latest features and security patches.
Deployment Tokens: These tokens are used to securely install and register responders on new assets, ensuring the responders communicate correctly with the AIR Console upon installation.
🔄 Clarifying Backward Compatibility in AIR 4.29+
Overview: With AIR 4.29, we introduced a major improvement: decoupling AIR Console updates from Responder updates. This gives teams greater flexibility when deploying AIR updates, especially in large-scale environments.
What This Means (and What It Doesn’t)
✅ Starting with AIR v4.29, the AIR console can be updated independently of Responder updates.
✅ All future AIR versions (4.29 and onward) will maintain backward compatibility with Responders that are also on version 4.29 or newer.
⚠️ Responders running versions older than 4.29 (e.g., 2.54.3) are not compatible with certain key features such as:
Evidence acquisition
Triage
interACT
Users with older Responder versions will see messages like:
"The asset’s AIR Responder must be updated to accept tasks."
To summarize: Backward compatibility begins from version 4.29 onwards. If your Responders are still on versions earlier than 4.29, they must be upgraded at least once to benefit from this compatibility model going forward.
Why This Matters: Once all Responders are updated to v4.29+, you will no longer need to upgrade Responders with every new AIR Console release, simplifying upgrades and reducing operational friction.
Enable alerts for tampering attempts on responders.
When Tamper Detection is enabled, the responder will actively monitor its own operation for any interference or attempts to disable it.
Functionality: If there is an attempt to modify or interfere with the responder (e.g., by disabling it or altering its files), the responder will notify the AIR Console, ensuring that any malicious attempts are flagged immediately.
This feature is critical for ensuring the integrity and continuous operation of responders in high-security environments.
Prevent unauthorized uninstallation of responders by requiring a password.
When this feature is enabled, users will need to enter a protection password to uninstall the responder from an asset. This prevents unauthorized personnel from removing the responder, which could otherwise leave the asset vulnerable or unmonitored.
Uninstallation Method: The uninstallation process will be restricted to shell commands, meaning it can't be removed via a simple GUI or file system manipulation, adding an extra layer of security.
Synchronize assets from Active Directory with AIR.
This feature allows Binalyze AIR to integrate with your Active Directory (AD) environment. You can specify the AD server (e.g., 10.0.0.1) and the domain (e.g., company.local) to automatically synchronize information about computers and users from AD into AIR.
LDAP Synchronization: By manually starting the LDAP synchronization, you can query Active Directory for specific objects such as computers, ensuring that AIR can discover and manage assets from your organization's AD.
The Query For Computers field (e.g., (&(objectCategory=computer))) uses an LDAP filter to query and sync only computer objects from the directory.
Authentication: You will need to provide an AD username and password to authenticate and pull information from the directory.
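Before saving the settings in AIR, it can help to sanity-check the server address, credentials, and filter from any host with the OpenLDAP client tools. This sketch reuses the example values above; the base DN is assumed from the company.local domain and must be adjusted to your environment:

```shell
# Verify the AD server, bind credentials, and LDAP filter before configuring AIR.
# -W prompts for the bind password interactively.
ldapsearch -H ldap://10.0.0.1 \
  -D "user@company.local" -W \
  -b "dc=company,dc=local" \
  "(&(objectCategory=computer))" cn dNSHostName
```

If this returns the expected computer objects, the same server, credentials, and filter should work in AIR's Active Directory settings.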
Enable secure connections between AIR Console and users/assets by using SSL encryption.
Certificate: This displays the SSL certificate details used by AIR for secure HTTPS communication. In this case, the certificate is issued by Let's Encrypt (Issuer: Let's Encrypt, Common Name: R3) and is valid for a specific period (e.g., from 2022.09.18 to 2022.12.17).
Subject: The Common Name (CN) field shows the domain (e.g., air-demo.binalyze.com) to which the certificate applies.
Having an SSL certificate ensures that all communications between users and the AIR Console are encrypted, preventing unauthorized access to sensitive information.
Acts as the root certificate authority (CA) for issuing certificates if a custom SSL certificate is not provided.
Binalyze AIR generates an SSL Root CA for each instance when a custom certificate isn’t supplied. This certificate is used to create secure communication channels within the system.
Issuer and Subject: Both are BINALYZE R1, ensuring that the root certificate is tied to the Binalyze platform.
Validity: The root CA certificate is valid from 2017.10.14 until 2100.10.14, ensuring long-term use and security.
Define the port over which the AIR Console is accessible.
The AIR Console is configured to be accessed on port 8443, which is a secure port typically used for HTTPS traffic.
Meanwhile, responders will continue to communicate with the console over the default secure port 443. This setup ensures that assets and users can access the platform via separate but secure ports, enhancing security and flexibility.
Restrict access to the AIR Console based on IP addresses.
This feature allows administrators to restrict access to the AIR Console to a specific range of IP addresses, limiting who can interact with the console.
Important: This restriction does not affect communication between the AIR Console and the assets themselves. It only controls who can access the console’s user interface.
The current IP address of the user accessing the system (e.g., 172.71.122.69) is displayed for reference.
Configure user authentication security settings.
You can enforce Two-Factor Authentication (2FA) for all users, adding an extra layer of security by requiring a second form of verification (e.g., a mobile app code) when logging in. (SSO will override this option)
This setting enhances overall security by ensuring that only authenticated and verified users can access the system.
Enable and configure Single Sign-On (SSO) for AIR.
SSO allows users to log in to AIR using their organization’s existing identity provider (e.g., Azure AD, Okta) without needing separate credentials. This simplifies the login process and enhances security by centralizing authentication management.
Tenant ID and Client ID: These are provided by the SSO identity provider (e.g., Azure, Okta) and uniquely identify the organization’s SSO configuration.
Client Secret: A secure key used for authenticating the connection between AIR and the SSO provider (shown as encrypted in the system).
Callback URL: This is the URL (e.g., https://air-demo.binalyze.com/api/auth/sso/azure/callback) where users are redirected after successful authentication via SSO. It ensures that users are logged into the AIR platform after authenticating through the identity provider.
Entry Point and Issuer: These fields are also part of the SSO configuration, ensuring that AIR communicates correctly with the identity provider.
Certificate: Uploading a certificate from the identity provider is necessary for secure communication between AIR and the SSO service.
SSO improves user management and security by centralizing login credentials with your existing identity provider, simplifying the user experience while ensuring strong authentication practices.
This feature enables or disables the interACT functionality in Binalyze AIR.
interACT allows users to remotely open a shell session to interact with assets. Users can execute commands and scripts based on their assigned privileges.
Security Requirement: To use interACT, users must have enhanced security in place—either Two-Factor Authentication (2FA) or Single Sign-On (SSO). This ensures secure access to sensitive systems, limiting unauthorized use.
Read more about interACT here: interACT
To enhance security, Binalyze AIR interACT requires Two-Factor Authentication (2FA) using Time-Based One-Time Passwords (TOTP). You can set up offline 2FA solutions such as Google Authenticator or Microsoft Authenticator, making it suitable for use in isolated networks.
Why is 2FA Mandatory in interACT?
Preventing Unauthorized Access interACT provides direct access to systems, making security a top priority. Relying solely on a password increases the risk of unauthorized individuals gaining control. 2FA significantly reduces this risk by adding an extra layer of authentication.
Securing Critical Command Execution interACT allows users to execute commands directly on a system. Without a strong authentication mechanism, a malicious actor could exploit access to perform harmful operations. 2FA ensures that only authorized users can issue commands, maintaining system integrity and security.
By enforcing 2FA, interACT safeguards against unauthorized access and potential misuse, ensuring a secure and controlled environment for forensic investigations.
This feature allows AIR to capture and associate the public IP of an asset.
When enabled, the AIR Console parses HTTP request headers to extract the X-Forwarded-For header provided by proxies. This header reveals the public IP address of the responder (asset), even if it is behind a proxy or firewall.
Visibility: If the feature is enabled, AIR will display the X-Forwarded-For IP address instead of the communication IP (the one directly visible to AIR). This provides more accurate forensic visibility of an asset's location and origin.
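Proxies may append multiple addresses to X-Forwarded-For as the request passes through them; the left-most entry is the originating client. A minimal sketch of that parsing logic (illustrative only; AIR performs this internally):

```shell
# Example X-Forwarded-For value: a comma-separated proxy chain.
HEADER="203.0.113.7, 70.41.3.18, 150.172.238.178"

# The left-most entry is the original client; trim surrounding whitespace.
CLIENT_IP=$(printf '%s' "$HEADER" | cut -d',' -f1 | xargs)
echo "$CLIENT_IP"   # prints 203.0.113.7
```

Note that X-Forwarded-For is client-supplied and only trustworthy when set by a proxy you control.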
Enforce mandatory case selection when starting tasks.
This feature requires users to associate every task they run in AIR with a specific case.
Benefit: It enforces structured workflows, ensuring that all investigations are organized and traceable to a particular case, which is critical for auditing and maintaining clarity in incident response efforts.
Provides cryptographic proof of when data was acquired and its integrity.
RFC3161 timestamping ensures that the data collected during acquisition has a digital signature, proving that the data existed at a specific time and has not been altered since.
When enabled, every new acquisition task will include a signature file with metadata, adding legal and forensic robustness to your investigation process.
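To see what an RFC3161 exchange looks like in general, the OpenSSL ts tool can build a timestamp query over a file. This is a generic illustration of the standard, not AIR's internal tooling:

```shell
# Generic RFC3161 illustration: build a timestamp query over a file with openssl.
echo "collected evidence bytes" > evidence.bin
openssl ts -query -data evidence.bin -sha256 -no_nonce -out evidence.tsq

# Inspect the request in human-readable form (shows the SHA-256 digest to be timestamped)
openssl ts -query -in evidence.tsq -text
```

In a full exchange, the .tsq request is sent to a timestamping authority, which returns a signed token proving the digest existed at that time.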
Protect evidence integrity by registering it on the blockchain via LOCARD, a blockchain-based system for secure evidence handling in digital forensics. LOCARD has seen some adoption in Europe but remains underutilized in the U.S. due to regulatory and infrastructure challenges.
This feature integrates with LOCARD, a blockchain-based platform for evidence integrity. When enabled, the chain of custody for digital evidence is secured by submitting evidence metadata to the blockchain, ensuring it hasn't been tampered with.
LOCARD Credentials: To use this, you'll need to provide the Organization, Host, Username, and Password for your LOCARD account.
Set up email notifications, such as password-reset emails.
Specifying an SMTP server allows Binalyze AIR to send out automated emails, particularly for password resets. This is useful for self-service password recovery.
You must configure the SMTP server address, port, sender email, username, and password. For example, using mail.smtp2go.com
as the server.
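A minimal sketch of what such a password-reset mail involves, using Python's standard library; the server name comes from the example above, while all addresses, credentials, and the reset link are placeholders:

```python
import smtplib
from email.message import EmailMessage

SMTP_SERVER = "mail.smtp2go.com"   # example server named in the text
SMTP_PORT = 587                    # placeholder submission port

def build_reset_email(sender, recipient, reset_link):
    """Build a minimal password-reset message."""
    msg = EmailMessage()
    msg["Subject"] = "Binalyze AIR password reset"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"Use this link to reset your password: {reset_link}")
    return msg

def send_reset_email(msg, username, password):
    """Deliver the message via the configured SMTP server using STARTTLS."""
    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.login(username, password)
        smtp.send_message(msg)

msg = build_reset_email("air@example.com", "analyst@example.com",
                        "https://air.example.com/reset/abc123")
```

The five values involved (server, port, sender, username, password) correspond to the fields AIR asks for in this settings page.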
Enable integration with Syslog servers or SIEM systems.
This feature allows Binalyze AIR to send event logs to a centralized Syslog or SIEM (Security Information and Event Management) system for enhanced log monitoring and analysis.
You will need to configure the protocol (TCP/UDP), server address, and port to send logs from AIR to your preferred log management system.
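To make the three settings concrete, here is a sketch of a syslog forwarder using Python's standard library; the server address and message text are placeholders, not values from AIR:

```python
import logging
import logging.handlers
import socket

def make_syslog_logger(server="127.0.0.1", port=514, use_tcp=False):
    """Create a logger that forwards events to a Syslog/SIEM endpoint.

    Mirrors the settings AIR asks for: protocol (TCP/UDP), address, port.
    """
    socktype = socket.SOCK_STREAM if use_tcp else socket.SOCK_DGRAM
    handler = logging.handlers.SysLogHandler(address=(server, port),
                                             socktype=socktype)
    handler.setFormatter(logging.Formatter("air-audit: %(message)s"))
    logger = logging.getLogger("air-audit")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

logger = make_syslog_logger()   # UDP by default: no connection handshake needed
logger.info("user 'analyst' started an acquisition task")
```

UDP is fire-and-forget; TCP requires the SIEM endpoint to be reachable when the connection is opened, which is why both protocol settings matter.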
Display a custom banner message across all AIR Console pages.
This feature allows you to set a banner message that will appear on all pages of the AIR Console. This is useful for displaying system notices, warnings, or other important information to all users.
Enforce task options and preferences across assets.
Policies allow administrators to define global task preferences and restrictions for assets in the organization.
Customizability: Policies can be tailored for different subsets of assets using filters, and a user must have the "Override Policy" privilege to modify the default organizational policies.
Automate tagging of assets when they are added to AIR.
When this feature is enabled, Binalyze AIR automatically applies asset tags based on predefined rules as soon as a responder is installed on an asset.
Flexibility: Even if this feature is disabled, users can still run the Auto Asset Tagging task manually on assets.
Activate AI-powered assistance for investigations.
Frank.AI is an AI-driven assistant integrated into Binalyze AIR. It helps guide users through investigations, providing suggestions and assistance to streamline the forensic analysis process. Frank.AI acts as a copilot for investigators, improving efficiency by leveraging AI to answer analysts' questions.
This section allows administrators to add new users to the Binalyze AIR platform, specifying essential details such as the user's name, organization, role, and login credentials.
Type:
This field defines the type of user being added, depending on the organization's structure. For example, it could differentiate between internal users and external users (like clients or contractors) if your organization uses different user types.
Name:
Name: The first name of the user being added (e.g., "John").
Surname: The last name of the user (e.g., "Doe"). These fields are important for identifying and managing users in the system, especially in larger organizations.
Username*:
The username is a mandatory field (indicated by the asterisk). This is the unique identifier that the user will use to log in to the AIR platform (e.g., [email protected]
).
The username is often based on the user's email address to ensure uniqueness and facilitate easy recognition.
Email*:
The email is also a mandatory field. It is used for account-related communications, such as password resets, system alerts, or notifications.
This email should be valid and associated with the user being created to ensure they receive important platform-related information.
Organization*:
This field allows you to assign the new user to a specific organization within the Binalyze AIR system.
If multiple organizations are managed within the AIR platform (e.g., in the case of a multi-tenant setup), you can select which organization the user belongs to.
The system can restrict users from viewing or managing other organizations, depending on their access privileges.
Note: If no organization is selected or assigned, the user may have limited or no permissions within the platform.
Role*:
The Role dropdown allows you to assign the user's role within the platform. Roles define the level of access and permissions the user will have. Common roles could include:
Administrator: Full access to manage the platform, users, and assets.
Investigator: Access to forensic and incident investigation features.
Viewer: Read-only access to view data and reports.
This field is crucial for setting user permissions and ensuring that they can only perform actions aligned with their responsibilities.
Password*:
This is where you set the password for the user’s account. The password should meet the organization's security requirements (e.g., complexity, length).
A secure password is essential to ensure that unauthorized access to the platform is prevented.
Confirm Password*:
This field is used to confirm the password entered above. Ensuring that the passwords match helps avoid login issues caused by incorrect entries.
In Binalyze AIR, the Global Admin has full control over managing 109 specific privileges, allowing the creation of highly customized user roles. This granular access control ensures that each user or group has permissions tailored to their specific needs, such as handling evidence acquisition, interACT sessions, or audit log management.
A useful feature within this setup is the tooltips provided alongside each privilege. These tooltips highlight any dependencies that may exist between privileges, helping administrators configure roles accurately without unintentionally restricting necessary functions.
For example, an admin could create a role that enables a user to access interACT for remote evidence collection, while restricting access to audit logs or system-wide settings. The tooltips ensure that admins are aware of any required privileges to avoid misconfigurations.
This approach provides both flexibility and clarity, empowering admins to manage user roles effectively.
Binalyze AIR allows you to set up various Evidence Repositories for storing and managing collected data securely. The supported repository types are:
SMB: Ideal for sharing files across network devices.
SFTP: Utilizes SSH for encrypted data transfer.
FTPS: Combines FTP with SSL/TLS for secure transfers.
Amazon S3: Provides scalable cloud-based storage, perfect for large-scale investigations.
Global or Organization-Level Setup: Repositories can be defined at both global and organizational levels, providing flexibility in evidence management across multiple AIR instances or within a single organization.
Secure Data Management: Protocols like SFTP and FTPS ensure that data transfers are encrypted, safeguarding sensitive information during uploads and downloads.
Automatic and Manual Uploads: Evidence can be automatically uploaded to repositories based on configured tasks, or users can manually upload files as needed.
Task Management: Repositories support task scheduling for evidence uploads, ensuring a smooth workflow for collecting, storing, and analyzing evidence.
Connection Settings: When configuring repositories, users must provide essential connection details such as credentials, encryption options, and repository paths. For cloud-based storage like Amazon S3, you also need to configure bucket settings.
This setup ensures secure, scalable, and efficient management of evidence within AIR, accommodating various infrastructure needs.
Policies serve to define how evidence is collected and managed, providing fine-grained control over resources and processes.
Policies in AIR provide central configuration management and support global configurations that can be overridden at the Organisation level when required.
This overriding is only possible when the user has the “Override Policy” privilege allocated to their role.
Name & Organization: Policies must have a unique name and be assigned to a specific organization.
Evidence Storage: Configures where evidence is stored—either locally (default paths: Binalyze\AIR\
on Windows, /opt/binalyze/air/
on Linux/macOS) or in defined repositories like SMB or SFTP.
Resource Limits: Controls CPU usage, bandwidth, and disk space during collection to prevent resource overuse. You can specify CPU limits (e.g., 100%) and restrict bandwidth and disk space.
Compression & Encryption: Enables optional compression and encryption of the collected evidence, with a password for added security.
Scan Scope: You can opt to restrict scans to local drives only, excluding network and external drives.
Isolation Settings: Policies can include an IP/Port and ‘process allow’ lists for isolation tasks, which ensures that specific communication channels remain open during an asset’s isolation.
When creating a policy for a specific investigation, you could configure it to save evidence in an AWS S3 bucket, limit the CPU to 50%, compress the evidence for efficient storage, and ensure network drives are excluded from the scan. You could also configure the policy to allow communication with critical servers even if the asset is isolated.
The Binalyze AIR Backup feature allows users to back up system data securely and flexibly through the UI or Command Line Interface (CLI). Backups can be stored locally, on SFTP, or in Amazon S3, and encrypted using AES256 with a password.
Backups can be performed immediately or scheduled to run every 4 hours, daily, weekly, or monthly. Users can set the number of backups to retain and the scheduled start time. CLI backup options are available, with detailed instructions in the Knowledge Base.
This document has two parts: The first is a guide to the key post-deployment configuration settings available after installing the AIR console.
The second part is a prioritized checklist of settings to review and adjust based on your specific needs.
After installing the Binalyze AIR console, an Organization is automatically created to help structure and manage your assets and cases. Your next step should be to navigate to the settings in the upper right corner of the AIR Console to ensure all configurations meet your specific needs.
If you're using AIR to support multiple customers or tenants, this is also the time to create and set up additional organizations for each customer. Additional tenants can be added at any time as your needs evolve.
These settings also provide important information, such as the Deployment Token, Shareable Deployment Page URL, and options to add a Relay Server or manage specific users.
For MSSPs, a new Organization should be created for each client engagement. Enterprise clients may only need one Organization but can create additional ones if required.
Using Single Sign-On (SSO) is optional in AIR but is available for those who want to implement it. You can integrate with either:
Two-factor authentication (2FA) is not required if SSO is implemented. However, if you are not using SSO and wish to utilize the built-in remediation capability, interACT, enabling 2FA is mandatory. Enabling 2FA is also recommended to strengthen user account protection and general security.
The Assets Summary window on the home page of the AIR UI shows each asset in one of two states:
Managed: The asset's responder has been successfully deployed to the device and is ready to collect tasking assignments from the console.
Unmanaged: The asset is discovered by enumerating Active Directory but does not have the AIR responder deployed.
The Assets Summary will also report the asset as:
Off-Network: Responder has supplied data to the console via an Off-Network Acquisition or Triage task.
Unreachable: The asset's responder is currently unreachable. If an asset's responder fails to connect to the Binalyze AIR console for over 30 days, its status changes to "unreachable." Until then, the asset remains Managed, shown as either online or offline.
Update Required: The responder on the asset requires an update to function correctly.
Update Advised: The responder is still functional but for full functionality, an update is recommended.
Isolated: The asset is currently isolated from the network, with communication permitted only to the AIR console.
Responders deployed to assets can be updated in three ways:
Manual Updates: Assign an upgrade task to specific assets.
Automatic Updates: Configure automatic updates for all deployed Responders.
SCCM
To ensure seamless operation and maximize the effectiveness of Binalyze AIR in your investigations, it is important to allow-list AIR components in your security tools. Binalyze AIR collects and analyzes extensive forensic data from assets, which may involve activities like executing binaries, creating temporary files, and accessing sensitive directories. These actions can trigger alerts in EDR or AV solutions, potentially disrupting forensic processes and delaying incident response. Configuring exception rules for Binalyze AIR in your security systems prevents such interference, ensuring fast and complete evidence acquisition without compromising the investigation process.
Binalyze AIR allows you to store collected evidence either on the local machine where the task was executed or in external repositories. Supported external storage options include:
SMB
SFTP
FTPS
AWS S3
Azure Blob
AIR-supported assets include traditional computers, workstations, and servers running Windows, Linux, IBM AIX, or macOS, as well as off-network or cloud-based systems (e.g., AWS EC2 and Azure VMs) running the same operating systems.
Disk images (e.g., RAW, VMDK, E01, Ex01) are also supported for importing into the AIR File Explorer.
After creating an Organization, deploy AIR Responders to assets. Note that assets are associated with a single Organization but can appear in multiple Cases.
Cases in Binalyze AIR manage acquisitions, triages, interACT sessions, comparisons, scheduled tasks, case notes, and assigned users. Cases with no tasks performed will appear empty, while active cases will display all of the assets associated with the Case and all of their individual task assignments.
One of the Case Assets ‘Action Buttons’ will launch the Investigation Hub for that case.
Libraries in Binalyze AIR store reusable resources like acquisition profiles, triage rules, interACT files, and more, ensuring easy access and consistency across investigations. Now is the time to create, upload, and configure your triage rules for efficient threat hunting, fine-tune your acquisition profiles, and add any Auto Asset Tags you want to apply during responder deployment.
Binalyze AIR supports automation through integrations, including:
The Auto Asset Tagging task runs immediately after Responders are installed, which helps organize and manage assets within AIR. It automates the tagging process based on predefined rules.
The process, along with manual tagging, can also be executed on-demand at a later time for individual or multiple assets.
AIR policies allow you to configure settings such as:
Saving collected evidence to a local repository or external repository.
Sending files collected by interACT to a download location or evidence repository.
Resource limits for CPU, bandwidth, and disk space.
Enabling compression and encryption.
Configuring IP, port, and process allow lists for isolation policies.
Additional configurations that may be necessary include:
Console Proxy Settings
When using web proxies, configure the AIR console with the correct proxy and SSL/TLS settings. Administrators can enter proxy details (IP, port, username, password) and import SSL/TLS certificates in PEM, DER, or PKCS formats.
Tamper Detection and Uninstallation Password
With Tamper Detection switched on, the Responder will notify AIR of attempts to interfere with its normal operation.
Chain of Custody using RFC3161 Timestamping
The RFC3161 timestamping feature provides proof that the data existed at a particular moment in time and when combined with hashing that it has not changed.
SMTP Server Configuration
Specifying an SMTP server will allow AIR to send password-reset emails to users.
Syslog/SIEM Integration
Integrate with Syslog or SIEM systems for centralized audit logging. Ensure both TCP and UDP protocols are configured correctly.
Logs from AIR should be forwarded to ensure security monitoring and compliance tracking.
Frank.AI Integration
Enhanced AI Assistance with Frank, your reliable investigation copilot, can be toggled on or off.
Users and Roles Management
AIR enables the creation of users and their assignment to specific Roles and Organizations. Roles offer granular control, with 109 adjustable privileges.
Backup Settings
Configure database backup settings. Schedule regular backups or run an instant backup. View your backup history and statuses.
Active Directory Integration
Integrate Active Directory to mirror your organizational structure. This simplifies management by grouping assets based on AD units.
Validate that assets are correctly categorized and managed through AD synchronization.
This integration also allows authorized users to log in to AIR using their Active Directory credentials.
Following on from the Binalyze AIR Post-Deployment Configuration Guide above, here is a prioritized checklist to ensure thorough configuration before operational use:
Organization Setup
Console Proxy Settings
Check the Health of Docker Containers
Responder Deployment
Active Directory Integration
Users and Roles Management
Case Management
Evidence Repository Configuration
Backup Configuration
Single Sign-On (SSO)
Two-Factor Authentication (2FA)
Tamper Detection and Uninstallation Password
SSL/TLS Configuration
SMTP Server Configuration
Chain of Custody using RFC3161 Timestamping
Frank.AI Integration
API and Webhook Integrations
Auto Asset Tagging
Triage Library Setup
Policy Management
Syslog/SIEM Integration
Version Updates
Responder Health
AIR Audit Logs Backup
This prioritized checklist ensures that your Binalyze AIR instance is fully configured and optimized for operational use, covering core setup, security enhancements, automation, and advanced features.
Binalyze AIR Console Backup Procedure
Binalyze AIR Console can be backed up in two ways:
by using the Binalyze AIR Console user interface
by using the command line interface.
Log in to AIR Console Web UI using a Global Admin Account.
Navigate to the Backup Management section by clicking the Gear Button and selecting "Backup History" from the drop-down list.
Get a backup of the system by clicking the "Backup Now" button in the top right corner.
Download the backup file by clicking the Vertical Ellipsis Button under the "Actions" column and clicking Download from the drop-down list.
This will download a compressed file with the ABF extension (AIR Backup File).
Before starting a backup, verify that the free disk space on the system exceeds the size of the Binalyze AIR Console installation directories (more details are specified below).
If you have a 2-Tier installation, you need to check the size of both the Application and Database installation directories.
To back up the Single Tier and 2-Tier installations, first, you need to connect to the Binalyze AIR Console machines via SSH.
Check the size of the Binalyze AIR Console installation directories by using the following commands:
Run the following command to check if there is more free space on the system than the size of the Binalyze AIR Console installation directories:
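On a default Linux install, those checks might look like this; the path is the default mentioned elsewhere in this guide and should be adjusted for your setup:

```shell
AIR_DIR=${AIR_DIR:-/opt/binalyze}   # default install path; adjust as needed
du -sh "$AIR_DIR"                   # size of the installation directory
df -h "$AIR_DIR"                    # free space on the filesystem that holds it
```

Compare the `du` total against the available space reported by `df` before proceeding.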
To back up a Single Tier setup, first connect to the Binalyze AIR Console via SSH, then run the commands given below.
The commands navigate to the Binalyze AIR Console installation directory, stop the Docker service, and then copy the directory.
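A sketch of those steps for a default Linux install; the paths, service name, and backup destination are assumptions to adjust to your environment:

```shell
cd /opt/binalyze                                           # installation directory (assumed default)
sudo systemctl stop docker                                 # quiesce the console's containers
sudo cp -a /opt/binalyze "/opt/binalyze.bak.$(date +%F)"   # copy the directory preserving attributes
sudo systemctl start docker                                # bring the console back up
```

`cp -a` preserves ownership, permissions, and timestamps, which matters when restoring the copy later.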
Since there are two components in a 2-Tier setup, both the Application Server and Database Server have to be backed up separately.
To back up a 2-Tier setup, first connect to the Binalyze AIR Console Application and Database servers via SSH, then run the commands given below.
The commands navigate to the Binalyze AIR Console installation directory, stop the Docker service, and then copy the directory.
Binalyze AIR offers flexible options for updating your SaaS tenant, ensuring you always have access to the latest features and improvements.
Customers can choose from the following methods to update their AIR SaaS tenant via Settings > Update:
Manual Update: Initiate the update by clicking the "Update" button within the AIR Console user interface.
Automatic Updates: Enable the auto-update feature in settings to allow AIR to update automatically when a new version is released.
Scheduled Updates: Configure updates to occur at a preferred time, minimizing disruption to your operations.
In Binalyze AIR, two-factor authentication (2FA) is a security feature designed to enhance user account protection by requiring two forms of verification when logging in. This adds an additional layer of security on top of the traditional username and password. With 2FA enabled, even if a user's password is compromised, unauthorized access to the account is significantly harder to achieve.
Some key points about 2FA in Binalyze AIR:
LDAP User Compatibility with 2FA: Binalyze AIR supports two-factor authentication (2FA) for LDAP users. You can easily configure 2FA directly from the account settings within AIR, making the setup process straightforward and efficient.
Administrators can enforce 2FA for all users: This uniform security policy enhances overall security by requiring all users to authenticate with an additional method, such as a one-time password (OTP) sent to a mobile device or generated by an authenticator app.
Individual 2FA Setup and Reset: Users can enable two-factor authentication (2FA) independently in 'Account > Setup Two Factor Authentication'. Global Admin and users with the "user update" privilege can reset 2FA.
Enhanced Security Posture: By enabling 2FA, Binalyze AIR significantly reduces the risk of unauthorized access, even in the event of compromised credentials. This is a critical step in safeguarding sensitive investigation data and maintaining the integrity of digital forensics operations.
User-Friendly Configuration: The integration of 2FA in Binalyze AIR is designed to be user-friendly, making it easy for administrators to enable and enforce 2FA without complex configuration steps.
If you have activated the AIR SSO feature, it will override 2FA.
If you are experiencing issues with Two-Factor Authentication (2FA) in AIR, it may often be due to time synchronization problems on your system. Ensuring your system's time is correctly synchronized with an NTP (Network Time Protocol) server is crucial for the proper functioning of 2FA.
Steps to Check Time Synchronization:
Run the timedatectl
Command: Open a terminal and execute the following command:
Verify the Output: After running the command, check the output for the following two lines:
System clock synchronized: yes
NTP service: active
Here’s an example of what the correct output should look like:
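On a systemd-based distribution, the check and an illustrative healthy result look like this (output trimmed; your timestamps and time zone will differ):

```shell
timedatectl
# Illustrative output from a correctly synchronized system:
#   Local time: Tue 2024-01-16 09:12:45 UTC
#   ...
#   System clock synchronized: yes
#   NTP service: active
```

If either of the last two lines differs, proceed to the synchronization steps below.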
What to Do if the Time is Not Accurate: If your system clock is not synchronized or the NTP service is not active, this could be the root cause of your 2FA issues. To resolve this, you may need to synchronize your system's time using NTP.
How to Synchronize Your System Time:
Enable NTP Synchronization: You can synchronize your system’s time by running:
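On systemd-based distributions, NTP synchronization can typically be enabled with:

```shell
sudo timedatectl set-ntp true   # enable the systemd NTP service
```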
Re-check the Time Status: After enabling NTP, re-run the timedatectl
command to ensure that the system clock is now synchronized and the NTP service is active.
By ensuring your system’s time is accurate and synchronized, you can help prevent potential issues with 2FA in AIR. If the issue persists even after correcting the time, please contact our support team for further assistance.
Azure AD:
Okta SSO:
For more details, see the product documentation.
Visit this page for the full list of items to exclude:
Visit this page for more information.
For cloud-based deployments, it is recommended to use cloud-based repositories like AWS S3 or Azure Blob instead of SMB, SFTP, or FTPS. More details can be found in the product documentation.
For deployment instructions, refer to the product documentation.
API Tokens:
Webhooks (Triggers):
Triage functionality allows for quick and effective threat hunting across assets. For more information, see the product documentation.
Details can be found in the product documentation.
For more information, see the product documentation.
Check/Configure your Organization settings in the Main Menu.
For MSSPs, set up a new Organization for each client. For enterprises, ensure the appropriate organizational structure is established.
Check/configure proxy server settings if using web proxies within your network.
Run sudo docker ps to list all active Docker containers and check their current status, identifying any that aren't running as expected.
Deploy AIR Responders to all assets (Windows, Linux, macOS, IBM AIX, Cloud Systems).
Check the status of assets and ensure they are connected and associated with the correct Organization(s).
Create Exception Rules for Binalyze AIR in EDR/AV Systems.
Integrate AD to mirror your organizational structure.
Validate AD synchronization and ensure assets are categorized correctly.
Configure roles with appropriate privileges for granular control.
Create users and assign them to roles and organizations.
Set up Cases to manage your investigations; these act as containers for tasks such as acquisitions, triages, interACT sessions, and scheduled tasks.
Assign users to specific cases.
Configure external repositories (SMB, SFTP, FTPS, AWS S3, Azure Blob) to store collected evidence.
Prefer cloud-based repositories for cloud deployments.
Schedule regular backups of the AIR database.
Validate backup settings and review backup history.
Consider an approach for backing up AIR audit logs, which the system keeps for only three months.
Optional: Configure SSO using Azure AD or Okta for centralized authentication.
Enable 2FA for users if SSO is not implemented, especially if using interACT.
Enable tamper detection to monitor for unauthorized interference with Responders (It is off by default).
Set an uninstallation password to prevent unauthorized removal of Responders.
Configure proxy server and SSL/TLS settings, including importing necessary certificates.
Set up an SMTP server to enable AIR to send password reset and other critical emails.
Enable RFC3161 timestamping to ensure data integrity and proof of existence.
Enable or disable Frank, the AI investigation assistant, based on your needs.
Set up API tokens for automation and Webhooks (Triggers) for real-time event management. (The API is the recommended method for integration, preferred over Webhooks for most use cases)
Configure Auto Asset Tagging to organize and manage assets efficiently.
Enable, write, upload, and configure your triage rules in AIR Libraries for efficient threat hunting.
Establish and configure policies for evidence storage, resource limits, encryption, and isolation settings.
Integrate AIR with Syslog or SIEM systems for centralized audit logging and monitoring.
New releases generally occur once or twice a month. Be sure to use the most up-to-date version to benefit from the latest feature sets, fixes, and performance.
Use the Assets Summary widget on the Home page to manage the status of your deployed responders.
AIR audit logs are saved to the console’s PostgreSQL database and retained for three months, after which they are deleted; please arrange to back them up if required.
Data acquisition is the collection of forensically sound data from any computer system (disk, external storage, memory, etc.). This data generally varies based on the operating system installed on the computer or server. Acquired data often needs to be parsed, stored, and presented in a human-readable format for further analysis and investigation.
Data acquisition is the primary activity of most digital investigations. Before data acquisition, the investigators generally identify the data they’ll need. Since data or evidence is an essential element of any investigation, investigators tend to take as much as they can in the first instance to avoid, if possible, a second acquisition. Therefore, the power of the digital investigation and DFIR solution is often proportional to the acquisition capability and the features associated with it.
Binalyze AIR provides easy-to-deploy, fast data acquisition across a wide range of operating systems, collecting 350+ forensically sound data types. It supports remote data acquisition for on-premise, cloud, and off-network devices, so investigators can examine multiple devices remotely, at speed and at scale.
Binalyze AIR supports a growing number of operating systems, including Windows, Linux, macOS, ChromeOS, ESXi, and IBM AIX.
The results of AIR's Acquisition and Triage processes can be further analyzed using DRONE's automated Post Acquisition Analyzers. DRONE's findings, along with all collected artifacts, are then presented within the Investigation Hub.
The Binalyze AIR responder must first be deployed before data can be acquired. All data acquisition is performed according to the Data Acquisition Profile created before the acquisition is started.
Data acquisition is classified into three categories: Evidence, Artifacts, and Network Capture. Additionally, investigators have the flexibility to create custom content profiles, allowing them to collect specific files or data from designated locations.
At Binalyze, we recognize the critical importance of maintaining a strong 'chain of custody' when it comes to the collection and handling of evidence. That's why we employ SHA-256 hashing in combination with RFC3161 digital timestamp certificates. This approach serves to safeguard data content and offers assurance regarding the precise timestamp of the content's creation, as well as a guarantee that it has remained unaltered.
Read more about RFC3161 and how AIR maintains a strong chain of custody here: Protect Your Chain Of Custody With Content Hashing And Timestamping
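The hashing half of this scheme can be illustrated in a few lines; this is a sketch of the general technique, not AIR's internal code:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large evidence archives
    are never read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

An RFC3161 timestamp obtained over this digest then binds the content to a specific point in time: if even one byte changes, the hash no longer matches the timestamped value.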
When performing keyword searches in DRONE, you can leverage regular expressions (POSIX regex) for more flexible and advanced search capabilities. Here’s how it works:
To use regex, your keyword must be enclosed between / /
slashes. For example:
/\d+/
– This will search for one or more digits.
/[a-z]+/i
– This will search for one or more lowercase letters and is case-insensitive (thanks to the i
flag).
You can also include optional flags at the end of the regex to modify its behavior:
g: Global (search all occurrences).
m: Multiline (match across multiple lines).
i: Case-insensitive (ignore letter case).
s: Dot matches newline (dot .
will match any character, including newlines).
Example:
/[A-Z]+\d+/i
– This will match sequences like "ABC123" or "abc123" regardless of case.
If your search contains wildcard patterns like *?
(indicating lazy quantifiers), it will be treated as a wildcard search instead of a regex search.
For example, abc*?
will match "abc", "abcd", "abcxyz", etc.
If your input doesn’t match the regex format and doesn’t contain wildcard symbols, DRONE will perform a case-insensitive "string contains" search.
For example, searching for example
will return results containing "example", "Example", or "EXAMPLE".
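The decision order described above can be sketched as follows; this is an illustrative reimplementation for clarity, not DRONE's actual code:

```python
import re

def keyword_match(keyword, text):
    """Regex if /.../-delimited, wildcard if it contains * or ?, else contains."""
    m = re.fullmatch(r"/(.*)/([gmis]*)", keyword, flags=re.DOTALL)
    if m:
        pattern, flag_chars = m.groups()
        flags = 0
        if "i" in flag_chars:
            flags |= re.IGNORECASE
        if "m" in flag_chars:
            flags |= re.MULTILINE
        if "s" in flag_chars:
            flags |= re.DOTALL
        # 'g' affects how many matches are reported, not whether one exists.
        return re.search(pattern, text, flags) is not None
    if "*" in keyword or "?" in keyword:
        # Wildcard mode: translate shell-style wildcards into a regex.
        pattern = re.escape(keyword).replace(r"\*", ".*").replace(r"\?", ".")
        return re.search(pattern, text) is not None
    # Fallback: case-insensitive "string contains".
    return keyword.lower() in text.lower()

print(keyword_match(r"/[A-Z]+\d+/i", "abc123"))   # regex, case-insensitive -> True
print(keyword_match("abc*", "xx abcd yy"))        # wildcard -> True
print(keyword_match("example", "An EXAMPLE"))     # contains -> True
```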
To ensure proper installation of the Relay Server, the responder must be installed and registered within the same organization and on the same system. If you plan to install the responder beforehand, you should select a direct connection as the Relay Server will be located on the same system.
From the "Organizations" page, when you create a new organization or select an existing one, it will bring you to a page where you can deploy a new Relay Server.
Clicking “New Relay” brings up a deployment page for the Relay Server:
As you can see, there are packages available for Debian- or Red Hat-based distributions. You can select either the 64-bit or ARM64 version by clicking the arrow next to the "Download" button.
Once the package is downloaded, you will need to configure the environment settings for the Relay Server to register with the AIR Console. To simplify this process, you can copy the necessary commands by clicking the "copy" button to the right of each command.
After copying the commands, open a terminal and navigate to the directory where you have downloaded the package. If the directory is "Downloads" in your home folder, you can go to that folder by executing the following commands in the terminal:
As part of the Relay Server installation process, it performs a check to verify the availability of the responder. Additionally, it configures the responder to manage the newly installed Relay Server.
Upon successful installation, you will be able to view your newly deployed Relay Server on the "Organization Detail" page.
Fast, remote, and scalable across the corporate network
Tasks in Binalyze AIR are operations assigned to assets via the AIR console, either manually or automatically through triggers. Each task can comprise multiple 'tasking assignments': a task on a single asset is one 'tasking assignment', while the term 'task' can also describe the same assignment issued across many assets. These tasks facilitate various operational needs and can be categorized into three types:
Manual Tasks:
These are assigned manually by users directly through the AIR console.
Scheduled Tasks:
Created by users to commence at a future time. Scheduled tasks can be one-time events or recurring at daily, weekly, or monthly intervals.
Triggered Tasks:
Automatically assigned to assets in response to trigger requests from integrated SIEM, SOAR, or EDR solutions.
Tasks enhance operational efficiency by allowing flexible and automated responses to various cybersecurity scenarios, ensuring that your assets are continually monitored and managed effectively. For more detailed information, please visit our Knowledge Base and refer to the AIR release notes.
In Binalyze AIR, tasking assignments are generated by various activities that target asset operations, including:
Data Acquisition:
Initiating the collection of digital evidence from an asset. This can be a comprehensive acquisition or targeted to specific evidence types.
Triage:
Running predefined or custom rules (YARA, Sigma, osquery) to identify suspicious activities or indicators of compromise on the assets.
Timeline (Investigation):
Creating and analyzing timelines to understand the sequence of events on an asset for forensic investigation.
InterACT Sessions:
Establishing a secure remote shell session to manually investigate and interact with the asset in real-time.
Baseline Acquisition and Comparison:
Running comparisons to detect deviations from a predefined baseline state of the asset and acquiring baseline data.
Disk/Volume Imaging:
Capturing the complete state of disks or volumes for comprehensive forensic analysis.
Auto Tagging:
Automatically tagging assets based on predefined criteria for easier management and identification.
Calculating Hash:
Generating hash values for files to ensure data integrity and assist in identifying duplicate or tampered files.
Offline Acquisition and Offline Triage:
Performing data acquisition and triage on assets that are not connected to the network.
Some more 'administrative' activities also generate tasking assignments; these include:
Shutdown, Reboot, and Uninstall:
Remotely managing the power state and software configuration of assets.
Isolation:
Isolating an asset from the network to prevent further compromise.
Responder Deployment:
Deploying response tools to the asset for immediate action.
Purge Local Data and Retry Upload:
Managing data on the asset, including purging local data and retrying data uploads.
Migration and Version Update:
Migrating data between systems and updating software versions.
Log Retrieval:
Collecting logs for further analysis and troubleshooting.
By understanding and utilizing these task types, users can streamline their incident response and investigation workflows, improving overall security posture and response times.
Tasks are saved at the organization level and can be reviewed comprehensively by navigating to More > Task:
Individual tasking assignments for an asset can also be reviewed by visiting the specific asset. From the secondary menu, you can select "All Tasks" or utilize the filtered tasks view to focus on specific task types:
Let's now take a look at how to create a Task in AIR
Select the Asset(s) on which you wish to execute tasks - In the example below we will Acquire Evidence from an asset named JackWhite:
The Bulk Action Bar will be available if you choose more than one asset:
Create a Task Name if required (if not, one will be auto-generated)
Allocate the task to a Case. This is important if you need to build a case for an ongoing investigation and you plan to investigate this asset, or other assets, further as part of the same investigation. All investigation activity can be recorded within Cases. A Case can be thought of as a container into which activity for a particular investigation is grouped, making Incident Response management and investigations easier, especially as the Case will also be presented in the Investigation Hub.
Select Now as the Task Start Time to execute the task immediately (see here for Scheduling Tasks)
Choose an Acquisition Profile (e.g., Compromise Assessment, Full Acquisition, etc.). We offer many ‘out-of-the-box’ profiles, but you can also create and save your own as needed.
This step allows you to use the policies already set as an organizational policy or, if you have the necessary privileges, make changes to:
Where the collected evidence is to be saved.
Apply resource limits to the task assignment to reduce potential impact on the asset.
Enable compression and encryption to be applied to the collected evidence.
In this final step, you can enable or disable the DRONE and MITRE ATT&CK Analyzers.
We highly recommend keeping both analyzers active as they have minimal impact on resources. The MITRE ATT&CK analyzer runs on live assets and, when combined with other analyzers, facilitates immediate identification of potentially compromised assets. This allows for efficient prioritization of investigative efforts.
Step 4, 'Follow Up', also allows the user to add Keywords and upload keyword list files for DRONE searches, allowing investigators to conduct more focused and efficient searches within their data collections.
Keyword Lists Features:
No character limit for keyword lists, but a 1 MB file size limit applies.
Each keyword must be on a new line for proper search functionality.
Keyword searches are limited to data within the Case.db (excluding CSV files).
Keyword searches are supported by regex, read more here: Regex in AIR
This search functionality extends to event log data collected by Sigma analyzers, including:
Windows: Event Record Analyzer
Linux: Syslog Analyzer
macOS: Audit Event Analyzer
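The keyword-list constraints above (one keyword per line, file no larger than 1 MB) are easy to check before upload. Below is a hypothetical pre-flight check; the limits mirror the ones stated above, and the function name is ours, not AIR's:

```python
MAX_LIST_BYTES = 1 * 1024 * 1024  # the 1 MB file-size limit stated above

def validate_keyword_list(text: str) -> list[str]:
    """Return the cleaned keyword list, or raise if the 1 MB limit is exceeded."""
    data = text.encode("utf-8")
    if len(data) > MAX_LIST_BYTES:
        raise ValueError(f"keyword list is {len(data)} bytes; limit is {MAX_LIST_BYTES}")
    # One keyword per line; blank lines carry no keyword and are dropped.
    return [line.strip() for line in text.splitlines() if line.strip()]

print(validate_keyword_list("mimikatz\npsexec\n\ncobaltstrike\n"))
# → ['mimikatz', 'psexec', 'cobaltstrike']
```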
This feature offers investigators greater flexibility and precision in their searches, significantly enhancing the DRONE module's capabilities.
The results of tasks and their associated reports can be found in three places:
Tasks that are accessed via the main menu.
The page for the individual Asset > Acquisitions.
And finally, Case Acquisitions, if the task is sent to a Case.
1) Accessing the scheduled Task report via Assets:
Selecting the ‘eye icon’, under the Actions column as shown above, will give you access to the Details view for the Task, and from here you can access the report associated with this scheduled task acquisition:
2) Below we see the same report being accessed from the Assets menu:
3) And finally the same report from the Case Acquisition page but only if you have sent it here:
Managing large-scale asset inventories allows users to save and quickly apply custom filters for efficient asset management.
Persistent Saved Filters enable users to create and store custom asset filters, making it easier to locate and manage assets without having to reapply filter conditions in each session.
Users can save custom filters for frequently used asset searches.
Saved filters allow for quick application, streamlining asset management.
Bulk actions and asset monitoring can be performed without re-entering filter conditions.
Quick actions on saved filters enable faster asset selection.
Navigate to the secondary menu on the Assets page and click the '+' icon to create a Preset Filter. The Save Preset Filter window will open, allowing you to configure a custom Asset Filter that remains persistent for your user account.
Use the filtering options to build a filter to refine the asset list based on criteria such as status, tags, or isolation state.
After applying the desired filter conditions, enter a name for the filter and click "Save" to confirm your selection.
Select the newly saved filter from the Preset Filters drop-down list in the secondary menu to instantly apply it.
Users can edit or delete their saved filters anytime by clicking the three-dot menu next to the preset filter name. This menu also provides access to Quick Actions, enabling bulk operations directly from saved filters.
Saved filters are per-user, meaning each user can only see and manage their own filters. Preset filters remain unchanged; predefined system filters such as Managed Assets and Isolated Assets are still available. Filters persist across sessions, ensuring users do not need to reapply them after logging out.
Acquisition profiles in Binalyze AIR define the specific types of data to be collected during an acquisition task. These profiles enable you to customize and streamline data collection to meet the unique requirements of your investigation. Saved within the AIR Libraries, acquisition profiles can be easily shared, reused, or edited for further refinement, ensuring efficiency and consistency across investigations.
As shown above, Binalyze AIR comes with several predefined acquisition profiles that you can use immediately, for example:
Quick: Designed for fast data acquisition with essential evidence types.
Full: Collects a comprehensive and rich set of data from the assets.
Compromise Assessment: Focuses on indicators of compromise and suspicious activity, defined by the Binalyze threat hunting team.
These ‘out-of-the-box’ profiles are ideal for common scenarios and provide an ideal quick start for your investigations.
To create your own custom acquisition profile, follow these steps:
(1) Navigate to Acquisition Profiles:
Go to the "Libraries > Acquisition Profiles" section from the main dashboard.
(2) Create a New Profile:
Click on the "+ New Profile" Action Button.
Give your new profile a name that will help you identify its purpose later.
(3) Select the Operating System(s) for your new profile:
Windows
Linux
macOS
IBM AIX
Or a cross-platform eDiscovery collection
(4) Select Evidence Types:
Binalyze AIR supports an ever-growing number of evidence types for collection and presentation in the Investigation Hub. To build your profile, choose the data you want to collect from the extensive options grouped under the following five tabs:
Evidence List
System artifacts (e.g., registry hives, event logs)
Artifact List
Application artifacts such as server logs, RMM, and AV tools.
Event Log Records
AIR allows users to collect and present event logs or define specific channels for log collection. (Read more here: Windows Event Records and how AIR handles them)
Custom Content Profiles
Select bespoke file locations for collection.
Network Capture
Network Flow captures TCP/UDP connections and stores them as a CSV.
PCAP will capture IP packets and save them as a PCAP file.
The duration of the Network Capture is determined by the user.
osquery
Use osquery language to capture data.
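For example, a simple osquery statement that lists running processes (the `processes` table is part of osquery's standard schema) might look like this:

```sql
-- List running processes with their binary path and parent PID
SELECT pid, name, path, parent FROM processes;
```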
(5) Save the Profile:
Once you have configured all the necessary settings, click "Save" to create your custom acquisition profile.
Edit Profiles: You can edit existing profiles by selecting the profile and making necessary changes.
Delete Profiles: Remove profiles that are no longer needed to keep your list organized.
Duplicate Profiles: Create a copy of an existing profile to use as a template for a new one.
User Privileges for acquisition profiles can be managed via ‘Settings > Roles’
Check Profiles: Ensure your acquisition profiles are up-to-date with the latest evidence types and investigation requirements.
Test Profiles: Test new profiles in a controlled environment to ensure they collect the intended data.
Average Time Taken: In the Acquisition Profiles table you can see the ‘Average Time’ taken by each profile; this can be useful when considering the performance and efficiency of individual profiles.
By using acquisition profiles in Binalyze AIR, you can efficiently gather relevant data for your investigations, saving time and ensuring comprehensive evidence collection.
These pages categorize the supported evidence and artifacts by OS, indicating whether each item is parsed and presented in the Investigation Hub and/or if the associated file is collected.
Proactive DFIR and automated threat analysis by scheduling AIR Evidence Collections
Scheduling tasks in AIR not only enables the automation of acquisitions and DRONE analysis but also transforms AIR into a proactive DFIR platform. By setting up scheduled tasks for regular collections and automated DRONE analysis, AIR can proactively identify issues that might otherwise go unnoticed by other security systems.
Instead of waiting for alerts from external sources, AIR takes the initiative to regularly collect and analyze assets according to a predefined schedule and acquisition profile. This proactive approach allows organizations to stay ahead of potential threats and vulnerabilities by detecting issues early on, even before they manifest as security incidents.
By incorporating the scheduling of tasks into security operations, organizations can enhance defense strategies and bolster their overall security posture. Additionally, it ensures that the 'best evidence' is automatically acquired and forensically preserved, facilitating further investigation when needed.
Investigators simply use the tasking wizard to schedule tasks for the following activities:
Evidence collections.
Triage/Threat Hunting.
Disk and Volume Imaging.
Auto Asset Tagging.
To set up a scheduled task please follow the steps below:
Choose the Asset(s) on which you wish to schedule tasks - the Bulk Action Bar will be available if you choose more than one asset
Create a Task Name if required
Allocate the task to a case
Select 'Schedule for later'
Choose the timezone that will determine when the task is executed. You have the flexibility to execute the task in the local timezone of each selected asset or simultaneously across all assets by choosing a single timezone.
Select a start date and time for the task.
If required toggle on the Repeat switch and set the cadence/recurrence rate.
Select when or whether you want to end the schedule.
Lastly, choose the acquisition profile you wish to apply for the scheduled task.
Note: Scheduled tasks cannot be repeated within a "Case", because once a Case is closed there is no destination to which the task results can be sent.
AIR administrators can now restrict users from scheduling tasks or editing existing ones:
Schedule Task: Enables users to "Schedule for later." Without this privilege, this option is disabled, and a tooltip explains the restriction.
Update Scheduled Task: Allows users to edit scheduled tasks. If this privilege is not granted, the "Edit" button is disabled with an explanatory tooltip.
This step allows you to use the policies already set as an organizational policy or, if you have the necessary privileges, make changes to:
Where the collected evidence is to be saved.
Apply resource limits to the task assignment to reduce potential impact on the asset.
Enable compression and encryption to be applied to the collected evidence.
In this final step, you can enable or disable the DRONE and MITRE ATT&CK Analyzers.
We highly recommend keeping both analyzers active as they have minimal impact on resources. The MITRE ATT&CK analyzer runs on live assets and, when combined with other analyzers, facilitates immediate identification of potentially compromised assets. This allows for efficient prioritization of investigative efforts.
The results of scheduled tasks and their associated reports can be found in three places:
Tasks that are accessed via the main menu.
The page for the individual Asset > Acquisitions.
And finally, Case Acquisitions, if the task is sent to a Case.
1) Accessing the scheduled Task report via Assets:
Selecting the ‘eye icon’, under the Actions column as shown above, will give you access to the Details view for the Task, and from here you can access the report associated with this scheduled task acquisition:
2) Below we see the same report being accessed from the Assets menu:
3) And finally the same report from the Case Acquisition page but only if you have sent it here:
Tasks scheduled through the console will execute as planned. However, if an asset is offline at the scheduled time, it will automatically receive and carry out the task upon its next connection.
Managing scheduled tasks has never been easier. The Edit Scheduled Task feature lets you modify existing scheduled tasks, avoiding the need to cancel and reconfigure them.
You can easily add or remove assets without restarting the task, saving valuable time and improving workflow efficiency. After selecting the assets, you can then go on to update the task setup, customize options, and manage follow-up actions, streamlining task management for a smoother, faster process.
To modify a scheduled task, go to your Task listings page and filter by Status > Scheduled to display only the scheduled tasks:
From the filtered results, selecting the ‘eye’ icon presents the Edit or Delete Task options:
The Edit Scheduled Task Wizard will now open, allowing the user to toggle off the "Only Selected Assets" switch (as shown below). This now reveals all other available assets that can be added to the scheduled task:
Step 2, Setup, allows you to edit the Task Name, the Schedule, and even the acquisition profile to be used:
Steps 3 and 4, Customization and Follow-Up, are fully configurable, giving you the ability to completely edit the scheduled task as needed.
AIR supports the following Linux Evidence and Artifacts
Evidence List:

| # | Category | Evidence | Description |
| --- | --- | --- | --- |
| 1 | System | System Controls | Collect system controls |
| 2 | System | Cron Jobs | Collect cron jobs |
| 3 | System | AppArmor Profiles | Collect AppArmor profiles |
| 4 | System | ULimit Information | Collect ulimit information |
| 5 | System | Kernel Modules | Collect kernel modules |
| 6 | System | Lock Files | Collect lock files |
| 7 | System | Systemctl Services | Collect Systemctl running services |
| 8 | Disk | Block Devices | Collect block devices |
| 9 | Disk | Fstab | Collect fstab configuration |
| 10 | Disk | Mounts | Collect mounts |
| 11 | Disk | NFS Exports | Collect NFS exports |
| 12 | File System | File System Enumeration | Dump file and folder information |
| 13 | Processes | Processes | Collect process list |
| 14 | Processes | Process Open Files | Collect process open files information |
| 15 | Memory | Shared Memory | Collect shared memory |
| 16 | Memory | Memory Map | Collect memory map |
| 17 | Memory | Swaps | Collect swap info |
| 18 | Memory | RAM Image | Create an image of RAM |
| 19 | Browser | Default Browser | Collect Default Browser |
| 20 | Browser | Chrome Cookies | Collect Chrome Cookies |
| 21 | Browser | Chromium Cookies | Collect Chromium Cookies |
| 22 | Browser | Edge Cookies | Collect Edge Cookies |
| 23 | Browser | Opera Cookies | Collect Opera Cookies |
| 24 | Browser | Vivaldi Cookies | Collect Vivaldi Cookies |
| 25 | Browser | Brave Cookies | Collect Brave Cookies |
| 26 | Browser | Chrome Bookmarks | Collect Chrome Bookmarks |
| 27 | Browser | Chromium Bookmarks | Collect Chromium Bookmarks |
| 28 | Browser | Edge Bookmarks | Collect Edge Bookmarks |
| 29 | Browser | Opera Bookmarks | Collect Opera Bookmarks |
| 30 | Browser | Vivaldi Bookmarks | Collect Vivaldi Bookmarks |
| 31 | Browser | Brave Bookmarks | Collect Brave Bookmarks |
| 32 | Browser | Chrome User Profiles | Collect Chrome User Profiles |
| 33 | Browser | Chromium User Profiles | Collect Chromium User Profiles |
| 34 | Browser | Edge User Profiles | Collect Edge User Profiles |
| 35 | Browser | Opera User Profiles | Collect Opera User Profiles |
| 36 | Browser | Vivaldi User Profiles | Collect Vivaldi User Profiles |
| 37 | Browser | Brave User Profiles | Collect Brave User Profiles |
| 38 | Browser | Chrome Extensions | Collect Chrome Extensions |
| 39 | Browser | Firefox Extensions | Collect Firefox Extensions (Addons) |
| 40 | Browser | Chrome Local Storage | Collect Chrome Local Storage |
| 41 | Browser | Chromium Local Storage | Collect Chromium Local Storage |
| 42 | Browser | Edge Local Storage | Collect Edge Local Storage |
| 43 | Browser | Opera Local Storage | Collect Opera Local Storage |
| 44 | Browser | Vivaldi Local Storage | Collect Vivaldi Local Storage |
| 45 | Browser | Brave Local Storage | Collect Brave Local Storage |
| 46 | Browser | Dump Chrome Indexed DB | Dump Chrome Indexed DB |
| 47 | Browser | Dump Chromium Indexed DB | Dump Chromium Indexed DB |
| 48 | Browser | Dump Edge Indexed DB | Dump Edge Indexed DB |
| 49 | Browser | Dump Opera Indexed DB | Dump Opera Indexed DB |
| 50 | Browser | Dump Vivaldi Indexed DB | Dump Vivaldi Indexed DB |
| 51 | Browser | Dump Brave Indexed DB | Dump Brave Indexed DB |
| 52 | Browser | Chrome Web Storage | Collect Chrome Web Storage |
| 53 | Browser | Chromium Web Storage | Collect Chromium Web Storage |
| 54 | Browser | Edge Web Storage | Collect Edge Web Storage |
| 55 | Browser | Opera Web Storage | Collect Opera Web Storage |
| 56 | Browser | Vivaldi Web Storage | Collect Vivaldi Web Storage |
| 57 | Browser | Brave Web Storage | Collect Brave Web Storage |
| 58 | Browser | Chrome Form History | Collect Chrome Form History |
| 59 | Browser | Chromium Form History | Collect Chromium Form History |
| 60 | Browser | Edge Form History | Collect Edge Form History |
| 61 | Browser | Opera Form History | Collect Opera Form History |
| 62 | Browser | Vivaldi Form History | Collect Vivaldi Form History |
| 63 | Browser | Brave Form History | Collect Brave Form History |
| 64 | Browser | Chrome Thumbnails | Collect Chrome Thumbnails |
| 65 | Browser | Chromium Thumbnails | Collect Chromium Thumbnails |
| 66 | Browser | Edge Thumbnails | Collect Edge Thumbnails |
| 67 | Browser | Opera Thumbnails | Collect Opera Thumbnails |
| 68 | Browser | Vivaldi Thumbnails | Collect Vivaldi Thumbnails |
| 69 | Browser | Brave Thumbnails | Collect Brave Thumbnails |
| 70 | Browser | Chrome Favicons | Collect Chrome Favicons |
| 71 | Browser | Chromium Favicons | Collect Chromium Favicons |
| 72 | Browser | Edge Favicons | Collect Edge Favicons |
| 73 | Browser | Opera Favicons | Collect Opera Favicons |
| 74 | Browser | Vivaldi Favicons | Collect Vivaldi Favicons |
| 75 | Browser | Brave Favicons | Collect Brave Favicons |
| 76 | Browser | Chrome Login Data | Collect Chrome Login Data |
| 77 | Browser | Chromium Login Data | Collect Chromium Login Data |
| 78 | Browser | Edge Login Data | Collect Edge Login Data |
| 79 | Browser | Opera Login Data | Collect Opera Login Data |
| 80 | Browser | Vivaldi Login Data | Collect Vivaldi Login Data |
| 81 | Browser | Brave Login Data | Collect Brave Login Data |
| 82 | Browser | Chrome Sessions | Collect Chrome Sessions |
| 83 | Browser | Chromium Sessions | Collect Chromium Sessions |
| 84 | Browser | Brave Sessions | Collect Brave Sessions |
| 85 | Browser | Edge Sessions | Collect Edge Sessions |
| 86 | Browser | Opera Sessions | Collect Opera Sessions |
| 87 | Browser | Vivaldi Sessions | Collect Vivaldi Sessions |
| 88 | Browser | Chrome Browsing History | Collect visited URLs from Google Chrome |
| 89 | Browser | Firefox Browsing History | Collect visited URLs from Mozilla Firefox |
| 90 | Browser | Chromium Browsing History | Collect visited URLs from Chromium |
| 91 | Browser | Edge Browsing History | Collect visited URLs from Edge |
| 92 | Browser | Opera Browsing History | Collect visited URLs from Opera |
| 93 | Browser | Vivaldi Browsing History | Collect visited URLs from Vivaldi |
| 94 | Browser | Brave Browsing History | Collect visited URLs from Brave |
| 95 | Browser | Chrome Downloads | Collect Chrome Downloads |
| 96 | Browser | Chromium Downloads | Collect Chromium Downloads |
| 97 | Browser | Firefox Downloads | Collect Firefox Downloads |
| 98 | Browser | Brave Downloads | Collect Brave Downloads |
| 99 | Browser | Edge Downloads | Collect Edge Downloads |
| 100 | Browser | Opera Downloads | Collect Opera Downloads |
| 101 | Browser | Vivaldi Downloads | Collect Vivaldi Downloads |
| 102 | Browser | Firefox Cookies | Collect Firefox Cookies |
| 103 | Users | User Groups | Collect user group list |
| 104 | Users | Users | Collect user list |
| 105 | Users | Last Access | Collect last access records |
| 106 | Users | Logged Users | Collect logged user list |
| 107 | Users | Shadow | Collect shadow content |
| 108 | Users | Sudoers | Collect sudoers |
| 109 | Users | Failed Login Attempts | Collect failed login attempts |
| 110 | SSH | SSH Known Hosts | Collect SSH known hosts |
| 111 | SSH | SSH Authorized Keys | Collect SSH authorized keys |
| 112 | SSH | SSH Configs | Collect SSH configurations |
| 113 | SSH | SSHD Configs | Collect SSHD configurations |
| 114 | Network | Hosts | Collect hosts |
| 115 | Network | ICMP Table | Collect ICMP table |
| 116 | Network | IP Routes | Collect IP routes |
| 117 | Network | IP Tables | Collect IP tables |
| 118 | Network | Raw Table | Collect Raw table |
| 119 | Network | Network Interfaces | Collect network interfaces |
| 120 | Network | TCP Table | Collect TCP table |
| 121 | Network | UDPLite Table | Collect UDPLite table |
| 122 | Network | UDP Table | Collect UDP table |
| 123 | Network | Unix Sockets | Collect unix sockets |
| 124 | Network | ARP Table | Collect ARP table |
| 125 | Network | DNS Resolvers | Collect DNS resolvers |
| 126 | Other Evidence | APT Sources | Collect APT sources |
| 127 | Other Evidence | APT History | Collect APT history |
| 128 | Other Evidence | DEB Packages | Collect Debian packages |
| 129 | Other Evidence | YUM Sources | Collect YUM sources |
| 130 | Other Evidence | SELinux Configs | Collect SELinux configurations |
| 131 | Other Evidence | SELinux Settings | Collect SELinux settings |
| 132 | Other Evidence | SUID Binaries | Collect SUID binaries |
| 133 | Other Evidence | Shell History | Collect shell history |
| 134 | Other Evidence | System Artifacts | Collect system artifacts (files of collected evidence, for example the /etc/passwd file) |
| 135 | Other Evidence | Log Files | Collect log files under /var/log/ |
Artifact List:

| # | Category | Artifact | Description |
| --- | --- | --- | --- |
| 1 | Server | Apache Logs | Collect Apache Logs |
| 2 | Server | NGINX Logs | Collect NGINX Logs |
| 3 | Server | MongoDB Logs | Collect MongoDB Logs |
| 4 | Server | MySQL Logs | Collect MySQL Logs |
| 5 | Server | PostgreSQL Logs | Collect PostgreSQL Logs |
| 6 | Server | SSH Server Logs | Collect SSH Server Logs |
| 7 | Server | DHCP Server Logs | Collect DHCP Server Logs |
| 8 | System | System Logs | Collect System Logs |
| 9 | System | Messages | Collect Messages Logs |
| 10 | System | Auth Logs | Collect Auth Logs |
| 11 | System | Secure | Collect Secure Logs |
| 12 | System | Boot Logs | Collect Boot Logs |
| 13 | System | Kernel Logs | Collect Kernel Logs |
| 14 | System | Mail Logs | Collect Mail Logs |
| 15 | Docker | Docker Changes | Collect Docker Changes |
| 16 | Docker | Docker Containers | Collect Docker Containers |
| 17 | Docker | Docker Image History | Collect Docker Image History |
| 18 | Docker | Docker Images | Collect Docker Images |
| 19 | Docker | Docker Info | Collect Docker Info |
| 20 | Docker | Docker Networks | Collect Docker Networks |
| 21 | Docker | Docker Processes | Collect Docker Processes |
| 22 | Docker | Docker Volumes | Collect Docker Volumes |
| 23 | Docker | Docker Container Logs | Collect Docker Container Logs |
| 24 | Docker | Docker Logs | Collect Docker Logs on Filesystem |
| 25 | Communication | AnyDesk Logs | Collect AnyDesk Logs |
AIR supports the following IBM AIX Evidence and Artifacts
Evidence List:

| # | Category | Evidence | Description |
| --- | --- | --- | --- |
| 1 | System | Cron Jobs | Collect cron jobs |
| 2 | System | ULimit Information | Collect ulimit information |
| 3 | Disk | Mounts | Collect mounts |
| 4 | File System | File System Enumeration | Dump file and folder information |
| 5 | Processes | Processes | Collect process list |
| 6 | Users | User Groups | Collect user group list |
| 7 | Users | Users | Collect user list |
| 8 | SSH | SSH Known Hosts | Collect SSH known hosts |
| 9 | SSH | SSH Authorized Keys | Collect SSH authorized keys |
| 10 | SSH | SSH Configs | Collect SSH configurations |
| 11 | SSH | SSHD Configs | Collect SSHD configurations |
| 12 | Network | Hosts | Collect hosts |
| 13 | Network | DNS Resolvers | Collect DNS resolvers |
| 14 | Other Evidence | YUM Sources | Collect YUM sources |
| 15 | Other Evidence | YUM History | Collect YUM history |
| 16 | Other Evidence | SUID Binaries | Collect SUID binaries |
| 17 | Other Evidence | Shell History | Collect shell history |
| 18 | Other Evidence | System Artifacts | Collect system artifacts (files of collected evidence, for example the /etc/passwd file) |
| 19 | Other Evidence | Log Files | Collect log files under /var/log/ |
Artifact List:

| # | Category | Artifact | Description |
| --- | --- | --- | --- |
| 1 | Server | MySQL Logs | Collect MySQL Logs |
| 2 | Server | SSH Server Logs | Collect SSH Server Logs |
| 3 | Server | DHCP Server Logs | Collect DHCP Server Logs |
| 4 | System | System Logs | Collect System Logs |
| 5 | System | Auth Logs | Collect Auth Logs |
| 6 | System | Boot Logs | Collect Boot Logs |
| 7 | System | Mail Logs | Collect Mail Logs |
AIR supports the following Windows Evidence and Artifacts
1
System
Crash Dump Information
Collect information about crash dumps
2
System
Recycle Bin Information
Collect information about items in recycle bin
3
System
System Restore Points Information
Collect information about system restore points
4
System
Drivers List
Collect driver list
5
System
Running Processes and Modules
Collect running processes and modules list
6
System
Antivirus Information
Collect information about installed antivirus
7
System
DNS Servers
Collect DNS Server addresses
8
System
Proxy List
Collect information about proxy list
9
System
Installed Applications
Enumerate Installed Applications
10
System
Firewall Rules
Enumerate Firewall Rules
11
System
USB Storage History
Collect USB Storage History
12
System
Downloaded Files Information
Collect information about downloaded files
13
System
Shadow Copy as CSV
Dump Latest Shadow Copy Files Information in CSV Format
14
System
EventTranscript DB
Collect EventTranscript DB
15
System
Users
Collect Users
16
System
User Access Logs (UAL)
Collect and Parse User Access Logs
17
System
SAM Users and Groups
Collect SAM Users and Groups
18
System
Wireless Connection History
Enumerate Wireless Connection History
19
System
Windows Error Reporting Files
Collect WER Files
20
System
NTDS.dit
Collect Active Directory NTDS Database
| # | Category | Artifact | Description |
| --- | --- | --- | --- |
| 21 | System | Environment Variables | Enumerate Environment Variables |
| 22 | Persistence | WMI Active Script | Dump WMI Active Script Event Consumers |
| 23 | Persistence | WMI Command Line | Dump WMI Command Line Event Consumers |
| 24 | Persistence | Registry Items | Enumerate Registry Items |
| 25 | Persistence | Scheduled Tasks | Enumerate Scheduled Tasks |
| 26 | Persistence | Service List | Enumerate Service List |
| 27 | Persistence | Startup Items | Enumerate Startup Items |
| 28 | Disk | Volumes Information | Collect information about volumes |
| 29 | Disk | MBR | Collect Master Boot Record |
| 30 | Memory | RAM Image | Create an image of RAM |
| 31 | Memory | Page File | Dump system page file |
| 32 | Memory | Swap File | Dump system swap file |
| 33 | Memory | Hibernation File | Dump hibernation file |
| 34 | Browser | Default Browser | Collect Default Browser |
| 35 | Browser | Chrome Cookies | Collect Chrome Cookies |
| 36 | Browser | Edge Cookies | Collect Edge Cookies |
| 37 | Browser | Opera Cookies | Collect Opera Cookies |
| 38 | Browser | Vivaldi Cookies | Collect Vivaldi Cookies |
| 39 | Browser | Brave Cookies | Collect Brave Cookies |
| 40 | Browser | QQ Cookies | Collect QQ Cookies |
| 41 | Browser | Chrome Bookmarks | Collect Chrome Bookmarks |
| 42 | Browser | Edge Bookmarks | Collect Edge Bookmarks |
| 43 | Browser | Opera Bookmarks | Collect Opera Bookmarks |
| 44 | Browser | Vivaldi Bookmarks | Collect Vivaldi Bookmarks |
| 45 | Browser | Brave Bookmarks | Collect Brave Bookmarks |
| 46 | Browser | QQ Bookmarks | Collect QQ Bookmarks |
| 47 | Browser | Chrome User Profiles | Collect Chrome User Profiles |
| 48 | Browser | Edge User Profiles | Collect Edge User Profiles |
| 49 | Browser | Opera User Profiles | Collect Opera User Profiles |
| 50 | Browser | Vivaldi User Profiles | Collect Vivaldi User Profiles |
| 51 | Browser | Brave User Profiles | Collect Brave User Profiles |
| 52 | Browser | QQ User Profiles | Collect QQ User Profiles |
| 53 | Browser | Chrome Extensions | Collect Chrome Extensions |
| 54 | Browser | Edge Extensions | Collect Edge Extensions |
| 55 | Browser | Opera Extensions | Collect Opera Extensions |
| 56 | Browser | Brave Extensions | Collect Brave Extensions |
| 57 | Browser | Vivaldi Extensions | Collect Vivaldi Extensions |
| 58 | Browser | QQ Extensions | Collect QQ Extensions |
| 59 | Browser | Firefox Extensions | Collect Firefox Extensions (Addons) |
| 60 | Browser | Chrome Local Storage | Collect Chrome Local Storage |
| 61 | Browser | Edge Local Storage | Collect Edge Local Storage |
| 62 | Browser | Opera Local Storage | Collect Opera Local Storage |
| 63 | Browser | Vivaldi Local Storage | Collect Vivaldi Local Storage |
| 64 | Browser | Brave Local Storage | Collect Brave Local Storage |
| 65 | Browser | QQ Local Storage | Collect QQ Local Storage |
| 66 | Browser | Dump Chrome Indexed DB | Dump Chrome Indexed DB |
| 67 | Browser | Dump Edge Indexed DB | Dump Edge Indexed DB |
| 68 | Browser | Dump Opera Indexed DB | Dump Opera Indexed DB |
| 69 | Browser | Dump Vivaldi Indexed DB | Dump Vivaldi Indexed DB |
| 70 | Browser | Dump Brave Indexed DB | Dump Brave Indexed DB |
| 71 | Browser | Dump QQ Indexed DB | Dump QQ Indexed DB |
| 72 | Browser | Chrome Web Storage | Collect Chrome Web Storage |
| 73 | Browser | Edge Web Storage | Collect Edge Web Storage |
| 74 | Browser | Opera Web Storage | Collect Opera Web Storage |
| 75 | Browser | Vivaldi Web Storage | Collect Vivaldi Web Storage |
| 76 | Browser | Brave Web Storage | Collect Brave Web Storage |
| 77 | Browser | QQ Web Storage | Collect QQ Web Storage |
| 78 | Browser | Chrome Form History | Collect Chrome Form History |
| 79 | Browser | Edge Form History | Collect Edge Form History |
| 80 | Browser | Opera Form History | Collect Opera Form History |
| 81 | Browser | Vivaldi Form History | Collect Vivaldi Form History |
| 82 | Browser | Brave Form History | Collect Brave Form History |
| 83 | Browser | QQ Form History | Collect QQ Form History |
| 84 | Browser | Chrome Thumbnails | Collect Chrome Thumbnails |
| 85 | Browser | Edge Thumbnails | Collect Edge Thumbnails |
| 86 | Browser | Opera Thumbnails | Collect Opera Thumbnails |
| 87 | Browser | Vivaldi Thumbnails | Collect Vivaldi Thumbnails |
| 88 | Browser | Brave Thumbnails | Collect Brave Thumbnails |
| 89 | Browser | QQ Thumbnails | Collect QQ Thumbnails |
| 90 | Browser | Chrome Favicons | Collect Chrome Favicons |
| 91 | Browser | Edge Favicons | Collect Edge Favicons |
| 92 | Browser | Opera Favicons | Collect Opera Favicons |
| 93 | Browser | Vivaldi Favicons | Collect Vivaldi Favicons |
| 94 | Browser | Brave Favicons | Collect Brave Favicons |
| 95 | Browser | QQ Favicons | Collect QQ Favicons |
| 96 | Browser | Chrome Login Data | Collect Chrome Login Data |
| 97 | Browser | Edge Login Data | Collect Edge Login Data |
| 98 | Browser | Opera Login Data | Collect Opera Login Data |
| 99 | Browser | Vivaldi Login Data | Collect Vivaldi Login Data |
| 100 | Browser | Brave Login Data | Collect Brave Login Data |
| 101 | Browser | QQ Login Data | Collect QQ Login Data |
| 102 | Browser | Chrome Sessions | Collect Chrome Sessions |
| 103 | Browser | Edge Sessions | Collect Edge Sessions |
| 104 | Browser | Opera Sessions | Collect Opera Sessions |
| 105 | Browser | Brave Sessions | Collect Brave Sessions |
| 106 | Browser | Vivaldi Sessions | Collect Vivaldi Sessions |
| 107 | Browser | QQ Sessions | Collect QQ Sessions |
| 108 | Browser | Chrome Browsing History | Collect visited URLs from Google Chrome |
| 109 | Browser | Firefox Browsing History | Collect visited URLs from Mozilla Firefox |
| 110 | Browser | IE 7,8,9 Browsing History | Collect visited URLs from Internet Explorer |
| 111 | Browser | IE 10,11,Edge Browsing History | Collect visited URLs from Internet Explorer and Edge |
| 112 | Browser | Opera Browsing History | Collect visited URLs from Opera |
| 113 | Browser | Brave Browsing History | Collect visited URLs from Brave |
| 114 | Browser | Vivaldi Browsing History | Collect visited URLs from Vivaldi |
| 115 | Browser | QQ Browsing History | Collect visited URLs from QQ |
| 116 | Browser | Chrome Downloads | Collect Chrome Downloads |
| 117 | Browser | Edge Downloads | Collect Edge Downloads |
| 118 | Browser | Firefox Downloads | Collect Firefox Downloads |
| 119 | Browser | Opera Downloads | Collect Opera Downloads |
| 120 | Browser | Brave Downloads | Collect Brave Downloads |
| 121 | Browser | Vivaldi Downloads | Collect Vivaldi Downloads |
| 122 | Browser | QQ Downloads | Collect QQ Downloads |
| 123 | Browser | Firefox Cookies | Collect Firefox Cookies |
| 124 | NTFS | MFT as CSV | Dump MFT entries in CSV format |
| 125 | NTFS | MFT | Dump raw contents of $MFT |
| 126 | NTFS | MFT Mirror | Dump MFT Mirror as raw |
| 127 | NTFS | USN Journal as CSV | Parse USN Journal entries in CSV format |
| 128 | NTFS | $Log File | Dump raw contents of $LogFile |
| 129 | NTFS | USN Journal | Dump contents of $UsnJrnl file |
| 130 | NTFS | $Boot | Dump raw contents of $Boot file |
| 131 | NTFS | USN Journal $Max | Dump contents of $UsnJrnl:$Max |
| 132 | NTFS | $Secure:$SDS | Dump contents of $Secure:$SDS |
| 133 | NTFS | $TxfLog $Tops:$T | Dump contents of $TxfLog\$Tops:$T |
| 134 | Registry | Registry Hives | Dump registry hives |
| 135 | Registry | Old Registry Hives | Dump old registry hives in upgraded operating systems |
| 136 | Registry | ShellBags | Enumerate ShellBags |
| 137 | Registry | AppCompatCache | Enumerate AppCompatCache (aka ShimCache) |
| 138 | Registry | UserAssist | Enumerate UserAssist |
| 139 | Registry | TypedPaths | Enumerate TypedPaths |
| 140 | Registry | FirstFolder | Enumerate FirstFolder |
| 141 | Registry | RecentDocs | Enumerate RecentDocs |
| 142 | Registry | WordWheelQuery | Enumerate WordWheelQuery |
| 143 | Registry | FileExts | Enumerate FileExts |
| 144 | Registry | ShellFolders | Enumerate ShellFolders |
| 145 | Registry | RunMRU | Enumerate RunMRU |
| 146 | Registry | Map Network Drive MRU | Enumerate Map Network Drive MRU |
| 147 | Registry | TypedURLs | Enumerate TypedURLs |
| 148 | Registry | OfficeMRU | Enumerate OfficeMRU |
| 149 | Registry | AppPaths | Enumerate AppPaths |
| 150 | Registry | CIDSizeMRU | Enumerate CIDSizeMRU |
| 151 | Registry | LastVisitedPidlMRU | Enumerate LastVisitedPidlMRU |
| 152 | Registry | OpenSavePidlMRU | Enumerate OpenSavePidlMRU |
| 153 | Registry | WinRAR History | Enumerate WinRAR History |
| 154 | Network | DNS Cache | Collect DNS Cache |
| 155 | Network | TCP Table | Collect TCP Table |
| 156 | Network | UDP Table | Collect UDP Table |
| 157 | Network | ARP Table | Collect ARP Table |
| 158 | Network | IPv4 Routes | Collect IPv4 Routes |
| 159 | Network | Network Adapters | Collect information about network adapters |
| 160 | Network | Network Shares | Collect information about network shares |
| 161 | Network | Hosts | Dump Hosts file |
| 162 | Event Logs | Event Log EVT Files | Dump EVT event log files |
| 163 | Event Logs | Event Log EVTX Files | Dump EVTX event log files |
| 164 | Event Logs | Event Log EVT Records | Collect most recent event log records |
| 165 | Process Execution | Prefetch Files | Collect Prefetch files and parse |
| 166 | Process Execution | SRUM | Collect SRUM and parse |
| 167 | Process Execution | Windows Timeline | Collect Windows Timeline |
| 168 | Process Execution | AmCache | Collect AmCache and parse |
| 169 | Process Execution | Recent File Cache | Collect recent file cache files |
| 170 | Process Execution | Parse LNK Files | Parse LNK files |
| 171 | Process Execution | Collect LNK Files | Collect LNK files |
| 172 | Other Evidence | ETL | Collect ETL log |
| 173 | Other Evidence | CLR | Collect CLR log |
| 174 | Other Evidence | Jump List | Collect Jump List files |
| 175 | Other Evidence | Windows Index Search | Collect Windows Index Search database |
| 176 | Other Evidence | Superfetch | Collect Superfetch files |
| 177 | Other Evidence | WBEM | Collect WBEM files |
| 178 | Other Evidence | INF Setup | Collect INF setup log files |
| 179 | Other Evidence | SDB | Collect SDB |
| 180 | Other Evidence | PowerShell Logs | Collect PowerShell logs |
| 181 | Other Evidence | PowerShell ConsoleHost History | Collect PowerShell ConsoleHost history |
| 182 | Other Evidence | Thumbcache | Collect Thumbcache |
| 183 | Other Evidence | Iconcache | Collect Iconcache |
| 184 | Other Evidence | RDP Cache | Collect RDP Cache files |
| # | Category | Artifact | Description |
| --- | --- | --- | --- |
| 1 | Server | Apache Logs | Collect Apache Logs |
| 2 | Server | MongoDB Logs | Collect MongoDB Logs |
| 3 | Server | IIS Logs | Collect IIS Logs |
| 4 | Server | MSSQL Logs | Collect MSSQL Logs |
| 5 | Server | Microsoft Exchange Logs | Collect Microsoft Exchange Logs |
| 6 | Server | DHCP Server Logs | Collect DHCP Server Logs |
| 7 | Server | DNS Server Logs | Collect DNS Server Logs |
| 8 | Server | Active Directory Logs | Collect Active Directory Logs |
| 9 | Microsoft Applications | Microsoft Photos | Collect Microsoft Photos History Database |
| 10 | Microsoft Applications | Cortana History | Collect Cortana History Databases |
| 11 | Microsoft Applications | Microsoft Store Applications List | Collect Microsoft Store Applications List Database |
| 12 | Microsoft Applications | Microsoft Sticky Notes | Collect Microsoft Sticky Notes |
| 13 | Microsoft Applications | Microsoft Maps | Collect Microsoft Maps Locations |
| 14 | Microsoft Applications | Microsoft Voice Record History | Collect Microsoft Voice Record History |
| 15 | Microsoft Applications | Windows Notification History | Collect Windows Notification History |
| 16 | Microsoft Applications | Search History | Collect Windows Start Menu Search History |
| 17 | Microsoft Applications | Microsoft People | Collect Microsoft People Data |
| 18 | Microsoft Applications | Microsoft Calendar | Collect Microsoft Calendar Data |
| 19 | Communication | Discord Desktop Cache | Collect Discord Desktop Cache |
| 20 | Communication | Microsoft Mail | Collect Microsoft Mail Emails |
| 21 | Communication | Microsoft Outlook | Collect Microsoft Outlook Emails |
| 22 | Communication | Mozilla Thunderbird | Collect Mozilla Thunderbird Emails |
| 23 | Communication | Skype Databases | Collect Skype Databases |
| 24 | Communication | Skype Media | Collect Skype Media |
| 25 | Communication | Telegram Desktop Data | Collect Telegram Desktop Data |
| 26 | Communication | Telegram Desktop Download | Collect Telegram Desktop Download Folder |
| 27 | Communication | WhatsApp Desktop Cache | Collect WhatsApp Desktop Cache |
| 28 | Communication | WhatsApp Desktop Cookie | Collect WhatsApp Desktop Cookie |
| 29 | Communication | Windows Live Mail User Settings | Collect Windows Live Mail User Settings |
| 30 | Communication | Zoom Databases | Collect Zoom Databases |
| 31 | Communication | Zoom Media | Collect Zoom Media Files & Link Previews |
| 32 | Remote Desktop/Management Tools | Action1 RMM Logs | Collect Action1 RMM Logs |
| 33 | Remote Desktop/Management Tools | AmmyAdmin Logs | Collect AmmyAdmin Logs |
| 34 | Remote Desktop/Management Tools | AnyDesk Logs | Collect AnyDesk Logs |
| 35 | Remote Desktop/Management Tools | GoTo Logs | Collect GoTo Logs |
| 36 | Remote Desktop/Management Tools | Kaseya Logs | Collect Kaseya Logs |
| 37 | Remote Desktop/Management Tools | Level Logs | Collect Level Application Specific Files and Logs |
| 38 | Remote Desktop/Management Tools | LogMeIn Logs | Collect LogMeIn Logs |
| 39 | Remote Desktop/Management Tools | RealVNC Logs | Collect RealVNC Application Debug Logs |
| 40 | Remote Desktop/Management Tools | RemComSvc Logs | Collect RemComSvc Logs |
| 41 | Remote Desktop/Management Tools | Remote Utilities Logs | Collect Remote Utilities Application Logs |
| 42 | Remote Desktop/Management Tools | ScreenConnect (ConnectWise Control) Application Data | Collect Various Types of ScreenConnect (ConnectWise Control) Application Data |
| 43 | Remote Desktop/Management Tools | Splashtop Logs | Collect Splashtop Application Logs |
| 44 | Remote Desktop/Management Tools | Supremo Remote Desktop Logs | Collect Supremo Remote Desktop Application Logs |
| 45 | Remote Desktop/Management Tools | Teamviewer Logs | Collect Teamviewer Connection Logs |
| 46 | Remote Desktop/Management Tools | TightVNC Logs | Collect TightVNC Application Logs |
| 47 | Remote Desktop/Management Tools | Ultraviewer Logs | Collect Ultraviewer Logs |
| 48 | Remote Desktop/Management Tools | UltraVNC Logs | Collect UltraVNC Application Specific Log Files |
| 49 | Remote Desktop/Management Tools | Xeox Logs | Collect Xeox Application Specific Log Files |
| 50 | Remote Desktop/Management Tools | ZohoAssist Logs | Collect ZohoAssist Application Specific Logs |
| 51 | Social Artifacts | Twitter Databases | Collect Twitter Store Application Databases |
| 52 | Social Artifacts | Twitter Cache | Collect Twitter Store Application Cache |
| 53 | Social Artifacts | Facebook Databases | Collect Facebook Store Application User Databases |
| 54 | Social Artifacts | Facebook Cache | Collect Facebook Store Application Cache |
| 55 | Social Artifacts | LinkedIn Cache | Collect LinkedIn Store Application Cache |
| 56 | Social Artifacts | Spotify Recently Played List | Collect Spotify Recently Played List & Social Manager |
| 57 | Social Artifacts | Spotify Cache | Collect Spotify Cache |
| 58 | Productivity Artifacts | Sublime Text Sessions | Collect Sublime Text Sessions & Contents |
| 59 | Productivity Artifacts | Notepad++ Sessions | Collect Notepad++ Search History & Sessions |
| 60 | Productivity Artifacts | OpenVPN Config | Collect OpenVPN Config Files |
| 61 | Productivity Artifacts | Everything History | Collect Everything Run History |
| 62 | Productivity Artifacts | Evernote Databases | Collect Evernote Databases |
| 63 | Productivity Artifacts | Evernote Drag and Drop Files | Collect Evernote Drag and Drop Files |
| 64 | Productivity Artifacts | Evernote Logs | Collect Evernote Logs |
| 65 | Utilities Artifacts | iTunes Backups | Collect iTunes Backups |
| 66 | Utilities Artifacts | VMware Config | Collect VMware Config |
| 67 | Utilities Artifacts | VMware Drag and Drop Files | Collect VMware Drag and Drop Files |
| 68 | Utilities Artifacts | VMware Logs | Collect VMware Logs |
| 69 | Developer Tools | FileZilla Sessions | Collect FileZilla Sessions & Site Manager Settings |
| 70 | Developer Tools | Visual Studio Team Explorer Config | Collect Visual Studio Team Explorer Config |
| 71 | Developer Tools | Github Desktop Databases | Collect Github Desktop Databases |
| 72 | Developer Tools | Github Desktop Cache | Collect Github Desktop Cache |
| 73 | Developer Tools | Github Desktop Logs | Collect Github Desktop Logs |
| 74 | Developer Tools | WSL | Collect Windows Subsystem for Linux Files |
| 75 | Developer Tools | Tortoise Git Logs | Collect Tortoise Git Synchronization Logs |
| 76 | Cloud Artifacts | Google Drive Databases | Collect Google Drive Synchronization Databases |
| 77 | Cloud Artifacts | Dropbox Databases | Collect Dropbox Synchronization Databases |
| 78 | Cloud Artifacts | Dropbox Logs | Collect Dropbox Logs |
| 79 | Cloud Artifacts | Dropbox Cache | Collect Dropbox Cache |
| 80 | Cloud Artifacts | OneDrive Logs | Collect OneDrive Logs |
| 81 | Docker | Docker Changes | Collect Docker Changes |
| 82 | Docker | Docker Containers | Collect Docker Containers |
| 83 | Docker | Docker Image History | Collect Docker Image History |
| 84 | Docker | Docker Images | Collect Docker Images |
| 85 | Docker | Docker Info | Collect Docker Info |
| 86 | Docker | Docker Networks | Collect Docker Networks |
| 87 | Docker | Docker Processes | Collect Docker Processes |
| 88 | Docker | Docker Volumes | Collect Docker Volumes |
| 89 | Docker | Docker Container Logs | Collect Docker Container Logs |
| 90 | Antivirus Logs | Avast Logs | Collect Avast Logs |
| 91 | Antivirus Logs | AVG Logs | Collect AVG Logs |
| 92 | Antivirus Logs | Avira Logs | Collect Avira Logs |
| 93 | Antivirus Logs | Bitdefender Logs | Collect Bitdefender Logs |
| 94 | Antivirus Logs | Carbon Black Logs | Collect Carbon Black Logs |
| 95 | Antivirus Logs | Cisco AMP Logs | Collect Cisco AMP Logs |
| 96 | Antivirus Logs | ComboFix | Collect ComboFix Logs |
| 97 | Antivirus Logs | Cybereason Logs | Collect Cybereason Logs |
| 98 | Antivirus Logs | Cylance Logs | Collect Cylance Logs |
| 99 | Antivirus Logs | Deep Instinct Logs | Collect Deep Instinct Logs |
| 100 | Antivirus Logs | Elastic Logs | Collect Elastic Logs |
| 101 | Antivirus Logs | Eset Logs | Collect Eset Logs |
| 102 | Antivirus Logs | F-Secure Logs | Collect F-Secure Logs |
| 103 | Antivirus Logs | FireEye Logs | Collect FireEye Logs |
| 104 | Antivirus Logs | HitmanPro Logs | Collect HitmanPro Logs |
| 105 | Antivirus Logs | MalwareBytes Logs | Collect MalwareBytes Logs |
| 106 | Antivirus Logs | McAfee Logs | Collect McAfee Logs |
| 107 | Antivirus Logs | Palo Alto Logs | Collect Palo Alto Logs |
| 108 | Antivirus Logs | RogueKiller Reports | Collect RogueKiller Reports |
| 109 | Antivirus Logs | SentinelOne Logs | Collect SentinelOne Logs |
| 110 | Antivirus Logs | Sophos Logs | Collect Sophos Logs |
| 111 | Antivirus Logs | Sourcefire FireAMP Logs | Collect Sourcefire FireAMP Logs |
| 112 | Antivirus Logs | SUPERAntiSpyware Logs | Collect SUPERAntiSpyware Logs |
| 113 | Antivirus Logs | Symantec Logs | Collect Symantec Logs |
| 114 | Antivirus Logs | Tanium Logs | Collect Tanium Logs |
| 115 | Antivirus Logs | TotalAv Logs | Collect TotalAv Logs |
| 116 | Antivirus Logs | Trend Micro Logs | Collect Trend Micro Logs |
| 117 | Antivirus Logs | VIPRE Logs | Collect VIPRE Logs |
| 118 | Antivirus Logs | Webroot Logs | Collect Webroot Logs |
| 119 | Antivirus Logs | Windows Defender Logs | Collect Windows Defender Logs |
This page provides a guide to scheduling Triage tasks via the AIR API.
Download the script and grant permission to run:
wget https://cdn.binalyze.com/air-deploy/scripts/air-triage-task-via-api.sh
chmod +x air-triage-task-via-api.sh
Move the script file to a directory, such as the /opt directory, as shown below.
mv air-triage-task-via-api.sh /opt/air-triage-task-via-api.sh
Update the console address and API Token values in the script. You must add the desired triage rule ID values to the "triageRuleIds" field.
For example, the two default rules below can be changed:
"fireeye-red-team-tools-countermeasures", "fireeye-sunburst-countermeasures"
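If you update the rule list often, the IDs can also be swapped non-interactively. A minimal sketch: the file contents below are a stand-in for the real script, and `my-custom-rule-id` is a hypothetical rule ID.

```shell
# Stand-in for the downloaded script (illustration only; the real script has more content)
printf 'triageRuleIds=["fireeye-red-team-tools-countermeasures", "fireeye-sunburst-countermeasures"]\n' > air-triage-task-via-api.sh

# Replace one default rule ID with a custom one
sed -i 's/fireeye-sunburst-countermeasures/my-custom-rule-id/' air-triage-task-via-api.sh

# Confirm the substitution took effect
grep -o 'my-custom-rule-id' air-triage-task-via-api.sh   # → my-custom-rule-id
```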
Add it as a cronjob by running the command below.
crontab -e
After running the above command, add the following lines in the editor.
# At 00:00 on Sunday
0 0 * * 0 /opt/air-triage-task-via-api.sh
The acquisition of physical disk images and volume images can be done via an Acquire Image Task in the UI, or by using commands in an interACT session.
In addition to NTFS and FAT, AIR supports logical imaging of ext3 and ext4 volumes, while physical disk imaging is available on all of the operating systems supported by the AIR Responder.
When performing forensic disk imaging on Mac devices with T2 or later chips, obtaining a physical disk image of APFS volumes is often ineffective. This is because the data on these disks is encrypted, and decryption is exclusively managed by the chip that originally encrypted the data. Consequently, decryption can only occur during the acquisition process using that specific chip.
For most investigative purposes, a logical collection of files using AIR acquisition profiles typically provides sufficient information. This method, supported by AIR, allows investigators to access and analyze the file system and its contents efficiently, bypassing the complexities associated with Apple Silicon APFS encrypted physical disk images.
In the AIR UI, select Assets from the primary menu. In the Asset Info window, clicking the Asset Actions button opens a drop-down menu listing the actions that can be applied to that individual asset; Acquire Image is one such option:
The Acquire Image wizard will now walk you through the steps needed to take a forensic image from the asset:
Choose a Task Name.
Select or Create a case to which the image should be associated.
Choose either the Volume or Disk tab (note the size is displayed so you can be sure the Repository has enough free space to hold the collected image).
If there is more than one disk or volume you can select what you need by searching, filtering or by manual selection.
Having chosen what is to be imaged, you can now configure/setup the image file:
Select an Evidence Repository to which the image file can be saved.
Select your image format; RAW (dd) and EWF2 (Ex01) are currently supported.
For RAW (dd) only, a toggle switch gives users the option to enable or disable the consolidation of physical disk or volume image files into a single zip file, eliminating the need to split them into chunks.
For RAW (dd), if the 'single zip file' option is not toggled on, users can choose the image file chunk size. If you want to use AIR's File Explorer to browse the image file, the image must be supplied to AIR from an SMB, SFTP, Amazon S3 bucket, or Azure Blob Storage shared location, saved either as a single contiguous RAW file or as an EWF file, which may be segmented. (Read more here: AIR File Explorer)
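For chunked RAW output, a quick way to estimate how many chunk files an acquisition will produce (the sizes below are example values, not defaults):

```shell
IMAGE_MB=512000   # e.g. a 500 GiB physical disk, as reported in the wizard
CHUNK_MB=4096     # e.g. a 4 GiB chunk size selected in the wizard

# Ceiling division: partial chunks still produce a file
CHUNKS=$(( (IMAGE_MB + CHUNK_MB - 1) / CHUNK_MB ))
echo "$CHUNKS chunk files"   # → 125 chunk files
```

This is useful for sanity-checking that the evidence repository has enough free space and inodes before the task starts.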
Users can also choose to skip a configurable number of bytes before starting the imaging process.
In the 'Resource Limits' section, you can set limits on the network bandwidth used during the image acquisition process. Meanwhile, the 'Compression and Encryption' section provides options for conserving storage space and enhancing the security of the gathered evidence:
The output of your imaging task will be found in the evidence repository you selected when building the task in the wizard. The metadata associated with the acquisition is stored there too, and is explained here: Understanding errors documented in the metadata.yml file
Threat Hunting at speed and scale
Almost every case starts with one or more leads. With too many leads, the investigator may need to validate them one by one, which is very time-consuming; with too few, there is not enough information to continue the case. Either situation harms the investigation and slows resolution.
Triage is the process of hunting for and prioritizing the evidence to be analyzed. Prioritizing this evidence is not a straightforward or easy job: an investigator needs lots of data, leads, or experience to do it well, so they generally rely on known attack indicators. An Indicator of Compromise (IOC) in computer forensics is an artifact observed on a network or in an operating system that, with high confidence, indicates a computer intrusion.
An investigator or analyst will generally scan all system data or part of it to discover these IOCs. When they see a match, it typically means that those systems are related to a specific attack type and need to be investigated first. Investigators use YARA, osquery, and Sigma rules for these scans. Investigators can define and scan IOCs by using AIR's built-in YARA, osquery, and Sigma template rules or editors.
Binalyze AIR DFIR Suite provides three different tools to investigators for triage, which, as stated, are YARA, osquery, and Sigma. These tools scan assets to find specific data using IOCs.
Binalyze AIR features a library for YARA, osquery, and Sigma rules, allowing investigators to develop, validate, and manage their rules directly within the platform using the built-in editors. These rules can be saved to Libraries > Triage Rules in AIR.
Investigators can easily threat hunt and scan their assets by selecting the necessary rules from the library. The Triage process flow, depicted below, illustrates how organizational policies allow administrators to control AIR's functionality and define role-based permissions for specific activities.
Creating a case in AIR also enables users to centralize all collections, triage results, and activities related to a specific incident or investigation. This integration empowers the Investigation Hub to dynamically present everything from raw evidence to automated DRONE findings in a unified view.
NB: Character limitation for single triage rule
To prevent the browser from becoming unresponsive, a single Triage Rule is limited to a maximum of 350K characters.
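Before pasting a large rule into the console, you can check its character count against this limit; the file below is a stand-in:

```shell
# Create a stand-in rule file of exactly 1,000 characters
head -c 1000 /dev/zero | tr '\0' 'x' > my.rule

# Compare the character count against the 350K limit
CHARS=$(wc -c < my.rule)
[ "$CHARS" -le 350000 ] && echo "within limit ($CHARS chars)" || echo "too large"
```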
Triage rules in the AIR console can be allocated Tags, which help organize the rules and filter them when required. Tagging makes triage rules easier to manage and allows for streamlined searches and better organization within the console.
When creating or using a Triage Rule, the UI allows the user to filter existing rules by their associated Tags.
The Triage Rule Library includes Preset Filters in the secondary menu, allowing users to organize rules hierarchically, also known as 'Nested Tagging'. By incorporating a colon in their tags, users can structure and categorize rules more efficiently. For example, the tag "APT26:Tim:hashset" helps organize related rules under a structured hierarchy, enhancing navigation and accessibility in the library.
With the Scan Local Drives Only feature, users can improve YARA triage efficiency by focusing threat hunting and triage scans solely on local drives, excluding remote, external, or network drives that often introduce unnecessary data into the investigation. Note that attached (mounted) USB drives are treated as local drives.
Key Details:
Available for all AIR-supported operating systems.
Disabled by default but can be enabled via the organizational policies page: Settings>Policies>Scan Local Drives Only.
It can also be configured when creating individual triage tasks using the custom options:
This feature ensures that only relevant data from local drives is collected, reducing noise and improving the speed and accuracy of investigations.
How to automatically tag your assets based on simple conditions.
Conducting cybersecurity investigations and digital forensics at scale requires a well-structured classification of your assets.
Understanding the number and types of assets, such as web servers, domain controllers, or application servers, significantly reduces response time. This enables you to focus on specific groups of devices within your network, ultimately enhancing situational awareness during an investigation.
Auto Tagging is a feature of Binalyze AIR that lets you automatically tag assets based on conditions such as:
Existence of a file or directory
Existence of a running process
Hostnames, IP addresses, Subnets
Custom osquery conditions
Additionally, you can seamlessly combine conditions using AND/OR logic alongside environment variables for greater flexibility.
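For the custom osquery conditions mentioned above, a condition is in essence a query that returns at least one row on a matching asset. A minimal illustrative example (the `processes` table and `name` column are standard osquery schema; the process names are just one plausible target):

```sql
-- Matches assets where an Apache worker process is running
SELECT name FROM processes WHERE name IN ('httpd', 'apache2') LIMIT 1;
```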
This feature can be enabled or disabled from the Auto Asset Tagging section in Settings>Features>Auto Asset Tagging.
Once enabled, any newly added asset will automatically be assigned a task to query the Auto Tagging conditions. Based on the results, AIR will apply the appropriate Tag Name to the asset.
If you need to re-run tagging on all assets, you can do so by clicking the "Run Now" button on the Auto Tagging page. Alternatively, you can run the tagging process for individual assets from the Asset page or select multiple assets and execute the task using the Bulk Action feature.
Auto Tagging can be saved in AIR Libraries specifically for individual organizations or universally across all organizations. This capability supports users in creating and applying incident-specific Auto Tags selectively, avoiding unnecessary use or exposure of a rule outside the intended organizational context.
There are a number of out-of-the-box supported Auto Tags, such as those listed below, but you can also create custom tags whenever you need them:
Apache
Redis
Mysql
Rabbitmq
Docker
Kubernetes
Domain Controller
IIS Web Server
Web Server
Mail Server
MSSQL Server
When we look at the Auto Tag conditions set for tagging an Apache server, we can see that the AIR Responder evaluates five conditions, each independent of the others because the OR switch is active. So, if any one of these conditions is met, the Apache tag will be applied to the asset:
It is possible for a user to create, edit, and delete the parameters shown below, but only if they have permission to do so:
AIR has very granular permissions control over Users and Roles, and within Roles, there are currently 109 individually configurable privileges. Six of these allow Global Administrators to determine what users can do within the Auto Asset Tagging feature:
Read more about how AIR uses Auto Tagging to speed up your investigations here: The Power of Auto Asset Tagging in DFIR
Any Auto Tags used in a Tasking Assignment are displayed under the Information tab in the Task Details window. In the example below, we can see that the Tagging Rule for Domain Controller has been run along with 17 related rules, which can be viewed by clicking the '+17' link:
Here we provide some YARA, Sigma, and osquery rule templates for users to copy and edit.
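As one starting point, a minimal YARA rule template of the kind intended here; every string, name, and metadata value below is a placeholder to edit, not a real detection:

```yara
rule Template_Suspicious_String
{
    meta:
        description = "Template: match files containing a known IOC string"
        author      = "edit-me"

    strings:
        // Replace with an IOC string relevant to your case
        $ioc = "replace-with-your-ioc" ascii nocase

    condition:
        $ioc
}
```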
How AIR Protects your Chain of Custody with content hashing and RFC3161 Time-stamping
At Binalyze, we use SHA-256 to hash all of the files collected by Binalyze AIR, and then we take this to the next level: we hash the .ppc collection file itself and send that value to a trusted timestamp server to generate a certificate.
This proves not only that the report and all of its associated data exist exactly as they did at acquisition, but also that they did so at the date and time notarized by a Trusted Timestamp Authority (TSA) certificate.
So, thanks to RFC3161, you can prove not only that the data content is 100% intact, but that the date and time of the collection are also guaranteed.
Request for Comments (RFC) is a system that has been adopted as the official documentation of Internet specifications, communications protocols, procedures, and events. Originally used to record unofficial notes about the ARPANET project in 1969, the system is now considered a standard-setting mechanism for the internet and its connected systems.
A published RFC must go through a review and revision process overseen by several groups, such as the Internet Engineering Task Force (IETF), a large, open, international community of network designers, operators, vendors, and researchers. As part of their collective role, they oversee the evolution of internet architecture and the smooth operation of the internet. A list of RFC3161-compliant TSAs can be found here. When choosing a TSA, users may want to consider whether its implementation of RFC 3161 has been qualified under frameworks such as eIDAS (electronic identification and trust services).
RFC 3161 defines how trusted timestamping leverages public-key cryptography, and the Internet X.509 Public Key Infrastructure Time-Stamp Protocol (TSP) sets the protocols required for standardization.
One way to use a TSA is for a requestor to take the hash they’ve generated for their entire collected data set, send that hash to the TSA, and receive in return a time-stamp token (TST). This TST can be saved and used at any later time to verify both the content of the collection and the date and time at which the collection took place.
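The request half of that exchange can be reproduced with OpenSSL's `ts` subcommand; a sketch using a stand-in `.ppc` file (the file contents are illustrative only):

```shell
# Stand-in for a real collection file
printf 'demo collection' > Case.ppc

# Build an RFC 3161 timestamp request over the file's SHA-256 hash
openssl ts -query -data Case.ppc -sha256 -cert -out request.tsq

# Inspect the request; the hash algorithm and message digest are shown
openssl ts -query -in request.tsq -text | grep 'Hash Algorithm'   # → Hash Algorithm: sha256
```

In production, `request.tsq` would be sent to the TSA, which returns the signed TST.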
The RFC 3161 capability is not unique to any single provider and is available from a whole range of independent third parties. This is important, as any in-house time-stamping process could be open to challenge or criticism due to its lack of independence or verified accuracy.
In the AIR platform, when you send a collection task to an asset's responder, the responder will build the collection on the asset in a directory named ‘Cases’. This collection is in a .zip file, with a filename that starts with the date and time of the collection. If you expand the .zip file you’ll note that the collected data has been added while maintaining the directory tree structure. This is good news if you want or need to further investigate the collection in other forensic solutions.
At the root of the collection shown above, you can see the Case.ppc file. This is another .zip container and if you expand this you can inspect the contents.
The hash values for the collected files are available in the Investigation Hub from where they can be exported as a .csv file:
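Any individual file's hash can be re-computed locally and compared against the exported value; for example (the file name and contents below are stand-ins):

```shell
# Stand-in for a collected file
printf 'evidence bytes' > artifact.bin

# Compute the SHA-256 digest to compare with the value exported from the Investigation Hub
H=$(sha256sum artifact.bin | awk '{print $1}')
echo "$H"   # 64-character hex digest
```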
With Binalyze AIR, RFC3161 timestamping is on by default. This means the hash value of your collection .ppc file is sent to the TSA and their TST response is automatically saved as metadata for that collection in the AIR console. You can download and verify the TST from here anytime you or others need to.
You can also disable the RFC3161 Timestamping functionality at any time via the AIR Settings > Features page.
To verify the .ppc via RFC 3161, the first thing you need to do is to download the TST from the metadata button in the AIR asset details > Task tab (as shown in the screenshot above labeled: Metadata button reveals RFC 3161 download).
In the example below I’ve changed the name of the TST to ‘RFC3161 timestamp.tsr’ and saved it to my downloads folder.
I can then open a shell session and change the directory to downloads.
To see the information in the TST, run:
openssl ts -reply -in RFC3161\ timestamp.tsr -token_in -token_out -text
In the output, you’ll see the hash of your .ppc and the timestamp.
To verify this TST we now need to download the root certificate from a TSA: https://cacerts.XXXX.com/XXXXtAssuredIDRootCA.crt.pem.
We will also need the following TSA certificates from the TSA server to build a ‘chain certificate’. In this case, I took the content of each .cer file, in the order shown, and concatenated them into one file that I named ‘CHAIN.pem’.
XXXXTrustedG4RSA4096SHA256TimeStampingCA.cer
With all these files remaining in the same directory, I then ran the following command to verify the TST:
openssl ts -verify -CAfile XXXXAssuredIDRootCA.crt.pem -untrusted CHAIN.pem -data TASK.ppc -in RFC3161\ timestamp.tsr -token_in
This simple verification 'ok' message confirms that the TST is correct, indicating that my data is sound and that it existed at the date and time shown by the timestamp.
Thanks to the RFC 3161 and SHA-256 hashing features of AIR, it’s now possible to prove that not only is your data content 100% intact but that it existed at a particular moment in time. So we can now be sure that we know exactly what was collected and when it was collected. In short, RFC 3161 provides immutable timestamping for an effective chain of custody to maintain forensic integrity.
In order to improve the overall security posture of AIR, accessing AIR over HTTPS is mandatory.
For this reason, it is required that all existing users obtain an SSL certificate issued by a valid public Certificate Authority before updating their instances.
As a fallback to ensure system continuity, you can also use the unique self-signed certificate issued automatically by AIR, either temporarily or as a permanent solution.
IMPORTANT NOTE: Port 443 should be allowed inbound on your AIR console instance.
AIR creates a unique Root CA (self-signed) and shares its public key with the asset responders upon their first connection to the AIR console.
Then an SSL certificate is issued by this Root CA for responder-console communication.
This SSL certificate is only used by the asset responder and is not available to other applications on your assets for security reasons.
Self-signed certificates are provided for business continuity purposes and we strongly suggest using an SSL certificate that is issued by a trusted Root CA. Until you obtain a valid certificate, you can follow the workarounds for major browsers listed below:
During the update, AIR will still create a unique Root CA for your instance and share the public key with the responders. If you already use AIR with a valid SSL certificate, a new SSL certificate will not be issued, and your current certificate will continue to be used.
In this case, the old certificate will be saved locally on the AIR console for backup purposes and AIR will issue a unique Root CA (self-signed) and share the public key of this Root CA with the responders. From this point on, an SSL certificate that is issued using this Root CA will be used for responder-console communication.
AIR will issue a unique Root CA (self-signed) and share the public key of this Root CA with the responders. From this point on, an SSL certificate that is issued using this Root CA will be used for responder-console communication.
Selection of osquery rules for use as guides or templates
UAC_disabled
Windows Update history
Registry Run entries
Services that start automatically
Unusual Cron entries
Launched items not signed by Apple
Processes running no binary on the disk
Scheduled Task with Temp path reference
List all local Users
List logged users
List users with Administrative privileges
Check the security status of the system
List processes running from CMD (with hash value)
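As an illustration, the 'List all local Users' template above corresponds to a simple osquery statement against the built-in users table (a minimal sketch; the rule shipped with AIR may select different columns):

```sql
SELECT uid, gid, username, description, directory, shell FROM users;
```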
AIR supports the following macOS Evidence and Artifacts
| No. | Category | Artifact | Description |
| --- | -------- | -------- | ----------- |
| 1 | Processes | Auto Loaded Processes | Collect info on autoloaded processes |
| 2 | Processes | Processes | Collect Processes |
| 3 | Browser | Default Browser | Collect Default Browser |
| 4 | Browser | Chrome Cookies | Collect Chrome Cookies |
| 5 | Browser | Edge Cookies | Collect Edge Cookies |
| 6 | Browser | Opera Cookies | Collect Opera Cookies |
| 7 | Browser | Vivaldi Cookies | Collect Vivaldi Cookies |
| 8 | Browser | Arc Cookies | Collect Arc Cookies |
| 9 | Browser | Brave Cookies | Collect Brave Cookies |
| 10 | Browser | QQ Cookies | Collect QQ Cookies |
| 11 | Browser | Chrome Bookmarks | Collect Chrome Bookmarks |
| 12 | Browser | Edge Bookmarks | Collect Edge Bookmarks |
| 13 | Browser | Opera Bookmarks | Collect Opera Bookmarks |
| 14 | Browser | Vivaldi Bookmarks | Collect Vivaldi Bookmarks |
| 15 | Browser | Arc Bookmarks | Collect Arc Bookmarks |
| 16 | Browser | Brave Bookmarks | Collect Brave Bookmarks |
| 17 | Browser | QQ Bookmarks | Collect QQ Bookmarks |
| 18 | Browser | Chrome User Profiles | Collect Chrome User Profiles |
| 19 | Browser | Edge User Profiles | Collect Edge User Profiles |
| 20 | Browser | Opera User Profiles | Collect Opera User Profiles |
| 21 | Browser | Vivaldi User Profiles | Collect Vivaldi User Profiles |
| 22 | Browser | Arc User Profiles | Collect Arc User Profiles |
| 23 | Browser | Brave User Profiles | Collect Brave User Profiles |
| 24 | Browser | QQ User Profiles | Collect QQ User Profiles |
| 25 | Browser | Chrome Extensions | Collect Chrome Extensions |
| 26 | Browser | Edge Extensions | Collect Edge Extensions |
| 27 | Browser | Opera Extensions | Collect Opera Extensions |
| 28 | Browser | Firefox Extensions | Collect Firefox Extensions (Addons) |
| 29 | Browser | Chrome Local Storage | Collect Chrome Local Storage |
| 30 | Browser | Edge Local Storage | Collect Edge Local Storage |
| 31 | Browser | Opera Local Storage | Collect Opera Local Storage |
| 32 | Browser | Vivaldi Local Storage | Collect Vivaldi Local Storage |
| 33 | Browser | Arc Local Storage | Collect Arc Local Storage |
| 34 | Browser | Brave Local Storage | Collect Brave Local Storage |
| 35 | Browser | QQ Local Storage | Collect QQ Local Storage |
| 36 | Browser | Dump Chrome Indexed DB | Dump Chrome Indexed DB |
| 37 | Browser | Dump Edge Indexed DB | Dump Edge Indexed DB |
| 38 | Browser | Dump Opera Indexed DB | Dump Opera Indexed DB |
| 39 | Browser | Dump Vivaldi Indexed DB | Dump Vivaldi Indexed DB |
| 40 | Browser | Dump Arc Indexed DB | Dump Arc Indexed DB |
| 41 | Browser | Dump Brave Indexed DB | Dump Brave Indexed DB |
| 42 | Browser | Dump QQ Indexed DB | Dump QQ Indexed DB |
| 43 | Browser | Chrome Web Storage | Collect Chrome Web Storage |
| 44 | Browser | Edge Web Storage | Collect Edge Web Storage |
| 45 | Browser | Opera Web Storage | Collect Opera Web Storage |
| 46 | Browser | Vivaldi Web Storage | Collect Vivaldi Web Storage |
| 47 | Browser | Arc Web Storage | Collect Arc Web Storage |
| 48 | Browser | Brave Web Storage | Collect Brave Web Storage |
| 49 | Browser | QQ Web Storage | Collect QQ Web Storage |
| 50 | Browser | Chrome Form History | Collect Chrome Form History |
| 51 | Browser | Edge Form History | Collect Edge Form History |
| 52 | Browser | Opera Form History | Collect Opera Form History |
| 53 | Browser | Vivaldi Form History | Collect Vivaldi Form History |
| 54 | Browser | Arc Form History | Collect Arc Form History |
| 55 | Browser | Brave Form History | Collect Brave Form History |
| 56 | Browser | QQ Form History | Collect QQ Form History |
| 57 | Browser | Chrome Thumbnails | Collect Chrome Thumbnails |
| 58 | Browser | Edge Thumbnails | Collect Edge Thumbnails |
| 59 | Browser | Opera Thumbnails | Collect Opera Thumbnails |
| 60 | Browser | Vivaldi Thumbnails | Collect Vivaldi Thumbnails |
| 61 | Browser | Arc Thumbnails | Collect Arc Thumbnails |
| 62 | Browser | Brave Thumbnails | Collect Brave Thumbnails |
| 63 | Browser | QQ Thumbnails | Collect QQ Thumbnails |
| 64 | Browser | Chrome Favicons | Collect Chrome Favicons |
| 65 | Browser | Edge Favicons | Collect Edge Favicons |
| 66 | Browser | Opera Favicons | Collect Opera Favicons |
| 67 | Browser | Vivaldi Favicons | Collect Vivaldi Favicons |
| 68 | Browser | Arc Favicons | Collect Arc Favicons |
| 69 | Browser | Brave Favicons | Collect Brave Favicons |
| 70 | Browser | QQ Favicons | Collect QQ Favicons |
| 71 | Browser | Chrome Login Data | Collect Chrome Login Data |
| 72 | Browser | Edge Login Data | Collect Edge Login Data |
| 73 | Browser | Opera Login Data | Collect Opera Login Data |
| 74 | Browser | Vivaldi Login Data | Collect Vivaldi Login Data |
| 75 | Browser | Arc Login Data | Collect Arc Login Data |
| 76 | Browser | Brave Login Data | Collect Brave Login Data |
| 77 | Browser | QQ Login Data | Collect QQ Login Data |
| 78 | Browser | Chrome Sessions | Collect Chrome Sessions |
| 79 | Browser | Edge Sessions | Collect Edge Sessions |
| 80 | Browser | Opera Sessions | Collect Opera Sessions |
| 81 | Browser | Vivaldi Sessions | Collect Vivaldi Sessions |
| 82 | Browser | Arc Sessions | Collect Arc Sessions |
| 83 | Browser | Brave Sessions | Collect Brave Sessions |
| 84 | Browser | QQ Sessions | Collect QQ Sessions |
| 85 | Browser | Chrome Browsing History | Collect visited URLs from Google Chrome |
| 86 | Browser | Edge Browsing History | Collect visited URLs from Microsoft Edge |
| 87 | Browser | Firefox Browsing History | Collect visited URLs from Mozilla Firefox |
| 88 | Browser | Opera Browsing History | Collect visited URLs from Opera |
| 89 | Browser | Safari Browsing History | Collect visited URLs from Safari |
| 90 | Browser | Vivaldi Browsing History | Collect visited URLs from Vivaldi |
| 91 | Browser | Waterfox Browsing History | Collect visited URLs from Waterfox |
| 92 | Browser | Brave Browsing History | Collect visited URLs from Brave |
| 93 | Browser | Arc Browsing History | Collect visited URLs from Arc |
| 94 | Browser | QQ Browsing History | Collect visited URLs from QQ |
| 95 | Browser | Chrome Downloads | Collect Chrome Downloads |
| 96 | Browser | Safari Downloads | Collect Safari Downloads |
| 97 | Browser | Firefox Downloads | Collect Firefox Downloads |
| 98 | Browser | Edge Downloads | Collect Edge Downloads |
| 99 | Browser | Opera Downloads | Collect Opera Downloads |
| 100 | Browser | Vivaldi Downloads | Collect Vivaldi Downloads |
| 101 | Browser | Arc Downloads | Collect Arc Downloads |
| 102 | Browser | Brave Downloads | Collect Brave Downloads |
| 103 | Browser | Waterfox Downloads | Collect Waterfox Downloads |
| 104 | Browser | QQ Downloads | Collect QQ Downloads |
| 105 | Browser | Firefox Cookies | Collect Firefox Cookies |
| 106 | System | Crashes | Collect Crashes |
| 107 | System | Gatekeeper | Collect Gatekeeper details |
| 108 | System | Gatekeeper Approved Apps | Collect Gatekeeper apps allowed to run |
| 109 | System | Installed Applications | Collect info on installed apps |
| 110 | System | Kernel Extensions Info | Collect kernel extensions info |
| 111 | System | Launchd Overrides | Collect override keys for LaunchDaemons and Agents |
| 112 | System | Package Install History | Collect Package Install History |
| 113 | System | System Extension Info | Collect system extension info |
| 114 | System | System Integrity Protection Status | Collect SIP status |
| 115 | System | Print Jobs | Collect print job info |
| 116 | System | Printer Info | Collect printer info |
| 117 | System | Transparency, Consent, and Control (TCC) | Collect Transparency, Consent, and Control information |
| 118 | System | Quarantine Events | Collect Quarantine Events Database |
| 119 | System | Sudo Last Run | Collect Sudo Last Run |
| 120 | System | iMessage | Collect iMessages |
| 121 | System | Dock Items | Collect Dock Items |
| 122 | System | Document Revisions | Collect Document Revisions |
| 123 | System | Apple System Logs (ASL) | Collect Apple System Logs (ASL) |
| 124 | System | Apple Audit Logs | Collect Apple Audit Logs |
| 125 | System | Shared File List | Collect Shared File List (SFL) items |
| 126 | System | Shell History | Collect Shell History |
| 127 | System | Downloaded Files Information | Collect information about downloaded files |
| 128 | System | Cron Jobs | Collect Cron Jobs |
| 129 | System | Quick Look Cache | Collect Quick Look Cache |
| 130 | System | Event Taps | Collect Event Taps |
| 131 | System | Re-Opened Apps | Collect Re-Opened Apps |
| 132 | System | Most Recently Used (MRU) | Collect Most Recently Used (MRU) items |
| 133 | System | Login Items | Collect Login Items |
| 134 | System | Collect File System (FS) Events | Collect File System Events |
| 135 | System | Parse File System (FS) Events | Parse File System Events |
| 136 | Disk | Block Devices | Collect Block Devices |
| 137 | Disk | Disk Encryption | Collect Disk Encryption status |
| 138 | File System | File System Enumeration | Dump file and folder information |
| 139 | File System | .DS_Store Files | Collect information about .DS_Store files |
| 140 | Configurations | ETC Hosts | Collect ETC Hosts |
| 141 | Configurations | ETC Protocols | Collect ETC Protocols |
| 142 | Configurations | ETC Services | Collect ETC Services |
| 143 | Network | Listening Ports | Collect Listening Ports |
| 144 | Network | IP Routes | Collect IP Routes |
| 145 | Network | Network Interfaces | Collect Network Interfaces |
| 146 | Network | DNS Resolvers | Collect DNS Resolvers |
| 147 | Network | DHCP Settings | Collect DHCP (Dynamic Host Configuration Protocol) Settings |
| 148 | Users | User Groups | Collect User Groups |
| 149 | Users | Users | Collect Users |
| 150 | Users | Logged Users | Collect Logged Users |
| 151 | KnowledgeC | Application Usage | Collect Application Usage |
| 152 | KnowledgeC | Bluetooth Connections | Collect Bluetooth Connections |
| 153 | KnowledgeC | Notification Info | Collect Notification Info |
| 154 | Unified Logs | Logind | Filter user login events |
| 155 | Unified Logs | Tccd | Filter tccd events |
| 156 | Unified Logs | Sshd | Filter ssh activity events |
| 157 | Unified Logs | Command Line Activity | Filter command line activity run with elevated privileges |
| 158 | Unified Logs | Kernel Extensions | Filter kernel extension events |
| 159 | Unified Logs | Screensharing | Filter screen sharing events |
| 160 | Unified Logs | Keychain | Filter keychain unlock events |
| 161 | Unified Logs | Session Creation and Destruction | Filter session creation and destruction events |
| 162 | Unified Logs | XProtect Remediation | Filter events for detecting and blocking malicious software |
| 163 | Unified Logs | Failed Sudo | Filter failed sudo events |
| 164 | Unified Logs | Manual Configuration Profile Install | Filter MDM client events |
| 165 | Persistence | Mail Rules | Collect Mail Rules that contain AppleScript |
| 166 | Persistence | Login Hooks | Collect Login Hooks |
| 167 | Persistence | Logout Hooks | Collect Logout Hooks |
| 168 | Persistence | Emond Clients | Collect Emond Clients |
| 169 | SSH | SSH Authorized Keys | Collect SSH authorized keys |
| 170 | SSH | SSH Configs | Collect SSH configurations |
| 171 | SSH | SSH Known Hosts | Collect SSH known hosts |
| 172 | SSH | SSHD Configs | Collect SSHD configurations |
| No. | Category | Artifact | Description |
| --- | -------- | -------- | ----------- |
| 1 | Server | Apache Logs | Collect Apache Logs |
| 2 | Server | NGINX Logs | Collect NGINX Logs |
| 3 | Server | MongoDB Logs | Collect MongoDB Logs |
| 4 | Server | MySQL Logs | Collect MySQL Logs |
| 5 | Server | PostgreSQL Logs | Collect PostgreSQL Logs |
| 6 | System | System Logs | Collect System Logs |
| 7 | System | Install Logs | Collect Install Logs |
| 8 | System | Wifi Logs | Collect Wifi Logs |
| 9 | System | KnowledgeC | Collect KnowledgeC Database |
| 10 | Docker | Docker Changes | Collect Docker Changes |
| 11 | Docker | Docker Containers | Collect Docker Containers |
| 12 | Docker | Docker Image History | Collect Docker Image History |
| 13 | Docker | Docker Images | Collect Docker Images |
| 14 | Docker | Docker Info | Collect Docker Info |
| 15 | Docker | Docker Networks | Collect Docker Networks |
| 16 | Docker | Docker Processes | Collect Docker Processes |
| 17 | Docker | Docker Volumes | Collect Docker Volumes |
| 18 | Docker | Docker Container Logs | Collect Docker Container Logs |
| 19 | Docker | Docker Logs | Collect Docker Logs on Filesystem |
| 20 | Communication | AnyDesk Logs | Collect AnyDesk Logs |
| 21 | Communication | Teamviewer Logs | Collect Teamviewer Logs |
| 22 | Communication | Discord Desktop Cache | Collect Discord Desktop Cache |
| 23 | Communication | Splashtop Mac Logs | Collect Splashtop Mac Application Logs |
| 24 | Utilities Artifacts | Parallels Logs | Collect Parallels Logs |
| 25 | Utilities Artifacts | Homebrew Logs | Collect Homebrew Logs |
| 26 | Antivirus Logs | Sophos Events Database | Collect Sophos Events Database |
| 27 | Antivirus Logs | Sophos Logs | Collect Sophos Logs |
Selection of Sigma rules for use as guides or templates
Detection of Sysinternals Usage
LSASS Dump Detection
Suspicious Add Scheduled Task From User AppData Temp
Disable UAC Using Registry
Windows Defender Service Disabled
PowerShell Get-Clipboard Cmdlet Via CLI
User Account Hidden By Registry
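For reference, a minimal Sigma rule in the style of 'Disable UAC Using Registry' might look like this (an illustrative sketch based on the public Sigma rule format, not the exact rule shipped with AIR):

```yaml
title: Disable UAC Using Registry
status: experimental
description: Detects EnableLUA being set to 0, which disables User Account Control
logsource:
  product: windows
  category: registry_set
detection:
  selection:
    TargetObject|endswith: '\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA'
    Details: 'DWORD (0x00000000)'
  condition: selection
level: high
```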
Selection of YARA rules for use as guides or templates
Find by Name
Find by Extension
Find by Content
Find by Hash
Find by Size
Find by Size range
Find by Location
Find PE (portable executable) files only
Find PKZIP files only
Find by Hash with Size filter
Find Process by Name
Find String in Memory
Find Process by Command line
Find Malware domain
Find Byte pattern
Find String
Find Malware domain
Find Byte pattern
Find XOR pattern
Find Base64 pattern
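As an illustration of how small these templates can be, a 'Find by Content' rule might look like the following (an illustrative sketch; the string is a placeholder to replace with your own indicator):

```yara
rule find_by_content
{
    strings:
        $needle = "malicious-marker" ascii wide nocase

    condition:
        $needle
}
```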
interACT has been built into AIR specifically for DFIR. The full list of current commands can be displayed by typing ‘help’ at the command prompt; they are listed below, following an important 'hint':
cat: Display the contents of a file.
cd: Change the current working directory.
curl: Make HTTP requests.
del, delete, or rm: Delete a file or folder.
dir or ls: List the files and folders in a chosen directory.
exec or execute: Execute a process on the asset with the native shell and return results via stdout/stderr.
find: Search for a file or directory.
get: Pull a file from the asset down to the console.
hash: Display the hash value of a file.
head: Display the first 10 lines of a file.
help: Display help messages and the switches that can be applied to the commands available at your current position.
hex: Display the hex-encoded output of the first 100 bytes of a file.
image: Read a disk or volume and write its contents out as a .dd file. This can also be done from the UI but remains in interACT for those who prefer to image from the command line.
kill: Terminate a process.
mkdir: Make (create) a directory.
osquery: Run osquery queries within the interACT session.
pslist: Display the running process list.
put: Push a file from the library to the asset.
pwd: Display the present working directory.
volumes: List the mounted volumes.
yara: Scan files or processes with YARA rules.
zip: Compress or decompress a file or folder.
From AIR v4.5, users can speed up workflows by using the following flags with files they want to download using the ‘get’ command in interACT:
Compression: ‘-zip’
Password protection: ‘-zip-password’
File name change: ‘-name’
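A hypothetical session line combining these flags might look like the following (the file path, password, and exact flag placement are illustrative assumptions, not documented syntax):

```
get C:/Windows/Temp/report.txt -zip -zip-password infected -name report-evidence
```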
BEWARE! Using zip -p on machines monitored by EDR can trigger alerts due to its association with suspicious activities like encryption or data exfiltration.
EDRs often flag or block such commands, log passwords exposed in plaintext, and create compliance challenges.
Binalyze AIR’s interACT offers a secure alternative for file handling and remote actions without relying on these risky commands. To ensure smooth operations, AIR users should work with their security teams to get AIR executables whitelisted in their EDR. This prevents unnecessary alerts and guarantees uninterrupted, secure workflows during investigations.
Introduction
In Digital Forensics and Incident Response (DFIR), PowerShell has become a powerful tool for investigators and analysts. Sometimes overlooked is its compatibility with AIR's interACT, which provides a true cross-platform remote shell for Windows, Linux, and macOS. This KB article aims to shed light on how users can leverage PowerShell within interACT to execute cmdlets and perform a variety of operations.
Why is this Important?
Many DFIR investigators rely on PowerShell (and Python) as their primary scripting and remediation tools. However, newcomers to AIR may assume that interACT is exclusively tailored for Linux, which is not the case. interACT is a versatile platform, and certain commands are available to users of both Windows and UNIX-like operating systems.
Executing PowerShell in interACT
PowerShell can be executed in interACT through several methods. Here, we'll explore three basic ways to run PowerShell commands:
Using the 'exec' Command
The 'exec' or 'execute' command in interACT serves as a gateway to run PowerShell commands. This versatile command allows DFIR practitioners to integrate PowerShell into their workflows seamlessly. Below are examples of how to use 'exec' with PowerShell:
This command executes a simple PowerShell 'whoami' cmdlet, displaying the currently logged-in user.
In this instance, 'exec' invokes the 'Get-ScheduledTask' cmdlet, providing insights into scheduled tasks on the system.
The 'exec' command facilitates the removal of a file ('example.txt') using the 'Remove-Item' cmdlet from a specified path.
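Since the original examples were shown as screenshots, the three commands above can be sketched as interACT session lines (the file path in the last example is illustrative):

```
exec powershell.exe whoami
exec powershell.exe Get-ScheduledTask
exec powershell.exe Remove-Item -Path 'C:/Users/Public/example.txt'
```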
When running exec commands in interACT, please note that commands requiring additional user input (e.g., Get-CimInstance prompting for a ClassName) may not display the prompt dynamically during execution. Instead, interACT will continue running and appear to "hang" until the process times out or completes.
To avoid this, we recommend:
Modify the Command: Specify all required parameters directly in the command to prevent prompts. For example:
exec powershell.exe Get-CimInstance -Namespace 'root\SecurityCenter2' -ClassName "YourClassName"
Test Commands Locally First: Run the command in a native PowerShell console to ensure all required inputs are included before executing it in interACT.
We are aware of this behavior and are continuously working to improve user experience.
The -NonInteractive PowerShell Flag in interACT
When using PowerShell commands in scripts or automated workflows, you may encounter scenarios where PowerShell expects user input. This can disrupt execution, especially in non-interactive environments such as Binalyze AIR's interACT automation. To address this, PowerShell offers the -NonInteractive flag.
What is -NonInteractive?
The -NonInteractive flag is a command-line option for powershell.exe that instructs PowerShell to operate in non-interactive mode. When this mode is enabled, PowerShell does not prompt for user input and will terminate the script or command if user input is required.
This feature is particularly useful when running commands in environments where no user interaction is possible or desirable, such as during forensic investigations or automation tasks initiated via interACT.
Example Use Case in interACT
Here’s an example of how the -NonInteractive flag can be applied within interACT:
powershell.exe -NonInteractive Get-CimInstance -Namespace 'root\SecurityCenter2'
Explanation:
powershell.exe: The executable for running PowerShell commands.
-NonInteractive: Ensures the command runs without expecting user interaction.
Get-CimInstance -Namespace 'root\SecurityCenter2': Retrieves information from the specified namespace.
This command is designed to collect system security information without risking a prompt for user input that could interrupt execution.
Using the -NonInteractive flag in Binalyze AIR's interACT provides the following advantages:
Seamless Automation: Prevents disruptions in workflows caused by unexpected prompts.
Increased Reliability: Ensures consistent execution of PowerShell commands, even in headless or remote environments.
Enhanced Efficiency: Minimizes delays during investigative or forensic operations.
Troubleshooting Tips
If a command using -NonInteractive fails:
Check the command syntax for errors.
Ensure the command does not inherently require user input.
Review interACT logs for additional context on the failure.
For more details, refer to the official PowerShell documentation on about_PowerShell_exe.
1). Here are some additional PowerShell commands that can be invaluable in cyber investigations:
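The original example was shown as a screenshot; a minimal sketch using the standard Get-Process cmdlet (the sorting and column selection are optional additions) is:

```powershell
# List the ten most CPU-intensive processes with their IDs and binary paths.
Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 Name, Id, CPU, Path
```

Inside interACT, this would typically be wrapped as exec powershell.exe -NonInteractive "Get-Process ...".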
This command retrieves information about running processes, which can be critical for understanding system activity.
2). You can query specific log entries within a shorter time frame. Here's an example to retrieve Security log events from the last 24 hours:
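The command itself was shown as a screenshot; a hedged reconstruction consistent with the parameter breakdown that follows is:

```powershell
# Query the Security log for events from the last 24 hours, oldest first,
# capped at 100 results. The FilterHashtable keys are standard Get-WinEvent usage.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; StartTime = (Get-Date).AddDays(-1) } -MaxEvents 100 -Oldest
```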
In this command:
-MaxEvents 100 limits the query to the most recent 100 events, which should make the query faster.
-Oldest ensures that the query starts with the oldest event, which is from the last 24 hours in this case.
You can adjust the -MaxEvents value to retrieve a specific number of events or omit it to get all events from the last 24 hours. This command should provide a quicker response with a smaller dataset.
3). Here's an example of a simple PowerShell command that retrieves information about the local computer's operating system:
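The command was shown as a screenshot; a sketch consistent with the description that follows (the Select-Object projection is an optional addition) is:

```powershell
# Retrieve operating system details from the local machine via CIM.
Get-CimInstance -ClassName Win32_OperatingSystem |
    Select-Object Caption, BuildNumber, RegisteredUser, SerialNumber, Version
```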
This command uses the "Get-CimInstance" cmdlet to retrieve information about the local computer's operating system. It should execute quickly and provide details about the operating system on the machine where it's run along with other information such as Build Number, Registered User, Serial Number, and Version.
By following these simple examples, users can harness the capabilities of PowerShell within interACT for DFIR investigations and operations. interACT's compatibility across different platforms ensures that investigators can seamlessly incorporate PowerShell into their toolkit, expanding their capabilities and efficiency in digital forensics and incident response.
Using this integration, users can trigger webhooks from chat windows with slash commands.
Visit the Webhooks page in Binalyze AIR,
Click the "+ New Webhook" button in the upper right corner,
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.),
Select "Mattermost: Generic Mattermost Webhook Parser" as the parser for this webhook,
Select an Acquisition Profile to run when Mattermost activates this webhook,
Select the Ignore option or leave it at its default value (defaults to 24 hours for recurrent alerts on a single endpoint),
Provide other settings such as Evidence Repository, CPU Limit, and Compression & Encryption, or let AIR configure them automatically based on the matching policy,
Click the "Save" button.
Open the dropdown menu on the left pane and click on Integrations.
Select "Slash Commands" and click on "Add Slash Command" button.
Fill in the text boxes as follows:
Title: Binalyze AIR Acquisition
Description: You can start an acquisition task in the specified endpoint by using this command.
Command Trigger Word: Type a trigger word that can easily relate to the specified acquisition profile. For example: /air-acquisition-full
Request URL: Webhook URL that you obtained from AIR-Server.
Request Method: POST
Response Username: BinalyzeAIR
Response Icon: Leave Blank.
Autocomplete: Selected
Autocomplete Hint: [Endpoint Hostname]
Autocomplete Description: Provide the hostname of the endpoint.
Click save.
Mattermost will provide a Token to authenticate the slash command in AIR-Server. Click done.
Go to a channel and press "/" for available commands.
Type /air-acquisition-full [ENDPOINT HOSTNAME].
For example:
/air-acquisition-full SampleDummyHostForTest
Purpose-Built, Cross Platform Remote Shell for DFIR
In the dynamic landscape of Digital Forensics and Incident Response (DFIR), having the right tools can make all the difference. Enter interACT Remote Shell, a purpose-built solution that differentiates itself from the crowd. This page takes you on a journey through the unique features that make interACT an invaluable asset for DFIR professionals.
At its core, interACT is meticulously crafted for the intricate demands of Digital Forensics and Incident Response. It's not just another shell implementation; it's a specialized tool designed to meet the specific challenges faced by investigators and responders in the field.
Binalyze AIR’s interACT module is a comprehensive secure remote shell that is cross-platform and provides a standardized command set for Windows, macOS, and Linux to empower and greatly simplify the investigation process. Investigators and incident responders can connect to their assets easily by starting an interACT session via the AIR console.
When an interACT session is initiated, the AIR console connects to the asset in just a few seconds and provides a command line interface for investigators to begin their triage, mitigation, or other remediation actions.
Another very useful and unique feature of interACT is its ability to control access to features based on user permissions level. This allows DFIR team leaders and managers to create appropriate access profiles to match the experience and ability of each team member.
DFIR environments are diverse, with Windows, Linux, and macOS systems. interACT simplifies operations by supporting cross-platform commands, ensuring a consistent and efficient experience across different operating systems.
interACT’s Library feature allows DFIR managers and team leaders to upload standardized investigation assets like scripts and toolkits, making it easy for individual analysts to utilize them with a single click during their investigation.
Repetitive tasks are an inevitable part of DFIR. With interACT's library, you can deploy your favorite scripts effortlessly. This not only saves time but also ensures consistency in your investigative processes.
Collaboration is at the heart of effective incident response. With interACT, you can attach sessions to specific cases, promoting seamless collaboration among team members. This feature streamlines communication and ensures everyone is on the same page during investigations.
Also, by providing access to the Library of approved assets, a more uniform investigative process across the whole team can be defined and encouraged.
Another valuable feature of interACT is the full auditing and logging capability. This not only enhances visibility but also facilitates compliance with rigorous security standards. Every command used and response received is logged in a real-time interACT session report. Additionally, if any files are transferred between the analyst and the asset, these are logged, including their hash values.
interACT has three levels of audit:
Firstly, the interACT session log is generated as a Case Report and saved as a Task immediately after the session is closed.
Next is the Global Audit Log, which cannot be purged.
Thirdly, the user can export the interACT audit logs to their Syslog server for analysis.
interACT is a powerful tool, so this comprehensive auditing capability provides peace of mind that, should you need to demonstrate exactly what happened during a remote shell session, you can do just that.
The interACT command-line parser uses Unix-like command-line parsing methods because of the libraries it is built on and the absence of Windows-specific parsing libraries. As a result, a Windows user must write a del command like this:
del C:/xyz/abc.txt # use forward slashes
del 'C:\xyz\abc.txt' # within single quotes
The following is currently invalid and will likely remain invalid, due to Windows' non-standard command-line parsing and character escaping.
del C:\xyz\abc.txt # Invalid
del "C:\xyz\abc.txt" # Invalid
interACT provides peace of mind by offering individual privileges for command sets. This fine-grained control allows you to balance the need for access with the imperative of maintaining a secure environment.
In conclusion, interACT Remote Shell is not just a tool; it's a game-changer for DFIR professionals. Its purpose-built design, user privilege customization, cross-platform compatibility, Syslog integration, script deployment capabilities, collaborative features, and emphasis on individual privileges make it a versatile and indispensable asset in the arsenal of any cybersecurity expert.
interACT has an imaging command with several options/switches to allow users to read a disk or volume and write its contents out as a .dd file. As seen on the previous page, this can also be done from the AIR UI but remains here in interACT for those who prefer to image from the command line.
Here is an example of an imaging command in its simplest form:
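Reconstructed from the breakdown that follows (the original was shown as a screenshot), the command is:

```
image -i E: -o OutputFolder2
```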
In this command, the -i flag is used to specify the input source for the image command.
Here's a breakdown of the command:
image: the command used to create an image (a copy) of a disk or volume.
-i E: specifies the input source for the image creation process. In this case, E: is a disk or volume identifier on the system, indicating that the image creation process should target the contents of the volume mounted at drive letter E:.
-o OutputFolder2 specifies the output destination for the image file. The image file generated by the command will be stored in the OutputFolder2 directory.
To inspect the results of the command shown above (image -i E: -o OutputFolder2), we can navigate to the folder using interACT and list the contents as shown below:
In this case, we see that there are two image chunks, image.001.zip and image.002.zip, along with a file named metadata.yml. This file exists in your output folder even when you use the AIR UI to image a disk or volume.
This metadata file can be read in the shell with the 'cat' command. It provides information about your image including the source, imaging start and end times, size, and hash values:
From time to time, all imaging tools will have issues with areas of the disk that cannot be read. In such cases, AIR will report errors in the metadata.yml file, and they will be recorded as shown below:
This imaging metadata report outlines the process and outcome of an imaging operation carried out in AIR. The report provides details about the operation, including the source, target, data transfer metrics, and errors encountered. Let's break down the key parts and interpret the errors mentioned in the report:
Hostname: Win10-002 indicates the machine name where the operation was performed.
Source: '\\.\E:' shows that imaging was done from a device mounted at E: (likely a disk drive).
Target: C:\Users\OutputFolder2 is where the imaged data was written.
StartTime: The operation started on March 19, 2024, at 19:58:21 local time.
Duration: It took approximately 7.73 seconds to complete.
Compression: Enabled, indicating the data was compressed during the imaging process.
Encryption: Not used during this imaging operation.
BytesRead and BytesWritten: Both are 1,073,737,728 bytes, indicating that a bit over 1 GB of data was read from the source and written to the target.
NumberOfChunks: 2 chunks of data were processed, aligning with the bytes read/written and chunk size.
ChunkSizeInBytes: Each chunk was 536,870,912 bytes (512 MiB), which fits the total data size and explains why two chunks were necessary.
ReadDuration and WriteDuration: Reading took under half a second, whereas writing took the majority of the operation time (about 7.25 seconds).
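The chunk count follows directly from these numbers. A quick ceiling-division check, using the values taken from the report above:

```shell
bytes_read=1073737728   # BytesRead from the report
chunk_size=536870912    # ChunkSizeInBytes (512 MiB)
# Ceiling division: a final partial chunk still occupies a full chunk slot.
chunks=$(( (bytes_read + chunk_size - 1) / chunk_size ))
echo "$chunks chunks"   # matches NumberOfChunks in the report
```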
The ReadErrorTable section is particularly noteworthy, as it outlines issues encountered during the read operation:
Errors Listed: Two errors, error-1 and error-2, were encountered during the imaging process.
Regions Affected:
The first error occurred at the very beginning of the read operation (Offset: 0), affecting 1,048,576 bytes (1 MB).
The second error affected the third 1 MB segment of data (Offset: 2,097,152); the intervening second 1 MB segment is notably absent from the errors.
The presence of read errors in specific regions suggests issues with the source device at those locations. This could be due to bad sectors, physical damage, or corruption within the disk's storage.
The operation continued despite these errors, which is common in forensic imaging processes where the goal is to recover as much data as possible, even in the presence of damaged or inaccessible areas.
The absence of errors for the second 1 MB segment (from 1,048,576 to 2,097,152 bytes) indicates that not all regions of the source had issues, highlighting the localized nature of the problems.
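The offset arithmetic above can be sketched as follows. The two entries are transcribed from this report; the field layout and output format are purely illustrative, so check your own metadata.yml for the exact keys:

```shell
# Convert a ReadErrorTable entry (offset, length) into the affected byte range.
describe_error() {
  offset=$1; length=$2
  echo "bytes ${offset}-$(( offset + length - 1 )) (${length} bytes) unreadable"
}

describe_error 0 1048576        # error-1: the first 1 MB of the source
describe_error 2097152 1048576  # error-2: the third 1 MB segment
```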
The Compare feature enables proactive forensics through baseline analysis, allowing investigators to focus on forensic evidence from the earliest stages of an investigation. Using a patent-pending approach, it identifies and highlights forensic artifacts—added, modified, or deleted—between asset snapshots.
This analysis, completed in just 5 seconds, enhances security by addressing vulnerabilities before they can be exploited, without disrupting ongoing operations. The Compare feature supports both standard and offline acquisitions, providing detailed metadata to help investigators understand potential security risks comprehensively. Compare analysis is performed directly on the Console, eliminating the need for direct access to assets.
The scope of baseline analysis is strategically based on areas commonly abused by attackers. These include:
AutoLoadedProcs
System
System
ChromeExtensions
CronJobs
NetworkAdapters
DiskEncryptions
DNSResolvers
Hosts
ETCHosts
Hosts
AutorunsServices
ETCProtocols
IPRoutes
AutorunsRegistry
ETCServices
IPTables
AutorunsScheduledTasks
GatekeeperApps
KernelModules
AutorunsStartupFolder
InstalledApps
Mounts
InstalledApplications
KextInfo
NetworkInterfaces
Drivers
LaunchdOverrides
SystemArtifacts
FirewallRules
SipStatus
Users
SysExtInfo
Select the Asset for analysis.
Initiate Compare Task:
Navigate to Compare under Asset Actions.
Specify the Acquisitions to Compare
Specify two acquisitions to compare or create a new baseline by clicking "Acquire Baseline".
Review the Results
Once the task is complete, review the results provided by Baseline Analysis with Compare.
Explore the fine-grained details on the property level.
Integrate with Investigations:
Leverage the insights gained from Baseline Analysis with Compare to inform and enhance ongoing digital investigations.
By incorporating Baseline Analysis with Compare into your DFIR process, you empower your team with a proactive and efficient approach to identifying and mitigating potential security risks. This feature is a valuable asset in maintaining a robust cybersecurity posture.
Don't hesitate to contact our dedicated support team for any further assistance or inquiries.
How to create timelines for your investigations?
In the intricate world of digital investigations, time is often of the essence. Timelining has been one of the most critical and time-consuming parts of digital forensic investigations. Enter Timeline Analysis, a feature designed to revolutionize the way investigators navigate through evidence and collaborate seamlessly. On this page, we'll explore the key functionalities that make Timeline Analysis a game-changer for accelerating investigations.
The traditional way of creating timelines is to collect evidence, parse it, and combine the results using CSV files. Time is a critical factor in investigations, and with the 'One-click' Timeline creation feature, investigators can initiate and collaborate on timelines with just a click. This not only expedites the process but also facilitates remote and multi-user collaboration within a single timeline.
AIR comes to the rescue to solve this problem. You can easily create timelines for multiple assets in parallel and see the results on a collaborative, web-based user interface in which you can tag/flag each piece of evidence.
Timelines can be created from a single asset and can be easily enriched using additional evidence such as:
Additional Assets
CSV Files
Milestones
Off-Network Acquisitions
All the flagged/tagged evidence is listed in the "Flagged" section, which makes it easy to create reports before finalizing an investigation.
Flexibility is at the core of effective investigations. With Timeline Analysis, investigators can add more assets at any time to an existing timeline, creating what we like to call 'super-timelines.' This dynamic approach enables the consolidation of diverse assets into a comprehensive timeline for a holistic view of the investigation.
Existing and new Timelines can be created by selecting "More" from the Main Menu and then "Timelines".
To create a new Timeline, select the "+Add New" button at the top of the page:
The new Timeline dialog then gives you the option to 'Create with selected assets' or 'Create an empty timeline and add evidence later'.
You can now search for and select the assets desired for the Timeline:
Having selected the assets to include in the Timeline, you now define the task by:
Giving the Timeline a name.
Allocating it to a Case.
Selecting a Timezone.
Providing a description (optional).
Timeline Analysis goes a step further by allowing the import of offline asset acquisitions or CSV datasets into the same timeline. This ensures that investigators can amalgamate a wide range of data sources, enriching the investigative process.
AIR now presents you with three options for adding data to your new Timeline:
Add an asset
Add an off-network asset
Import a CSV file
While Timeline Analysis presents a user-friendly interface, it is supported by a powerful evidence acquisition mechanism behind the scenes. This mechanism selectively includes 'timestamped evidence,' ensuring a concise and relevant timeline. By default, this includes:
All evidence with a timestamp property
Browsing history
AMCache
SRUM data
Timeline Analysis introduces the concept of multiple flags for evidence items. Investigators can flag items to highlight their significance, and all flagged items are conveniently listed in the 'Flagged Evidence' section. This section can be filtered, providing a focused view of critical evidence.
Investigations are often marked by significant events, and Timeline Analysis acknowledges this by allowing investigators to manually insert 'milestones'. These milestones serve as markers for noteworthy occurrences during the investigation.
In conclusion, Timeline Analysis is not just a feature; it's a comprehensive solution for investigators seeking precision, flexibility, and collaboration in their digital investigations. With 'One-click' Timeline creation, the ability to build 'super-timelines,' integration of diverse data sources, manual milestones, streamlined reporting, and precise flagging, investigators can confidently navigate the complexities of digital evidence.
How to integrate AIR for a fully automated Incident Response
Setting up Okta for AIR (Available from AIR 4.1)
Sign in to the Okta Admin Dashboard.
Click the “Applications” button in the left menu.
Click Create App Integration.
Select SAML 2.0 as a sign-in method and click the “Next“ button
Name your application, optionally upload a logo, and click the “Next” button.
Enter your domain name followed by this callback path: /api/auth/sso/okta/callback
For example: https://<your-domain-name>/api/auth/sso/okta/callback
Fill in the Attribute Statements section as follows:
All fields are case-sensitive. Make sure all of them are filled correctly.
On the next page, click the first option, and then click the “Finish” button.
Go to the “Profile Editor” page under the “Directory“ section and click the name of the newly created app.
In the “Attributes” section, click the “Add Attribute” button.
Select “string array“ as the data type.
Enter a name and description for the attribute.
Enter “roleTags“ as the variable name.
Click the “Define enumerated list of values“ checkbox.
Click the “Attribute required“ checkbox.
Give your new role a display name, and in the “Value” field enter the corresponding “Tag” of the role you want to map within the Binalyze AIR Console. For example, “global_admin” is the tag of the Global Admin role in the Binalyze AIR Console.
Then click save.
Navigate back to the “Applications“ page. Click the name of the app. Then go to the “Assignments” tab.
Click the “Assign to People” button under the “Assign“ dropdown.
Click the “Assign” button next to the user you want to assign.
Leave the user name field as is, select the user's roles, and click the “Save and Go Back“ button.
Go to the “Sign On” tab and click “More Details“
Sign in to the Binalyze AIR Console.
Navigate to the “Settings” page, then click the “Security” section.
Enable Okta by clicking the switch button.
Fill in the required fields using the values from the “Sign On” tab in Okta:
Entry Point: Okta Sign on Url
Issuer: Okta Issuer
Cert: Okta Signing Certificate
Click Save settings
The “Sign in with OKTA” button should appear on the Binalyze AIR Console login page. Once you click this button, you will navigate to the Okta login page to authenticate your access. Once you are authenticated, you will be redirected back to the AIR console.
In AIR, webhooks act as triggers that enable integration with other security tools, such as SIEM, SOAR, or EDR systems. They allow AIR to automatically initiate evidence collection, analysis, and presentation of findings in response to alerts received from these tools.
A trigger is the combination of a parser, an acquisition profile, and a destination for saving the collected evidence (either local or remote options are available).
Users access webhooks via the Integrations button in the Main Menu and by selecting Webhooks from the Secondary Menu. To create a new Webhook select +Add New:
Triggers are basic REST endpoints that can be called via HTTP GET or POST methods
Each trigger
Starts with the AIR Console address (AIR-ADDRESS)
Has a name that makes it easy to remember (TRIGGER-NAME)
Has a security token (TRIGGER-TOKEN) attached to it that can be regenerated when needed
Optionally has an Endpoint Identifier, which can be either the hostname or the IP address of the endpoint the trigger is being called for
GET Triggers expect this information in the URL
POST Triggers extract this information from the Webhook Payload
To make it easier to integrate with any trigger source, AIR provides two alternative methods of receiving endpoint information (name or IP address):
URL Parser (HTTP GET)
Webhook Parser (HTTP POST)
This method requires the trigger source to provide an endpoint name or IP address directly in the URL.
Below is an example GET request and response for collecting "Browsing History" from an endpoint with the name "JohnPC".
Even without using a SIEM/SOAR, the above URL can be used for starting an acquisition task simply by:
Visiting it with a web browser,
Adding it to the click action of an HTML button in your case management alert reports,
Creating a simple script for making a GET request to this address.
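As a sketch, such a script only needs to assemble and request the trigger URL. Every value below is a placeholder, and the URL layout itself is illustrative; copy the real link from the trigger's entry in the AIR console:

```shell
AIR_ADDRESS="air.example.com"    # placeholder AIR-ADDRESS
TRIGGER_NAME="browsing-history"  # placeholder TRIGGER-NAME
TRIGGER_TOKEN="TRIGGER-TOKEN"    # placeholder TRIGGER-TOKEN
ENDPOINT="JohnPC"                # endpoint hostname or IP in the URL

# Assemble the GET trigger URL (illustrative shape only).
url="https://${AIR_ADDRESS}/api/webhook/${TRIGGER_NAME}/${TRIGGER_TOKEN}?endpoint=${ENDPOINT}"
echo "$url"
# curl -fsS "$url"   # uncomment to actually start the acquisition
```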
Webhook parsers require the trigger source to provide the endpoint information inside a JSON payload which is POSTed to the trigger.
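For illustration, the ELK watcher example later on this page POSTs the endpoint name as a JSON array. A minimal equivalent with curl, with a placeholder URL and token, might look like this; the payload layout depends on the parser you selected:

```shell
WEBHOOK_URL="http://air.example.com/api/webhook/NAME"   # placeholder webhook URL
TOKEN="TRIGGER-TOKEN"                                   # placeholder token
body='["JohnPC"]'   # JSON payload carrying the endpoint name
echo "$body"
# curl -fsS -X POST -H "Content-Type: application/json" \
#      -d "$body" "${WEBHOOK_URL}?token=${TOKEN}"
```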
Each created trigger contains a dedicated security token that can be revoked at any time.
Once you regenerate a security token, all previous integrations using the old token will start receiving HTTP 401 (Unauthorized) responses.
To begin integrating Azure SSO with Binalyze, you'll first need to register a new application in Azure Active Directory (AD). This process will create a unique identity for your application, enabling secure communication with Azure services.
Go to Manage > App registrations, click on New registration, and provide a name for your application
Select Web, and enter https://[AIR_CONSOLE_ADDRESS]/api/auth/sso/azure/callback in the Redirect URI field, replacing the [AIR_CONSOLE_ADDRESS] part with your instance's address.
Click Register to complete the registration process.
After registering the application, navigate to the Overview section, and copy the Application (client) ID and Directory (tenant) ID. You will need to input these values into the Binalyze AIR Console.
Once your application is registered, you need to configure essential settings and permissions in Azure AD. This includes creating secrets and setting up API permissions to allow your application to interact securely with Azure resources.
In the left-hand panel, go to Certificates & Secrets.
Click New client secret, provide a description, set the expiration period, and click Add.
Copy the value of the client secret and store it securely as it will be required later. You will need to input this value into the Binalyze AIR Console in the Client Secret field.
Navigate to API permissions and ensure that the profile permission is selected.
If it's not present, click + Add permission, select Microsoft Graph, choose Delegated permissions, toggle profile, and click Add permissions.
If you have users with an empty ‘email’ field, AIR can use UPN to identify users. You can follow the steps below to use UPN as an identifier for users without the ‘email’ field:
Navigate to ‘Token configuration.’ If ‘upn’ is not in the list, click on the ‘Add optional claim’ button. After selecting the ‘ID’ token type, tick ‘upn’ and click on the ‘Add’ button.
Go to App roles within the Azure AD application settings, click + Create app role, provide a name for the role, select Users/Groups for allowed member types, and give the role a description.
Enter the corresponding "Tag" of the role to be mapped within the Binalyze AIR Console under the Value field (e.g., use the tag "global_admin" for the Global Admin role).
You can create more than one role in Azure SSO according to your needs. When doing so, make sure the “tag” value in Binalyze and the “value” in the Azure app are the same.
With your application configured, the next step is to manage the users and groups that will have access to it. Assign roles and permissions to the appropriate users and groups as follows:
Return to the Microsoft Entra ID Directory, select Enterprise applications, filter by the application name, and click on it.
In the left-hand panel, select Users and groups, click + Add user/group.
Choose the users/groups and click Select.
Choose the roles to assign and click Select.
Assign selected user(s) to the selected role by clicking Assign.
After configuring your application in Azure, you must enable and configure SSO in the Binalyze AIR Console to allow users to authenticate using Azure AD credentials.
Sign in to the Binalyze AIR Console.
Navigate to Settings, go to Security, and find the SSO section.
Enable Azure ID by toggling the switch, fill in the required fields with the Tenant ID, Client ID, and Client Secret from the Azure application registration, and click Save.
The final step involves verifying that the SSO integration is working correctly. This ensures that users can log in to the Binalyze AIR Console using their Azure AD credentials without any issues.
After saving, check that a Sign in with Azure AD button appears on the Binalyze AIR Console login page.
Click the Sign in with Azure AD button to be redirected to the Microsoft login page for authentication.
Upon successful authentication, you will be redirected back to the AIR Console.
The Splunk Parser, which is provided out of the box, is a very basic example of this. After you add a trigger URL as a POST workflow action, whenever Splunk generates an alert for an endpoint, it POSTs JSON alert data containing the endpoint information as a nested property, which the trigger parser extracts. The parser then uses this information to start an acquisition on the endpoint automatically. See the documentation for more information.
You can contact us to request additional trigger parsers for major SIEM/SOAR/EDR products.
Access the Azure Portal, sign in using your credentials, and navigate to the Microsoft Entra ID Directory under the Azure Services section.
Integration of AIR with Cortex XSOAR is possible via a plug-in.
Step 1: Preparing API Token
Create a new API Token by clicking the Settings → API Tokens.
Give a Token Name.
Choose an expiration date.
Click Save and copy the token.
Sign in to Cortex XSOAR server.
Click “Marketplace” on the left bottom corner.
Search and install the Binalyze Integration to your instance.
Click “Settings” on the left bottom corner.
Find the installed integration and click “Add instance”.
Fill in the AIR Server URL and API Key, then click “Test”. When you see “Success”, Cortex XSOAR has established a test connection with the AIR Server.
Save and Exit.
Isolation
You can use the integration in Automations, Playbooks, or War Room.
To execute an isolation task, write the following command at the bottom of the page:
To execute an acquisition task, write the following command at the bottom of the page:
Integration of AIR with IBM QRadar is possible via a feature called "Custom Actions".
When QRadar generates an alert for an incident, it runs a script provided in Custom Actions,
The properties of the alert alongside some fixed properties are then sent to the trigger URL provided in the bash script,
Upon receiving the URL request, AIR extracts the IP address or Hostname from the URL and automatically assigns an acquisition task to the endpoint in question. The acquisition profile that will be used for this task is provided when you create a trigger.
Create a script file with the contents below and save it as "air-trigger.sh"
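The original script contents are not reproduced here, so the sketch below is illustrative only: it assembles a trigger URL from the four Custom Action parameters defined later on this page and fires it with curl. The URL layout is a placeholder; for the real shape, use the trigger URL copied from AIR.

```shell
#!/bin/bash
# Illustrative air-trigger.sh sketch -- the shipped script may differ.
# QRadar passes the Custom Action parameters positionally, in the order
# they are defined: air_address, trigger_name, trigger_token, endpoint.
build_url() {
  # Placeholder URL layout; substitute the trigger URL copied from AIR.
  echo "https://$1/api/webhook/$2/$3?endpoint=$4"
}

if [ "$#" -eq 4 ]; then
  curl -fsS "$(build_url "$1" "$2" "$3" "$4")"
fi
```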
Visit the Triggers page in Binalyze AIR
Click the "+ New Trigger" button on the upper right corner
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.)
Select "QRadar Read Endpoint Name or IP Address from URL Path" as the parser for this trigger
Select an Acquisition Profile that will be used when this trigger is activated by QRadar
Select the Ignore option or leave it with its default value (defaults to 24 hours for recurrent alerts for a single endpoint)
Provide other settings such as Compression, Encryption, Evidence Repository to use or let AIR configure them automatically based on the matching policy
Click the "Save" button
Hover your mouse over the link below the Trigger name and click to copy
Go to QRadar Admin > Define Action > Add > Custom Action Define
In the "Edit Custom Action" dialog, upload the script file created in the step above
Select "Bash" as the Interpreter value
In the "Script Parameters" section
Leave "Parameter Name" empty
Select the "Fixed Property" radio button and leave the "Value" field empty
Do *not* check the "Encrypt Value" option
Click the "Add" button and add the parameters listed in the below table
Click Save
Name
Type
Value
air_address
Fixed Property
TYPE-AIR-ADDRESS
trigger_name
Fixed Property
TYPE-TRIGGER-NAME
trigger_token
Fixed Property
TYPE-TRIGGER-TOKEN
endpoint
Network Event Property
sourceip
Please provide the values in the order they are listed above.
This integration is built with the Watcher feature of ELK using sample data. To reproduce this watcher, customize its payload so that the endpoint IP or hostname is parsed correctly.
Visit the Webhooks page in Binalyze AIR,
Click the "+ New Webhook" button in the upper right corner,
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.),
Select "Elasticsearch Logstash Kibana: Generic Elasticsearch Logstash Kibana" as the parser for this webhook,
Select an Acquisition Profile when ELK activates this webhook,
Select the Ignore option or leave with its default value (defaults to 24 hours for recurrent alerts for a single endpoint),
Provide other settings such as Evidence Repository, CPU Limit, Compression & Encryption to use or let AIR configure them automatically based on the matching policy
Click the "Save" button
Visit <ELK_Instance URL>/app/management/insightsAndAlerting/watcher/watches. On the right, click "Create", then "Create advanced watch".
Add an action part to your watcher.
Change the following JSON:
Host: AIR Server address,
Port: AIR-Server port,
Path: The webhook full path,
Token: The token that you created in AIR Server.
Body: The body part must include either the endpoint hostname or endpoint IP. Mapping must be customized with the watcher payload itself.
{
  "trigger": { "schedule": { "interval": "30m" } },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": [ "*" ],
        "rest_total_hits_as_int": true,
        "body": { "size": 0, "query": { "match_all": {} } }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gte": 10 } } },
  "actions": {
    "binalyzeAIR_webhook": {
      "webhook": {
        "scheme": "http",
        "host": "air-server-url",
        "port": 80,
        "method": "post",
        "path": "/api/webhook/NAME",
        "params": { "token": "9236a8a1-ffb9-4521-9947-3f46548916c0" },
        "headers": { "Content-Type": "application/json" },
        "body": """["{{ctx.payload.endpoint}}"]"""
      }
    }
  }
}
You can simulate the POST request to verify that it works.
Please refer to the vendor's documentation for more information.
Integration of AIR with ServiceNow is possible via the feature called "Business Rules".
Visit the Webhooks page in Binalyze AIR,
Click the "+ New Webhook" button in the upper right corner,
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.),
Select "ServiceNow: Generic ServiceNOW Webhook Parser" as the parser for this webhook,
Select an Acquisition Profile when ServiceNow activates this webhook,
Select the Ignore option or leave with its default value (defaults to 24 hours for recurrent alerts for a single endpoint),
Provide other settings such as Evidence Repository, CPU Limit, Compression & Encryption to use or let AIR configure them automatically based on the matching policy,
Click the "Save" button,
Hover your mouse over the link below the Webhook name and double-click to copy.
Open the Business Rules under the System Definitions and click New,
Give your new Business Rule a descriptive name, choose the table you want it to trigger on, and check the Advanced box.
Under the When option, choose after. You can use various conditions and filtering functions accordingly.
Click the Advanced tab and paste the following script, replacing <insert webhook URL> on the 5th line with the webhook link you copied in Step 1.
Click Submit on the top right.
Once you have set up the webhook, you can test the business rule based on the triggering conditions. Check the response body for the data being sent from ServiceNow.
Integration of AIR with Splunk is possible via a feature called "Workflow Actions".
When Splunk generates an alert for an incident, it sends a JSON payload to the URL provided in Workflow Actions,
The payload that is POSTed contains important information about the alert, such as the hostname, IP address, and other alert-specific details,
Upon receiving this JSON data, AIR parses the payload, extracts the IP address or hostname, and automatically assigns an acquisition task to the endpoint in question. The acquisition profile used for this task is the one provided when you created the trigger.
Visit the Triggers page in Binalyze AIR
Click the "+ New Trigger" button on the upper right corner
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.)
Select "Splunk: Generic Splunk Webhook Parser" as the parser for this trigger
Select an Acquisition Profile that will be used when this trigger is activated by Splunk
Select the Ignore option or leave with its default value (defaults to 24 hours for recurrent alerts for a single endpoint)
Provide other settings such as Compression, Encryption, Evidence Repository to use or let AIR configure them automatically based on the matching policy
Click the "Save" button
Hover your mouse over the link below the Trigger name and click to copy (see below)
Head over to Splunk and create a POST Workflow Action for your workflow
Provide the Trigger URL you copied above as the URI of the newly created Workflow Action,
Make sure you have provided the Host Name or IP Address in Post Arguments
At this point, whenever Splunk generates an alert for an endpoint, the information will be sent to AIR for it to automatically assign an acquisition task to the endpoint in question.
Integration of AIR with Wazuh is possible via the feature called "Integrations".
When Wazuh's configuration file contains the integration setting with the specified rule ID, Wazuh runs the defined script. That Python script sends the relevant information to AIR with a POST request.
Visit the Webhooks page in Binalyze AIR,
Click the "+ New Webhook" button on the upper right corner,
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.),
Select "Wazuh: Wazuh AIR Integration" as the parser for this webhook,
Select an Acquisition Profile when Wazuh activates this webhook,
Select the Ignore option or leave with its default value (defaults to 24 hours for recurrent alerts for a single endpoint),
Provide other settings such as Evidence Repository, CPU Limit, Compression & Encryption to use or let AIR configure them automatically based on the matching policy
Click the "Save" button,
Hover your mouse over the link below the Webhook name and double-click to copy (see below),
Open the ossec.conf file with a preferred text editor and add the following line to the end of the file before closing the ossec_config. The name must be precisely custom-air. For detailed information, please see the Wazuh Documentation.
Every time the relevant rule_id is triggered, a bash script named custom-air is executed. Create a file named custom-air in /var/ossec/integrations/ and paste the following script. For detailed information, please refer to the Wazuh Documentation.
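The wrapper script itself is not reproduced here. A minimal illustrative version, assuming the common convention that the wrapper simply hands off to the Python script of the same name beside it (the python3 binary name is also an assumption), could look like:

```shell
#!/bin/sh
# Illustrative custom-air wrapper sketch -- the script referenced by the
# Wazuh documentation may differ. It delegates to custom-air.py next to it,
# passing through the arguments Wazuh supplies.
DIR_NAME="$(cd "$(dirname "$0")" && pwd)"
TARGET="${DIR_NAME}/custom-air.py"

if [ -f "$TARGET" ]; then
  exec python3 "$TARGET" "$@"   # assumes python3 is on PATH
fi
```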
Create a Python script named custom-air.py in /var/ossec/integrations/ and paste the following script. The bash wrapper runs this Python script, which makes the request to the AIR server.
The scripts must be placed in /var/ossec/integrations, have the same name as indicated in the configuration block, contain execution permissions, and belong to the root user of the ossec group. Execute the following two commands:
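The two commands are not reproduced above; based on the requirements just stated (execute permission, root user, ossec group), they would be along the following lines. Adjust the file names if yours differ:

```shell
# Implied by the requirements above; run as root on the Wazuh manager.
for f in /var/ossec/integrations/custom-air /var/ossec/integrations/custom-air.py; do
  [ -e "$f" ] || continue   # guard so the loop is a no-op elsewhere
  chmod 750 "$f"            # execution permissions for owner and group
  chown root:ossec "$f"     # root user, ossec group
done
```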
Visit the Webhooks page in Binalyze AIR,
Click the "+ New Webhook" button in the upper right corner,
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.),
Select "Crowd Strike Webhook Parser" as the parser for this webhook,
Select an Acquisition Profile,
Provide other settings such as Evidence Repository, CPU Limit, Compression & Encryption to use or let AIR configure them automatically based on the matching policy.
Click the "Save" button,
Hover your mouse over the link below the Webhook name and double-click to copy
Go to the CrowdStrike Store, find the Webhook Plugin, and open it.
Click Configure and fill in the blanks:
Name: Give an explanatory name
Webhook URL: Paste the webhook you created earlier,
Click Notify On Configuration Failure and save the configuration.
Go to Fusion workflow,
Create a workflow or use an existing one,
Create a trigger, Add action
Choose action type: Notification
Choose the webhook name you created in the second step
Add Sensor Hostname to Data to Include
Save and exit.
This integration is built with a webhook connection of Sumo Logic SIEM.
Visit the Webhooks page in Binalyze AIR,
Click the "+ New Webhook" button on the upper right corner,
Provide a self-explanatory name (examples: RDP Brute Force Trigger, Phishing Detected Trigger, etc.),
Select "Sumo Logic: Generic Sumo Logic Webhook Parser" as the parser for this webhook,
Select an Acquisition Profile,
Provide other settings such as Evidence Repository, CPU Limit, Compression & Encryption to use or let AIR configure them automatically based on the matching policy
Click the "Save" button,
Hover your mouse over the link below the Webhook name and double-click to copy
On the left pane, click "Manage Data", then "Monitoring", and select "Connections".
Give the webhook a name,
Write a description (optional),
Paste the Webhook URL you copied in Step 1,
Type your payload*: ["{{ResultsJson.client_ip}}"]
Save and exit.
For more information, please refer to the vendor's documentation.
Windows: C:\Program Files (x86)\Binalyze\AIR\agent
macOS: /opt/binalyze/air/agent
Linux: /opt/binalyze/air/agent
htop (filtered for AIR)