Reducing The Indicators of Compromise (IOCs) on Beacon and Team Server
Most red teamers are already aware that evading detection is no longer as easy as it used to be. Security products such as Endpoint Detection and Response (EDR) have become more effective at detecting beacon activity. Even though static analysis checks can easily be bypassed by obfuscating the payload, heuristic and anomaly-based detection techniques are becoming better at spotting command and control activity. This is due to some unique characteristics of the beacon and team server, which can be fingerprinted not only by security products but also by an experienced threat hunter. This blog post highlights the different techniques we use to reduce the Indicators of Compromise (IOCs) related to our beacons and team servers and to increase the probability of bypassing protective measures.
The Cobalt Strike staging process has no security features and can easily be detected. It uses the same request format as Metasploit staging, so the stager can be retrieved with any valid checksum8 request. If you run your team server on the Internet, it won’t be long before you receive valid checksum8 requests:
Requests of this nature indicate that the team server is being scanned for stagers so that the payload configuration can be extracted. This is common practice for search engines such as Shodan. Once the actual payload is downloaded, the beacon configuration can be extracted using a known XOR key to confirm the existence of a Cobalt Strike Team Server.
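To illustrate how weak this scheme is, checksum8 can be reproduced in a few lines: the sum of the ASCII values of the URI (without the leading slash), modulo 256, must equal a magic value, commonly reported as 92 for x86 and 93 for x64 stagers per the Metasploit convention. The following sketch shows both validation and brute-force generation of a valid stager URI:

```python
import random
import string

def checksum8(uri: str) -> int:
    """Sum of the ASCII values of the URI (without the leading slash), mod 256."""
    return sum(ord(c) for c in uri) % 256

def generate_stager_uri(target: int = 92, length: int = 4) -> str:
    """Brute-force a random URI whose checksum8 matches the target value."""
    charset = string.ascii_letters + string.digits
    while True:
        uri = "".join(random.choice(charset) for _ in range(length))
        if checksum8(uri) == target:
            return uri

# "aaa9" is one of many URIs whose checksum8 is 92 (the x86 stager value)
print(checksum8("aaa9"))   # 92
print(generate_stager_uri(92))
```

Because anyone can generate such a URI just as easily, possession of the staging URL is effectively no barrier at all.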
The following is another example from C2IntelFeedsBot, which uses Censys Search 2.0 raw data and potentially uses the same method to find Cobalt Strike Team Servers on the Internet:
Threat hunters can also download the Cobalt Strike payload using the known stager URL and use a tool such as CobaltStrikeParser to extract the beacon config:
Even though Cobalt Strike has a feature to serve the stager from a custom URL, configured through its Malleable profile as shown below, this does not prevent a valid checksum8 request from being used to download and analyse the stager payload:
To prevent this, you need to disable staging completely in the Malleable profile (the host_stage option). However, if you still want to use a stager because the stageless payload is too large for your needs, this can be done with a custom stager.
A custom stager is easy to create but may expose your staging process to a similar issue; anyone with access to the staging URL can download the full stageless payload and analyse it. One option is to use an encrypted payload, with a key only known to the stager.
The following is an example of an AES encrypted staged loader, modified from the Sliver C2 framework wiki, which can be found here. Once executed, the stager downloads the encrypted stageless shellcode into memory, where it is decrypted and executed:
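Conceptually, the flow looks like the sketch below. Note this is an illustration only: the keystream is derived with SHA-256 from the standard library as a stand-in for AES-CTR, and the shared secret and placeholder shellcode are invented for the example; a real stager would use a proper AES implementation, as in the Sliver wiki example:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Counter-mode keystream derived with SHA-256 (stdlib stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:length]

def crypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Server side: encrypt the stageless shellcode before hosting it.
key = hashlib.sha256(b"stager-shared-secret").digest()  # secret embedded in the stager
shellcode = b"\x90" * 64                                # placeholder payload
hosted_blob = crypt(shellcode, key)

# Stager side: download hosted_blob, decrypt in memory, then execute.
recovered = crypt(hosted_blob, key)
assert recovered == shellcode
```

Without the embedded key, a scanner that downloads the hosted blob sees only high-entropy data and cannot parse a beacon configuration out of it.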
As the payload is encrypted, it is difficult for an EDR or a threat hunter to confirm whether it is malicious, and the beacon configuration can no longer be easily extracted, unlike with the default Cobalt Strike stager.
Submitting the encrypted stageless payload in the example above to VirusTotal returned zero detection results:
Hiding your Team Server
It is common for a red team operator or threat actor to hide the team server from the public view by using a redirector. There are many ways this can be achieved, some of the most popular methods use iptables or socat, as shown in the following examples:
socat TCP4-LISTEN:443,fork TCP4:[IP]:443
iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination [IP]:443
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -I FORWARD -j ACCEPT
iptables -P FORWARD ACCEPT
These methods may be sufficient to hide the location of a team server but would not stop a Cobalt Strike Team Server from being fingerprinted by security products or threat hunters.
JARM fingerprinting can be used to identify malicious C2 team servers, such as Cobalt Strike, and security vendors have implemented this technique in their products. It works by sending a series of specially crafted TLS requests to a server, then combining and hashing attributes of the responses to fingerprint the technology in use. We are not going to delve too deep into this topic, but if you are interested in understanding how it works in detail, see the research conducted by Salesforce.
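As a rough mental model (not the real algorithm), JARM sends ten differently crafted Client Hellos and reduces the attributes of each Server Hello into a single string: the real fingerprint is 62 characters, an encoding of the chosen ciphers and versions followed by a 32-character truncated SHA-256 of the extension data. The probe values below are invented for illustration:

```python
import hashlib

# Invented per-probe results: (cipher, tls_version, extension_data) taken
# from each Server Hello; JARM proper sends ten crafted Client Hellos.
probe_results = [
    ("c02f", "0303", "0000-0017-ff01"),
    ("1301", "0304", "002b-0033"),
    ("c02b", "0303", "0000-0010"),
]

def jarm_like_fingerprint(results):
    """Toy reduction: hex cipher/version codes first, then a truncated
    SHA-256 of the extension data (the real JARM appends 32 hash chars)."""
    ciphers = "".join(c + v for c, v, _ in results)
    exts = ",".join(e for _, _, e in results)
    return ciphers + hashlib.sha256(exts.encode()).hexdigest()[:32]

print(jarm_like_fingerprint(probe_results))
```

Because the responses depend on the server's TLS library and configuration, two boxes running the same stack produce the same fingerprint, which is exactly what makes the default team server stand out.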
Now back to our dumb redirectors using socat or iptables. Can these techniques evade JARM fingerprinting? The short answer is no: these methods only forward the HTTPS requests to the team server; they do not change the unique TLS responses it returns.
The following is the JARM signature of a Cobalt Strike Team Server running on Kali Linux and redirected by using either socat or iptables:
A quick search for this signature on Shodan revealed the same fingerprint found on multiple servers running Cobalt Strike Team Server:
The same JARM fingerprint can be found on VirusTotal, again indicating that a Cobalt Strike Team Server is hosted on this domain:
Now let’s try another redirector, this time using HAProxy as a reverse proxy. The reverse proxy sits in front of the team server and terminates TLS before making a new request upstream. As a result, the JARM fingerprint is produced from interactions with the reverse proxy instead of the team server. A similar result is produced if a Content Delivery Network (CDN) is used.
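A minimal HAProxy configuration of this kind might look like the following sketch; the certificate path and the backend address are placeholders for your own infrastructure:

```
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/redirector.pem
    default_backend teamserver

backend teamserver
    # TLS is terminated above; a fresh TLS session is initiated from the
    # proxy to the team server, so external scanners only ever see
    # HAProxy's TLS stack, not the team server's.
    server cs 10.0.0.5:443 ssl verify none
```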
The following JARM signature is returned by HAProxy when the team server is scanned:
A quick look at Shodan revealed the same fingerprint on multiple web servers but none were identified as Cobalt Strike Team Servers:
Another method of avoiding JARM fingerprinting is simply not using HTTPS for C2 communication. However, other communication channels, such as DNS, are not perfect either, as they can easily be detected. If you are looking for a more advanced DNS-based channel, such as DNS over HTTPS, for your C2 communication, we recommend looking at the brilliant work done by Austin Hudson called TitanLdr.
C2 tradecraft evolved rapidly when threat actors started using cloud services to blend in with legitimate traffic. The technique is known to bypass security products because cloud service domains used to host malicious infrastructure typically have good reputations and would not normally be blocked by organisations. We discovered that this method can also successfully bypass some EDRs that monitor C2 communications.
The open-source C3 framework, designed by WithSecure, is capable of simulating these behaviours using complex communication paths and it can also be integrated with current C2 frameworks such as Cobalt Strike. To understand how this framework works in detail, we recommend looking at the documentation here.
What we found interesting about this framework is that you can avoid direct communication between your payload and team server and reduce the number of IOCs from TLS fingerprinting. The following is an example of how we used the C3 framework to bypass some EDRs during our red team engagements.
Firstly, we need to enable the Cobalt Strike external C2 listener and turn on the connector to the team server from the gateway:
Now, connect the gateway to the Cobalt Strike external C2 listener:
As you can see on the C3 framework dashboard, the C3 gateway has successfully communicated with the team server:
The next step is to add a channel for our communications. There are many external channels we can choose from, such as Mattermost, Discord and Slack and internal channels such as MSSQL, LDAP and UNC Share File.
In this case, we use the UNC Share File method. If you are looking for a method that communicates via the Internet, you may want to use external channels:
On the C3 framework dashboard, we can see another connection successfully established through the gateway:
Now, let’s create a new relay and download the shellcode so we can obfuscate it and use our own loader to inject it. We would not recommend using the executable directly, as it would be highly likely to be detected by the EDR:
Lastly, execute the payload on the compromised host to activate the beacon communication to the team server through the gateway:
On the C3 framework dashboard, we can see the C2 communication is successfully established and passed through the gateway:
As you can see from the process monitor, the communication between the compromised host and the team server passed through the gateway using the UNC Share File located in the internal network, and no direct communication was made between the C3 agent and the team server:
JA3 and JA3S Fingerprint
Before their JARM discovery, Salesforce conducted research into fingerprinting the TLS negotiation between client and server. This research, known as TLS Fingerprinting with JA3 and JA3S, can also be used to detect malicious team servers such as Cobalt Strike. The difference is that JA3 and JA3S are passive techniques, while JARM is an active one.
It works as follows: the TLS client sends a Client Hello packet, whose contents normally depend on the packages and methods used to build the client application. The TLS server responds with a Server Hello packet, formulated based on the server-side libraries and configuration. Again, we will not cover this topic in depth, but if you are interested in understanding how the signatures work in detail, check the research conducted by Salesforce.
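The JA3 hash itself is simple to compute: the decimal values of the Client Hello's version, cipher suites, extensions, elliptic curves and point formats are joined with commas (multi-value fields with dashes) and MD5-hashed. A sketch, with invented field values:

```python
import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    """JA3: MD5 over the five comma-separated Client Hello fields, with
    multi-value fields joined by dashes (decimal values, in order seen)."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Hypothetical Client Hello field values, for illustration only:
print(ja3_hash(771, [49195, 49199], [0, 10, 11], [23, 24], [0]))
```

The real implementation also strips GREASE values before hashing; JA3S works the same way over the Server Hello's version, chosen cipher and extensions.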
Let’s run the team server and generate the beacon using Cobalt Strike. This time we will use a direct connection to the team server and observe the JA3 and JA3S signatures produced once our beacon is executed:
The JA3 signature produced, a0e9f5d64349fb13191bc781f81f42e1, and the JA3S signature produced, f176ba63b4d68e576b5ba345bec2c7b7, are both known Cobalt Strike signatures, as highlighted by The DFIR Report blog post here.
The following are the JA3 and JA3S signatures against the team server where iptables or socat was used for redirection, which produced the same result:
The following are the JA3 and JA3S signatures against the team server where HAProxy was used for redirection:
The same JA3 signature, a0e9f5d64349fb13191bc781f81f42e1, was produced again, but the JA3S signature, 61be9ce3d068c08ff99a857f62352f9d, is no longer identical to the previous two test cases.
If you observe the signatures produced across all the test cases, you will find that the JA3 client signature always returned the same value, while the server hash, JA3S, can return different values depending on whether a redirector is used and which type of redirector hides the team server. With HAProxy, the JA3S fingerprint is produced from interactions with the reverse proxy instead of the team server. A similar result is produced if a Content Delivery Network (CDN) is used to hide the team server.
A suspicious JA3 client signature alone should not be used to confirm Cobalt Strike beacon communication unless it is combined with the server-side JA3S signature, as similar requests from other applications can share the same JA3 signature. However, combining it with other IOCs may increase the confidence that activity is malicious.
Because Cobalt Strike is closed-source software, it is not possible to modify the TLS Client Hello produced by the beacon in order to change the JA3 client signature. If you suspect that the JA3 signature is the cause of detection and are interested in C2 frameworks that permit the JA3 hash to be spoofed, we recommend you look at the Merlin C2 framework:
As highlighted by Dominic Chell in the MDSec blog post here, the Cobalt Strike Team Server is based on NanoHTTPD, and multiple methods can be used to fingerprint it. Hiding the team server behind a reverse proxy and returning custom error messages when unhandled exceptions occur, or when specific response codes are returned, can hinder fingerprinting attempts. Another method is to use a Content Delivery Network (CDN), as the error messages are normally handled automatically by CDN providers.
Import hashing (imphash) is a technique initially used by Mandiant to track specific threat groups’ backdoors by analysing the imports used by portable executable (PE) files. These hashes are created based on the library and API names, and their specific order, within the executable. Because of this, different malicious payloads will share the same imphash if they import the same library and API names in the same order. We recommend reading the Mandiant blog post here if you want to understand in detail how these hashes are generated.
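The hash itself can be approximated with the standard library; the sketch below mirrors the widely used pefile algorithm (lower-cased "dll.function" pairs, DLL extension stripped, joined in import order and MD5-hashed), with an invented import list:

```python
import hashlib

def imphash(imports):
    """Approximation of pefile's imphash: ordered 'dll.function' pairs,
    lower-cased with the DLL extension stripped, joined with commas and
    MD5-hashed. Imports by ordinal are rendered as 'ordN'."""
    parts = []
    for dll, funcs in imports:
        lib = dll.lower()
        for ext in (".dll", ".ocx", ".sys"):
            if lib.endswith(ext):
                lib = lib[: -len(ext)]
                break
        for func in funcs:
            name = "ord%d" % func if isinstance(func, int) else func.lower()
            parts.append("%s.%s" % (lib, name))
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# Invented loader import table: the same imports in the same order always
# produce the same hash, regardless of how the payload is obfuscated.
loader_imports = [("KERNEL32.dll", ["VirtualAlloc", "CreateThread"])]
print(imphash(loader_imports))
```

Reordering or renaming a single import changes the digest completely, which is why loader reuse, not payload obfuscation, is what imphash catches.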
Many malware-hunting providers, such as VirusTotal, use the same technique to identify malware similarities within their databases. Most red teamers always obfuscate payloads but reuse shellcode loader code they are already comfortable with. This may allow a good threat hunter to identify the similarities between payloads using imphash. Even though it is possible to rearrange API calls so that the signature changes, as highlighted by Mandiant in their blog post, this may require a great deal of effort, especially during a red team engagement where time is limited.
Let’s upload the default Cobalt Strike beacon to VirusTotal and observe the generated imphash signature. Even though we would not recommend using the default Cobalt Strike loader, we use it here to demonstrate how a well-known loader’s imphash can be changed:
If we search the Imphash value on the MalwareBazaar website, the signature is clearly known to be for a Cobalt Strike beacon:
Submitting another Cobalt Strike beacon imphash signature we found online to the MalwareBazaar website revealed an even more convincing result, as shown below:
As the imphash is calculated from the import table information, we looked for tools that could manipulate the Import Address Table (IAT). This brought us to a tool called CallObfuscator. Even though the tool was designed to obfuscate the IAT rather than to change the imphash signature, it can trick some static analysis tools into generating imphash values based on the manipulated information stored in the IAT.
The following is part of the Cobalt Strike beacon’s import table when the binary is loaded into PE-bear:
Now, let’s use CallObfuscator to change the import table entry for VirtualAlloc to another Windows API, QueryThreadProfiling. This does not affect the execution of the payload, as the actual API, VirtualAlloc, is still called when the payload is executed:
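The effect on the imphash can be sketched with a simplified version of the algorithm; the import names below are illustrative:

```python
import hashlib

def imphash(pairs):
    """Simplified imphash: MD5 over the ordered 'dll.function' pairs."""
    return hashlib.md5(",".join(p.lower() for p in pairs).encode()).hexdigest()

original = ["kernel32.virtualalloc", "kernel32.createthread"]
# CallObfuscator rewrites the import *name* to a benign API; the real
# function is still resolved and called at runtime, so behaviour is
# unchanged while every imphash-style signature breaks.
obfuscated = ["kernel32.querythreadprofiling", "kernel32.createthread"]

print(imphash(original))
print(imphash(obfuscated))
assert imphash(original) != imphash(obfuscated)
```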
Once the modified binary is generated, we can see the import table entry for VirtualAlloc has been successfully manipulated:
Uploading the modified beacon to VirusTotal, we now see the manipulated API shown within the IAT and a different imphash signature produced:
Searching the newly generated imphash signature on MalwareBazaar returned no results:
Loading the modified beacon into Ghidra shows the modified function call, QueryThreadProfiling, instead of the actual function call, VirtualAlloc:
However, if we apply the same method to the raw shellcode and execute it using the binary emulator speakeasy, we see the actual function, VirtualAlloc, being called instead.
Memory scanning is one of the techniques used by EDRs to hunt for a beacon once it is loaded into memory; detection is normally performed by running YARA rules over memory regions. Even though it is possible to use a Malleable profile to remove beacon IOCs from memory by using strrep in the stage block, as shown in the following examples, this may not be sufficient to evade memory scanning:
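A stage block of this kind might look like the following fragment; the strrep values and option settings are illustrative rather than a recommended profile:

```
stage {
    set userwx "false";
    set obfuscate "true";

    transform-x86 {
        strrep "ReflectiveLoader" "";
        strrep "beacon.dll" "";
    }
    transform-x64 {
        strrep "ReflectiveLoader" "";
        strrep "beacon.x64.dll" "";
    }
}
```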
In general, some of the common methods to evade memory scanners are the following:
- Sleep Obfuscation
- Code Obfuscation and Encryption
- Stack Thread Spoofing
- Heap Encryption
While some of these, such as Sleep Obfuscation and Stack Thread Spoofing, have been integrated into Cobalt Strike, you may find the implementations are not robust enough to avoid detection. For example, the Cobalt Strike Sleep Mask Kit obfuscates the beacon prior to sleeping using an XOR key, which makes it easier to spot.
The following is an example of Stack Thread Spoofing in Cobalt Strike. The thread stack no longer contains a suspicious start address like it used to:
Unfortunately, if you use the raw shellcode with your own loader, Stack Thread Spoofing will not be applied to your payload, as this implementation only works in binaries generated by the Cobalt Strike Artifact Kit. This makes things harder if you want to obfuscate your shellcode and use a specific type of process or code injection in a custom loader without the Artifact Kit.
Without Stack Thread Spoofing, we can spot the obvious Windows APIs commonly used by the Cobalt Strike sleep mask function, such as SleepEx:
If we follow the caller address 0xabbe3b, we see that it points to suspicious unmapped memory.
As explained in the following blog post by CodeX and Dominic Chell, some bytes related to the Cobalt Strike sleep mask function are not encrypted when the userwx option is set to false, and this indicator can be used to confirm the existence of a sleeping beacon in memory. The following YARA rule can be used to detect the Cobalt Strike sleep mask function:
If we analyse the unmapped memory region, we can find the same bytes used by the previous YARA rule within the RX region.
Setting the userwx option to true removes this indicator, but it creates a memory region with the RWX protection flag, and memory regions set with this flag are commonly inspected by EDRs.
The Cobalt Strike sleep mask function uses XOR encoding to obfuscate the beacon during sleep, and when a null byte is XORed with a key byte, the encoded value equals the key byte itself.
The following is an example of data in memory before the beacon goes to sleep. In this case, we are interested in the first line, as this block is filled with null bytes:
The following is the new data at the same location while the beacon sleeps. Observing the first line, we can spot the repeating bytes c9 3b 7f b8 05 6a, which indicates that this could be part of the key used in the XOR operation and that the key length is 6 bytes. Bear in mind that the reason we have a 6-byte key in our test case is that we changed the length of the default key in our Cobalt Strike Sleep Mask Kit:
Now, if we take the first null byte found in the beacon before it goes to sleep and compare it to the new value after sleep, we find the value 7f in the 13th position, which indicates that the key used to obfuscate the beacon is 7f b8 05 6a c9 3b:
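This key recovery is trivial to script. The sketch below masks a region that starts with null bytes (the beacon contents are invented; the key matches the one recovered in our test case) and reads the key straight back out of the masked nulls:

```python
from itertools import cycle

def xor_mask(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, as used by the default Sleep Mask."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = bytes.fromhex("7fb8056ac93b")                    # 6-byte key from our test case
beacon = b"\x00" * 16 + b"MZ\x90\x00" + b"\x00" * 12   # region starting with nulls

masked = xor_mask(beacon, key)

# A null plaintext byte XORed with a key byte equals the key byte, so the
# key repeats in the clear over any zeroed region:
recovered = masked[:len(key)]
assert recovered == key
print(recovered.hex())   # 7fb8056ac93b
```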
For memory evasion techniques, we recommend third-party User-Defined Reflective Loaders such as ElusiveMice, BokuLoader, TitanLdr, KaynStrike and AceLdr, or other implementations such as Ekko, ThreadStackSpoofer, ShellcodeFluctuation, FOLIAGE, gargoyle and YouMayPasser.
Kyle Avery combined some of these techniques into a custom User-Defined Reflective Loader, AceLdr, to bypass memory scanning. You may also want to check his blog post titled Avoiding Memory Scanners.
AceLdr uses the WaitForSingleObject API, which is less suspicious from the perspective of a threat hunter. There is also no return address on the thread stack pointing to an unmapped memory region:
During sleep, the beacon hides within an RW region, and its permissions change to RWX only when the beacon wakes:
If we observe the beacon during sleep, there is also no repeating pattern, as AceLdr uses SystemFunction032 instead of a simple XOR to obfuscate the beacon:
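SystemFunction032 is an undocumented Windows helper that is widely reported to implement RC4. Unlike a short repeating XOR key, an RC4 keystream leaks no visible pattern over zeroed memory, which a few lines of Python can demonstrate (the session key below is invented):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 key scheduling + PRGA; SystemFunction032 exposes the same cipher."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Over a zeroed region the masked bytes are the raw keystream, which,
# unlike a 6-byte repeating XOR key, shows no periodic pattern:
masked_nulls = rc4(b"session-key", b"\x00" * 32)
print(masked_nulls[:6].hex(), masked_nulls[6:12].hex())
```

Because RC4 is symmetric, a second call with the same key restores the original bytes, which is why a single SystemFunction032 call on wake is enough to unmask the beacon.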