Mike Bosland

Practicing the arcane [infosec] arts

How to Configure OpenRMF.io to use Domain Names and TLS

I heard about OpenRMF.io recently and decided to take a look. It’s deployed with docker containers, so I figured it wouldn’t be too hard to get up and running. It turned out to be a little trickier than I expected, so I thought I’d write down how I did it.

Eventually, I wanted a deployment with TLS certificates, using domain names instead of IP addresses. To get there, I stepped through a staged process. This let me make sure everything was working at each step.

  • Deploy Keycloak over HTTP using an IP address
  • Deploy OpenRMF.io over HTTP using IP addresses
  • Convert Keycloak and OpenRMF to use domain names
  • Add an nginx proxy to Keycloak to use TLS certificates
  • Update the openrmf-web container to use TLS certificates

This setup uses a Let’s Encrypt certificate rather than self-signed certificates. I have a wildcard certificate for my home lab. This process is a little more straightforward than using self-signed certificates, since the API microservices will trust the certificates automatically. If you’re using a self-signed certificate, you’ll need to add a volume to all the service containers so they can validate the certificate.

I’m not going to go into details on generating a wildcard certificate. I use pfSense to renew my wildcard certificate every 90 days. Then I have my lab systems reach out and pull the certificate as needed. But there’s some good documentation out there on this process.

Lab Overview

For this setup, I have four systems: a router, a DNS server, an OpenRMF/Keycloak server and a workstation.

OpenRMF.io Lab Setup
  • Router – This will allow internet access while we pull down the OpenRMF.io code. It will run vyos, but this won’t matter for this lab.
  • DNS Server – Obviously this will supply the DNS resolution. This system will run Ubuntu 20.04 LTS, server edition. DNS resolution will be with bind, but again, not important for this lab.
  • OpenRMF / Keycloak – This will host the Keycloak and OpenRMF applications. This will run Ubuntu 20.04.
  • Workstation – This is the system we’ll work from. This will also run Ubuntu 20.04.

Virtual Machine Specs

Here are the resources I assigned to each VM:

VM Name      vCPUs   RAM    Hard Drive Size
Router       1       1 GB   8 GB
DNS          1       2 GB   8 GB
OpenRMF      6       6 GB   50 GB
Workstation  4       4 GB   32 GB
VM Resources used in my OpenRMF Lab

I’m not sure 50 GB would be large enough for a production instance, but for me, this is just to explore the features and functionality of OpenRMF, so I’m OK with these sizes.

Prerequisites

For OpenRMF, the main dependencies are docker and docker-compose. Those will need to be installed and set up on the OpenRMF system.

I also added unzip to the OpenRMF system by running sudo apt-get install unzip.

You should also have valid TLS certificates that systems trust by default. This is easier than trying to set up docker containers to trust self-signed certificates.

Pull Down the OpenRMF.io Code

You can get the latest releases from GitHub. Grab the Keycloak and OpenRMF zip files. I created a main openrmf folder in my home directory.

cd ~
mkdir openrmf
cd openrmf
wget https://github.com/Cingulara/openrmf-docs/releases/download/v1.8.1/Keycloak-v15.0.0.zip
wget https://github.com/Cingulara/openrmf-docs/releases/download/v1.8.1/OpenRMF-v1.8.1.zip

Initial Keycloak Setup

First things first, I set up the Keycloak server on the OpenRMF machine. I created a new folder, keycloak, and copied the Keycloak zip file into it. Finally, unzip it.

mkdir keycloak
cd keycloak
cp ../Keycloak-v15.0.0.zip ./
unzip Keycloak-v15.0.0.zip

There’s an included start.sh file that we can use to start up the keycloak server.

./start.sh

This will pull down the keycloak docker images, create the containers and start them. Give this a few minutes, usually 2 or 3. Once you can access the web interface at http://{your-ip-address}:9001 the containers are up and running.
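If you’d rather script the wait than refresh the browser, a small polling loop works. This is my own sketch, not part of the OpenRMF scripts, and the URL at the bottom is an example; substitute your own IP address.

```shell
# Poll a URL until it answers, checking every 5 seconds for up to ~3 minutes.
wait_for() {
  for _ in $(seq 1 36); do
    if curl -fsS -o /dev/null "$1"; then
      echo "up"
      return 0
    fi
    sleep 5
  done
  echo "timed out"
  return 1
}

# Example (use your own Keycloak address):
# wait_for http://10.10.0.10:9001
```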

Alright! Success. Let’s configure the OpenRMF realm. There’s a script for that too. I’m running on linux, so I’m going to run setup-realm-linux.sh. This script needs privileged access, so we use sudo. It will ask you for the IP address of your Keycloak server. I used the localhost 127.0.0.1 IP. It will also ask you for an admin account name for the OpenRMF tool. You cannot use admin, so I went with sysadmin.

sudo ./setup-realm-linux.sh

Lastly, I allowed port 9001/tcp through the firewall with ufw.

sudo ufw allow 9001/tcp

Configure Keycloak

Login to the Keycloak interface by clicking Administrative Console. Login with the default credentials of user admin and password admin.

NOTE: Don’t worry about the default credentials right now. We’ll change them later once we have HTTPS set up so our new secure password isn’t sent over the wire in cleartext.

An OpenRMF user was created by the script above, but no password was set, so we’ll set one using the web interface. Navigate to Manage > Users on the left and click View all users.

Select the ID next to your admin user. Click the Credentials tab.

Enter a new password. I also turn off the Temporary switch so we don’t have to change the password when we first log in. Click Set Password and then confirm it in the popup. We’ll change this later too since this password will be sent over the wire in cleartext right now.

That should do it for Keycloak for now. Let’s move on to getting OpenRMF up initially.

Initial Setup for OpenRMF

Similar to keycloak, we’re going to create a folder for OpenRMF.

cd ~/openrmf
mkdir openrmf
cd openrmf
cp ../OpenRMF-v1.8.1.zip ./
unzip OpenRMF-v1.8.1.zip

The main configuration here is to update the .env file.

The JWTAUTHORITY has to match a setting in Keycloak and the URL used to access the web app. So for me, I’m going to edit this to be

JWTAUTHORITY=http://10.10.0.10:9001/auth/realms/openrmf
JWTCLIENT=openrmf

Next, let’s set up the redirect_uri in Keycloak. In the web interface, let’s go to Clients on the left. Select Edit from the openrmf row, highlighted in red in the screenshot below.

Update the Valid Redirect URIs to be the IP address of your OpenRMF interface, and make sure you have the wildcard * at the end.
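For example, with my server’s IP and the default OpenRMF port, that entry looks like this (yours will use your own address):

```text
http://10.10.0.10:8080/*
```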

Click Save at the bottom of this page.

Starting OpenRMF

Let’s open the port for OpenRMF through the firewall.

sudo ufw allow 8080/tcp

Now similarly, there is a start.sh script. I’m going to run that to start.

./start.sh

Once the docker images are downloaded and the containers started, you can access the web app at http://10.10.0.10:8080/

If everything is configured right, this should redirect you to sign in on the Keycloak URL. Sign in with the OpenRMF user we created: sysadmin in my case.

Here, I got an error. This error usually means the API microservices aren’t able to validate the JWT tokens because the URLs don’t match across the Valid Redirect URIs, the .env file and the URL used to access the web app.

I was able to solve this error by stopping the openrmf and keycloak containers and forcing a recreate. Maybe the .env didn’t populate to the API containers properly? Not sure what happened here.

cd /home/sysadmin/openrmf/keycloak
./stop.sh
docker-compose up -d --force-recreate
cd /home/sysadmin/openrmf/openrmf
./stop.sh
docker-compose up -d --force-recreate

Once the containers are up, go back to the OpenRMF web interface and we can see the APIs are working. We have counts for the System Packages, Checklists and Templates.

So success! We have the base system working.

Configuring OpenRMF to use Domain Names

This is great…but having to remember IP addresses isn’t exciting. So let’s configure some FQDNs. At the end of this section, we’ll have the OpenRMF system using domain names and nice URLs, but still over HTTP. That will be fixed in the next section.

Prerequisites – DNS Server

My DNS server in this lab is an Ubuntu system running bind9, but in reality it doesn’t really matter as long as you can configure (and resolve) your custom domain names.

There are 2 main FQDNs to set up. My lab domain is home.mikebosland.com, so these are what I used:

keycloak.home.mikebosland.com
openrmf.home.mikebosland.com

Both of these are pointing to 10.10.0.10 which is my Keycloak/OpenRMF server.
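If you’re also using bind, these are just two A records in the zone file. This is a sketch assuming a zone for home.mikebosland.com; adjust the names and IP for your own lab.

```text
; in the home.mikebosland.com zone file
keycloak    IN  A   10.10.0.10
openrmf     IN  A   10.10.0.10
```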

Keycloak Configuration

Navigate to your keycloak domain, which for me is http://keycloak.home.mikebosland.com:9001

Log in to the Administrative Console with the default admin:admin credentials. We need to adjust the Valid Redirect URIs to include the domain names. I decided to leave the IP URI in there just in case I want to use IP for some reason.

Navigate to Clients and select Edit in the openrmf row.

I added http://openrmf.home.mikebosland.com:8080/* to the list of URIs. Press Save at the bottom of the page.

Now, since it helped last time, I’m going to stop and rebuild the containers.

cd /home/sysadmin/openrmf/keycloak
./stop.sh
docker-compose up -d --force-recreate

OpenRMF Configuration

Now that we updated Keycloak, we need to update the .env file with the new URL. It’s probably a good idea to stop the OpenRMF containers first.

cd /home/sysadmin/openrmf/openrmf
./stop.sh

Update the .env file’s JWTAUTHORITY entry with the URL for the OpenRMF server, but use the Keycloak port, 9001. This was the biggest “gotcha” for me.
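Following that gotcha, and the pattern from the IP-based step earlier, the entry ends up as the domain name with the Keycloak port. Yours will use your own domain:

```text
JWTAUTHORITY=http://openrmf.home.mikebosland.com:9001/auth/realms/openrmf
JWTCLIENT=openrmf
```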

Now we can rebuild the containers and start them.

docker-compose up -d --force-recreate

Once those are running, navigate to http://your-openrmf-domain:8080/, so for me that’s http://openrmf.home.mikebosland.com:8080/ and sign in with your openrmf admin account, for me that’s sysadmin.

Success! We can now access OpenRMF with domain names!

Configuring OpenRMF for TLS

Alright. Almost done. For my setup, I’m using valid TLS certificates to make the process a bit easier.

Prerequisites – TLS Certificates

You’re going to need a valid TLS certificate and key. As mentioned before, I’m using pfSense to automatically renew mine, so we won’t go into details there.

Configuring Keycloak for TLS

I was having a hard time getting this running. Based on the OpenRMF Docs, I decided to also build an nginx proxy to sit in front of keycloak.

To do this we’ll need the following:

  • Valid TLS Certificate
  • Valid TLS Key
  • certs.conf file
  • ssl-params.conf file
  • nginx.conf file
  • Update docker-compose.yml

I created an nginx folder inside /home/<username>/openrmf/keycloak to hold all of these.

First up, put the TLS Certificate and Key in this nginx folder.

certs.conf

Next up, we’ll build the certs.conf file to specify the TLS certificates that nginx should use.

ssl_certificate /etc/nginx/certs/ssl/certs/home_mikebosland_com_wildcard.fullchain;
ssl_certificate_key /etc/nginx/certs/ssl/private/home_mikebosland_com_wildcard.key;

ssl-params.conf

This file specifies the SSL configuration for nginx.

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

nginx.conf

We need to set up the nginx.conf configuration file for the nginx proxy. Be sure to update the server_name in the file below. I used server_name keycloak.home.mikebosland.com;

worker_processes 4;
 
pid /tmp/nginx.pid; # Changed from /var/run/nginx.pid

events { worker_connections 4096; }
 
http {
 
    sendfile on;
    client_max_body_size 60M;
    include /etc/nginx/mime.types;
    keepalive_timeout  65;
    proxy_http_version 1.1;

 
    # configure nginx server to redirect to HTTPS
    # server {
    #     listen       9001;
    #     server_name  xxx.xxx.xxx.xxx;
    #     return 302 https://$server_name:9001;
    # }
    
    server {
        listen 8443 ssl http2;
        server_name  keycloak.your.domain;
        include snippets/certs.conf;
        include snippets/ssl-params.conf;

        # # don't send the nginx version number in error pages and Server header
        server_tokens off;

        proxy_set_header        Host $host:9001;
        proxy_set_header        X-Real-IP $remote_addr;
        # proxy_set_header        X-Forwarded-For $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;
        
        location / {
            proxy_pass http://keycloak:8080;
            add_header X-Frame-Options "ALLOWALL";
        }
    }
}

Update docker-compose.yml

That should be all of the files we need.

Now we can update the docker-compose.yml file for Keycloak. Under the keycloak service, remove the ports since we want the keycloak-proxy to handle them. I just commented them out.

keycloak:
    container_name: openrmf-keycloak
    image: jboss/keycloak:15.0.0
    restart: always
    #ports:
    #  - 9001:8443
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - DB_VENDOR=postgres
      - DB_ADDR=postgres
      - DB_PORT=5432
      - DB_DATABASE=keycloak
      - DB_USER=keycloak
      - DB_PASSWORD=password
      - PROXY_ADDRESS_FORWARDING=true
    depends_on:
      - postgres
    volumes:
      - ./themes/openrmf/:/opt/jboss/keycloak/themes/openrmf/:ro
      - ./standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml:ro
    networks:
      - keycloak-network

  keycloak-proxy:
    image: nginx
    container_name: openrmf-keycloak-proxy
    restart: always
    volumes:
      - ./nginx/home_mikebosland_com_wildcard.fullchain:/etc/nginx/certs/ssl/certs/home_mikebosland_com_wildcard.fullchain
      - ./nginx/home_mikebosland_com_wildcard.key:/etc/nginx/certs/ssl/private/home_mikebosland_com_wildcard.key
      - ./nginx/ssl-params.conf:/etc/nginx/snippets/ssl-params.conf:ro
      - ./nginx/certs.conf:/etc/nginx/snippets/certs.conf:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - 9001:8443
    networks:
      - keycloak-network
    depends_on:
      - keycloak

Time to start up keycloak.

cd /home/sysadmin/openrmf/keycloak
docker-compose up -d --force-recreate

Navigate to https://keycloak.home.mikebosland.com:9001. Success!

Log in with the default admin:admin credentials. We need to update the Valid Redirect URIs for HTTPS. Click Clients > Edit on the openrmf row.

Edit the URIs, changing http to https.

Click Save.

Configure OpenRMF for HTTPS

To get HTTPS working on OpenRMF we need most of the same files. These are the items we need to address:

  • Valid TLS Certificate
  • Valid TLS Key
  • certs.conf file
  • ssl-params.conf file
  • Update nginx.conf file
  • Update docker-compose.yml
  • Update .env

Once again, I created a folder nginx to hold these new files.

cd /home/sysadmin/openrmf/openrmf
mkdir nginx

TLS Certificate and Key

Copy the TLS Certificate and Key into this nginx folder.

Copy the certs.conf and ssl-params.conf files into this folder. Finally, add read permissions to these files. NOTE: There might be a more secure way to allow the nginx container to read these. Still need to look into that.

chmod -R a+r /home/sysadmin/openrmf/openrmf/nginx

Update docker-compose.yml

The changes we need to make are to the openrmf-web container in docker-compose.yml. Add the certificates, certs.conf and ssl-params.conf to the volumes for this container.

version : '3.8'
 
services:
### 1 Web Front End Container
  openrmf-web:
    image: cingulara/openrmf-web:1.08.01
    container_name: openrmf-web
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - openrmfapi-scoring
      - openrmfapi-template
      - openrmfapi-read
      - openrmfapi-controls
      - openrmfapi-audit
      - openrmfapi-report
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/home_mikebosland_com_wildcard.fullchain:/etc/nginx/certs/ssl/certs/home_mikebosland_com_wildcard.fullchain
      - ./nginx/home_mikebosland_com_wildcard.key:/etc/nginx/certs/ssl/private/home_mikebosland_com_wildcard.key
      - ./nginx/certs.conf:/etc/nginx/snippets/certs.conf:ro
      - ./nginx/ssl-params.conf:/etc/nginx/snippets/ssl-params.conf:ro
    networks:
      - openrmf

Update nginx.conf

We need to edit the /home/<username>/openrmf/openrmf/nginx.conf file to update the server_name and include the certs.conf and ssl-params.conf files. Make sure to add ssl to the listen line after the port number. If you miss this, you might get an SSL_ERROR_RX_RECORD_TOO_LONG.
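I won’t reproduce the whole file, but the relevant part of the server block ends up looking roughly like this. This is a sketch, assuming the same snippet paths as the Keycloak proxy; your stock nginx.conf layout may differ slightly.

```nginx
server {
    listen 8080 ssl;      # the "ssl" keyword is the part that's easy to miss
    server_name openrmf.home.mikebosland.com;   # use your own domain here
    include snippets/certs.conf;
    include snippets/ssl-params.conf;
    # ...rest of the stock server block unchanged
}
```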

Update .env

Lastly, let’s update the .env file to point to the new https URL.
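For me that meant switching the scheme on the JWTAUTHORITY entry to https (shown here with my domain as an example):

```text
JWTAUTHORITY=https://openrmf.home.mikebosland.com:9001/auth/realms/openrmf
```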

Start up OpenRMF

Ok, I wanted to rebuild the containers, so I used

cd /home/sysadmin/openrmf/openrmf
docker-compose up -d --force-recreate

Navigate to your new https URL. For me that’s https://openrmf.home.mikebosland.com:8080

Once you log in, you should see the dashboard.

Next Steps

Now that the system is up and running over HTTPS, you should change the default passwords, since they will now be transferred securely.

After that, you can start exploring the OpenRMF tool.

Finally, thanks for making it through this mega post. If you see anything I could be doing better, feel free to let me know.

Migrating from ESXi 7 to Proxmox 7

I’ve been using ESXi as my virtualization platform in my homelab for years, 2012 maybe? It’s been a solid, but limited, platform. With Broadcom’s recent purchase, I figured it was time to start exploring more open source solutions for virtualization. I feel it’s only a matter of time before the free tier of ESXi is further limited.

I tried XCP-ng a few years ago, but I wasn’t able to get a virtual machine’s network interface into promiscuous mode for whatever reason. So this time, I think I’m going to set up Proxmox. I have 2 enterprise servers, so I want to set up a proxmox “cluster” (a true cluster needs 3 servers, so this will just be 2 servers in a “datacenter” that is my office). I want to be able to experiment with live migrating VMs from 1 host to another.

Installing Proxmox

This was pretty straight-forward. Follow the documentation.

Once Proxmox was set up, I configured an NFS share on my TrueNAS server. This will be the shared storage between the 2 proxmox hosts.

Multiple VLANs over a Single Interface

My network is segregated into multiple VLANs: Home, IoT, 2 work VLANs and a guest VLAN. I also have a physically isolated DMZ network. To get all of this into Proxmox without using 6 Network interfaces, I found some steps on how to enable multiple VLANs over a single interface.

Edit your /etc/network/interfaces file

On my server, I have vmbr0 set to my main network interface. This is the interface I want the multiple VLANs to come in on.

For each VLAN, use the template below. Insert it above the vmbr0 definition in the file.

auto vmbr0.vlan-id
iface vmbr0.vlan-id inet static
    address xxx.xxx.xxx.xxx/24 # IP for the interface on the VLAN subnet
    gateway xxx.xxx.xxx.xxx # IP address for the gateway on the VLAN

Finally, in the vmbr0 definition, add the following lines to enable the VLANs. You can list specific VLANs here, but I enabled all of them for simplicity. If you only wanted to allow 2 VLANs (say 10 and 20), you would use bridge-vids 10 20

bridge-vlan-aware yes
bridge-vids 1-4092 # the VLAN IDs that will be allowed on this interface. 
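Putting the pieces together, a hypothetical excerpt from /etc/network/interfaces for a single VLAN (ID 10) might look like this. The interface name, addresses and VLAN ID here are examples for illustration, not my actual configuration.

```text
auto vmbr0.10
iface vmbr0.10 inet static
    address 10.10.10.2/24
    gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 1-4092
```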

Once this is done, I think rebooting the system would be best to ensure the configuration is applied properly.

Migrating VMs from ESXi to Proxmox

This is going to be the time consuming process:

  • Uninstall VMware Tools. Most of my linux systems are a debian flavor so it’s a simple apt purge open-vm-tools
  • Shutdown the VM
  • Export the vm using the VMWare ovftool
  • Copy the resulting VMDK files to Proxmox
  • Create a VM in Proxmox to match the original VM settings
  • Detach, and remove, the hard disk from the blank Proxmox VM
  • From the Proxmox CLI, run qm importdisk vm-id path-to-disk proxmox-storage -format qcow2
  • Repeat the prior step for all disks, if more than 1
  • In the Proxmox GUI, double click on the “unused disk”, select Add
  • Repeat the prior step for all disks, if more than 1
  • Enable the new drives in Options > Boot Order
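As a concrete sketch of the export/import steps above: the VM id, file names and storage name below are examples, and the ovftool command is shown commented since it runs on the workstation rather than the Proxmox host. The script just prints the import command so you can review it before running it.

```shell
#!/bin/sh
VMID=101                     # id of the blank Proxmox VM (example)
DISK="webserver-disk1.vmdk"  # disk exported by ovftool (example)
STORAGE="local-lvm"          # Proxmox storage target (example)

# On the workstation, export the VM from ESXi:
# ovftool vi://root@esxi-host/webserver ./webserver.ova

# On the Proxmox host, print the import command for each disk, then run it:
echo "qm importdisk $VMID $DISK $STORAGE -format qcow2"
```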

Start up the New VM

Once all of this is complete, you can power on the VM and make sure everything imported and starts properly.

I use netplan to control my interface configuration. Changing to Proxmox relabeled my interfaces, so I had to go into my netplan configuration /etc/netplan/01-ifcfg.yaml to update the interface name and then run sudo netplan apply to apply the new configuration. I have a blog post about netplan if you want more of a step-by-step here.
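For reference, the change is just the interface key in the netplan YAML. A hypothetical before/after, assuming DHCP and an interface renamed from ens160 under ESXi to ens18 under Proxmox (your names will differ):

```yaml
# /etc/netplan/01-ifcfg.yaml (illustrative)
network:
  version: 2
  ethernets:
    ens18:        # was ens160 under ESXi
      dhcp4: true
```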

Repeat for each VM. I have 15 or so I think I use on a regular basis, so I’ll be on this step for a while.

Thoughts so far

The only feature ESXi has that I haven’t seen a replacement for is using VMWare Player as a remote console. I really enjoyed that and used it all the time. I’ve recently started using Guacamole as a centralized remote access server, so that’s helping.

Technical Analysis of New CaddyWiper Malware discovered in Ukraine

Executive Summary

SHA256 Hash: a294620543334a721a2ae8eaaf9680a0786f4b9a216d75b55cfd28f39e9430ea

ESET researchers published a report on a new wiper malware first discovered in a limited number of organizations in Ukraine. It was first detected on 14 March 2022 at 9:38 AM UTC. This is the 3rd wiper malware discovered since 23 February 2022.

The sample, a 32-bit Windows Portable Executable (PE) binary, was obtained from MalwareBazaar. It is a destructive wiper program that runs on Windows platforms. Upon execution, the malware first checks whether it is running on a Primary Domain Controller; if so, the malware exits harmlessly. If the machine is NOT a Primary Domain Controller, the malware continues executing.

CaddyWiper has a three stage wiper process:

  • All files in the C:\Users folder are overwritten with zeros.
    • Up to the first 10 MB if the file is larger than this
    • The Access Control List is also destroyed.
  • All files in logical drives D:\ through Z:\ are then similarly overwritten
    • This includes network mapped drives
  • Finally, the partition tables on the first 10 drives attached to the computer are overwritten with zeros.

This malware is destructive. It overwrites data with zeros and destroys hard drive partition tables. The only way to recover is to rebuild the machine and load data from backups.

As Primary Domain Controllers are spared by this malware, it can be assumed that the Domain Controller is a mechanism for distributing the malware. The attackers most likely have gained access to the Domain Controller before delivering this payload.

A YARA signature rule is included below after the analysis.

Indicators of Compromise

The malicious binary will be present. The original name seems to be caddy1.exe, but the file could be named anything. Compare unknown files to the hashes below, or use the provided YARA rule to identify samples of this malware.

MD5: 42e52b8daf63e6e26c3aa91e7e971492

SHA256: a294620543334a721a2ae8eaaf9680a0786f4b9a216d75b55cfd28f39e9430ea

While the malware is running and actively overwriting files, you may experience a popup saying that the file, shortcut, executable, etc cannot be opened.

An error message that occurs when you try to open a file that has been overwritten

Additionally, Access Control List permissions are overwritten, so you may see access denied errors. If you check with icacls, there are no permissions on the affected folder shown in the following screenshot.

Check the ACL of items that have been overwritten

As Windows becomes unable to read the drive containing the operating system, it will notify you of a problem and crash.

The final notice before the machine is unable to boot

The final item of note is that there is no persistence mechanism, as the malware renders the machine inoperable. When attempting to boot an affected machine, you may see an error message. In this case, it simply says unsuccessful and attempts to boot from other options.

Unable to boot from local hard drive so it attempted, and failed, a network boot

High-Level Technical Summary

CaddyWiper first checks the role assigned to the machine it is currently running on using the DsRoleGetPrimaryDomainInformation function. If the machine is a Primary Domain Controller, execution stops here.

If the machine has any other role, CaddyWiper begins overwriting all files in the C:\Users folder recursively. For small files, the entire file is overwritten with zeros. For larger files, only the first 10 MB are overwritten. Once this is complete, it iterates through all drives from D:\ to Z:\ and overwrites all files in those drives (including network shares).

After the discovered files have been overwritten, the malware attempts to overwrite the partition tables of all drives connected to the machine. Again, this is done with all zeros. Eventually the machine will crash to a blue error screen and is further unable to boot.

CaddyWiper High Level Execution Flow

Malware Composition

The CaddyWiper malware consists of a single 32-bit PE binary.

It was compiled on 14 March 2022 at 7:19:36 UTC, approximately 2 hours before ESET first identified it. Compilation timestamps can be manipulated, so this should be viewed with some caution.

CaddyWiper PE File Headers

Technical Analysis

Basic Static Analysis

A basic static analysis was performed on the malware binary. The file was first fingerprinted using MD5 and SHA256 algorithms.

MD5: 42e52b8daf63e6e26c3aa91e7e971492

SHA256: a294620543334a721a2ae8eaaf9680a0786f4b9a216d75b55cfd28f39e9430ea
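Fingerprinting like this can be reproduced with standard tools. The helper below is my own illustration, not part of the sample, and the filename in the usage comment is hypothetical.

```shell
# Print "match" if FILE's SHA-256 equals EXPECTED, else "no match".
check_hash() {
  actual=$(sha256sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then echo "match"; else echo "no match"; fi
}

# Example usage against the CaddyWiper IOC (filename hypothetical):
# check_hash caddy1.exe a294620543334a721a2ae8eaaf9680a0786f4b9a216d75b55cfd28f39e9430ea
```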

VirusTotal was queried using the SHA256 hash. At the time of publishing this report, 48 of the 68 virus scanning engines have flagged this sample as being malicious.

VirusTotal results

VirusTotal had first seen this sample on 14 March 2022 at 12:40:26 UTC. It had been uploaded with several different file names, but caddy1.exe was among them, which is where ESET got the name CaddyWiper.

To Pack or Not to Pack

Malware is often compressed to assist in evading antivirus detection. Packed samples can often be identified by comparing the PE file’s Virtual Size to the Size of Raw Data.

The Virtual Size of 0x1B4A (6,986) is similar to the Raw Data size of 0x1C00 (7,168), meaning it is unlikely that this sample has been compressed or packed in some way.

Import Address Table

The Import Address Table details the functions the malware imports to accomplish whatever it was designed to do. These can often provide clues to what the malware is doing. However, in this case only 1 function is imported. This is highly unusual. Typical programs, both benign and malicious, have significantly more imports than this.

This sample is importing DsRoleGetPrimaryDomainInformation. This function returns the state of the directory service installation and information about the domain, if the machine has been added to one.

Later in the analysis, I observed that the malware implements a method to import the required functions at runtime. Otherwise, we are not able to glean much information here.

Strings

It is often possible to extract characters from the raw binary that can be decoded into ASCII or Unicode text, which may indicate functions, URLs, etc. This binary exposed a very limited number of strings; however, some were of note.

DsRoleGetPrimaryDomainInformation
C:\Users
\\.\PHYSICALDRIVE9
DeviceIoControl
WriteFile
LoadLibraryA
FindFirstFileA

These strings indicate some key functionality of the malware:

  • C:\Users and \\.\PHYSICALDRIVE9 are paths that are used in the wiper functions
  • WriteFile and DeviceIoControl are the functions used to destroy files and the partition tables of the attached hard drives
  • LoadLibraryA is used to import additional functions at runtime, making them a little more difficult to identify.
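Strings like these can be recovered with GNU binutils. Windows binaries often store wide (UTF-16LE) strings as well as ASCII, so it’s worth scanning both encodings. The helper below is my own sketch, and the filename in the usage comment is hypothetical.

```shell
# Dump both ASCII and UTF-16LE strings from a file.
dump_strings() {
  strings -a "$1"        # printable ASCII strings from the whole file
  strings -a -e l "$1"   # 16-bit little-endian (Windows wide) strings
}

# Example usage (filename hypothetical):
# dump_strings caddy1.exe
```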

Basic Dynamic Analysis

Network Indicators of Compromise

No unique network traffic was observed from this sample. Accessing remotely mounted file shares was observed, but this was identical to legitimate usage. The malware was not observed to reach out to any remote IPs or Domains.

Host Based Indicators of Compromise

During the execution of the malware, it was possible to observe files that have been corrupted before the system crashed. A JPEG picture was on the Desktop of the user that executed the malware. You can see the normal JPEG header in the file before it was corrupted. The file starts with the correct FF D8 byte sequence.

A normal JPEG in Hex

After the malware has run, the file has been overwritten entirely with zeros. The FF D8 sequence, as well as all the other data, has been replaced with 00 00.

A file after CaddyWiper has executed

Advanced Static Analysis – Reverse Engineering

The majority of the analysis was performed via advanced static analysis. The malware only has 10 functions, so it is not overly complex. Of these, only 3 are key to the malware’s function.

Importing Functions

As noted in the basic analysis, there was only 1 function imported in the Import Address Table. However, when looking at the code, there is an obfuscated method for importing functions at runtime.

The function name and the name of the dll that contains that function are encoded in hex bytes. Each letter is assigned to a single variable, sequentially.

The address of the first letter of the dll name and the address of the first letter of the function name are passed to this FUN_00401530 function. This function then combines the letters and uses LoadLibraryA to import the specified function and return a handle to that function as seen in the EAX register in the image below.

This is how the malware has such a limited Import Address Table.

The Entry Function – Main

The main function of the malware is large, but straightforward.

CaddyWiper’s entry function graph view

The long block at the top is where some dll names and function names are encoded as detailed above. The interesting calls happen at the bottom of this first block and in the subsequent blocks. There are 2 wiper functions, 1 for files and 1 for hard drive partition tables. They are called sequentially, as seen in the decompiled code below.

Step 1: Determining the Machine Role

The malware does not wipe and destroy Primary Domain Controllers, so the first thing it needs to do is identify whether it is running on one.

The main logic of CaddyWiper

At the top of the image, we see the call to DsRoleGetPrimaryDomainInformation. If the result of this function is a value of 5, we jump down and exit the program (the lower right-hand block highlighted in red). This value of 5 corresponds to the constant DsRole_RolePrimaryDomainController. Again, if the machine is a Primary Domain Controller, exit the program.

However, if we are running on any other machine, follow the green arrow to the malicious core of the program.

Step 2: Wiping Files in C:\Users

After checking that it is not running on a Primary Domain Controller, the malware enters this block of code. This is the section that wipes files from C:\Users.

File Wiping Sections

Once the C:\Users path is passed to the wiper function, the malware will do some obfuscated importing of functions, as detailed above. Eventually it will get to the main core of the file wiper function.

The main file wiper logic

Using FindFirstFileA, the malware gets a handle to the first discovered file. The file is opened and its size checked. If the file is larger than 0xA00000 (10,485,760) bytes, the fileSize variable is capped at 10 MB. This is most likely for speed, since destroying the first 10 MB of any file should sufficiently corrupt it.

After getting the file size, a memory segment is allocated and filled with zeros.

Filling the allocated memory segment with zeros

This zeroed memory segment is then written to the file starting at the beginning of the file. The file is closed and the process repeats for the next file.

The malware grants itself the SeTakeOwnershipPrivilege. Using this privilege and the SetNamedSecurityInfoA API call, the Discretionary Access Control List (DACL) for each folder and file is also overwritten with zeros so that no one has access to it.

Step 3: Wiping Other Logical Drives

After the malware zeros out all the files in the C:\Users folder, it moves on to any other logical drives that may be connected. These may be local and/or remote network shares that have been mapped to a drive letter.

The malware starts the process on the D:\ drive. If found, it loops through all the files using the same function detailed above, writing zeros to the full file or the first 10 MB.

After checking the D drive, it continues to check each additional letter (E, F, G, etc.), incrementing by one each time until it reaches Z:\. Interestingly, the malware also checks for [:\ – the character [ comes immediately after Z in ASCII. I believe this is simply an off-by-one mistake in the code.
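The drive-letter loop, including the stray [ check, can be reproduced with a short Python sketch (this models the loop bound, not the actual disassembly; the generator name is mine):

```python
def logical_drive_paths():
    """Yield D:\\ through Z:\\ plus the off-by-one '[:\\' that CaddyWiper checks."""
    c = ord("D")
    while c <= ord("Z") + 1:  # loop bound overshoots by one character
        yield chr(c) + ":\\"
        c += 1
```

Running the generator produces 24 paths: the 23 letters D through Z, plus the bogus [:\ path at the end.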

Step 4: Destroy the Partition Tables of Any Physical Disks

As a last destructive step, the malware destroys the partition tables of any physical drives connected to it. This renders the machine unusable as the system will not be able to boot from the now corrupt hard drive.

CaddyWiper’s partition destroying code

This code is interesting. Where the logical drive wiping code started at D: and incremented up, this starts at \\.\PHYSICALDRIVE9 and counts down. This is most likely to ensure that additional drives will be wiped before the system drive (most likely \\.\PHYSICALDRIVE0) is corrupted and the system crashes.
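That descending order can be sketched as well (illustrative Python; the helper name is mine):

```python
def physical_drive_paths(start: int = 9):
    r"""Device paths from \\.\PHYSICALDRIVE9 down to \\.\PHYSICALDRIVE0.

    Counting down means any data disks are destroyed before the system
    disk (usually index 0) is corrupted and the machine crashes.
    """
    return [rf"\\.\PHYSICALDRIVE{n}" for n in range(start, -1, -1)]
```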

The malware uses the DeviceIoControl function with the IOCTL_DISK_SET_DRIVE_LAYOUT_EX control code to overwrite the existing partition table with zeros. This destroys the partition table, and the system is no longer able to use that device.

Once this code completes, the system will crash and will not be able to reboot.

Yara Rule for Detection

The following Yara rule has been developed to identify samples of CaddyWiper in the wild.

rule CaddyWiper {
    
    meta: 
        last_updated = "2022-03-16"
        author = "Mike Bosland"
        description = "A yara rule to detect CaddyWiper"

    strings:
        // Fill out identifying strings and other criteria
        $PE_magic_byte = "MZ"
        $string1 = "DsRoleGetPrimaryDomainInformation" ascii

        // DeviceIOCtl
        $deviceiocontrol_hex = { 
            C6 45 ?? 44
            C6 45 ?? 65
            C6 45 ?? 76
            C6 45 ?? 69
            C6 45 ?? 63
            C6 45 ?? 65
            C6 45 ?? 49
            C6 45 ?? 6f
            C6 45 ?? 43
            C6 45 ?? 6f
            C6 45 ?? 6e
            C6 45 ?? 74
            C6 45 ?? 72
            C6 45 ?? 6f
            C6 45 ?? 6c 
        }

        // C:\Users
        $c_users_hex = { 
            C6 45 ?? 43 
            C6 45 ?? 3A 
            C6 45 ?? 5C 
            C6 45 ?? 55 
            C6 45 ?? 73 
            C6 45 ?? 65 
            C6 45 ?? 72 
            C6 45 ?? 73 
        }

        // \\.\PHYSICALDRIVE
        $physical_drive_hex = {
            C6 45 ?? 5C C6 45 ?? 00
            C6 45 ?? 5C C6 45 ?? 00
            C6 45 ?? 2E C6 45 ?? 00
            C6 45 ?? 5C C6 45 ?? 00
            C6 45 ?? 50 C6 45 ?? 00
            C6 45 ?? 48 C6 45 ?? 00
            C6 45 ?? 59 C6 45 ?? 00
            C6 45 ?? 53 C6 45 ?? 00
            C6 45 ?? 49 C6 45 ?? 00
            C6 45 ?? 43 C6 45 ?? 00
            C6 45 ?? 41 C6 45 ?? 00
            C6 45 ?? 4C C6 45 ?? 00
            C6 45 ?? 44 C6 45 ?? 00
            C6 45 ?? 52 C6 45 ?? 00
            C6 45 ?? 49 C6 45 ?? 00
            C6 45 ?? 56 C6 45 ?? 00
            C6 45 ?? 45 C6 45 ?? 00
            C6 45 ?? 39
        }

    condition:
        // Fill out the conditions that must be met to identify the binary
        $PE_magic_byte at 0 and 
        $string1 and
        ($deviceiocontrol_hex and $c_users_hex and $physical_drive_hex)
}

TLDR: VNC on Linux Sucks…use xrdp

I have a ton of virtual machines running on VMWare ESXi servers at home. To access them, I had been logging into the ESXi server and opening a remote console with VMWare Workstation Player. Works great…but it’s a hassle…then I found Apache Guacamole.

Apache Guacamole is a clientless remote desktop gateway. It supports VNC, RDP and SSH. It works in the browser thanks to HTML5, so there's no agent to install…just a web browser. It makes bouncing between machines really easy.

I installed Guacamole on a Kubernetes cluster I recently stood up. Simple install. I might turn it into its own server at some point, but this was an easy way to get started.

Now that I have a remote gateway, I need some sort of server to run on the clients – originally I intended to use RDP for Windows and VNC for Linux.

RDP on Windows is simple. Enable remote access, configure Guacamole with the IP, username, password, etc. Ready to go.

Now for Linux. All of the documentation/guides for VNC seemed horribly out of date…and a COMPLETE PAIN to get working on Linux. I spent a long time scouring the internet for any recent guides…then figured out why I couldn’t find any information on VNC…it sucks. No one uses it.

Once I found xrdp, the clouds parted and the angels sang. It’s so easy to use.

Installing xrdp

Most of the Linux distributions I use are based on Debian. On RHEL/CentOS, substitute yum for apt-get.

sudo apt-get install xrdp
sudo systemctl start xrdp
sudo systemctl enable xrdp
sudo ufw allow 3389/tcp

Colord Authorization and The Impending Crash Message

One gotcha, which I've only seen on Ubuntu so far, is that when you log in you are prompted to authenticate to authorize the color management of the session. This is frustrating…and sometimes the popup doesn't disappear, so a reboot is needed to get rid of it…and it might not go away the next time you log in either.

I started to look into how to fix this. I found a great post from c-nergy.

Authorizing colord with PolKit

The PolicyKit Authorization Framework, or polkit, is an authorization system that allows certain users to run certain actions on the system. By default, colord (used in xRDP) is restricted to the root user.

So the first part of the fix, is to allow all users to run colord.

Create the following file: /etc/polkit-1/localauthority.conf.d/02-allow-colord.conf

Add the following code:

polkit.addRule(function(action, subject) {
    if ((action.id == "org.freedesktop.color-manager.create-device" ||
         action.id == "org.freedesktop.color-manager.create-profile" ||
         action.id == "org.freedesktop.color-manager.delete-device" ||
         action.id == "org.freedesktop.color-manager.delete-profile" ||
         action.id == "org.freedesktop.color-manager.modify-device" ||
         action.id == "org.freedesktop.color-manager.modify-profile") &&
        subject.isInGroup("users")) {
        return polkit.Result.YES;
    }
});

You can be more secure and restrict the group allowed to run colord by changing users in the above snippet to the group you’d like.

System Crash

Unfortunately, this causes a system crash whenever you log in to the Ubuntu system. It seems to be related to polkit and the file we just created – if you remove that file the authorization popups return…but no system crash. So how do we fix this crash?

If you have a polkit version < 0.106 you’ll need to create a .pkla file. To check this, run pkaction --version

So in my case, I’ll be making a .pkla file. Otherwise make a .conf file.

Create a file /etc/polkit-1/localauthority/50-local.d/45-allow-colord.pkla

Copy/paste the following

[Allow Colord all Users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile
ResultAny=no
ResultInactive=no
ResultActive=yes

Finally, clear out any previous system crashes

sudo rm /var/crash/*

And reboot the system. You should now be able to log in without any authorization prompts or system crash messages.

Installing and Configuring Active Directory with Powershell

This post is part of my series on Building an Atomic Red Team Lab. This is the fifth part of the series, where I'm going to document the installation process for the Active Directory Domain Controller. This system will also be our DNS server.

Most enterprise networks are Windows Active Directory networks, so it’s good to be familiar with how to install, configure, break and fix them. We’re going to base the domain controller on Windows Server 2019 Standard Desktop Experience. So let’s get the Operating System installed.

Continue reading

Installing and Configuring Hunting ELK on ESXi

This post is part of my series on Building an Atomic Red Team Lab. This is the fourth part of this series where I’m going to document the installation process for Hunting ELK (HELK). This is the first log, alerting and analysis system I’m going to look at.

HELK is a hunting platform built around the ElasticSearch, Logstash and Kibana technology stack. Logs are sent from the host system, using WinLogBeat, to the HELK server.

The logs from WinLogBeat first enter the Kafka listener. Kafka is a distributed publish-subscribe messaging system. From there, the logs are parsed by Logstash and stored in an ElasticSearch database. These stored messages can be searched and displayed in dashboards using Kibana.

So now that we have a little understanding of how the system works, let’s dive in to the installation process.

Continue reading

Intrusion Detection System Installation and Configuration with Security Onion 2

This post is part of my series on Building an Atomic Red Team Lab. This is the second part of this series where I’m going to document the installation process for the Intrusion Detection System (IDS).

Operating System

Our Intrusion Detection System (IDS) will monitor the network traffic for suspicious or malicious traffic…which I’ll be generating. I’m going to use the new version of SecurityOnion for this.

SecurityOnion is a free and opensource IDS and network monitoring platform. It has a suite of tools installed by default: A full ELK stack, Zeek, Wazuh, Suricata, Snort, etc. You can use the latest Emerging Threats ruleset to grab the most recent threat signatures known in the wild.

SecurityOnion2 is based on CentOS 7, so we’ll use that to build the base VM.

Continue reading

vyOS Firewall and Router Installation and Configuration in ESXi

This post is part of my series on Building an Atomic Red Team Lab. This is the first installation post where I’m going to document how I build, install and configure the central hub of the network.

Operating System

I’m going to be using vyOS. vyOS is a free and opensource firewall and router operating system. It’s a powerful platform, but there’s no GUI. It’s all CLI. I haven’t used it much so I’m excited to learn a new technology.

Continue reading

Building A Virtual Atomic Red Team Research Lab

Detection is one of the most critical stages of the Information Security process. There will always be some new zero-day exploit, so the ability to detect when an attack is operating on your network is critical.

The Atomic Red Team is an attack emulation toolkit to help you measure, monitor and improve your detection capabilities.

I wanted to build out an active directory lab environment to explore this toolkit and the detection capabilities of various logging systems. It’s going to be a very bare bones network, but should be enough to experiment with the framework.

Continue reading

© 2022 Mike Bosland
