Replaying Windows Event Logs against Elastalert (and Sigma) rules using HELK
If you’ve collected logs from a large number of hosts during IR, you can run Sigma rules across them to surface some quick alerts that could highlight something you’ve missed!
Tools like Plaso and log2timeline.py are great when timelining a range of different artefacts; when dealing with event logs, however, parsing out the important fields is the crux of querying large data sets. HELK’s out-of-the-box parsing and pipelines can help.
Disclaimer: Sigma rules may require certain fields/settings enabled so always make sure you check what the audit settings are on the machine that you get the logs from (such as process creation/command line, auditing success/failure etc).
Intro to HELK and Sigma
HELK (or Hunting ELK) is created by Roberto Rodriguez (@Cyb3rWard0g) and here are some of his great articles on the tool:
- Welcome to HELK! : Enabling Advanced Analytics Capabilities
- Real-Time Sysmon Processing via KSQL and HELK — Part 1: Initial Integration
- What the HELK? SIGMA integration via Elastalert
As covered in the last article on that list, once the Sigma project released a way to convert Sigma rules into Elastic queries, Roberto added a feature that automatically creates Elastalert rules from the Sigma repo and runs them across logs ingested into HELK.
Requirements
So, now I’ve gotten my @Cyb3rWard0g fanboying out of the way, let’s start replaying some events into HELK. You will need a HELK instance running, a copy of Winlogbeat, and most importantly some Evtx files.
HELK: When installing HELK, make sure you choose either the 2nd or 4th build option so that Elastalert is set up.
Winlogbeat: Download Winlogbeat from Elastic
Adjusting the Elastalert Rules for Historical Events
In HELK you can access the Elastalert docker container with:
sudo docker exec -ti helk-elastalert bash
and from there you can browse the rules/ folder.
Elasticsearch will naturally use the original event time as the data for the @timestamp field. By default, Elastalert will only query as far back as the buffer_time set in the Elastalert config. To get around this, we can use the timestamp_field option in the rule definition itself to tell Elastalert to look at the timestamp of ingestion into HELK, rather than the true event time, by adding the line below:
timestamp_field: etl_processed_time
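In context, a rule keyed on the ingestion timestamp might look like the sketch below. The rule name, index pattern, and filter are hypothetical placeholders, not taken from the HELK rule set:

```yaml
# Sketch of an Elastalert rule querying on ingestion time rather than
# event time. Name, index, and filter are illustrative assumptions.
name: example_lsass_access_rule       # hypothetical rule name
type: any
index: logs-endpoint-winevent-*       # assumed HELK index pattern
timestamp_field: etl_processed_time   # match on time of ingestion into HELK
filter:
- query:
    query_string:
      query: 'event_id:10'
```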
How to Replay Windows Event Logs with Winlogbeat
Instead of sending our logs straight to Logstash, which many ELK users are familiar with, HELK puts Kafka in front of Logstash as its message broker. This means that in our Winlogbeat config we need to point the output at the HELK Kafka port.
Replaying a Single Evtx File
Using the config file below, we can run the following command to ship an evtx file to HELK via Kafka.
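A minimal sketch of such a config is below, assuming HELK’s default Kafka topic and port; the HELK host address and registry file name are placeholders you will need to adjust:

```yaml
# Sketch of winlogbeat-ship2helk.yml — host address and registry file
# name are assumptions for illustration.
winlogbeat.event_logs:
  # EVTX_FILE is supplied on the command line via -E EVTX_FILE="..."
  - name: ${EVTX_FILE}
    no_more_events: stop          # stop once the file has been fully read

winlogbeat.shutdown_timeout: 30s
winlogbeat.registry_file: evtx-registry.yml

# HELK consumes events from the Kafka "winlogbeat" topic on port 9092
output.kafka:
  hosts: ["<HELK-IP>:9092"]
  topic: "winlogbeat"
```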
I used sysmon_10_11_lsass_memdump.evtx from EVTX-ATTACK-SAMPLES as the sample to base this test on.
.\winlogbeat.exe -c .\winlogbeat-ship2helk.yml -e -E EVTX_FILE="C:\Dev\elk\EVTX-ATTACK-SAMPLES\Credential Access\sysmon_10_11_lsass_memdump.evtx"
Replaying a Directory of Evtx files
The more likely scenario is that you have acquired a large number of Evtx files that you want to ship.
We can use the Winlogbeat-Bulk-Read.ps1 script in EVTX-ATTACK-SAMPLES by @GrantWSales to ship evtx files in bulk.
.\Winlogbeat-Bulk-Read.ps1 -Exe C:\Dev\elk\winlogbeat\winlogbeat.exe -Config C:\Dev\elk\winlogbeat\winlogbeat-ship2helk.yml -Source "Path\to\evtxfiles\"
Results
After configuring the Elastalert rules to look at the correct timestamp and shipping the example evtx file detailed before, we get our alert! (I was also playing with clearing the event log to trigger rules too :P)
The elastalert-status index is where all rule fires are written, regardless of which index the original document sits in.
Although it takes a little while to get the Elastalert rules to do what you want, this approach could potentially save a lot of time when triaging large quantities of event logs.
As more people contribute to the HELK project, there will hopefully be more rules out of the box that can be used for this purpose.
Tips for Troubleshooting
Pipelines
To observe the events flowing into Kafka, you can have this command running before you ship the logs with Winlogbeat:
sudo docker exec -ti helk-kafka-broker /opt/helk/kafka/bin/kafka-console-consumer.sh --bootstrap-server helk-kafka-broker:9092 --topic winlogbeat --from-beginning
Restart Docker containers
sudo docker-compose -f helk-kibana-analysis-alert-basic.yml stop
sudo docker-compose -f helk-kibana-analysis-alert-basic.yml start
Elastalert Rules
Test your rules against the last 24 hours of events from inside the Elastalert docker container. Access it with:
sudo docker exec -ti helk-elastalert bash
Then run:
elastalert-test-rule rules/<rulename>.yml