From Scrap to SOC: File Integrity Monitoring & Elastic Stack Integration
In today’s blog we will continue learning more about Wazuh. I want to set up File Integrity Monitoring (FIM) on the Agent VM, simulate an attack, and learn how to protect sensitive files from unauthorized modifications. Then I want to set up the Wazuh Reporting Module and report my findings.
To configure File Integrity Monitoring we need to access the Wazuh Agent and edit the ossec.conf file located at:
C:\Program Files (x86)\ossec-agent\ossec.conf
Under <!-- File Integrity Monitoring --> I added these lines:
<!-- Default files to be monitored. -->
<directories check_all="yes">C:\Windows\System32</directories>
<directories check_all="yes">C:\Windows\System32\drivers\etc\hosts</directories>
I also edited the frequency at which syscheck is executed, so that it runs every five minutes instead of every 12 hours. I will probably change this back, but for the scope of this simulated attack five minutes works best.
<frequency>300</frequency>
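Putting those settings together, the relevant syscheck section of my ossec.conf ends up looking roughly like the sketch below (the surrounding defaults in your file may differ; note that syscheck uses the <directories> option for individual file paths as well):

```xml
<syscheck>
  <!-- Run syscheck every 300 seconds (5 minutes) instead of the default 12 hours -->
  <frequency>300</frequency>

  <!-- Default files to be monitored. -->
  <directories check_all="yes">C:\Windows\System32</directories>
  <directories check_all="yes">C:\Windows\System32\drivers\etc\hosts</directories>
</syscheck>
```

Remember to restart the Wazuh agent service after saving so the new settings take effect.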
With the syscheck directories added, I want to set up SSH on the Wazuh Agent machine so we can simulate an attack where the attacker has gained remote access to the machine. Plus, I just need to familiarize myself with SSH!
We will need to add the OpenSSH Server optional feature.
Press Win + I to open Settings, then navigate to Apps -> Optional features and click Add a feature. Find OpenSSH Server and install it.
Once OpenSSH Server is installed we can start the service and set it to start automatically in PowerShell (Administrator) with these commands:
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'
Get-Service -Name sshd
Now that we have configured OpenSSH to start automatically, let's allow SSH through Windows Firewall:
- Press Win + R
- Type firewall.cpl and press Enter
- Click Allow an app or feature through Windows Defender Firewall
- Find OpenSSH Server and check Private or Public networks as needed
Editing the Hosts File Attack
Now that SSH is set up, I will SSH into the Wazuh Agent and simulate an attack where I change the hosts file.
Add-Content -Path "C:\Windows\System32\drivers\etc\hosts" -Value "18.238.192.72 yahoo.com"
After appending my website’s IP address as an entry for yahoo.com, we can try browsing to yahoo.com and check Wazuh for FIM events.
Note: Not sure what happened here; it didn’t actually navigate to my website, but for demonstration purposes it doesn’t really matter. We mainly just want to see that Wazuh shows an event for this action.
As we can see after editing the hosts file we now have an event log!
Let’s try something a bit more fun and edit the registry!
Modifying the Windows Registry Attack
We are going to simulate an attack where an attacker modifies the Windows registry to persist a malicious executable.
To do this we will utilize PowerShell to modify the key that is used for automatically starting programs when a user logs in:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run
Let’s add an entry that points to a benign executable like notepad.exe; in a real attack this would be a malicious executable.
New-ItemProperty -Path "HKLM:\Software\Microsoft\Windows\CurrentVersion\Run" -Name "MaliciousApp" -Value "C:\Windows\System32\notepad.exe" -PropertyType "String" -Force
Check that the new entry was added:
Get-ItemProperty -Path "HKLM:\Software\Microsoft\Windows\CurrentVersion\Run"
Now that the attack has been simulated, we can see in Wazuh that the registry value was added for MaliciousApp.
Wazuh Reporting Module
Moving on from simulating attacks and finding them in Wazuh: I previously promised to set up the Wazuh Reporting Module in this blog post. Upon doing some research on this topic, however, I was unable to find a dedicated module for it. Instead, I want to learn how to integrate the Elastic Stack with Wazuh.
Integrating the Elastic Stack was a huge learning curve for me, and eventually I realized it would help if I tried to visualize the flow. It might not be 100% accurate, but it helped me understand how everything fits together a little better. Here is the flow I came up with:
Note: Don’t be me. Make sure that Logstash, Elasticsearch, Kibana, and Elastic Agent are all running the same version to avoid compatibility issues.
Elastic Stack: Prerequisites
To make my Elastic Stack a little more secure, I want to set up a DNS server, SSL, and Apache on my server machine so that I don’t have to worry about unencrypted communication between my agents and the server. Also, I just want to keep things simple instead of typing in the server’s IP address for everything I need to access from the web browser.
To set up the DNS server, run these commands to install dnsmasq:
sudo apt-get install dnsmasq
sudo nano /etc/dnsmasq.conf
Edit the configuration to set up your custom domain. I will be using a .local address to make sure there is no conflict with any public domains on the internet:
domain=yourdomain.local
address=/<myhomeserver>.local/<Server Machine IP Address>
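For reference, a minimal dnsmasq.conf for this setup might look like the sketch below. The domain, address, and interface name are illustrative assumptions; substitute your own values:

```conf
# /etc/dnsmasq.conf (sketch; hypothetical values)
# Local domain suffix for the lab network
domain=home.local
# Answer queries for the server's name with its LAN address
address=/myhomeserver.local/10.0.0.95
# Only listen on the LAN-facing interface (interface name is an assumption)
interface=eth0
```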
Then make sure to restart and enable dnsmasq:
sudo systemctl daemon-reload
sudo systemctl enable dnsmasq
sudo systemctl restart dnsmasq
To test the DNS resolution:
dig <Domain Name>.local @127.0.0.1
Now that the DNS server is set up, you have two options: (1) configure your router to distribute the DNS server info via DHCP, or (2) set the DNS server on each client manually. I will have to go with option 2, as I am not able to set up option 1, given that the router I have access to isn’t actually my router.
To set up DNS on each client, here are basic instructions for each OS:
For Windows:
Open Network and Sharing Center.
Click on "Change adapter settings."
Right-click on the active network connection and select "Properties."
Select "Internet Protocol Version 4 (TCP/IPv4)" and click "Properties."
Select "Use the following DNS server addresses" and enter the IP address of your dnsmasq server.
Click "OK" to save the changes.
For Linux:
Edit /etc/resolv.conf to include the IP address of your dnsmasq server.
sudo nano /etc/resolv.conf
Add the following line:
nameserver <IP_of_dnsmasq_server>
For macOS:
Open "System Preferences" and go to "Network."
Select your network connection and click "Advanced."
Go to the "DNS" tab and add the IP address of your dnsmasq server.
Click "OK" and then "Apply."
And as you can see, I can access my server machine by using the domain name:
Now to set up SSL. To do this we will start by installing openssl:
sudo apt install openssl
Then we will need to create a directory for the SSL certificate and key if one is not already present:
sudo mkdir -p /etc/ssl/private
sudo chmod 700 /etc/ssl/private
Now generate the SSL certificate and key, and fill out the prompts:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/selfsigned.key -out /etc/ssl/certs/selfsigned.crt
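Before wiring the certificate into Apache, it is worth sanity-checking what was generated. Here is a quick sketch using a throwaway pair in /tmp (the -subj flag skips the interactive prompts; the CN value is just an example):

```shell
# Generate a throwaway self-signed certificate non-interactively (example paths/CN)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/selfsigned.key -out /tmp/selfsigned.crt \
  -subj "/CN=myhomeserver.local"

# Inspect the subject and validity window of the certificate we just created
openssl x509 -in /tmp/selfsigned.crt -noout -subject -dates
```

The same `openssl x509 -noout -subject -dates` check works on the real certificate in /etc/ssl/certs/selfsigned.crt.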
Now we can install Apache and configure it to use the SSL certificate:
sudo apt install apache2
sudo systemctl start apache2
sudo systemctl enable apache2
Now we need to set the ServerName in apache2.conf:
sudo nano /etc/apache2/apache2.conf
Add the ServerName:
ServerName <Domain Name>.local
Then restart Apache:
sudo systemctl restart apache2
And you can now visit the site in the web browser at http://<Domain Name>.local:
Now let’s configure Apache to use the generated SSL certificate:
sudo a2enmod ssl
sudo a2ensite default-ssl
Before restarting Apache, however, we will need to modify the Apache SSL port number, as Wazuh is already using port 443:
sudo nano /etc/apache2/ports.conf
I will be using port 8443 in place of 443:
Listen 8443
Now configure SSL in the default site:
sudo nano /etc/apache2/sites-available/default-ssl.conf
And update these lines in the existing <VirtualHost> block, changing the port to 8443 and pointing the certificate directives at the self-signed pair:
<VirtualHost *:8443>
SSLCertificateFile /etc/ssl/certs/selfsigned.crt
SSLCertificateKeyFile /etc/ssl/private/selfsigned.key
Now restart Apache:
sudo systemctl restart apache2
sudo systemctl reload apache2
To make sure this worked, visit https://<Domain Name>.local:8443:
Now that we have done a basic setup of dnsmasq, SSL, and Apache, we can continue with installing the Elastic Stack!
Elastic Stack: Installing Elasticsearch
For starters we need to install Elasticsearch on the Wazuh server machine:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.9.2-amd64.deb
sudo dpkg -i elasticsearch-8.9.2-amd64.deb
Jot down the password for the elastic superuser, then also run these commands and jot down their output. They are the keystore passwords for the SSL files included with Elasticsearch:
/usr/share/elasticsearch/bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore show xpack.security.transport.ssl.keystore.secure_password
After jotting down this information, you can optionally limit the Elasticsearch heap memory if you have limited RAM like I do:
sudo nano /etc/elasticsearch/jvm.options.d/jvm.options
And add these lines:
-Xms1g
-Xmx1g
Then we will need to configure Elasticsearch:
sudo nano /etc/elasticsearch/elasticsearch.yml
Uncomment or modify these lines:
cluster.name: elasticsearch-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: <Server Machine IP Address>
http.port: 9600
discovery.seed_hosts: ["127.0.0.1"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["node-1"]
http.host: <Server Machine IP Address>
Note: The default value for http.port is 9200, but the Wazuh Dashboard and Wazuh Indexer already claimed this port, so I changed it to 9600.
Then start and enable Elasticsearch:
sudo systemctl daemon-reload
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
sudo systemctl status elasticsearch
Use this command to make sure the cluster is up and running properly:
curl -X GET -k https://elastic:<Elastic Superuser Password>@<Domain Name>.local:9600 -v
And as you can see we get the JSON at the bottom, verifying that Elasticsearch is installed and running!
With Elasticsearch running, we will need to generate an enrollment token for Kibana and jot it down for later:
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
Elastic Stack: Installing Kibana
With Elasticsearch installed we can proceed with installing Kibana.
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.9.2-amd64.deb
sudo dpkg -i kibana-8.9.2-amd64.deb
We can now set up Kibana; you will need that Kibana enrollment token from earlier:
sudo /usr/share/kibana/bin/kibana-setup
You will also need to generate and jot down secure encryption keys. Kibana actually gives us a built-in way to do this:
cd /usr/share/kibana/bin/
./kibana-encryption-keys generate
Alternatively, you can just run this command. Either way, make sure to write the keys down for later:
openssl rand -base64 32
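Either way, note that Kibana has three distinct encryptionKey settings. You can reuse one value for all three, but generating a separate key per setting is tidier. A small sketch using the same openssl command:

```shell
# Generate one 32-byte base64 key per xpack encryption setting (sketch)
for setting in security encryptedSavedObjects reporting; do
  echo "$setting: $(openssl rand -base64 32)"
done
```

Each line of output is a candidate value for the corresponding xpack.*.encryptionKey entry in kibana.yml.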
We will also copy the SSL certificate and key files for Kibana:
sudo mkdir /etc/kibana/certs
sudo cp /etc/ssl/certs/selfsigned.crt /etc/kibana/certs/
sudo cp /etc/ssl/private/selfsigned.key /etc/kibana/certs/
sudo chown kibana:kibana /etc/kibana/certs/*
sudo chmod 644 /etc/kibana/certs/selfsigned.crt
sudo chmod 640 /etc/kibana/certs/selfsigned.key
Then configure kibana.yml:
sudo nano /etc/kibana/kibana.yml
Uncomment or modify these lines:
server.port: 5601
server.host: "<Server IP Address>"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/selfsigned.crt
server.ssl.key: /etc/kibana/certs/selfsigned.key
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
pid.file: /run/kibana/kibana.pid
# You should see at the bottom a section that was automatically generated for elasticsearch during setup, just leave this as is.
xpack.security.encryptionKey: "<Encryption Key>"
xpack.encryptedSavedObjects.encryptionKey: "<Encryption Key>"
xpack.reporting.encryptionKey: "<Encryption Key>"
Now to start and enable Kibana:
sudo systemctl daemon-reload
sudo systemctl start kibana
sudo systemctl enable kibana
sudo systemctl status kibana
To access Kibana we will need to configure Apache a little bit:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_wstunnel
sudo a2enmod rewrite
sudo a2enmod headers
sudo systemctl restart apache2
We are about to create the Apache configuration file for Kibana, but earlier we enabled the default-ssl site configuration for Apache, so we will need to disable that first:
sudo a2dissite default-ssl
Then create the Apache configuration file for Kibana:
sudo nano /etc/apache2/sites-available/kibana.conf
And add the following configuration:
<VirtualHost *:8443>
    ServerName <Domain Name>.local
    SSLEngine on
    SSLProxyEngine on
    SSLCertificateFile /etc/kibana/certs/selfsigned.crt
    SSLCertificateKeyFile /etc/kibana/certs/selfsigned.key
    ProxyRequests off
    ProxyPreserveHost on
    ProxyPass / https://<Server IP Address>:5601/
    ProxyPassReverse / https://<Server IP Address>:5601/
    <Location />
        Require all granted
        RewriteEngine On
        RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
        RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
        RewriteRule .* wss://<Server IP Address>:5601%{REQUEST_URI} [P]
    </Location>
    # Headers
    Header always set X-Frame-Options "DENY"
    Header always set X-Content-Type-Options "nosniff"
    Header always set X-XSS-Protection "1; mode=block"
</VirtualHost>
Then run these commands to enable the new site configuration for Kibana:
sudo a2ensite kibana.conf
sudo systemctl restart apache2
sudo systemctl reload apache2
Elastic Stack: Setting Up Fleet
Now that Kibana is set up, we will need to set up a Fleet server for centralized management of Elastic Agents. I will be setting up the Fleet server on another machine, as you don’t want the Fleet server installed alongside Elasticsearch nodes in case of a compromise. The main benefit of setting up Fleet is being able to add integrations across the fleet without having to manually set up each integration for each endpoint.
On the Fleet server machine we will need to do a few things. We need to collect its IP address and make sure it is a static address on the network. My Fleet server machine is a Linux machine, so I will navigate to the network settings, set the IPv4 address to static, and also add the dnsmasq info, since my router will not be pointing to it.
Now I want to set up a custom domain for my Fleet server. So on the Wazuh server machine that is running dnsmasq, we will edit the dnsmasq.conf file:
sudo nano /etc/dnsmasq.conf
And add this line:
address=/<Domain Name for Fleet>.local/<Fleet Server IP Address>
The last thing we need to do before continuing is copy the SSL certificate and key from the Wazuh server to the Fleet server machine. On the Wazuh server run this command:
sudo scp /etc/kibana/certs/* <Username>@<Fleet Server IP Address>:/home/<username>/Documents/
Now move the files to a more suitable directory:
sudo mkdir -p /usr/local/etc/ssl/certs
sudo mv /home/<username>/Documents/selfsigned.crt /usr/local/etc/ssl/certs
sudo mv /home/<username>/Documents/selfsigned.key /usr/local/etc/ssl/certs
Note: You may need to install OpenSSH Server.
Now that our Fleet server machine is ready to be configured, navigate to Kibana at https://<Domain Name>.local:8443 and log in with your elastic superuser credentials.
Then navigate to Add Integrations and search for Fleet Server.
Once selected, click Add Fleet Server and configure it however you need (I will be leaving the default options). Click Save and Continue. Then we will see a popup saying the Fleet Server integration was added. From here we will click Add Elastic Agent to your hosts.
Go through the Enroll in Fleet setup, and be sure to put the domain name that was chosen for the Fleet server machine in the URL field. It will then give you some generated commands to run for installing the Elastic Agent software. Be sure to add these two flags if you want to use SSL with your own certificate:
--fleet-server-es-ca=/usr/local/etc/ssl/certs/selfsigned.crt \
--insecure
We can now see the Fleet server agent:
With the Fleet server set up, we can now enroll our Wazuh agent machine in the fleet. Navigate to the side menu and click on Fleet, then select Add agent and follow the steps. You will also want to add this flag to the last command when installing Elastic Agent:
--insecure
Note: Remember to add the DNS info for the agent machine if you weren’t able to set up your router to point to it (see the Elastic Stack: Prerequisites section above). Also, for some reason the Add agent setup gave me the last command using the Fleet server address at port 443; I had to change this to port 8220 to actually install Elastic Agent on the machine. You may also need to navigate to Fleet -> Settings -> Fleet server hosts and check that the Host URL has the correct port number (8220), because I found mine set to 443 for some reason.
Elastic Stack: Installing More Integrations
Now that we have set up our fleet in Elastic/Kibana, we can configure a few things, starting with adding an integration for Elastic Defend (this can be used for alerts). When setting up Elastic Defend, name it something like Endpoint Security, and in the "Where to add this integration?" section, select Agent Policy 2 under Existing hosts, since this is the Wazuh Agent machine.
Once Endpoint Security is set up, navigate to the side menu Security -> Manage -> Rules. You will find a list of all installed rules; if there are none present, you will need to click Add Elastic rules:
Now we will see that only two rules are enabled, so we will select all rules and click Bulk actions -> Enable. Some rules will fail to enable, perhaps due to requiring machine learning or missing integrations, but for the most part this will be fine.
Moving on, let’s install another integration: Windows. This will enable collecting logs from Sysmon; remember to select Agent policy 2 (for the Wazuh Agent machine). You can modify which event logs you want to collect, but I will leave the defaults, and I see that Sysmon is enabled by default.
Now we can navigate to the side menu Analytics -> Discover, and we will see that a ton of logs are coming through:
We can click on the logs-* dropdown in the top left and select Create a data view. Find logs-windows.sysmon_operational-default on the right side, and copy and paste it into the Index pattern field. Then name it Sysmon and save the data view to Kibana. Now we can view any Sysmon logs from the Wazuh Agent machine. You can search a Sysmon event ID and see the logs that were recorded:
You can also add search criteria like process.pe.original_file_name, process.working_directory, host.name, etc.:
From here you can play around learning about Kibana and all its awesome features. For now, though, I will be moving on to installing and configuring Logstash!
Elastic Stack: Installing Logstash
To start, we will need to install Logstash on the Wazuh server machine.
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.9.2-amd64.deb
sudo dpkg -i logstash-8.9.2-amd64.deb
We now need to run this command to install the logstash-input-opensearch plugin. This plugin reads data from the Wazuh Indexer into Logstash:
sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch
Now we need to copy the SSL certificate files from the Wazuh Indexer and Elasticsearch to Logstash:
sudo mkdir -p /etc/logstash/certs
sudo cp /etc/wazuh-indexer/certs/root-ca.pem /etc/logstash/certs/
sudo cp /etc/elasticsearch/certs/http_ca.crt /etc/logstash/certs/
sudo chmod 644 /etc/logstash/certs/root-ca.pem
sudo chmod 644 /etc/logstash/certs/http_ca.crt
We also need to get the wazuh.json template file:
sudo mkdir -p /etc/logstash/templates
sudo curl -o /etc/logstash/templates/wazuh.json https://packages.wazuh.com/integrations/elastic/4.x-8.x/dashboards/wz-es-4.x-8.x-template.json
sudo chown logstash:logstash /etc/logstash/templates/wazuh.json
sudo chmod 644 /etc/logstash/templates/wazuh.json
Now we need to create a Logstash keystore to hold the Wazuh Indexer and Elasticsearch credentials:
sudo mkdir -p /etc/sysconfig
echo 'LOGSTASH_KEYSTORE_PASS="<Create a Keystore Password>"' | sudo tee /etc/sysconfig/logstash
export LOGSTASH_KEYSTORE_PASS=<Created Keystore Password>
sudo chown root /etc/sysconfig/logstash
sudo chmod 600 /etc/sysconfig/logstash
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
Have your Elasticsearch superuser and Wazuh Indexer admin credentials handy for these next commands. Note that you will be prompted to enter the credentials, so there is no need to modify these commands:
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ELASTICSEARCH_USERNAME
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ELASTICSEARCH_PASSWORD
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_USERNAME
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_PASSWORD
Now we can create the Logstash configuration file:
sudo nano /etc/logstash/conf.d/wazuh-elasticsearch.conf
Here is the configuration I went with:
input {
  opensearch {
    hosts => ["<Wazuh IP Address>:9200"]
    user => "${WAZUH_INDEXER_USERNAME}"
    password => "${WAZUH_INDEXER_PASSWORD}"
    index => "wazuh-alerts-4.x-*"
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/logstash/certs/root-ca.pem"]
    query => '{
      "query": {
        "range": {
          "@timestamp": {
            "gt": "now-1m"
          }
        }
      }
    }'
    schedule => "* * * * *"
  }
}
output {
  elasticsearch {
    hosts => ["https://<Server IP Address>:9600"]
    index => "wazuh-alerts-4.x-%{+YYYY.MM.dd}"
    user => "${ELASTICSEARCH_USERNAME}"
    password => "${ELASTICSEARCH_PASSWORD}"
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/logstash/certs/http_ca.crt"]
    template => "/etc/logstash/templates/wazuh.json"
    template_name => "wazuh"
    template_overwrite => true
  }
}
Now we need to integrate Logstash with the Wazuh server:
sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-elasticsearch
And edit the wazuh-elasticsearch.conf file again:
sudo nano /etc/logstash/conf.d/wazuh-elasticsearch.conf
Below the first input block, add another input block like so:
input {
  file {
    id => "wazuh_alerts"
    codec => "json"
    start_position => "beginning"
    stat_interval => "1 second"
    path => "/var/ossec/logs/alerts/alerts.json"
    mode => "tail"
    ecs_compatibility => "disabled"
  }
}
Now we need to add the logstash user to the wazuh group so that it can read the alerts.json file:
sudo usermod -a -G wazuh logstash
Now that Logstash is configured, let’s start it up and enable it:
sudo systemctl daemon-reload
sudo systemctl start logstash
sudo systemctl enable logstash
sudo systemctl status logstash
Now that Logstash is set up, we can navigate to Kibana at https://<Domain Name>.local:8443 and log in with our elastic superuser credentials. Then navigate to the side menu Management -> Stack Management -> Kibana -> Data views and create a data view, where we will input wazuh-alerts-* for the Index pattern.
To verify that wazuh-alerts-* logs are being sent to Elasticsearch, go to the Wazuh server machine and run a query against the index to see if the Wazuh Agent is sending logs:
curl -k -u 'elastic:<Elastic Superuser Password>' -X GET 'https://<Server IP Address>:9600/wazuh-alerts-*/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
},
"size": 10,
"sort": [
{
"@timestamp": {
"order": "desc"
}
}
]
}'
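The response is JSON, so you can also pull out just the hit count rather than eyeballing the whole document. A rough sketch of the idea on a canned response (a real run would pipe the curl output instead of the hard-coded string, and the values here are hypothetical):

```shell
# A trimmed-down stand-in for the _search response (hypothetical values)
response='{"took":3,"hits":{"total":{"value":42,"relation":"eq"},"hits":[]}}'

# Extract hits.total.value without extra tools; jq would be cleaner if installed
echo "$response" | grep -o '"value":[0-9]*' | head -n1 | cut -d: -f2
```

If the count is greater than zero, alerts are flowing from Logstash into Elasticsearch.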
The output should show quite a bit of information after a little bit of time.
Configuring Uncomplicated Firewall
Now that we have set up the Elastic Stack, we need to set up Uncomplicated Firewall (UFW) to keep our tools secure. If you don’t already have it installed:
sudo apt-get install ufw
sudo ufw allow 2222
sudo ufw enable
I allowed port 2222 before enabling the firewall, as I manage my server via SSH on port 2222 (allow your SSH port first so you don’t lock yourself out). From here you can configure which ports to allow or deny:
sudo ufw allow 8443
sudo ufw deny 5601
Since I set up Apache as a proxy for Kibana, I allowed port 8443 and denied port 5601. This can also be done with Wazuh, but I won’t be covering that in this blog post.
To view all port rules run this command:
sudo ufw status
Wrapping Up
Installing the Elastic Stack and configuring it all proved a challenge for me, not to mention linking everything together to communicate properly. Ultimately I was able to get everything functional for my home lab. It may not be pretty, or the "correct" way, but for learning purposes it will do just fine! This isn’t for the faint of heart, so if you want to follow this blog as a guide for your own home lab, you may have to do some additional steps and troubleshooting not listed in this post. But if you do undertake this project, I wish you luck. It is a TON of fun, and you will most definitely build troubleshooting skills doing projects like this!
Resources that can help you on this project:
Wazuh: https://documentation.wazuh.com/current/installation-guide/index.html
Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/8.9/install-elasticsearch.html
Logstash: https://www.elastic.co/guide/en/logstash/8.9/installing-logstash.html
Kibana: https://www.elastic.co/guide/en/kibana/8.9/install.html
IppSec video that helped me figure out Fleet: https://www.youtube.com/watch?v=Ts-ofIVRMo4&ab_channel=IppSec
Elastic Stack integration with Wazuh documentation: https://documentation.wazuh.com/current/integrations-guide/elastic-stack/index.html
I use ChatGPT a lot to aid the self-learning process, and it can give a ton of misinformation, but it’s a great tool for people who know how to ask the right questions and don’t give up. I had never heard of Wazuh or the Elastic Stack until this year, so be diligent in your research; the answers are out there, no matter how hard you have to search for them. If you made it this far in the blog post, thanks for reading!