Security Onion for Splunk 1.1.3 – IDS Rule Reference

I’ve been working a lot lately on tuning Security Onion alerts, specifically Snort alerts via en/disablesid.conf, threshold.conf and Sguil’s autocat.conf. (If you use Security Onion and don’t know what those are, go here now.)  JJ Cummings’ PulledPork is an incredibly effective tool for updating Snort or Emerging Threat IDS rules and provides some very straightforward methods of controlling what rules are enabled or disabled in Snort. It does that job incredibly well, which is why it’s the tool of choice in Security Onion.

Where I ran into issues was in keeping track of it all. I had six terminal windows open: enablesid.conf, disablesid.conf, threshold.conf, autocat.conf, and windows to grep downloaded.rules and look up the rule reference files. I also had Sguil and Snorby up, as I like to keep Sguil focused on incidents and clean of false positives, and let Snorby and Splunk provide the deeper visibility into less certain events, where I can get full Bro IDS context. Keeping track of what was enabled in Snort but not in Sguil, and what was enabled versus disabled, was maddening, and my desktop was a mess.

There had to be an easier way…so I maddened myself the Splunk way during off hours to reduce the maddening during work hours. Hopefully the end result will help you maintain your sanity during the tuning process.

Just to be clear, what I’m about to describe in no way replaces what PulledPork does. It provides Splunk visibility to rule files created by PulledPork.

Version 1.1.3 introduces IDS Rules to the SOstat menu. By indexing PulledPork *.rules files and the Snort Rule Documentation – opensource.gz (Oinkcode required), this dashboard allows you to search Snort rules by Classtype, Category, Rule Source and/or Rule Status (enabled/disabled). You can quickly check, for example, what rules in the trojan-activity classtype are disabled. Drill down on a rule and you can view the rule, the Snort reference file (if the rule has one) and a timechart with a time picker to view the selected rule’s activity over time.

Needless to say, it’s made sorting through rules and rule data much more manageable.

Before I get to the eye candy, here’s some mind candy. You have to enable the Data Inputs in the Splunk GUI, and I’m leaving it up to you how you want to index the data. If volume is a big concern for you, enable the Data Inputs and manually copy the files (and extract the Rule Documentation) to the defined “monitor” folders. If you’ve got audit and tracking concerns, I provide a couple of very simple scripts and script inputs to give you an idea of how you can script the process to provide a running history of a rule and its status. If you’re hardcore tuning, you might even want to set up /etc/nsm/rules as the Data Input for real-time monitoring. It’s really up to you. Just be mindful that it can chew up some indexing volume if you get carried away.
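For a rough idea of what I mean by scripting a running history, here’s a minimal sketch. This is illustrative only, not one of the scripts included with the app; adjust the paths to match whatever folder your monitor stanza points at.

#!/bin/sh
# Illustrative sketch only -- not the script shipped with the app.
# Snapshot the current rule files into the monitored folder with a date stamp
# so Splunk indexes a copy each run and you build a running history per rule.
SRC=/etc/nsm/rules
DST=/opt/splunk/etc/apps/securityonion/local/rules
STAMP=$(date +%Y%m%d)
for f in "$SRC"/*.rules; do
    cp "$f" "$DST/$(basename "$f" .rules)-$STAMP.rules"
done

Run it from cron (or as a Splunk scripted input) and each day’s copy of the rules gets indexed alongside the previous ones.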

The dashboard view will need a little tuning to handle real-time or daily monitoring, and I’m working on filtering by sensor for deployments with different policies applied across multiple sensors. But in the words of @DougBurks, “release early and release often.”

Now the eye candy.

When you first load the dashboard you’ll see something like this:

[Screenshot: IDS Rules]

The drop-downs let you refine your searches:

[Screenshots: IDS Rules - Classtypes, IDS Rules - Categories]

You can also specify the rule file you’re targeting, as well as whether you want to view all, enabled or disabled rules. Once you’ve made a drop-down selection, the rules window will populate:

[Screenshot: IDS Rules - Rules Panel]

Drilling down on a rule will display the rule, the reference file (if available for that sid) and a timechart with a time picker so you can quickly check on rule activity.

[Screenshot: IDS Rules - Drilldown]

Zooming in on the Rule Reference panel:

[Screenshot: IDS Rules - Rule Reference Panel]

The event workflow for Snort Sguil events now looks like this:

[Screenshot: IDS Rules - Sguil Workflow]

Selecting *VRT – 498 will open a new Splunk search window with the rule reference file result. (I’m going to make this cleaner in future releases.)

[Screenshot: IDS Rules - Workflow Result]

Quick Setup (for ad-hoc indexing)

Install/Upgrade the app. Enable the Data Inputs for ids_rules and ids_rules_doc sourcetypes in Splunk Manager.

If you prefer the CLI, copy /opt/splunk/etc/apps/securityonion/default/inputs.conf to /opt/splunk/etc/apps/securityonion/local/inputs.conf and edit the /local copy. Changes in /local will survive app updates, whereas changes in /default will be overwritten.
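That copy looks like this:

sudo cp /opt/splunk/etc/apps/securityonion/default/inputs.conf /opt/splunk/etc/apps/securityonion/local/inputs.conf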

Scroll to the bottom, look for the monitor stanza for sourcetype ids_rules and change disabled = 1 to disabled = 0.

[monitor:///opt/splunk/etc/apps/securityonion/local/*.rules]
sourcetype = ids_rules
followTail = 0
disabled = 0

If you want to pull in the Suricata rules as well, you might need to add the following after the “disabled = 0” line:

crcSalt = <SOURCE>
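With that added, the complete stanza would end up looking like this:

[monitor:///opt/splunk/etc/apps/securityonion/local/*.rules]
sourcetype = ids_rules
followTail = 0
disabled = 0
crcSalt = <SOURCE>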

That will force Splunk to index everything in the path. Scroll down a little further and enable the monitor for ids_rules_doc sourcetype:

[monitor:///opt/splunk/etc/apps/securityonion/local/doc/signatures/*.txt]
sourcetype = ids_rules_doc
followTail = 0
disabled = 0

Save and exit the file, then copy the rules files to the Splunk folder.

cp /etc/nsm/rules/*.rules /opt/splunk/etc/apps/securityonion/local/rules/

Download the Snort Rule Reference files via browser or curl them:

curl -L http://www.snort.org/reg-rules/opensource.gz/<oinkcode> -o opensource.gz

Then extract the files to the monitored path:

tar zxvf opensource.gz -C /opt/splunk/etc/apps/securityonion/local/
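A quick sanity check that the files landed where the ids_rules_doc monitor expects them:

ls /opt/splunk/etc/apps/securityonion/local/doc/signatures/ | head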

Restart Splunk and give it a few minutes to percolate; it will take a little while to index all the rule files, so be patient. Edit /opt/splunk/etc/apps/securityonion/local/inputs.conf and disable the monitors, then head on over to SOstat > IDS Rules and give it a spin. If you see a blue bar error at the top that reads “No matching fields exist”, the files haven’t been indexed yet. You can do a search for “sourcetype=ids_rules” and/or “sourcetype=ids_rules_doc” to check on the indexing process, or open the Search app from the Splunk menu and check the source types panel in the bottom left: find the ids_rules* sourcetypes, and when the count stops going up for more than a few minutes, indexing is done. Restart Splunk to reload the inputs.conf file and stop any further indexing.
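If you’d rather watch the counts climb from the search bar, a couple of quick searches along these lines will do it:

sourcetype=ids_rules | stats count by source
sourcetype=ids_rules_doc | stats count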

I’m seriously debating releasing this as a PulledPork for Splunk app as well, so I would love feedback from Security Onion users or Snort PulledPork users as to whether there is interest or need outside my own desire for order. But then again, I’ll have some time on the flights to and from Vegas next week; it might be fitting for a PulledPork Splunk app to come into its own on an airplane, eh JJ?

Other notables in this release:

  • You may need to re-enter your CIF API key in the workflow config if you’re using CIF.
  • The SOstat Security Onion and *nix dashboards now allow you to view details by all SO servers/sensors or by the individual hosts in a distributed deployment.
  • VRT lookup added to workflow for Sguil events with a sig_id. Not all sig_ids will have a Snort rule reference file (especially Emerging Threat rules), so mileage will vary.

I’m hopeful that making the Snort rule reference files accessible will help move toward the ultimate goal of this app. All along I’ve had two end users in mind: the large-scale deployment and the home user. Both can install Security Onion with little knowledge thanks to Doug’s efforts, but neither is assured of taking it to the next step without help or a lot of effort if the expertise isn’t there. Providing easy access to context, whether it’s Snort rule reference files or CIF queries, can make a huge difference. To that end, more updates will be coming: a Sguil mining dashboard that will provide correlated context around events (think IR-style search results as drilldowns while you review Sguil events) and more Mining views for network-based indicators.

I’ll be at BlackHat and DefCon next week so if any Security Onion or SO for Splunk users want to meet up, hit me up via email or the twitter (@bradshoop).

Happy splunking!

Securing Splunk Free Version When Installed On Security Onion Server (or anywhere else)

This stroke of genius comes directly from the man behind Security Onion, @dougburks, and solves two problems, one serious and the other functional. Splunk’s free version allows you to index up to 500 MB/day, but it limits some (even basic) capabilities, the most important of which is that authentication is disabled. If you’re running the free version of Splunk on your Security Onion server and access the server remotely (from another workstation), I highly suggest you make this your standard access process. The instructions below work on Ubuntu distributions, and if you followed Doug’s advice about using a Security Onion VM as your client, this should work perfectly as long as you haven’t configured the VM as a server.

The method can be used from a Windows or Linux client. The instructions below focus on Linux, but googling “windows ssh tunnel how to” should get you a good start. In the example below, port 81 is the Splunk port. If you installed Splunk on a different port, just replace 81 with that port.

The approach uses an SSH tunnel and is really easy to set up. On your Security Onion/Splunk server, you’ll want to make sure SSH is enabled in Uncomplicated Firewall (ufw).

sudo ufw status

You should see 22/tcp ALLOW in the results. If it says DENY, then enable it:

sudo ufw allow 22/tcp

Next configure ufw to block (yes I said block) Snorby, Squert and Splunk ports:

sudo ufw deny 81/tcp
sudo ufw deny 443/tcp
sudo ufw deny 3000/tcp

From a remote Linux host with openssh-client installed:

sudo ssh username@securityonion.example.com -L 81:localhost:81 -L 443:localhost:443 -L 3000:localhost:3000

Replace username with the Security Onion/Splunk server user and securityonion.example.com with the hostname or IP address of your Security Onion/Splunk server. This command essentially tells your client to pass anything destined to localhost ports 81, 443 or 3000 to your SO server on its localhost ports 81, 443 or 3000 via the SSH tunnel. The command requires sudo because it binds to privileged local ports, so you’ll be prompted for your local password and then again for the remote SO server user’s password. After authentication, you’ll have an active SSH terminal session to the server.

Launch a web browser and browse to any of the following:

http://localhost:81 – Splunk
https://localhost:443 – SO Home/Squert
https://localhost:3000 – Snorby
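If you’d rather not run ssh under sudo on the client, a variation that binds to unprivileged local ports works just as well (8081 and 8443 here are arbitrary picks; adjust the URLs above to match, e.g. http://localhost:8081 for Splunk):

ssh username@securityonion.example.com -L 8081:localhost:81 -L 8443:localhost:443 -L 3000:localhost:3000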

It’s that simple.

If you recall, I mentioned a “functional” advantage of using this approach. In the Security Onion for Splunk app, I provide links to Snorby and Squert, but unfortunately the user must configure the URLs to fit their environment if they access the tools remotely. The default config uses “localhost” as the server, so if you use the above method to access Splunk securely, the Snorby and Squert links work out of the box. =)

Thanks and a hat tip to Doug for this little gem! I used to bite my lip whenever I recommended someone install the free version of Splunk because of the disabled authentication, but now I don’t have to.

Announcing: Security Onion for Splunk Server/Sensor Add-on

I wanted to do a blog post on deploying the Security Onion for Splunk app in a distributed environment, where Splunk, the Security Onion server and the Security Onion sensors are all on separate hosts. I found it was easier to just build an add-on and let the README do the blogging. The add-on shouldn’t change or require updating nearly as much as the full app, only needing updates when new logging is added at the server or sensor, as Bro is wont to do at times (thank you very much). All the field extractions and transformations happen on the Splunk server. You can download the add-on here (once approved).

README

Overview:
The Security Onion Sensor Add-on eases the configuration of a multi-sensor Security Onion deployment. Install the Splunk Universal Forwarder and untar this app to /opt/splunkforwarder/etc/apps. Edit /opt/splunkforwarder/etc/apps/securityonion_addon/local/inputs.conf to disable specific logs depending on whether you’re indexing from a server or a sensor that is remote to the Splunk indexer.

Installation:

Install the Splunk Universal Forwarder:

sudo dpkg -i <filename>

Start Splunk and accept the license:

splunk start
splunk enable boot-start

Configure the universal forwarder to forward to a specific indexer:

splunk add forward-server <host>:<port> -auth <username>:<password>

The default receiver port is 9997, and the username is a Splunk administrative user. Optionally, to enable secure Splunk communications, the following command can be used to specify a certificate, root CA and password:

splunk add forward-server <host>:<port> -ssl-cert-path /path/ssl.crt -ssl-root-ca-path /path/ca.crt -ssl-password <password>
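One thing to double-check: the indexer has to be listening on that port. If you haven’t already enabled receiving through Manager, the equivalent CLI command on the indexer is:

splunk enable listen 9997 -auth <username>:<password>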

Download the Security Onion for Splunk Add-on and extract the file to the Splunk forwarder apps folder:

sudo tar -C /opt/splunkforwarder/etc/apps -xvf securityonion_addon.tar.gz

Edit the local inputs.conf file depending on your deployment scenario:

cd /opt/splunkforwarder/etc/apps/securityonion_addon/local/

Default config files for the following deployments are included:

inputs_server.conf
inputs_sensor.conf
inputs_server_and_sensor.conf

Just copy the appropriate file to replace the default inputs.conf (default deployment is server/sensor). For example, if you are installing on a sensor:

sudo cp inputs_sensor.conf inputs.conf

When you’re done, restart the Splunk forwarder:

sudo /opt/splunkforwarder/bin/splunk restart

As long as your indexer is receiving events you should be good to go.
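If you want a quick check from the forwarder side, this should list your indexer as an active forward:

sudo /opt/splunkforwarder/bin/splunk list forward-server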

Security Onion for Splunk 1.1.2 – Workflows and Event Views

Having spent the last few weeks working with the Security Onion for Splunk app and CIF (Collective Intelligence Framework), in addition to the recently released DShield for Splunk app, I found it wasn’t as easy as I’d like to pivot from dashboard panels to workflow queries. The latest release addresses this issue while expanding on the workflow capabilities already available.

Where you’ll notice the change the most is in the drilldowns from panels. I left the Overview dashboard alone, but on most of the others you’ll notice that when you drill down on an event in a panel, the results now show an event listing with key fields selected instead of the cleaner table views. I’m trading aesthetics for functionality. The event listing provides quick access to workflows, and that is where this update shines.

For example, the Monitors > Sguil Events panel screenshot below shows a drilldown on a Zeus alert. From the workflow dropdown menu on the left, you can now query the DShield for Splunk app and Robtex, in addition to CIF, for source and destination IPs.

It gets better with Bro IDS when we look at HTTP Mining.

In addition to source and destination IPs, Bro results containing domain or md5 fields (typically the Bro HTTP and SMTP entities logs) will now allow you to query those values directly against CIF. CIF and Robtex searches open conveniently in a new tab; DShield queries will spawn a new window for now. There is some difference in how Splunk workflow actions handle links versus searches (the former open a new tab, the latter a new window) that needs further exploring.
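For the curious, the link-versus-search behavior comes down to the action type in workflow_actions.conf. A stripped-down sketch (the stanza names, fields and URI here are made up for illustration, not the app’s actual config) looks something like this:

[robtex_dest_ip]
type = link
label = Robtex lookup for $dest_ip$
link.uri = https://www.robtex.com/?dns=$dest_ip$
link.target = blank
fields = dest_ip

[dshield_dest_ip]
type = search
label = Search DShield data for $dest_ip$
search.search_string = sourcetype=dshield ip=$dest_ip$
fields = dest_ip

A type = link action opens its URI in a new tab, while a type = search action launches a Splunk search in its own window.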

NOTE: You will likely have to reconfigure the CIF workflow server and API keys. For instructions, see my previous post on Querying CIF Data From Splunk.

If you haven’t checked out CIF but want to get a sense of what it’s all about, head over to josehelps.com, where a public instance of CIF has been generously stood up. You can fill out the form to request an API key and give it a spin.

I’m currently exploring other ways to integrate correlation against the DShield for Splunk data and am working on adding a DShield mining dashboard that will likely be a prototype for future CIF mining dashboards, so more goodness to come.

Happy Splunking!