Security Onion for Splunk 1.1.3 – IDS Rule Reference

I’ve been working a lot lately on tuning Security Onion alerts, specifically Snort alerts via en/disablesid.conf, threshold.conf and Sguil’s autocat.conf. (If you use Security Onion and don’t know what those are, go here now.)  JJ Cummings’ PulledPork is an incredibly effective tool for updating Snort or Emerging Threat IDS rules and provides some very straightforward methods of controlling what rules are enabled or disabled in Snort. It does that job incredibly well, which is why it’s the tool of choice in Security Onion.

Where I ran into issues was in keeping track of it all. I had six terminal windows open: enablesid.conf, disablesid.conf, threshold.conf, autocat.conf, and windows to grep downloaded.rules and look up the rule reference files. I also had Sguil and Snorby up, as I like to keep Sguil focused on incidents and clean of false positives, letting Snorby and Splunk provide the deeper visibility into less certain events, where I can get full Bro IDS context. Keeping track of what I had enabled in Snort but not in Sguil, and what was enabled versus disabled, was maddening, and my desktop was a mess.

There had to be an easier way…so I maddened myself the Splunk way during off hours to reduce the maddening during work hours. Hopefully the end result will help you maintain your sanity during the tuning process.

Just to be clear, what I’m about to describe in no way replaces what PulledPork does. It provides Splunk visibility to rule files created by PulledPork.

Version 1.1.3 introduces IDS Rules to the SOstat menu. By indexing PulledPork *.rules files and the Snort Rule Documentation – opensource.gz (Oinkcode required), this dashboard allows you to search Snort rules by Classtype, Category, Rule Source and/or Rule Status (enabled/disabled). You can quickly check, for example, what rules in the trojan-activity classtype are disabled. Drill down on a rule and you can view the rule, the Snort reference file (if the rule has one) and a timechart with a time picker to view the selected rule’s activity over time.

Needless to say, it’s made sorting through rules and rule data much more manageable.
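The enabled/disabled status the dashboard reports comes straight from the rules files themselves: PulledPork disables a rule by leaving it in place as a commented-out line. If you want a quick sanity check outside Splunk, a grep tally along these lines works (the file path and sample rules are illustrative):

```shell
# Tally enabled vs. disabled rules the way the dashboard does.
# Assumption: PulledPork writes disabled rules as lines commented out
# with "#", and active rules begin with an action keyword like "alert".
RULES_FILE="${RULES_FILE:-/tmp/downloaded.rules}"

# Self-contained sample; in practice point RULES_FILE at something
# like /etc/nsm/rules/downloaded.rules.
cat > "$RULES_FILE" <<'EOF'
alert tcp any any -> any any (msg:"enabled rule"; sid:1000001;)
# alert tcp any any -> any any (msg:"disabled rule"; sid:1000002;)
alert udp any any -> any any (msg:"second enabled rule"; sid:1000003;)
EOF

enabled=$(grep -c '^alert' "$RULES_FILE")
disabled=$(grep -c '^# *alert' "$RULES_FILE")
echo "enabled=$enabled disabled=$disabled"
```

Splunk just makes the same answer searchable and drillable instead of buried in six terminal windows.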

Before I get to the eye candy, here’s some mind candy. You have to enable the Data Inputs in the Splunk GUI, and I’m leaving it up to you in terms of how you want to index the data. If volume is a big concern for you, enable the Data Inputs and manually copy the files (extract the Rule Documentation) to the defined “monitor” folders. If you’ve got audit and tracking concerns, I provide a couple of very simplistic scripts and script inputs to give you an idea of how you can script the process to provide a running history of a rule and its status. If you’re hardcore tuning, you might even want to set up /etc/nsm/rules as the Data Input for real-time monitoring. It’s really up to you. Just be mindful that it can chew up some indexing volume if you get carried away.
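The history-keeping idea is nothing fancy: make a dated copy of each rules file so Splunk indexes a point-in-time record of every rule’s status. A minimal sketch of that approach (these are not the scripts shipped with the app, and the paths are illustrative):

```shell
# Snapshot rules files with a date stamp so each day's copy is indexed
# as a point-in-time record of every rule's enabled/disabled status.
# Paths are illustrative; in practice something like
# SRC=/etc/nsm/rules and DEST under the app's local monitor folder.

snapshot_rules() {
    src="$1"; dest="$2"
    stamp=$(date +%Y%m%d)
    mkdir -p "$dest"
    for f in "$src"/*.rules; do
        [ -e "$f" ] || continue      # no rules files present; skip
        cp "$f" "$dest/$(basename "$f" .rules).$stamp.rules"
    done
}

# Example invocation (cron this daily for a running history):
snapshot_rules /tmp/nsm-rules /tmp/rules-archive
```

Run daily from cron, each snapshot lands in the monitored folder and becomes part of the searchable history.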

It’s going to need a little tuning of the dashboard view to handle real-time or daily monitoring and I’m working on filtering by sensor if you have various policies applied across a multi-sensor deployment. But in the words of @DougBurks, “release early and release often.”

Now the eye candy.

When you first load the dashboard you’ll see something like this:

[Screenshot: IDS Rules]

The drop downs let you refine your searches:

[Screenshots: IDS Rules - Classtypes, IDS Rules - Categories]

You can also specify the rule file you’re targeting as well as whether you want to view all, enabled or disabled rules. Once you’ve made a drop down selection the rules window will populate:

[Screenshot: IDS Rules - Rules Panel]

Drilling down on a rule will display the rule, the reference file (if available for that sid) and a timechart with a time picker so you can quickly check on rule activity.

[Screenshot: IDS Rules - Drilldown]

Zooming in on the Rule Reference panel:

[Screenshot: IDS Rules - Rule Reference Panel]

The event workflow for Snort Sguil events now looks like this:

[Screenshot: IDS Rules - Sguil Workflow]

Selecting *VRT – 498 will open a new Splunk search window to the rule reference file result. (I’m going to make this cleaner in future releases.)

[Screenshot: IDS Rules - Workflow Result]

Quick Setup (for ad-hoc indexing)

Install/Upgrade the app. Enable the Data Inputs for ids_rules and ids_rules_doc sourcetypes in Splunk Manager.

If you use the CLI, copy /opt/splunk/etc/apps/securityonion/default/inputs.conf to /opt/splunk/etc/apps/securityonion/local/inputs.conf and edit the /local copy. Changes made in /local will not be overwritten by app updates, whereas changes in /default will.

Scroll to the bottom, look for the monitor stanza for sourcetype ids_rules, and change disabled = 1 to disabled = 0.

[monitor:///opt/splunk/etc/apps/securityonion/local/*.rules]
sourcetype = ids_rules
followTail = 0
disabled = 0

If you want to pull in the Suricata rules as well, you might need to add the following after the “disabled = 0” line:

crcSalt = <SOURCE>

That will force Splunk to index everything in the path. Scroll down a little further and enable the monitor for ids_rules_doc sourcetype:

[monitor:///opt/splunk/etc/apps/securityonion/local/doc/signatures/*.txt]
sourcetype = ids_rules_doc
followTail = 0
disabled = 0

Save the file and exit, then copy the rules files to the Splunk folder.

cp /etc/nsm/rules/*.rules /opt/splunk/etc/apps/securityonion/local/rules/

Download the Snort Rule Reference files via browser or curl them:

curl -L http://www.snort.org/reg-rules/opensource.gz -o opensource.gz

Then extract the files to the monitored path:

tar zxvf opensource.gz -C /opt/splunk/etc/apps/securityonion/local/

Restart Splunk and give it a few minutes to percolate; it will take a little while to index all the rule files, so be patient. Then head on over to SOstat > IDS Rules and give it a spin. If you see a blue bar error at the top that reads “No matching fields exist,” the files haven’t been indexed yet. You can do a search for “sourcetype=ids_rules” and/or “sourcetype=ids_rules_doc” to check on the indexing process, or open the Search app from the Splunk menu and check the source types panel in the bottom left: find the ids_rules* sourcetypes, and when the count stops going up for more than a few minutes, it’s done. Finally, edit /opt/splunk/etc/apps/securityonion/local/inputs.conf, disable the monitors, and restart Splunk to reload inputs.conf and prevent subsequent re-indexing.
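If you want a rough idea of how many ids_rules events to expect before calling the indexing done, count the lines in the copied rules files. This sketch assumes Splunk breaks these files into one event per line; the directory is illustrative:

```shell
# Rough expected event count for sourcetype=ids_rules: total lines
# across the copied rules files. Assumption: one indexed event per
# line. Directory is illustrative; use the app's local rules folder.
RULES_DIR="${RULES_DIR:-/tmp/so-rules}"
mkdir -p "$RULES_DIR"
total=$(cat "$RULES_DIR"/*.rules 2>/dev/null | wc -l)
echo "expecting roughly $total ids_rules events"
```

Compare that number against the sourcetype count in the Search app to judge whether indexing has finished.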

I’m seriously debating releasing this as a PulledPork for Splunk app as well, so I would love feedback from either Security Onion users or Snort PulledPork users as to whether there is interest or need outside my own desire for order. But then again I’ll have some time on the flights to and from Vegas next week; it might be fitting for a PulledPork Splunk app to come into its own on an airplane, eh JJ?

Other notables in this release:

  • You may need to re-enter your CIF API key in the workflow config if you’re using CIF.
  • The SOstat Security Onion and *nix dashboards now allow you to view details by all SO servers/sensors or by the individual hosts in a distributed deployment.
  • VRT lookup added to workflow for Sguil events with a sig_id. Not all sig_ids will have a Snort rule reference file (especially Emerging Threat rules), so mileage will vary.

I’m hopeful making the Snort rule reference files accessible will help move towards the ultimate goal of this app. All along I’ve had two end users in mind: a large scale deployment and the home user. Both can install Security Onion with little knowledge thanks to Doug’s efforts, but neither is assured to be able to take it to the next step without help or a lot of effort if the expertise isn’t there. Providing easy access to context, whether it’s Snort rule reference files or CIF queries, can make a huge difference. To that end, more updates will be coming with a Sguil mining dashboard that will provide correlated context around events (think IR search result type data as drill down results as you review Sguil events) and more Mining views for network based indicators.

I’ll be at BlackHat and DefCon next week so if any Security Onion or SO for Splunk users want to meet up, hit me up via email or the twitter (@bradshoop).

Happy splunking!

Securing Splunk Free Version When Installed On Security Onion Server (or anywhere else)

This stroke of genius comes directly from the man behind Security Onion, @dougburks, and solves two problems, one serious, the other functional. Splunk’s free version allows you to index up to 500 MB/day, but it does limit some (even basic) capabilities, the most important of which is that authentication is disabled. If you’re running the free version of Splunk on your Security Onion server and access the server remotely (from another workstation), I highly suggest you make this your standard access process. The instructions below work on Ubuntu distributions, and if you followed Doug’s advice about using a Security Onion VM as your client, this should work perfectly as long as you haven’t configured the VM as a server.

The method can be used on a Windows or Linux client. The instructions below focus on Linux, but googling “windows ssh tunnel how to” should get you a good start. In the example below port 81 is the Splunk port. If you installed Splunk on a different port just replace 81 with it.

The approach uses an SSH tunnel and is really easy to set up. On your Security Onion/Splunk server you’ll want to make sure SSH is enabled in Uncomplicated Firewall (ufw).

sudo ufw status

You should see 22/tcp ALLOW in the results. If it says DENY, then enable it:

sudo ufw allow 22/tcp

Next configure ufw to block (yes I said block) Snorby, Squert and Splunk ports:

sudo ufw deny 81/tcp
sudo ufw deny 443/tcp
sudo ufw deny 3000/tcp

From a remote Linux host with openssh-client installed:

sudo ssh username@securityonion.example.com -L 81:localhost:81 \
  -L 443:localhost:443 -L 3000:localhost:3000

Replace username with the Security Onion/Splunk server user and securityonion.example.com with the hostname or IP address of your Security Onion/Splunk server. This command essentially tells your client to pass anything destined for localhost ports 81, 443 or 3000 to your SO server on its localhost ports 81, 443 or 3000 via the SSH tunnel. The command requires sudo because it binds privileged local ports, so you’ll be prompted for your local password and then again for the remote SO server user’s password. After authentication, you’ll have an active SSH terminal session to the server.
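If you use this tunnel regularly, the same port forwardings can live in your SSH client config so the whole command shrinks to sudo ssh so. A sketch, where the host alias, username and hostname are examples:

```
Host so
    HostName securityonion.example.com
    User username
    LocalForward 81 localhost:81
    LocalForward 443 localhost:443
    LocalForward 3000 localhost:3000
```

Drop that into ~/.ssh/config on the client and the forwardings come up automatically with every session.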

Launch a web browser and browse to any of the following:

http://localhost:81 – Splunk
https://localhost:443 – SO Home/Squert
https://localhost:3000 – Snorby

It’s that simple.

If you recall, I mentioned a “functional” advantage of using this approach. In the Security Onion for Splunk app, I provide links to Snorby and Squert, but unfortunately the user must configure the URLs to fit their environment if they access the tools remotely. The default config uses “localhost” as the server, so if you use the above method to access Splunk securely, the Snorby and Squert links work out of the box. =)

Thanks and hat tip to Doug for this little gem! I had to bite my lip whenever I recommended someone install the free version of Splunk due to the authentication limit, but now I don’t have to.

Announcing: Security Onion for Splunk Server/Sensor Add-on

I wanted to do a blog post on deploying the Security Onion for Splunk app in a distributed environment, where Splunk, Security Onion server and Security Onion sensor are all on separate hosts. I found it was easier to just build an add-on and let the README do the blogging. The add-on shouldn’t change or require updating nearly as much as the full app, only requiring updates when new logging is added at the server or sensor, as Bro is wont to do at times (thank you very much). All the field extractions and transformations happen on the Splunk server. You can download the add-on here (once approved).

README

Overview:
Security Onion Sensor Add On eases the configuration of a multiple Security Onion sensor deployment. Install the Splunk Universal Forwarder and untar this app to /opt/splunkforwarder/etc/apps. Edit /opt/splunkforwarder/etc/apps/securityonion_addon/local/inputs.conf to disable specific logs depending on whether you’re indexing from a server or sensor that is remote to the Splunk indexer.

Installation:

Install the Splunk Universal Forwarder:

sudo dpkg -i <filename>

Start Splunk and accept the license

splunk start
splunk enable boot-start

Configure the universal forwarder to forward to a specific indexer:

splunk add forward-server <host>:<port> -auth <username>:<password>

Default receiver port is 9997. Username is a Splunk administrative user. Optionally, to enable secure Splunk communications the following command can be used to specify a certificate, root CA and password.

splunk add forward-server <host>:<port> -ssl-cert-path /path/ssl.crt -ssl-root-ca-path /path/ca.crt -ssl-password <password>

Download the Security Onion for Splunk Addon and extract the file to the Splunk forwarder apps folder:

sudo tar -C /opt/splunkforwarder/etc/apps -xvf securityonion_addon.tar.gz

Edit the local inputs.conf file depending on your deployment scenario:

cd /opt/splunkforwarder/etc/apps/securityonion_addon/local/

Default config files for the following deployments are included:

inputs_server.conf
inputs_sensor.conf
inputs_server_and_sensor.conf

Just copy the appropriate file to replace the default inputs.conf (default deployment is server/sensor). For example, if you are installing on a sensor:

sudo cp inputs_sensor.conf inputs.conf

When you’re done, restart the Splunk forwarder:

sudo /opt/splunkforwarder/bin/splunk restart

As long as your indexer is receiving events you should be good to go.

Clap…Be Amazed…Now Go Defend

What does it tell you when SC Magazine’s Best Security Company of the Year bases its business on helping organizations recover from all the failures of the layers of prevention and mitigation on which we focus so much of our time and money? The typical IT security environment today is focused on two things: prevention and vulnerability management. The problem is you can’t prevent what you don’t know is coming, and with most preventative solutions in place today, your best hope is that you’re not the first (or earliest) to see an attack, and that whoever is can and will share that intel. Is that a risk worth assuming? Are we better served by the Sisyphean task of continuous patch management than we would be were we to focus most of those resources (people and money) on early detection and response?

Whether you agree with these types of awards or not, Mandiant deserves the recognition. They’re a great company and group of people and are model givers to the Info Sec community which helps people like me do my job better. They’ve assembled some of the best talent in the business as well. All deserving reasons.

I’ll give you one more.

For all the money we spend in Info Sec trying to prevent attacks, and a lot is spent, what happens when an attack gets through? We know prevention is going to fail. Yes. Prevention WILL fail and regularly does. If you don’t have any level of network monitoring in place then you’re doing it wrong. And there’s no excuse for it. In fact, there’s no excuse for not leveraging the latest and greatest technologies available to mitigate attacks when the financial cost is moot and your security posture could be strengthened immensely. For larger organizations who already have commercial solutions in place, are you able to afford the extent of coverage required for maximum visibility? What if you could get that coverage without the lofty hardware and support costs?

How do you save money on network security monitoring? You open your eyes to the low cost potential of open source initiatives and brilliant minds eager to share, like a few of those that Mandiant has shepherded through the Info Sec world. Doug Burks (@DougBurks), of Mandiant, has provided one such solution with Security Onion, a network monitoring Linux distribution that is only limited by the hardware you have at your disposal on which to run it. And what’s even better? He’s made it so easy to set up and configure that even complete strangers to Linux can figure it out. If you know how to boot off of a DVD you’re in business. Need a deployed infrastructure? Set up one Security Onion host as a server and deploy others as sensors. Standalone works just as well. Heck, I run it in a virtual machine on my laptop and it’s the best host-based IPS money can buy…but doesn’t need to. I’m not kidding about how easy this is to deploy, and if you take 20-30 minutes and watch one of Doug’s presentations (links available on the Security Onion site) demonstrating just how easy it is, you’ll thank me. Then you’ll show him thanks by downloading it.

When you look at the arsenal of monitoring tools that Security Onion brings to the table you’ll be even more amazed as you peel back the layers and get a handle on just how powerful the solution is.

You get Snort, the great open source intrusion detection/prevention system from Sourcefire (another incredible company; see my Razorback post then go download the new version). Maintaining Snort with Security Onion via PulledPork for signature management is a breeze and easily supports Snort (get thee an Oinkcode! ) and Emerging Threats signatures.

You get full packet capturing with Sourcefire’s Open Source Daemonlogger.

You get OSSEC, a host-based intrusion detection system, which helps you monitor your network security deployment out of the box and the capability to extend that detection to Windows, Linux, and MacOS via the OSSEC agents.

You get Bro IDS, an alternative approach to IDS that I’m just now learning myself and have been continually blown away by the visibility it provides. I hope to cover it more specifically in a future post, as educational resources for using Bro are a little scarce. It’s described as a network analysis framework and it comes preloaded with a few obvious examples that will give you an idea of what the framework can do. It collects basic connection data for every connection, http, ftp, syslog and SMTP data, SSL certificates (both known and the suspicious unknowns), SQL injection attacks, and more. It will continue to amaze you when the framework reveals its capabilities with events like HTTP::Malware_Hash_Registry_Match, indicating a file has been downloaded, the file hashed and the hash matches a known malicious software hash in the Team Cymru Malware Hash Registry.

All that and more, packaged up nicely saving you all of the headaches of deploying a comprehensive network monitoring suite of Open Source solutions from scratch.

Once you start collecting data you’ll have the powers of Sguil, Squert, and Snorby at your disposal for monitoring and analysis. If you prefer, you can also grab a copy of Splunk, which installs fairly painlessly on Security Onion. (The only issue I had was that the default Splunk port was in use; I typically use port 81 without problems.)

Take a day. Install a Security Onion VM. Run some sample pcaps through it. See how easy it is to deploy and detect the activity. Now think how much better you might be able to defend with this kind of visibility at next to no cost (how many old servers are being retired and replaced with virtual machines? Reuse them!). Then with all the money you’ll be saving go hire some gifted local talent with dedication and passion to learning and an accepted understanding that they’ll always be a day behind. Do this and you’ve improved your overall security posture immensely and put yourself in a much better position to detect and respond to an incident. Even if you aren’t doing the responding! If the FBI shows up at your door, you’re going to need help. The data you would be collecting would be invaluable to resolving an incident efficiently and effectively.

Now do you start to see why I think Mandiant deserves Best Security Company recognition? And this is just one example. Have you looked at the free tools they offer? Have you ever heard of the TaoSecurity Blog? Dustin Webber, the author of Snorby (although he’s with Tenable now, I believe)? Their efforts with OpenIOC.org? Jamie Butler, who literally co-authored the book on rootkits? The latest is Michael Sikorski’s contributions with his new book Practical Malware Analysis, accompanied by the release of FakeNet, a malware network analysis tool. The list goes on. They’re a company of talented people, providing tools and sharing knowledge to build stronger and more capable communities and a safer Internet.

Any company like that deserves the title Best Security Company, especially when compared to companies that profit from selling you solutions with the expectation that they will fail you at some point. Mandiant profits when those companies fail you. And then they (Mandiant’s family) turn around and give you, yes give as in free, a way to do as much if not more than what you’re paying good money for, or aren’t paying for at all because you can’t afford it. They’re doing something about enabling small/medium businesses and personal networks to adopt affordable security approaches. They’re providing security practitioners with tools and technologies to perform better, defend more wisely and “find evil” more efficiently with less technical skill than was required 2 years ago. They’re doing it in their 9-5 jobs. They’re doing it as hobbyists. They’re doing it as caring volunteer citizens.

They should be recognized and thanked…

And you should go download Security Onion and start protecting your personal and professional assets…like now.

Reflections on MIRcon

If you missed MIRcon 2011, you should tune in to Mandiant’s State of the Hack: What really happened at MIRcon webcast on October 28th. (Archived version should be available here.) There were some great talks from the likes of Richard Clarke, Michael Chertoff, and Tony Sager, and a lot of the greatest minds in incident response and cybersecurity either presented or were present. Kevin Mandia has assembled an insanely gifted and giving crew.

What did we learn? Organized crime, hacktivism and nation-states are the attackers and no target is invulnerable. Your only defense is to quickly identify and carefully disrupt attacks. Don’t be a soft target. The harder the attacker has to work, the more likely you’ll either stop them the next time or they’ll move on to a softer target. They understand and have seen firsthand the effects of cyber espionage: the skill, speed and agility of the attackers; the ineffectiveness of standard security infrastructure; the economic impact of personal, corporate and national data loss and compromise.

We cannot put a price on the ultimate impact of cybercrime. Sure, we all know someone who has had to deal with credit card fraud or has received one of those letters stating that your personal information “may” have been lost. That’s a huge hit on our economy. But it’s the tip of the iceberg. It’s what you see in the media and has the most potential to affect you personally. Now think corporate. Stealing PII is valuable. Attacking corporate bank accounts is profitable too. I believe it was Michael Chertoff who referred to “outsider trading” in his talk: stealing confidential corporate communications to leverage that information against the victim company in business negotiations. If you’re a bidder and you know the lowest bid in advance you can pretty much guarantee a win.

It doesn’t stop there. Richard Clarke told a great story about driving down a highway in Dubai where he saw an eighteen wheeler carrying a Predator drone. He later asked Dubai officials when they had started buying Predators. “We haven’t,” they said. “The US won’t sell them to us. That was a Flying Dragon.” Guess who they bought that from?

The ultimate impact seems immeasurable and there are no indications that it’s going to let up. In that sense, MIRcon was as depressing as I had expected. Actually, a little more so. Kevin Mandia’s opening remarks left my co-worker turning to me saying, “Wow. Depressing. Wow.” and me nodding affirmatively. Had it ended there I probably would be looking to buy some farmland and chickens far away from the Interwebs. Instead the next two days were filled with quality presentations, amazing technology and its uses, and real-world stories of victories and defeats. By the time Kevin gave his closing remarks I was still depressed, don’t get me wrong. It’s bleak. But some people get it. Some battles are being won. And at least some of the people fighting those battles are interested in helping you wage those battles yourself and providing tools and guidance to do so. There’s a rich military background in the core of Mandiant. It’s quite apparent they’ve never abandoned service to their country.

/salute Mandiant

Width-In-Defense

Depth in defense is always a priority in securing an environment. For the novice, the notion is that the more layers of defense you have in place, the more likely you’ll be able to detect the bad guys and their malicious code. The typical analogy is that of a fortified castle. From the outside in, a deep and wide moat surrounds the outer wall, with a drawbridge and/or portcullis to control and limit access. The inner walls provide an additional layer of protection for the castle, with additional barriers in place around the keep. And let’s not forget the men and women who strategize and defend their home. Firewalls, intrusion prevention devices, web gateways, and endpoint protection act as similar layers in the depth in defense model. It’s a good model, and if designed, managed and monitored properly it will serve as a well-fortified defense system.

What concerns me is less the depth model and more how it’s constructed. I saw a prediction a year or so ago from an executive at one of the big security vendors who predicted that within 5 years there would be roughly 5 vendors who owned just about every security based solution available. The trend continues in that direction. From a marketing standpoint, great for them. They can wrap them all up nicely in a bundle and say they are the single source solution to all your security problems. But is that great for us, the end user?

Sure, the solutions they offer for the varying layers differ. A web security gateway isn’t an endpoint or antivirus client. But if the same company provides you that web gateway and the antivirus, do you gain anything running their antivirus on the web gateway? Mixing vendors, bringing in different ways of performing a similar function, is critical if you want to provide the best defense. Antivirus solutions vary greatly in their methodology and detection capabilities. They almost all use some level of signature based detection, which is inherently weak in an age of malicious code that can polymorph or obfuscate by the second. The more layers of varied antivirus solutions you can place between the attackers and your hosts, the more likely you are to stop that malicious code.

Revisiting the castle analogy, those outer walls by the moat seem like a good place for archers. The drawbridge/portcullis probably would benefit more from foot soldiers and hot oil vats above the entryway. Cavalry to stampede through the narrow lanes as attackers draw near to the inner keep, and your best and finest swordsmen and archers defending the castle proper.

Depth is critical, but depth plus width is where you’ll truly improve your chances of defense. You may have to suffer with different front-end management systems for the varying solutions but, honestly, most times you’re going to be better off isolating the administration of the varying layers as opposed to dealing with an all-in-one solution that in reality is a jumbled mess to manage. The majority of monitoring concerns can be handled with some basic alerting, event correlation or security information management.

So next time a vendor tells you they have the answer to all your security needs, think width-in-defense before you sign up for their suite of solutions.