Information Security and Network Awareness

Hurricane Labs




Gathering LDAP Identity Data with Splunk Cloud

When organizations think about moving their Splunk implementation to the Cloud, there are a couple of pain points they encounter. The first is LDAP authentication. We're not going to address that issue in this article, as there currently aren't any workarounds for it other than opening a firewall rule between Splunk Cloud and one of your domain controllers. The second sticking point, which we will address, is pulling identity data into the Splunk App for Enterprise Security.

Typically, organizations want an automated way to pull identity data from their LDAP infrastructure. In the past, this has involved setting up that same firewall rule discussed above and then configuring Splunk Support for Active Directory in the Cloud. Recently, however, I came across a way to bypass that need and ship identity information up to the Cloud instead.

Of course, you can always set up a Read Only Domain Controller, which will put some of your Windows admins at ease. You can also use firewalls to restrict connections to that Domain Controller to a single source IP on a single port. These are reasonable strategies, but they're usually a lot of extra work for minimal gain if all you need is a list of identities.

You can get the same identity information by using Splunk's summary indexing together with a Splunk Heavy Forwarder in your environment. This involves setting up Splunk Support for Active Directory locally, eliminating the need for any inbound connections to your domain controllers. All data is sent out to Splunk Cloud over the same port as the rest of your data.
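The forwarding path itself needs no special handling: if your Heavy Forwarder already sends data to Splunk Cloud (typically via the Splunk Cloud forwarder credentials app), the summary data rides along on the same connection. Conceptually, the relevant outputs.conf looks something like the sketch below — the stack name and certificate path are placeholders, and in practice this stanza is provided for you by the credentials app rather than written by hand:

```
# outputs.conf on the Heavy Forwarder -- conceptual sketch only;
# normally supplied by the Splunk Cloud forwarder credentials app
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.yourstack.splunkcloud.com:9997    # placeholder stack name
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem  # placeholder cert path
```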

Step 1: Create an index in Splunk Cloud

To create the index in Splunk Cloud:

  1. Log in to your Splunk Cloud installation
  2. Navigate to Settings > Indexes
  3. Create a New Index titled “summary_ldap”

Step 2: Create an index in your Heavy Forwarder

Next you’ll want to create an index on your Heavy Forwarder:

  1. Log in to your Heavy Forwarder Splunk installation
  2. Navigate to Settings > Indexes
  3. Create a New Index titled “summary_ldap”
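If you prefer configuration files over the UI on the Heavy Forwarder, the UI steps above are roughly equivalent to the following indexes.conf stanza (the paths shown are the defaults):

```
# indexes.conf on the Heavy Forwarder -- equivalent to the UI steps above
[summary_ldap]
homePath   = $SPLUNK_DB/summary_ldap/db
coldPath   = $SPLUNK_DB/summary_ldap/colddb
thawedPath = $SPLUNK_DB/summary_ldap/thaweddb
```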

Step 3: Schedule a search to send LDAP data to Splunk Cloud

You’ll want to schedule a search with the following settings on your Heavy Forwarder:

Search Box:

| ldapsearch domain=yourdomain search="(&(objectClass=user)(!(objectClass=computer)))" attrs="sAMAccountName,personalTitle,displayName,givenName,sn,mail,telephoneNumber,mobile,manager,department,whenCreated,userAccountControl"
| makemv userAccountControl
| search userAccountControl="NORMAL_ACCOUNT"
| eval suffix=""
| eval priority="medium"
| eval category="normal"
| eval watchlist="false"
| eval endDate=""
| table sAMAccountName, personalTitle, displayName, givenName, sn, suffix, mail, telephoneNumber, mobile, manager, priority, department, category, watchlist, whenCreated, endDate
| rename sAMAccountName as identity, personalTitle as prefix, displayName as nick, givenName as first, sn as last, mail as email, telephoneNumber as phone, mobile as phone2, manager as managedBy, department as bunit, whenCreated as startDate

Schedule: Select a schedule appropriate for how often you'd like your identities to be updated. The key here is to make sure you select "Summary Indexing" and choose the "summary_ldap" index you just created.
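Behind the scenes, a scheduled search with summary indexing enabled corresponds to a savedsearches.conf stanza along these lines. The stanza name and cron schedule here are just examples, and the search string is the one from the search box above:

```
# savedsearches.conf on the Heavy Forwarder -- illustrative values only
[LDAP Identity Summary]
search = | ldapsearch domain=yourdomain ...   # the full search from Step 3
cron_schedule = 0 2 * * *                     # example: daily at 2 AM
enable_sched = 1
action.summary_index = 1
action.summary_index._name = summary_ldap
```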

Step 4: Create the lookup table file in Splunk Cloud

The base lookup table file can be created by uploading a skeleton file through Splunk Web (Settings > Lookups > Lookup table files > Add New). Make sure that the destination app is SplunkEnterpriseSecuritySuite and that the sharing permissions are set to Global (the object should appear in all apps).

The header of the CSV should be as follows:

identity,prefix,nick,first,last,suffix,email,phone,phone2,managedBy,priority,bunit,category,watchlist,startDate,endDate

Once the file has saved successfully, verify that the permissions are set as described above.

Step 5: Create the lookup table definition in Splunk Cloud
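The lookup table definition (Settings > Lookups > Lookup definitions) simply maps a name to the CSV file you uploaded. In configuration terms, it amounts to a transforms.conf stanza like the sketch below; the definition name ldap_identities is a placeholder of my choosing, not something Enterprise Security requires:

```
# transforms.conf in SplunkEnterpriseSecuritySuite -- sketch;
# "ldap_identities" is a hypothetical name, use your own
[ldap_identities]
filename = ldap_identities.csv
```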

Step 6: Create the lookup table and verify that it is updated on the filesystem in Splunk Cloud

Step 7: Save the search as a report

This assumes that the lookup table is being generated properly. The search can also be scheduled at a later point to update the lookup table automatically.
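Assuming the renamed fields from Step 3 arrived in the summary index intact, a search along the following lines can populate the lookup table. The file name ldap_identities.csv is a hypothetical placeholder; substitute the lookup table file you created in Step 4:

```
index=summary_ldap
| table identity, prefix, nick, first, last, suffix, email, phone, phone2, managedBy, priority, bunit, category, watchlist, startDate, endDate
| outputlookup ldap_identities.csv
```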

Step 8: Configure Identities in ES: Navigate to App: Enterprise Security -> Configuration -> Identity Management

Step 9: Create the merged identity file

Before you create the merged identity file, wait approximately 5 minutes for Splunk to automatically detect the change in the identity configuration.

The following search will show you the status:  

index=_internal source=*python_modular_input.log *identit*

Step 10: Schedule the report you created earlier to run on a semi-regular basis

This assumes everything is working as expected. Ensure that you set the time frame so you don't get duplicate results: if you scheduled the search on your Heavy Forwarder to run once per day, this search should also look back only 24 hours and no more. As a final note, don't forget to disable the sample identities that are enabled by default in Enterprise Security.
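For example, with a once-per-day schedule on the Heavy Forwarder, the scheduled report might constrain its time range and deduplicate on the identity field like so (ldap_identities.csv is again a hypothetical file name; use whatever lookup file you created in Step 4):

```
index=summary_ldap earliest=-24h@h
| dedup identity
| table identity, prefix, nick, first, last, suffix, email, phone, phone2, managedBy, priority, bunit, category, watchlist, startDate, endDate
| outputlookup ldap_identities.csv
```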


More Stories By Hurricane Labs

Christina O’Neill has been working in the information security field for 3 years. She is a board member for the Northern Ohio InfraGard Members Alliance and a committee member for the Information Security Summit, a conference held once a year for information security and physical security professionals.