
Outlining a Threat Intel Program

(estimated read time 27min)

For new readers, welcome, and please take a moment to read a brief message From the Author.

Executive Summary

I recently condensed the high-level basics of setting up a threat intelligence (abbreviated as Threat Intel) program into a nine-tweet thread. It was met with great appreciation, and the feedback I solicited unanimously agreed that I should expand on the thread in a blog, so here we go.

This blog elaborates on a nine-step process for creating a Threat Intel program. It is packed full of thought-provoking questions, suggestions, and even a few lessons learned to help you avoid bumps in the road. The concepts shared here aren’t necessarily earth-shattering; in fact, they come from military experience, time spent in combat zones, 24/7 shifts in intelligence facilities, information assurance, governance/risk/compliance, and information security (InfoSec) programs in both government and civilian sectors. Additionally, I take every opportunity to pick the brain of someone (anyone) who has been doing threat intel or InfoSec, and occasionally I even sit still long enough to read a book, article, or paper on the topic. Threat Intel isn’t anything new. It’s been around for as long as humans have been at odds with each other, taking forms from sending out spies to eavesdropping in a bar, but we seem to struggle with developing a program around it in the digital space. This blog aims to close that gap and provide you with a practical outline for designing your own Threat Intel program.

Introduction

Many of you are used to the long-standing saying, “You can have your project fast, cheap, or right. You’re only allowed to choose two.” But what about quality? I remember when I first learned to drive: my mother gave me $5, told me to be back in 15 minutes, and asked me to bring her some dish detergent. I ran to the store, grabbed the bargain brand, hurried back home, and handed it to her. She looked at it and shrieked, “What’s this!?” I learned more about dish detergent in the 15 minutes that followed than I care to remember. The lesson here is that I had completed the task on time, under budget, and provided exactly what she required. It was fast, cheap AND right, but it didn’t meet her preferred standard of quality.

Taking this lesson learned, I include a fourth constraint for tasks/projects: quality. Imagine our four factors like a diamond, perfectly balanced, with four equal sections. The rules are simple: if you wish to increase volume in one of the sections, you must decrease volume in another. For this threat intel discussion we label our four sections: time, money, design/accuracy, and quality. Threat intel is rarely, if ever, black and white, so we will use the term ‘accuracy’ instead of ‘right’, which implies the binary thinking of ‘right or wrong’. As we discuss building out a Threat Intel program in this blog, we’ll refer back to our balanced diamond to remind us of something Tim Helming so eloquently commented (https://twitter.com/timhelming/status/854775298709012480): at the end of the day, the micro (1’s & 0’s of threat hunting) has to translate to the macro (a valuable Threat Intel program that pays the bills).

 

1: WHAT ARE WE PROTECTING?

The first tweet in the series https://twitter.com/GRC_Ninja/status/854573118010122240 starts simply with “list your top 3-5 assets”. This may sound very straightforward; however, I suspect that if you asked each C-level executive individually, you’d probably wind up with a very diverse list. Try to answer 1) what is it that your organization actually DOES, and 2) what assets do you need to do it?

I’d encourage you to have your top two leadership tiers submit their answers via survey, or host them at a collaborative meeting where all participants come with write-ups of their thoughts and then toss them out on a whiteboard to avoid “groupthink”. You can list as many assets as you want, but understand that when hunting threats, you are time-constrained and the quality of data is important. There’s a finite value in automation, and at the end of the day threat analysts and threat hunters have “eyes on glass” reading, analyzing, interpreting, and reporting. If your list of “most critical assets” is more than five (three is usually optimal if there’s stark diversity), then the hunting & analysis teams’ efforts will usually be divided proportionally according to the weight of priorities so that they may perform their jobs to the best of their abilities. A large list means you’ll need to invest commensurate amounts of money in staffing to achieve adequate accuracy, quality (and thoroughness) of investigation, analysis, and the level of reporting desired.

2: IDENTIFY YOUR THREATS

Tweet number two in the series https://twitter.com/GRC_Ninja/status/854573497741430785 calls for an organization to consider “who would kill to have/destroy those assets? (think of what lethal/strategic value they hold to another)”. This is an exercise not only in giving names to the boogeymen that keep you up at night, but also in identifying which one is the most feared. This sounds simple enough, right? When asking groups to do this, there are usually three adversaries named: 1) your largest competitor(s), 2) hostile former/current employees, & 3) “hackers”. That third group is a bit too vague for your hunting team to effectively and efficiently execute their duties or provide you a quality threat assessment/intel report. Imagine your threat intelligence report template as “$threat tried to hack/attack us…”, now substitute “hacker” for $threat and read it aloud. [Be honest, you’d probably fire someone for that report.]

Obviously “hacker” needs to be refined. Let’s break that term down into the following groups:

  • advanced persistent threats (APT): one or more actors who are PERSISTENT, which usually means well funded. They don’t stop, ever; they don’t go find ‘an easier’ target; and they rarely take holidays or sleep, or at least so it seems. They are your nemesis. A nation-state actor (someone working for a foreign country/government) is an APT, but not all APTs are nation states! They ARE all persistent.
  • criminals: entities driven by monetary gain, resorting to anything from phishing & fraud to malware and 0-days
  • hacktivists: a group seeking to promote a political agenda or effect social change, usually not in it for the money
  • script kiddies: usually seek bragging rights for disrupting business

Now, using these groups instead of “hacker”, try to think of someone (or some group) who meets one of these definitions and would go to great lengths to steal/destroy the assets listed in step one. Depending on what services or products your organization provides, your answers will vary. A video game company probably has very different threats than a banker, unless of course the owners or employees bank with the banker. A stationery company will have different threats than a pharmaceutical company. Sometimes, however, threats are target-neutral; these threats would be addressed by your security operations center (SOC) first, then escalated to your threat hunters/analysts if necessary. Remember, your threat intel team can’t chase every boogeyman 24/7.

Another thing you’ll want to do is score the threat actors. There are a number of systems out there and the specifics of that activity are beyond the scope of this article. However, it may be helpful when trying to prioritize what/who threatens you by using a matrix. For example, on a scale of 1 to 5, 1 being the lowest, what is each threat actor’s:

  1. level of determination
  2. resources
  3. skill
  4. team size
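Those four 1-to-5 ratings can be tallied into a single priority score per actor. The sketch below is purely illustrative: the actor names, the ratings, and the simple unweighted sum are all invented for the example, not prescribed values.

```python
# Illustrative threat actor scoring matrix. All names and ratings are
# hypothetical; each dimension is rated 1-5 (1 = lowest).

DIMENSIONS = ["determination", "resources", "skill", "team_size"]

actors = {
    "APT (nation state)": {"determination": 5, "resources": 5, "skill": 5, "team_size": 4},
    "Criminal group":     {"determination": 3, "resources": 3, "skill": 4, "team_size": 3},
    "Hacktivists":        {"determination": 4, "resources": 2, "skill": 3, "team_size": 3},
    "Script kiddies":     {"determination": 2, "resources": 1, "skill": 1, "team_size": 1},
}

def total_score(ratings: dict) -> int:
    """Sum the four 1-5 ratings into a single priority score (max 20)."""
    return sum(ratings[d] for d in DIMENSIONS)

# Rank actors from most to least threatening to drive prioritization.
ranked = sorted(actors.items(), key=lambda kv: total_score(kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{name}: {total_score(ratings)}/20")
```

You might weight the dimensions differently (e.g., determination counting double) once your team agrees on what matters most; the point is simply to turn gut feel into a comparable number.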

3: WHAT MUST WE STOP?

Next in the tweet thread https://twitter.com/GRC_Ninja/status/854574465585487872 I asked “…What [are] the 3-5 most important things to prevent? Physical/Virtual Theft? Destruction? Corruption? Manipulation? Modification?…” You may think of these within any context you wish, and some to consider are data, hosts/nodes, code execution, people & processes.

During a debate over the minimum security requirement for something highly sensitive, an executive said, to paraphrase, that he didn’t care who could READ the documents, just as long as they couldn’t STEAL them. Needless to say, explaining digital thievery left his brain about to explode and me with carte blanche authority to deny access to everyone and everything as I saw fit. The takeaway is: identify and understand what end state is beyond your acceptable risk threshold; this unacceptable risk is what you MUST stop.

For example, in some cases a breach of a network segment may be undesirable but it is data exfiltration from that segment that you MUST stop. Another example might be an asset for which destruction is an acceptable risk because you are capable of restoring it quickly. However that asset becoming manipulated, remaining live and online might have far greater reaching consequences. Think of a dataset that has Black Friday’s pricing (in our oversimplified and horribly architected system). The data is approved and posted to a drop folder where a cron job picks it up, pushes price changes to a transactional database and it’s published to your e-commerce site. If an attacker were to destroy or corrupt the file, you’re not alarmed because there’s an alert that will sound and a backup copy from which you can restore. However, consider a scenario in which an attacker modifies the prices, the “too-good-to-be-true” prices are pushed to the database and website, and it takes two hours to detect this, on Black Friday.

Perhaps you have something that is a lethal agent, thus you MUST prevent physical theft by ensuring the IoT and networked security controls have five-nines uptime (down no more than roughly five minutes per year), are never compromised, and that an unauthorized person is never allowed to access or control it. These are just a couple of scenarios to get you thinking, but the real importance lies in ensuring your list of “must stops” is manageable and each objective can be allocated sufficient manpower/support when hunting for threats and while your SOC is monitoring events that they’ll escalate to your Threat Intel Team.
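The “five nines” figure above is easy to sanity-check with a little arithmetic: an availability of 99.999% leaves a 0.001% slice of the year for downtime.

```python
# Allowed downtime per (365-day) year for N nines of availability.

def downtime_minutes_per_year(nines: int) -> float:
    """e.g. nines=5 -> availability 0.99999 -> allowed downtime in minutes."""
    unavailability = 10 ** (-nines)          # 5 nines -> 0.00001
    return unavailability * 365 * 24 * 60    # minutes in a year * downtime fraction

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):,.2f} min/year")
# Five nines works out to about 5.26 minutes of downtime per year,
# matching the parenthetical above.
```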

Identifying and understanding the activities that must be prevented will drive and prioritize the corresponding hunting activities your teams will conduct when looking for bad guys who may already be in your systems. Referring back to our balanced diamond, consider that an investment in technologies to support constant monitoring should probably not be part of the budget for your threat intel team, however analytic tools used on the historical outputs from your continuous monitoring systems, security sensors, logs etc. probably would be. Also consider the cost for manpower, time to be spent performing activities in support of these strategic objectives, and how the quality of the investigations and reporting will be affected by available manpower and tools.

4: IDENTIFY DATA/INFORMATION NEEDS/REQUIREMENTS

Next in the series of tweets comes https://twitter.com/GRC_Ninja/status/854574988153716736

“4) identify the data/information you [would] NEED to have to prevent actions…[from step] 3 (not mitigate to acceptable risk, PREVENT)”

After completing the first three steps we should know 1) what we need to protect, 2) who we believe we’ll be defending against/hunting for, and 3) what we must prevent from happening. So what are the most critical resources needed for us to achieve our goals? Data and information. At this point in the process we are simply making a list. I recommend a brainstorming session to get started. You may be in charge of developing the Threat Intel program, but you can’t run it by yourself. This step in the process is a great way to give your (potential) team members a chance to have some skin in the game and really feel like they own it. Before you consider asking C-levels for input on this, be considerate of their time and only ask those who have relevant experience, such as someone who has been a blue/red/purple team member.

Here’s a suggestion to get you started. Gather your security geeks and nerds in a room, make sure everyone understands steps 1-3, then ask them to think of what data/information they believe they would need to successfully thwart attackers. Next, put giant post-it note sheets on the walls; title them “Network”, “Application”, “Host”, “Malware”, “Databases”, and “InfoSec Soup”; give everyone a marker; then give everyone five minutes to run around the room and brain-dump information on each sheet (duplication among participants is fine). Whatever doesn’t fit into the first five categories goes on the last one (something like 3rd-party svc provider termination of disgruntled employee reports so you can actually revoke their credentials in your own system expeditiously). After the five minutes are up, take some time to go over the entries on each sheet, not in detail, just read them off so you make sure you can read them. Allow alibi additions, as something on the list may spark an idea from someone. Then walk away. You may even repeat this exercise with your SOC, NOC, and developers. You’d be surprised how security-minded some of these individuals are (you might even want to recruit them for your Threat Intel team later). If your team is remote, a modified version of this could be a survey.

Come back the next day with fresh eyes, take the note sheets, and review and organize them into a list. Follow up with the teams and begin to prioritize the list into that which exists and we NEED versus WANT, plus my favorite category, ‘Unicorns and Leprechauns’, better known as a wishlist: things which, as far as we know, do not exist but might be built/created.

5: IDENTIFY DATA/INFORMATION RESOURCES

Some feedback I received regarding the next tweet https://twitter.com/GRC_Ninja/status/854575357885906944, where I ask whether “you [can] get this information from internal sources in sufficient detail to PREVENT items in 3? If not can you get there?”, was that it could be combined with the previous step. Depending on the organization, this is a true statement. However, I expect that in order to complete the task above, there will be multiple meetings and a few iterations of list revision before the step is complete. From a project management view, having these as separate milestones makes it easier to track progress toward the goal of creating the program. Additionally, seeing another milestone complete has immeasurable positive effects, as it creates a sense of accomplishment. Whether you combine or separate them, once this step is complete we have a viable list of information sources we’ve identified as necessary, and we can start working on identifying how we might source the information.

Information is data that has been analyzed and given context. In some cases, we trust the data analysis of a source, and we are comfortable trusting the information it produces, such as our internal malware reverse engineers, a vetted blacklist provider, or even just “a guy I know” (which ironically sometimes provides the most reliable tips/information out there). In other cases, such as a pew-pew map, we want to see the raw data so that we may perform our own analysis and draw our own conclusions. The challenge in this step, for internal sources, is to identify all the data sources. This will have secondary and tertiary benefits as you will not only identify redundant sources/reporting (which can help reduce costs later) but you will have to decide on which source is your source of truth. You may also discover other unexpected goodies some sources provide that you hadn’t thought of. As an example (not necessarily an endorsement) log files will be on your list of necessary data, and perhaps you find that only portions of these files are pumped into Splunk versus the raw log files which contain data NOT put into Splunk. In most cases when hunting, the raw data source is preferred. However by listing both sources, your discovery of this delta in the sources may even prompt a modification to data architecture to allow the extra fields you want to be added to the Splunk repository.
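The “delta” between raw log files and what actually lands in your SIEM boils down to a set difference. A minimal sketch follows; the field names are made up for illustration and stand in for whatever your own logs and ingestion pipeline actually contain.

```python
# Compare the fields present in a raw log source against the fields the
# SIEM ingests. All field names here are hypothetical examples.

raw_log_fields = {
    "timestamp", "src_ip", "dst_ip", "src_port", "dst_port",
    "user_agent", "http_referer", "bytes_out", "session_id",
}

siem_indexed_fields = {
    "timestamp", "src_ip", "dst_ip", "dst_port", "bytes_out",
}

# Anything in this delta is data hunters can only reach in the raw files --
# a candidate for a change to the ingestion/data architecture.
missing_from_siem = sorted(raw_log_fields - siem_indexed_fields)
print("Fields only in raw logs:", missing_from_siem)
```

Running this kind of comparison per source, as part of the inventory in this step, is one cheap way to document which source is richer and therefore which should be your source of truth.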

In other cases, the data which you seek is not currently captured, such as successful login attempts to a resource listed in step one, but it could be if someone turned on that logging. Finally, some of the data/information you’ve listed simply is not something you have access to, such as underground extremist threats against your industry or gang activity in the region of an asset from step one. You still need this information, and listing all possible sources for it usually identifies a need for relationships to be established and/or monitoring of open sources to be created. Another data point that will emerge is potential vendors that market/promise that they have the kind(s) of information you want. These will each require a cost/benefit analysis and a “bake off” between vendors to see who truly provides something that adds value to your program and meets your needs. NOTE: most threat intel feeds are at best industry-specific, not organization- or even region-specific, so be mindful of purchasing “relative” threat intelligence feeds.

6: IDENTIFY DATA/INFORMATION COSTS

The next step in the process, mentioned here https://twitter.com/GRC_Ninja/status/854575731585798144, is identifying the gaps between the data/information you need and what you have. “6) if no to 5, can you buy this information? If yes, what’s your budget? Can you eventually generate it yourself?” It’s not surprising to anyone that sometimes the information we’d like to have is closely held by state and federal agencies. If you’re building this program from the ground up, you will want to establish relationships with these agencies and determine if there’s a cost associated with receiving it. As mentioned earlier, ISACs for your industry might be a good source, but most of them are not free.

Other information you might be able to generate, but someone else already develops it. In many cases, not only do they develop it, they do it well, it’s useful, and you couldn’t generate it to the quality standards they do unless that was absolutely the only thing on which your team worked. For example, consider Samuel Culper’s Forward Observer https://readfomag.com/. He provides weekly executive summaries and addresses current indicators of:

  • Systems disruption or instability leading to violence
  • An outbreak of global conflict
  • Organized political violence
  • Economic, financial, or monetary instability

All of the above, could be used to cover the tracks of, or spawn a digital (cyber) attack. As an independent threat researcher, this information is something I do not have the time to collect & analyze, and it costs me about the same as grits & bacon once a month at my favorite breakfast place.

In considering our balanced diamond, money/cost is a resource: if we need a lot of it for one area of our program, we usually have to give up something else inside that same category, usually manpower or tools, as everyone is pushed to “do more with less”. So how do we prioritize the allocation of funds? Use the ABC prioritization rules: acquire, buy, create. First, see if you can acquire what you need in-house, from another team, tool, repository, etc., as this is the cheapest route. If you cannot acquire it, can you buy it? This may be more expensive, but depending on your timeline and the availability of personnel in-house to create it, buying is sometimes cheaper than the next option, creating it. Finally, if you cannot acquire it or buy it, then consider creating it. This is probably the most time-consuming and costly option (from a total cost of ownership perspective) when first standing up a program; however, it may be something that goes on a roadmap for later. Creating a source can allow greater flexibility, control, and validation over your threat intelligence data/information.

Whether to choose A, B, or C will depend on your balanced diamond. If time is not a resource you have, and the program needs to be stood up quickly, you may take the hit in the cost section of your diamond as you buy the data/information from a source. The talent pool from which you have to choose may also affect your decision; the time and cost associated with hiring the talent (if you can’t train someone up) may force your hand into buying instead of creating. In some instances the cost of the data may be prohibitive and you do not have the capability in-house, so you may have to adjust the time section of your diamond to allow you to hire that resource in. The bottom line is that there is no cookie-cutter “right” answer to how you go about selecting each data resource; one way or another you must select something, and you may need to revise your needs, goals, and long-term objectives.

 

7: DEFINE YOUR THREAT INTELLIGENCE DEVELOPMENT PROCESSES & PERSONNEL REQUIREMENTS

The next tweet in the series is where we really start to get into the “HOW” of our program

https://twitter.com/GRC_Ninja/status/854576206867566596 “7) Once you get the information, how will you evaluate, analyze & report on it? How much manpower will you need? How will you assess ROI?” There’s a lot packed into this tweet, and the questions build on each other. Beginning with the first question, you’ll be looking at your day-to-day and weekly activities. How will you evaluate the data & information received? Take, for example, an aggregate customized social media feed: will the results need manual review? If so, how often? Will you be receiving threat bulletins from an Information Sharing and Analysis Center (ISAC)? Who’s going to read/take action on them? One key thing to include in your reporting is the WHO, not just the when and how. A great tool for this is a RACI chart.

For each information source you listed in steps 5 & 6, you should have a plan to evaluate, analyze & report on it. You will find that, as your team analyzes and evaluates these sources, some of them will become redundant.

The second question in the tweet was “How much manpower will you need?” There are a variety of estimating models, but I urge you to consider basing yours on 1) the number of information sources you’ve identified as necessary and 2) the number of employees in your organization. What’s the point of having a source if you don’t have anyone to use/analyze/report on or mine it? Your own employees are sensors; sometimes they’re also an internal threat. Another point to consider is how much of each analysis effort will be manual at first but could become automated later. Remember, you can never fully automate all analyses, because you can never fully predict human behavior, and every threat still has a human behind it.

The third question in the tweet, “How will you assess ROI?”, is critical. Before you begin your program, you want to define HOW you will evaluate these. Will it be based on bad actors found? The number of incoming reports from a source that you read but that tell you nothing new? Remember our balanced diamond: there are finite finances and time that can be invested into the program. As the daily tasks go on, new information and talent needs will emerge, but more importantly, the internal data and information sources will prove to be either noise or niche. Other sources, such as an intel feed or membership in an ISAC, might not prove to be producing valuable information or intelligence. I’d recommend at minimum an annual evaluation (using your pre-defined metrics for your qualitative ROI), if not a semi-annual review, of any external/paid sources to ensure they are reliable and providing value. If your team tracks this at least monthly, it’ll be much easier when annual budget reviews convene.
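A monthly “noise or niche” tally like the one suggested above needn’t be fancy. Here is one hedged sketch: the source names, counts, and 5% threshold are all invented for illustration, and your own pre-defined metrics would replace them.

```python
# Track, per intel source, how many reports were reviewed vs. how many
# produced something actionable. Numbers and names are hypothetical.

sources = {
    "ISAC bulletins":         {"reports_reviewed": 120, "actionable": 18},
    "Paid feed A":            {"reports_reviewed": 400, "actionable": 4},
    "Internal malware triage": {"reports_reviewed": 35, "actionable": 12},
}

MIN_SIGNAL = 0.05  # flag sources where under 5% of reports were actionable

for name, m in sources.items():
    signal = m["actionable"] / m["reports_reviewed"]
    verdict = "review at renewal" if signal < MIN_SIGNAL else "keeping"
    print(f"{name}: {signal:.1%} actionable -> {verdict}")
```

Kept up monthly, a table like this hands you the qualitative-ROI evidence the annual budget review will ask for.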

REMINDER: Defining the metrics for ROI in advance does not mean you cannot add or refine the metrics as the program progresses. I recommend reviewing them every 6 months to determine if they need revising. Also, don’t forget that new information needs will emerge as your program grows. Take them, and go back through steps 5-7 before asking for them.

8: DEFINE SUCCESS AND THE METRICS THAT REFLECT IT

Good advice I’ve heard time and time again is, always begin with the end in mind. The next tweet in the series https://twitter.com/GRC_Ninja/status/854576964065275904 touches on this by asking “8) what will success look like? # of compromises? Thwarted attempts? Time before bad guys detected? Lost revenue? Good/Bad press hits?” Granted 140 characters is not nearly enough to list all of the possible metrics one could use, but the objective of that tweet and this blog are not to list them for you, rather to encourage you to think of your own.

Before you start hunting threats and developing a threat intelligence program, you’ll need a measuring stick for success, for without one how will you know if you’re on the right path or have achieved your goals? As with everything in business, metrics are used to justify budgets and evaluate performance (there’s a buzzword you should become familiar with, key performance indicators (KPIs), also known as status or stoplight reporting: red, yellow, green).
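Stoplight reporting is just thresholding a KPI into three bands. A tiny sketch, with made-up thresholds (yours would come from the success definition you agree on here):

```python
# Map a KPI value to red/yellow/green status. Thresholds are invented
# examples; higher values are assumed to be better.

def stoplight(value: float, green_at: float, yellow_at: float) -> str:
    if value >= green_at:
        return "green"
    if value >= yellow_at:
        return "yellow"
    return "red"

# e.g. fraction of targeted external relationships actually established
print(stoplight(0.9, green_at=0.8, yellow_at=0.5))  # green
```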

In a very young program, I’d encourage you to include a list of “relationships” you need/want to establish outside vs inside the organization, and the number of them that you do create. You can find other ideas for metrics with this search: https://www.google.com/#q=%22threat+intelligence+metrics%22

 

9: IDENTIFY INTERNAL AND EXTERNAL CONTINUOUS IMPROVEMENT MEASURES

The final tweet in the series https://twitter.com/GRC_Ninja/status/854577542778499072 addresses the three most important things that, in my experience, are heavily overlooked, if not completely forgotten, in most threat intelligence (and InfoSec) programs. Summed up in three questions to fit into the 140-character limit: “9) How can you continue to improve? How will you training & staying current? How will you share lessons learned with the community?”

Addressing them in reverse order, sharing experiences (and threat intelligence) can be likened to your body’s ability to fight off disease. If you’re never exposed to a germ, your body won’t know how to fight it off. If you have an immune deficiency (lack of threat intel and InfoSec knowledge) your body is in a weakened state and you get sick (compromised) more easily. Sharing what you know/learn at local security group meetings, conferences, schools and universities etc. not only helps others it will help you. It pays dividends for years to come. Additionally, people will come to trust you, and will share information with you that you might not get anywhere else except the next news cycle and by then it is too late.

Next, once you’ve designed this awesome threat intelligence program, how are you going to keep this finely tuned machine running at top-notch levels? The answer is simple: invest in your people. Pay for them to attend security conferences, and yes, it is fair to mandate they attend specific talks and provide a knowledge-sharing summary. It is also important to understand that much of the value of attending these events lies in the networking that goes on and the information shared at “lobby-con” and “smoker-con”, where nerds are simply geeking out and allowing their brains to be picked. Additionally, you can find valuable training at conferences, sometimes at discounted prices that you won’t find anywhere else. Also, these are great places to find talent if you’re looking to build or expand a team.

Speaking of training, include in your budget funds to send your people to at least one training per year if not more. Of course you want to ensure they stay on after you pay for it so it is understandable if you tie a prorated repayment clause to it. It is easier to create a rock star than it is to hire one.

Finally, how can you continue to improve? The answer for each team will be different, but if you aren’t putting it on your roadmaps and integrating it into your one-on-one sessions with your employees, you’ll quickly become irrelevant and outdated. Sometimes a great idea for improvement pops into your head and then two hours later you cannot remember it. Create a space (virtual or physical) where people can drop ideas that can later be reviewed in a team meeting or one-on-one sessions. I find that whiteboard walls are great for this (paint a wall with special paint that allows it to act as a whiteboard). Sometimes an IRC-styled channel, shared doc, or wiki page will work too.

SUMMARY

This blog provided a practical outline for designing a threat intelligence program in the digital realm, also known as cyberspace, and introduced a four-point constraint model: time, money, design/accuracy, and quality.

 

As with any threat intelligence, we must understand the digital landscape and know what it is that must be protected. In order to protect it, we must have good visibility, and simply having more data does not mean we have better visibility or better intelligence. Instead, an abundance of data that isn’t good data (or is redundant) becomes noise. Discussed above was the next critical step in defining the program: identify what we need to know, where we can get the answers and information we need, and how much, if anything, those answers and information will cost. Some programs will run on a shoestring budget while others will be swimming in a sea of money. Either way, reasonable projections and responsible spending are a must.

 

Once the major outlining is done, we start to dig a little deeper into the actual execution of the program, and we discussed figuring out exactly how we will (or would like to) develop and report the threat intelligence so that you can adequately source/hire the manpower and talent needed to meet these goals. Then we highlighted the all-important task of defining success, for without a starting definition, how can we show whether we are succeeding or failing? Remember to revisit the definition and metrics regularly, at least semi-annually, and refine them as needed.

 

Finally, we close out the program outline by remembering to plan growth into our team. That growth should include training and sharing lessons learned internally and externally. Remember to leverage your local security community social groups and the multifaceted benefits of security conferences, which include networking, knowledge from talks, and knowledge/information gained by collaborating in the social hangout spots.
Thank you for your time. Please share your experiences and constructive commentary below and share this blog on your forums of choice. For consultation inquiries, the fastest way to reach me is via DM on Twitter.

Hacking Critical Infrastructure

Please accept my apologies in advance if you were hoping to read about an actual technical vulnerability in critical infrastructure or the exploitation thereof. Today we discuss a plausible strategic cyb3r threat, and how one might go about hacking our critical infrastructure without going after the plant or the IT team(s) supporting the technologies in it (or at least not at first). Before we get started, we’ll define two terms relevant to the scope of this article:

  1. Strategic cyb3r threat intelligence would be that which is timely (i.e. received before an attack), researched in depth, and provides context to a potential attack scenario
  2. Personally identifiable information (PII) as a piece (or combination) of data that can uniquely identify an individual

Now, let’s take a minute to review a key point of a historical event, the OPM breach (you can brush up on it here http://www.nextgov.com/cybersecurity/2015/06/timeline-what-we-know-about-opm-breach/115603/). According to the information that has been released, attackers did not originally steal personally identifiable information (PII). What the attackers did make off with was even more critical: manuals, basically the “schematics” to the OPM IT infrastructure. [QUESTION: Are any of you logging access attempts (failed and successful) to your asset inventories, network diagrams, and application architecture documentation? If you are, is anyone reviewing the logs?] Many have forgotten that the first items stolen were manuals, thanks to the media news buzz about “identities stolen” blah blah blah, and chalked it up to just another breach of PII and millions of dollars wasted on identity theft protection. The attackers went after something that was considered by many to be a secondary or tertiary target, something that wasn’t “important”. However, it was a consolidated information resource with phenomenal value.

So, what does this have to do with hacking critical infrastructure?  Well, aside from the option of leaving malicious USBs lying around, what if I could compromise MULTIPLE infrastructure companies at once? [Dear LEOs, I have no plans to do this; I'm just creating a hypothetical scenario and hoping it makes someone improve security.]  How could I do this? Where could I do this? Who would I try to compromise?  If I could get just ONE company, I could have the "blueprints" to components at multiple facilities! *insert evil genius laugh* Muahahahahahah!  If I could get these, then I could find a vuln that they'd all share, and then I could launch a coordinated attack on multiple plants at once, or a targeted attack that would cause a domino effect to hide further malicious acts.

Warning InfoSec professionals, grab your headache medicine now…

Where to begin…

First, I'd see if there was a way to get a list of the companies that create the technology used in critical infrastructure, such as boilers, turbines, and generators. In fact, there is a list, and it is publicly available!  YAY for research databases!! Wooo hooo!  I'm even able to break it down into coal, gas, geothermal, hydro, nuclear, oil, & waste.  Wait, it gets better. I can even determine the commission date, model, and capacity for each.  Next, if I found data missing from this awesome resource (I may be an OCD attacker and want all the details), I'd plan a social engineering attack.  I bet that for the plants with "missing data" I could call, pretend to be a college student doing research, and they'd tell me any one of the previously listed data elements, especially if I sent them the link to the public resource that already has "everyone else's data" in it.  Although I did not do that, I did collect the manufacturer names for US infrastructure.  Admittedly some appear to have nominal differences in naming based on who submitted the data, thus potential duplication, but as an attacker I probably wouldn't care:

  • Aalborg
  • ABB
  • ABB, Asea Brown Boveri
  • Allis Chalmers
  • Alstom
  • American Hydro
  • ASEA
  • Babcock & Wilcox (B&W)
  • Baldwin-Lima-Hamilton (BLH)
  • BBC, Brown Boveri & Cie
  • Brown Boveri & Cie (BBC)
  • Brush
  • Combustion Engineering
  • Deltak
  • Doosan
  • Foster Wheeler
  • GE
  • GE Hydro
  • General Electric
  • Hitachi
  • Hitachi Japan
  • Hitachi Power Systems America
  • Hyundai/Ideal
  • Inepar
  • Kawaskai
  • Leffel
  • Melco
  • Melco Japan
  • MHI
  • MHI Japan
  • Mitsubishi Japan
  • Newport News Ship & Dry Dock
  • Noell
  • Nohab
  • Nooter
  • Nooter/Eriksen
  • Nooter-Erikson
  • Riley Stoker
  • S Morgan Smith (SMS)
  • Siemens
  • SWPC
  • Toshiba
  • TP&M
  • Voest Alpine
  • Vogt Power International Inc.
  • Voith Hydro
  • Westinghouse

Next, I'd start searching for events where multiple companies would attend.  As you can guess, there is yet another OSINT source that lists potential gatherings of these individuals: http://wikicfp.com/cfp/call?conference=energy.  This is just one source, but it is such an amazing one I decided to share it (HINT: if you're looking for InfoSec conferences, check out the security and technology categories).  For a moment, let's assume that this source didn't yield any promising results.  Another option would be to find a single company that lists one or more of these manufacturers as a client, or the technology as their area of expertise.  After a simple search for ABB (yeah, had to go pretty far down that list there) we find https://www.turbinepros.com/about/oem-experience.  And wouldn't you know it, they're hosting some events of their own.  A search for 'turbine generator maintenance' yields http://www.turbinegenerator.com/, their events tab takes me to http://www.powerservicesgroup.com/events/, and the process continues.  If I wanted a "current" status of critical infrastructure I could pull it from DHS reports/publications at https://www.dhs.gov/publication/daily-open-source-infrastructure-report (granted, it was discontinued in Jan 2017).  I could also go to https://www.dhs.gov/critical-infrastructure-sectors and pull each sector's plan, which typically identifies the number of plants running and the states in which they are located.  The amount of information available to a bad actor in open sources is plentiful, and allows them plenty of time to plan their attack.  Ironically, I wonder how many companies are doing the same thing to plan FOR the attack?

 

So, what's next? As a bad actor, one wants bang for the buck, so I want to find a conference listing its sponsors & speakers (who does that? #sarcasm); hopefully this might help me narrow down my target (i.e. most likely the one with the largest collection of key players). I also want to find one that isn't too large; small-to-medium conferences usually have smaller budgets, thus the only real security they put in place is some volunteer with no "security" experience at an entrance asking, "Do you have a conference badge?"  Also, keep in mind, these are energy conferences in this hypothetical scenario; security, especially cyb3r security, is probably not at the top of their list.  Since these are not Information Security conferences, i.e. they are not BlackHat or DEFCON, nobody is running around yelling "turn off your Bluetooth, NFC, & WiFi" or "please don't scan random QR codes".  There's also probably not anyone checking to see how many mobile access points (or stingrays) popped up before/after the conference, or whether there's a sniffer on the free conference (or hotel) WiFi.  Another thing an adversary might consider is chatting up the marketing guy, making sure to get his business card, and getting him to talk about other key leaders (everyone will talk plenty about the guy they dislike the most).  Later, that bad actor would send him (or someone else) a spear-phishing email, having captured plenty of topics of interest.  A targeted phishing email is far more likely to get the victim to click a link (and far less likely to be reported) than a mass blast.  The bottom line is that, from an attacker's perspective, it is probably much easier to compromise a person at one of these conferences than it is to hack into infrastructure directly.

 

If I were a bad guy, I'd consider this casting a wide net; the key, though, is that I only need to catch one fish.  Once I've caught one, it is game on.  While all of these companies are worried about NERC or ISO compliance, how many of them are worried about whether a bad actor is accessing IT asset inventories, network diagrams, purchase orders, IT road maps, or archived vulnerability scan reports?  One of the gaps in security that surprises me the most is the lack of security surrounding previous penetration test reports.  The vendor providing the report(s) may give the documents the highest protections when sending and storing them, and the client treats them with great protections when they first arrive.  However, once they are considered old (usually 12+ months) complacency sets in.  The irony is, the greatest frustration I hear from my Red Team friends is "we told them [1-10] years ago to fix this, and it's still wide open."  Well, not only is it wide open, the report now sits on an all-company-access shared drive, or worse a public FTP server, because it's "old".

Bottom Line – It’s Game On.

Many of you might object to me laying out this attack scenario on a public blog.  You would argue that I'm giving bad guys ideas, and shame on me.  I considered that; however, the more likely truth is that they've already thought about this, and we have our heads so far up our 4th point of contact, running around screaming about ransomware, malware, hashes, IOCs, and malicious domains, that we, the InfoSec community, do not give 1/100th of our time to thinking about strategic cyb3r threats.  We do not plan for attack scenarios beyond device compromise.  Blue Teams spend all day fighting a tactical battle and Red Teams spend all day attacking systems. We rarely stop to give thought to the person we "let in" through the front door.  When do we stop and think about domino effects and strategic cyb3r threat scenarios, so that we can take a harder look at our environments for hints of a strategic attacker and then actually go look for footprints?  Most, if not all, of you reading this will say: we don't ever do that.  That is why I've written this.

We have to change what we’re doing and start thinking outside of immediate [tactical] cyb3r threats or we’ll lose the fight not for lack of technology and effort, but for lack of creative and disruptive thinking.

 

FOOD FOR THOUGHT

  1. Look [in your environment] at the sensitive documents listed in this blog (app architecture, network architecture, asset inventory, purchase orders, pentest results, vulnerability reports, etc.). Are you logging who/what has accessed them?  Do you see any non-human accounts accessing them?  Is every copy/download accounted for?
  2. Are you adequately educating staff who attend conferences on the elevated security risks? When's the last time you made a forensic image of an executive's laptop?  If you allow BYOD, are you adequately inspecting the devices upon return? What changes in procedure for "conference attendance" can you make to better protect your environment?
  3. Do you have relationships with the local FBI/Police/InfoSec community so that you can learn about potential threats, especially cyb3r threats? Are you sending an InfoSec person to these non-InfoSec conferences with your staff to assess the InfoSec risks/threats?
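The first question above, reviewing access logs for sensitive documents, can be sketched in a few lines. This is a hypothetical example only: the log field names, keyword list, and service-account prefixes are assumptions you would replace with your own SIEM or file-audit conventions.

```python
# Hypothetical sketch: flag audit-log entries where a non-human (service)
# account touched one of the sensitive document types named in this blog.
# Field names and account-naming conventions are assumptions; adapt to
# whatever your file server or SIEM actually emits.
SENSITIVE_KEYWORDS = ("asset_inventory", "network_diagram", "pentest", "vuln_scan")
SERVICE_ACCOUNT_PREFIXES = ("svc_", "sys_")  # adjust to your environment

def flag_suspect_access(rows):
    """Return log rows where a service account accessed a sensitive document."""
    hits = []
    for row in rows:
        doc = row["document"].lower()
        acct = row["account"].lower()
        # str.startswith accepts a tuple of prefixes
        if any(k in doc for k in SENSITIVE_KEYWORDS) and acct.startswith(SERVICE_ACCOUNT_PREFIXES):
            hits.append(row)
    return hits

if __name__ == "__main__":
    sample = [
        {"account": "svc_backup", "document": "network_diagram_v3.vsd"},
        {"account": "jsmith", "document": "pentest_report_2016.pdf"},
    ]
    for hit in flag_suspect_access(sample):
        print(hit["account"], "accessed", hit["document"])
```

Even a crude filter like this, run against real audit logs on a schedule, gets a human looking at the question "who is reading our schematics?" instead of nobody.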

 

Thank you for taking time to read the blog, please feel free to leave comments and questions.  I will respond as time permits.

 

 

Phishing the Affordable Care Act

Recently, while working on a project I was asked to gather some information on Blue Cross Blue Shield (BCBS) and something scary began to unfold.  I noticed that states have individual BCBS websites, and that there is no real consistency in the URL naming convention.  Then I began imagining the methods an attacker could use to exploit this. This is especially disconcerting since tax season is here and, thanks to the Affordable Care Act, we’ll all be needing forms showing proof of medical coverage, but more on that later. Back to the BCBS domains….

The first thing I noticed was the inconsistent use of the dash (-) character.  For example, if I want to visit Georgia's BCBS site I can use http://bcbsGA.com, https://bcbsGA.com, http://bcbs-GA.com or https://bcbs-GA.com.  I found that only four other states returned a 200 status for names with the dash (ex: bcbs-$state.com):

  • http://bcbs-vt.com/ is under construction, and the owner listed is BlueCross BlueShield of Vermont
  • http://bcbs-mt.com resolves to https://www.bcbsmt.com/
  • http://bcbs-sc.com and http://bcbs-nc.com are currently parked for free at GoDaddy, and the owner information is not available.

I have not inquired with SC/NC BCBS to determine if they own the domains listed above (the ones with the dash).  I also cannot explain why there is no DNS record resolving each of the Carolina domains above to a primary one, as MT's does.  It is possible malicious actors own the NC/SC domains, although currently that is purely speculation. The final observation that made me decide to script this out, and just see how much room there is for nefarious activity, was finding that some states don't even use BCBS in the URL, for example www.southcarolinablues.com.

Deciding where to start wasn't very difficult.  There are many logical names that could be used for a phishing expedition, but I wanted to stay as close as possible to the logical and already known naming conventions, so I opted not to check for domains like "bcbsofGA.com" or iterations with the state spelled out.  I settled on eight different combinations.  As seen with the domains for BCBS of GA, the state abbreviation always appears after BCBS, so I checked for domains with the state at the front as well, with both an HTTP and HTTPS response, and with the dash before and after the state abbreviation.  Math says 8 combinations (seen below) * 50 states = 400 possible domains.

  •       http://bcbsXX.com
  •       https://bcbsXX.com
  •       http://bcbs-XX.com
  •       https://bcbs-XX.com
  •       http://XXbcbs.com
  •       https://XXbcbs.com
  •       http://XX-bcbs.com
  •       https://XX-bcbs.com

The results were a bit unnerving…

It took ~13.5 minutes, using 18 lines of Python (could be fewer but I was being lazy) on an old, slow laptop, to check the 400 possibilities and learn the following:

  • 200 status = 69 domains
  • 403 status = 2 domains
  • 404 status = 2 domains

That leaves 329 domains available for purchase, and the price for many of them was less than $10.  Keep in mind, I did not verify ownership of the 69 domains, but if I'm a bad guy, I don't really care who owns them because I'm only looking for what's available for me to use.
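For readers who want to reproduce this kind of survey, here is a minimal sketch of such a check using only the standard library. To be clear, this is not the author's original 18-line script; the state list is truncated for brevity, and any real run should add rate limiting and basic politeness.

```python
# Sketch of the domain survey described above (not the original script).
# STATES is truncated here; extend it to all 50 state abbreviations.
import urllib.error
import urllib.request

STATES = ["ga", "mt", "nc", "sc", "vt"]
PATTERNS = [
    "http://bcbs{st}.com",  "https://bcbs{st}.com",
    "http://bcbs-{st}.com", "https://bcbs-{st}.com",
    "http://{st}bcbs.com",  "https://{st}bcbs.com",
    "http://{st}-bcbs.com", "https://{st}-bcbs.com",
]

def status(url):
    """Return the HTTP status code for url, or None if it doesn't resolve."""
    try:
        return urllib.request.urlopen(url, timeout=5).getcode()
    except urllib.error.HTTPError as e:
        return e.code                  # site exists but returned 403, 404, etc.
    except (urllib.error.URLError, OSError):
        return None                    # no DNS record or connection failure

if __name__ == "__main__":
    # Demo with one state to keep it quick; loop over all of STATES for real.
    for pattern in PATTERNS:
        url = pattern.format(st="ga")
        print(url, status(url))
```

Treating `None` (no DNS record) as "available for purchase" is the approximation used here; a WHOIS lookup would be the more rigorous check.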

Now back to the tax forms I mentioned earlier….

We teach users not to click on links or open emails that they aren’t expecting, so can you blame them if they click on a link in an email that says “click here to download your 2017 proof of medical coverage, IRS form 1095”?  After all, the IRS website even tells us that we will receive them, and that for the B & C forms the “Health insurance providers (for example, health insurance companies) will send Form 1095-B to individuals they cover, with information about who was covered and when.  And, certain employers will send Form 1095-C to certain employees, with information about what coverage the employer offered.”

Remember all that information lost in the Anthem breach a few years ago? Or the Aug 2016 BCBS breach in Kansas? Hrmmm, I wonder how those might play into potential phishing attacks.

 

MITIGATION

How you choose to mitigate this vulnerability is up to you, and the solution(s) you come up with will vary depending on your company size, geographic dispersion of employees, and network architecture, among other things.  Some of you may choose to update your whitelists, blacklists, or both.  Some of you may use this as an opportunity for an educational phishing exercise, but whatever your solution is, I hope it includes proactive messaging and education for your users.
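If you do go the blocklist route, the candidate domains are easy to generate. A sketch follows (the state list is again truncated, and the output filename is made up); note that some of these domains resolve to legitimate BCBS sites, so a human should vet the list before blocking anything.

```python
# Sketch: emit the candidate look-alike domains as a plaintext list you could
# review and then feed to a proxy or DNS filter. Some of these are legitimate
# BCBS domains, so vet the list before blocking.
STATES = ["ga", "mt", "nc", "sc", "vt"]  # extend to all 50 abbreviations
TEMPLATES = ["bcbs{st}.com", "bcbs-{st}.com", "{st}bcbs.com", "{st}-bcbs.com"]

def candidate_domains(states, templates):
    # http/https share a domain, so 4 domain templates cover all 8 URL
    # combinations discussed earlier; the set comprehension dedupes.
    return sorted({t.format(st=s) for s in states for t in templates})

if __name__ == "__main__":
    with open("bcbs_lookalike_domains.txt", "w") as f:
        f.write("\n".join(candidate_domains(STATES, TEMPLATES)) + "\n")
```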

Finally, if you or someone you know works at a healthcare provider and has the ability to influence them to purchase domains that could be used to phish the employees and/or individuals they cover, I strongly encourage you to share this article with them. You can also try convincing management that not only are you preventing a malicious actor from having the domains, you could use them for training. While BCBS is the example used here, they are not the only provider out there, and this problem is not unique to BCBS or its affiliates.  However, if BCBS licenses its affiliates, then enforcing 1) standardized naming conventions for URLs and 2) a requirement to purchase a minimum set of domains to reduce the risk of malicious phishing doesn't seem unreasonable.  Considering the prudent man rule, I think a prudent man would agree the financial burden of purchasing a few extra domains is easily justified by the impact of the risk reduction.

Thanks for taking time to read.  For those of you with mitigation ideas, please share your knowledge in the comments, and if you're new to InfoSec and want to ask a question about mitigations, please ask.  I only require that comments be constructive and helpful, not negative, insulting, derogatory, or anything else along those lines.

Specific details for the 1095 forms can be found here: https://www.irs.gov/affordable-care-act/individuals-and-families/gathering-your-health-coverage-documentation-for-the-tax-filing-season

Thank you my dear friends for your proofreading, for the laughs, and most of all your time and support.

Stop Having Sex for the First Time – part 2

In the first part of this article, I gave various examples of how InfoSec teams are structured to fail, or at the very least function very inefficiently. Next we'll talk about how to build a more effective *INTEL* team, and how it will enable the development of intelligence in the organization.

FIRST: Specialization Without Division –
So, here’s where experience in the bedroom really pans out in this InfoSecsy relationship. You want to get lots of smart people who each excel at one thing but know a little bit about a lot of related things.

Both InfoSec & Intel teams will benefit from this structure; the caveat is that you must also have people with the right personality (nobody likes selfishness in the sheets). In addition to the right mix of talent, you need people who respect each other's abilities, aren't afraid to ask for help, and are willing, even eager, to share what they find. You don't need a bunch of multipurpose rock stars; rather, you want people who excel at things such as malware reverse engineering, pcap analysis, social engineering, development, data analysis, and even specific application software. You also want them to have foundational knowledge in other security realms.

The second part to this is that they are ONE TEAM. They are not divided into divisions with Directors and VPs over specific areas; rather, they are outside hires or even the internal elite from the network security team, the security operations center, the devops team, etc. They will likely have liaison relationships with these functional areas and access to the data from them as well.

In some cases it may make sense to have multiple teams spread across the country; in others, the company size may support co-locating them in one physical space. Nonetheless, the bottom line is that they are all ONE Team. They are your version of a special forces troop: everyone has a job, yet they all help each other and are willing to learn what they can about another area to be as effective and helpful as possible when needed.
SECOND: In Failure and Success, in Sickness and in Health ’til Termination Do We Part

This is an InfoSecsy partnership whether you like it or not. If an attack on your organization succeeds or fails, you share the responsibility. If you build something and it doesn't work, you share the failure; when it does work, you share the success. If you have an idea and it leads nowhere, you mark it off as something tried and eliminated. If you have an idea, try it; if it fails, tell everyone WHY/HOW it failed so they don't waste resources trying the same thing, then move on. If you try something and it succeeds, share so everyone knows WHY/HOW it worked and can repeat it, enhance it, and also succeed. [Ask @Ben0xA for his preso on FIAL – it's awesome]
THIRD: share, Share, SHare, SHAre, SHARe, SHARE, SHARE!!!!!

Sharing InfoSecsy knowledge, skills, experience, and ideas is only going to enhance your Intel team and your company's security posture. For example, the other day someone told me that an Exchange team was unable to help us identify who clicked on a link while accessing OWA, because everyone shared a generic login on a shared workstation. Having similar experience in a related area, I was able to offer a suggestion to the Exchange team and the SOC analyst that allowed the proper syslogs to be identified in their repository, and the Exchange team to liaise with the Windows IIS team to pull the data that was later analyzed. Neither of these areas was my responsibility or expertise, but due to their willingness to share the problem and brainstorm, solutions emerged.
Another example: when we had a host that couldn't be found, I got the NOC, SOC, and Help Desk all talking, and we collectively came up with a non-traditional way to protect the network and find the asset. While I didn't know the topology, I was able to ask questions that spawned conversations that resulted in solutions.

Sometimes the person with the LEAST knowledge in a subject area can ask the simplest question that lights a much-needed fire, simply because of how they processed the information. The bottom line is: get your people together regularly to discuss what has happened, what is happening, what is known, and what is yet to be figured out, and collectively, ideas and solutions will emerge.
FINALLY: Recycle & Re-Use

For this final note, I'll use a hypothetical incident as an example. A Sales Engineer (SE) gets an email from an individual purportedly representing one of his clients. The individual is asking for assistance in collecting network and netflow data to help him tune his SIEM, a seemingly harmless request. As the conversation progresses the SE thinks the guy is sketchy, so he contacts the SOC. The SOC runs a number of checks on the accounts and checks for any relationship to known incidents; nothing is found. Guidance given is to limit the scope of information given to the individual per company guidelines. So what's next? Well, if we abide by the 3rd rule, this information gets shared with the Intel team, and then the 4th rule takes effect: the information is recycled. The Intel team runs through it with a different filter and begins to identify that not only is the individual sketchy, he is possibly even an imposter executing a very crafty social engineering attack. So what's next? Recycle & Re-Use again. Contact the customer the individual claims to represent and pass the information to them. Let them look at it with a different filter. You never know what puzzle someone else is putting together, and what appears to be "nothing to see here" might be the critical piece of information that ties everything together for someone else.

SUMMARY:
The first part of this article discussed how the traditional, rigid, corporate sandboxes of responsibility that define various IT functions within an InfoSec program have a tendency to hinder effectiveness when it comes to security. This second part provides ideas and examples on how to restructure and build teams, as well as when/how to share information across specialties. There are a few takeaways I'd like to leave you with:

1. The only right structure is the one that maximizes and encourages information sharing and meets the organizational needs for security AND intelligence within resource constraints

2. Embrace failures – they are the stepping stones that lead to the door of success

3. Bring your teams (worker bee level) from all disciplines together regularly to discuss the security concerns and issues everyone is experiencing – and most of all, encourage them to SHARE ideas and experience.

4. Recycle data on security incidents, even concerns about a possible incident. Ensure it is passed among your teams via a process that works for your organization, with the end goal of everyone getting a say-so/review of it.

So go forth, do great things, and enjoy the InfoSecsy side of security not just the InfoFail side.

Thank you once again for taking time to read OSINT Heaven’s Blog.

Stop Having Sex for the First Time – part 1

As someone who's been working on an OSINT project lately, I've hit many surprises and hurdles because there's poor organization to our execution and little to no information sharing between security functions in the same department. I recently got access to a very important piece of information/tooling that resulted in a huge discovery... this is October; we've been working on this since July. Unfortunately, this problem is not unique to this project, OSINT, or InfoSec.

THE EXAMPLE:

The US Army structured a communications battalion with companies made up of platoons/teams/squads etc., where basically all the personnel in a company had the same functional training background. One company, 30-100 people, would be folks who operated/maintained satellites; another knew cabling and wiring; another, radios; and another held those skilled in networking/network communications. Whenever the battalion would go out to train, they would take a few people from each company and throw them together like a patch quilt so as to have someone capable of each required skill for the mission. They'd send these patch quilt teams out to different locations with some training objective (usually to successfully establish a communication link, keep it up, and practice for war).

The teams contained the best of the best, people of varying skill levels, all competent (minus the token derp). Nonetheless, despite these groups being highly trained with above-average intelligence, their execution was clunky, fluidity was all but absent, and they flat out struggled every time to meet the objective. Why? A few basic reasons (this list is not exhaustive): nobody knew each other, we communicated in different ways, we could not anticipate each other's needs or actions, and there was no rhythm, no synchronization. It was like being a virgin having sex for the first time every time, with another virgin. Sure, we got the job done, but it was rarely ever "awesome".

So the heart of the problem – teams, functions and activities were silos, not circles. Instead of being an elegant woven silk tapestry full of vibrant colors, we were a hideous patch quilt.

THE INFOSEC PARALLEL

We have the same problem in "Information Security" teams. There's the Network Security Team, the Security Operations Center, and if you're lucky there are Pen Test, Intel, Forensics, and Malware team(s). So with all this awesomeness under one roof, how could we possibly fail?

  1. Leadership Roadblocks – Managers sit in rooms making drug deals over resources and designing processes in vacuums.
  2. Lack of Communication/Sharing – None of the worker bees come together on a regular basis with information to share, the "intel" that everyone needs.  Instead, data gets passed around/tracked in one ticketing system from workflow to the next team's workflow, if we're lucky (and the documentation usually sucks).
  3. Pissing Contests – We've got the "you will use *MY* ticketing system" mentality.
  4. Lack of Integration – Let's not forget that we've got all these awesome teams, and we've spent money (millions) on awesome tools and not a dime to integrate them, so "intel" sits hidden or is nearly (or entirely) impossible to gather.
  5. That's MINE! – Network "Security" teams don't let anyone have read access to network logs (and only send silly/useless globs of syslogs to a SIEM), and only the Help Desk is allowed remote access to a host, even when a user contacts the SOC suspecting compromise of their host.
  6. Black Holes – The Forensics team takes a compromised drive/image that the SOC quarantined and runs away to their cave, never to be seen again; the malware team pops their heads up like prairie dogs when you say malware, you feed them, and they run away only to pop out of another hole, say "here's your IoC", and scurry back down.

Instead of being a highly functioning ecosystem of intelligent wild animals (face it, real InfoSec folks, we're just wild :), we're a damn zoo and none of the animals get to play together.

OK….hopefully you get the point by now – We ALL play a part in this.

SURPRISE! – Not really

So is it any wonder that when there's an attack on your organization, everyone flounders to some degree, and for the serious ones you simply have to call someone in? [In all honesty, sometimes that actually IS the best and most responsible thing to do.] Is it any surprise that after the attack, all you do is prepare for the next one and never really figure out anything behind it?  You never really operate in a preventative or offensive fashion.  You just sit around waiting for the next bully to steal your lunch.

So I ask, do you really want to keep having s3x for the first time, every time? I mean, practice **IS** supposed to improve performance, thus making the experience better and better. Sure you have processes, that's great, & flow charts are awesome, but they only get you so far. The SOC does its own little training on "here's how we RESPOND to ABC incident"; the NetSec-Ops team is doing its own RESPONSE training, as is every other team that plays some role in a RESPONSE effort. The funny thing is, the Windows/Unix/Server/App teams have a part too, but they're never part of the training, and nobody is invited to participate in the other teams' training.  BTW: where is all the info from your "lessons learned" going, and where's your "intel" sharing, so you can start PREVENTING instead of just RESPONDING?

Back to our example….

The Army realized the shortcomings of this structure and began restructuring their communications units. They reorganized so that the groups that would fight together would not only train together, but live and work together. Battalions had companies consisting of platoons with personnel from all the skills needed to be successful. These soldiers worked together every day and even began to learn about each other's jobs. Light bulbs started going off; greater understanding and better communication emerged. They began to bond, to learn each other's likes/dislikes and communication nuances; they began to execute with precision and efficiency. They began looking more like that expensive, beautiful tapestry and acting like lifelong lovers.

So how could a company do this?

Well, there is no one cookie-cutter solution that will work for every company, but here's one novel underlying theme: locate them together physically if possible; gather them virtually at minimum. Granted, there needs to be separation of duties and permissions, but that doesn't mean you must have silos. Let the worker bees ACROSS GROUPS work together to define processes and make suggestions up through management. If that's not possible, have regular working groups (weekly preferably) where they all get together. Sometimes the meetings will be intense with lots of hot topics/issues; other times they'll have coffee and just bond. But get them together.

Another idea: wherever your largest team is, usually the SOC, have seats for NetSec, Malware, Forensics, and Intel team members to work. The teams can rotate who works over there, but have someone there for 2-4 weeks at a time; let them "live & fight together". Let them share information, and watch the people that are part of your processes begin to work more effectively.

In the end, the goal is to have your team execute like they’ve been giving it to each other their whole life, not fumbling through sex like virgins for the first time, every time you need to respond to an incident. Then comes the next step, pillow talk the morning after – or sharing coffee and a bagel if you prefer.

Stay tuned for Part 2 where I’ll be talking about how to maximize this architecture for an intel team.