Category Archives: Strategic

Outlining a Threat Intel Program

(estimated read time: 27 min)

For new readers, welcome, and please take a moment to read a brief message From the Author.

Executive Summary

I recently condensed the high-level basics of setting up a threat intelligence (abbreviated as Threat Intel) program into a 9-tweet thread. It was met with great appreciation, and the feedback I solicited unanimously agreed I should expand on the thread in a blog, so here we go.

This blog elaborates on a nine-step process for creating a Threat Intel program. It is packed full of thought-provoking questions, suggestions, and even a few lessons learned to help you avoid bumps in the road. The concepts shared here aren’t necessarily earth-shattering; in fact they come from military experience, time spent in combat zones, 24/7 shifts in intelligence facilities, information assurance, governance/risk/compliance, and information security (InfoSec) programs in both government and civilian sectors. Additionally, I take every opportunity to pick the brain of someone (anyone) who has been doing threat intel or InfoSec, and occasionally I even sit still long enough to read a book, article, or paper on the topic. Threat Intel isn’t anything new. It has been around for as long as humans have been at odds with each other, covering everything from sending out spies to eavesdropping in a bar, but we seem to struggle with developing a program around it in the digital space. This blog aims to close that gap and provide you a practical outline for designing your own Threat Intel program.

Introduction

Many of you are used to the long-standing saying “You can have your project fast, cheap, or right. You’re only allowed to choose two.” But what about quality? I remember when I first learned to drive, my mother gave me $5, told me to be back in 15 minutes, and asked me to bring her some dish detergent. I ran to the store, grabbed the bargain brand, hurried back home, and handed it to her. She looked at it and shrieked, “What’s this!?” I learned more about dish detergent in the 15 minutes that followed than I care to remember. The lesson here is that I had completed the task on time, under budget, and provided exactly what she required. It was fast, cheap, AND right, but it didn’t meet her preferred standard of quality.

Taking this lesson to heart, I include a fourth constraint for tasks/projects: quality. Imagine our four factors like a diamond, perfectly balanced, with four equal sections. The rules are simple: if you wish to increase volume in one section, you must decrease volume in another. For this threat intel discussion, we label our four sections time, money, design/accuracy, and quality. Threat intel is rarely, if ever, black and white, so we will use the term ‘accuracy’ instead of ‘right’, as the latter implies binary thinking (‘right or wrong’). As we discuss building out a Threat Intel program in this blog, we’ll refer back to our balanced diamond to remind us of something Tim Helming so eloquently commented (https://twitter.com/timhelming/status/854775298709012480): at the end of the day, the micro (1’s & 0’s of threat hunting) has to translate to the macro (a valuable Threat Intel program that pays the bills).

 

1: WHAT ARE WE PROTECTING?

The first tweet in the series https://twitter.com/GRC_Ninja/status/854573118010122240 starts simply with “list your top 3-5 assets”. This may sound very straightforward; however, I suspect that if you individually asked each C-level executive, you’d probably wind up with a very diverse list. Try to answer: 1) what is it that your organization actually DOES, and 2) what assets do you need to do it?

I’d encourage you to have your top two leadership tiers submit their answers via survey, or host them at a collaborative meeting where all participants come with write-ups on their thoughts and then toss them out on a whiteboard to avoid groupthink. You can have as many as you want, but understand that when hunting threats, you are time constrained and the quality of data is important. There’s a finite value in automation, and at the end of the day threat analysts and threat hunters have “eyes on glass” reading, analyzing, interpreting, and reporting. If your list of “most critical assets” is more than five (and usually three is optimal if there’s stark diversity), then the hunting & analysis teams’ efforts will usually be divided proportionally according to the weight of priorities so that they may perform their jobs to the best of their abilities. A large list will mean you’ll need to invest commensurate amounts of money in staffing to achieve adequate accuracy, quality (and thoroughness) of investigation, analysis, and the level of reporting desired.
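To make the proportional division of effort concrete, here is a minimal sketch of dividing a hunting team’s weekly hours by asset priority weight. The asset names, weights, and hour totals are invented placeholders, not recommendations:

```python
# Hypothetical sketch: split a team's hunting hours proportionally by
# asset priority weight. Assets and weights below are invented examples.
def allocate_hours(total_hours: float, weights: dict) -> dict:
    """Each asset gets total_hours * (its weight / sum of all weights)."""
    total_weight = sum(weights.values())
    return {asset: round(total_hours * w / total_weight, 1)
            for asset, w in weights.items()}

# 120 analyst-hours per week spread across three top assets
weekly = allocate_hours(120, {"customer DB": 5, "source code": 3, "payment gateway": 4})
print(weekly)  # {'customer DB': 50.0, 'source code': 30.0, 'payment gateway': 40.0}
```

Notice how quickly the per-asset hours shrink as the list grows; that is the math behind keeping the critical-asset list to three to five entries.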

2: IDENTIFY YOUR THREATS

Tweet number two in the series https://twitter.com/GRC_Ninja/status/854573497741430785 calls for an organization to consider “who would kill to have/destroy those assets? (think of what lethal/strategic value they hold to another)”. This is an exercise not only in giving names to the boogeymen that keep you up at night, but also in identifying who’s the most feared. This sounds simple enough, right? When asking groups to do this, there are usually three adversaries named: 1) your largest competitor(s), 2) hostile former/current employees, & 3) “hackers”. That third group is a bit too vague for your hunting team to effectively and efficiently execute their duties or provide you a quality threat assessment/intel report. Imagine your threat intelligence report template as “$threat tried to hack/attack us…”, now substitute “hacker” for $threat and read that aloud. [Be honest, you’d probably fire someone for that report.]

Obviously “hacker” needs to be refined. Let’s break that term down into the following groups:

  • advanced persistent threats (APT): one or more actors who are PERSISTENT, which usually means well funded; they don’t stop, ever, they don’t go find ‘an easier’ target, and they rarely take holidays or sleep, or at least so it seems; they are your nemesis. A nation-state [hacker] actor (someone working for a foreign country/government) is an APT, but not all APTs are nation states! They ARE all persistent.
  • criminals: entities driven by monetary gain, resorting to anything from phishing & fraud to malware and 0-days
  • hacktivists: a group seeking to promote a political agenda or effect social change, usually not in it for the money
  • script kiddies: usually seek bragging rights for disrupting business

Now, using these groups instead of “hacker”, try to think of someone (or some group) who meets one of these definitions and would go to great lengths to steal/destroy the assets listed in step one. Depending on what services or products your organization provides, your answers will vary. A video game company probably has very different threats than a banker, unless of course the owners or employees bank with the banker. A stationery company will have different threats than a pharmaceutical company. Sometimes, however, threats are target-neutral; these threats would be addressed by your security operations center (SOC) first, then escalated to your threat hunters/analysts if necessary. Remember, your threat intel team can’t chase every boogeyman 24/7.

Another thing you’ll want to do is score the threat actors. There are a number of systems out there and the specifics of that activity are beyond the scope of this article. However, it may be helpful when trying to prioritize what/who threatens you by using a matrix. For example, on a scale of 1 to 5, 1 being the lowest, what is each threat actor’s:

  1. level of determination
  2. resources
  3. skill
  4. team size

3: WHAT MUST WE STOP?

Next in the tweet thread https://twitter.com/GRC_Ninja/status/854574465585487872 I asked “…What [are] the 3-5 most important things to prevent? Physical/Virtual Theft? Destruction? Corruption? Manipulation? Modification?…” You may think of these within any context you wish, and some to consider are data, hosts/nodes, code execution, people & processes.

During a debate over the minimum security requirement for something highly sensitive, an executive said, to paraphrase, that he didn’t care who could READ the documents, just as long as they couldn’t STEAL them. Needless to say, explaining digital thievery left his brain about to explode and me with carte blanche authority to deny access to everyone and everything as I saw fit. The takeaway is: identify and understand what end state is beyond your acceptable risk threshold; this unacceptable risk is what you MUST stop.

For example, in some cases a breach of a network segment may be undesirable, but it is data exfiltration from that segment that you MUST stop. Another example might be an asset for which destruction is an acceptable risk because you are capable of restoring it quickly; however, that asset being manipulated while remaining live and online might have far further-reaching consequences. Think of a dataset that has Black Friday’s pricing (in our oversimplified and horribly architected system). The data is approved and posted to a drop folder where a cron job picks it up, pushes price changes to a transactional database, and it’s published to your e-commerce site. If an attacker were to destroy or corrupt the file, you’re not alarmed, because an alert will sound and you have a backup copy from which to restore. However, consider a scenario in which an attacker modifies the prices, the “too-good-to-be-true” prices are pushed to the database and website, and it takes two hours to detect this, on Black Friday.
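One mitigation for that manipulation scenario is an integrity check between approval and publication. Here is a minimal, hedged sketch: the file contents and names are invented, and a real pipeline would record the approval-time hash somewhere the attacker cannot also modify:

```python
# Hedged sketch: verify the drop-folder file against the hash recorded at
# approval time, so the cron job refuses to publish tampered prices.
# File contents below are hypothetical examples.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the file contents."""
    return hashlib.sha256(data).hexdigest()

def safe_to_publish(file_bytes: bytes, approved_hash: str) -> bool:
    """Only push prices whose content still matches what was approved."""
    return sha256_of(file_bytes) == approved_hash

approved = b"SKU123,19.99\nSKU456,4.99\n"
approved_hash = sha256_of(approved)  # recorded at approval time, stored separately

tampered = b"SKU123,0.01\nSKU456,0.01\n"  # attacker's "too-good-to-be-true" prices

print(safe_to_publish(approved, approved_hash))  # True
print(safe_to_publish(tampered, approved_hash))  # False
```

This is exactly the destruction-versus-manipulation distinction: a deleted file trips an alert on its own, while a silently modified file needs an explicit check like this one.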

Perhaps you have something that is a lethal agent, so you MUST prevent physical theft by ensuring the IoT and networked security controls have five-nines uptime (not down for more than about 5 minutes per year), are never compromised, and that an unauthorized person is never allowed to access or control it. These are just a couple of scenarios to get you thinking, but the real importance lies in ensuring your list of “must stops” is manageable and each objective can be allocated sufficient manpower/support, both when hunting for threats and when your SOC is monitoring the events that they’ll escalate to your Threat Intel Team.

Identifying and understanding the activities that must be prevented will drive and prioritize the corresponding hunting activities your teams will conduct when looking for bad guys who may already be in your systems. Referring back to our balanced diamond, consider that an investment in technologies to support constant monitoring should probably not be part of the budget for your threat intel team, however analytic tools used on the historical outputs from your continuous monitoring systems, security sensors, logs etc. probably would be. Also consider the cost for manpower, time to be spent performing activities in support of these strategic objectives, and how the quality of the investigations and reporting will be affected by available manpower and tools.

4: IDENTIFY DATA/INFORMATION NEEDS/REQUIREMENTS

Next in the series of tweets comes https://twitter.com/GRC_Ninja/status/854574988153716736

“4) identify the data/information you [would] NEED to have to prevent actions…[from step] 3 (not mitigate to acceptable risk, PREVENT)”

After completing the first three steps we should know 1) what we need to protect, 2) who we believe we’ll be defending against/hunting for, and 3) what we must prevent from happening. So what are the most critical resources needed for us to achieve our goals? Data and information. At this point in the process we are simply making a list. I recommend a brainstorming session to get started. You may be in charge of developing the Threat Intel program, but you can’t run it by yourself. This step in the process is a great way to give your (potential) team members a chance to have some skin in the game and really feel like they own it. Before you consider asking C-levels for input on this, be considerate of their time and only ask those who have relevant experience, such as someone who has been a blue/red/purple team member.

Here’s a suggestion to get you started. Gather your security geeks and nerds in a room and make sure everyone understands steps 1-3, then ask them to think of what data/information they believe they would need to successfully thwart attackers. Next, put giant Post-it note sheets on the walls, title them “Network”, “Application”, “Host”, “Malware”, “Databases”, and “InfoSec Soup”, give everyone a marker, then give them five minutes to run around the room and brain dump information on each sheet (duplication among participants is fine). Whatever doesn’t fit into the first five categories goes on the last one (something like a 3rd-party service provider’s disgruntled-employee termination reports, so you can actually revoke their credentials in your own system expeditiously). After the five minutes are up, take some time to go over the entries on each sheet, not in detail, just read them off to make sure you can read them. Allow alibi additions, as something on the list may spark an idea from someone. Then walk away. You may even repeat this exercise with your SOC, NOC, and developers. You’d be surprised how security minded some of these individuals are (you might even want to recruit them for your Threat Intel team later). If your team is remote, a modified version of this could be a survey.

Come back the next day with fresh eyes, take the note sheets, and review and organize them into a list. Follow up with the teams and begin to prioritize the list into that which exists and we NEED versus WANT, plus my favorite category, ‘Unicorns and Leprechauns’, better known as a wishlist: things which, as far as we know, do not exist but might be built/created.

5: IDENTIFY DATA/INFORMATION RESOURCES

Some feedback I received regarding the next tweet https://twitter.com/GRC_Ninja/status/854575357885906944, where I ask if “you [can] get this information from internal sources in sufficient detail to PREVENT items in 3? If not can you get there?”, was that it could be combined with the previous step. Depending on the organization, that may be true. However, I expect that in order to complete the task above, there will be multiple meetings and a few iterations of list revision before the step is complete. From a project management view, having these as separate milestones makes it easier to track progress toward the goal of creating the program. Additionally, seeing another milestone complete has immeasurable positive effects, as it creates a sense of accomplishment. Whether you combine or separate them, once this is complete we have a viable list of information sources we’ve identified as necessary, and we can start working on identifying how we might source the information.

Information is data that has been analyzed and given context. In some cases, we trust the data analysis of a source and are comfortable trusting the information it produces, such as our internal malware reverse engineers, a vetted blacklist provider, or even just “a guy I know” (which ironically sometimes provides the most reliable tips/information out there). In other cases, such as a pew-pew map, we want to see the raw data so that we may perform our own analysis and draw our own conclusions. The challenge in this step, for internal sources, is to identify all the data sources. This has secondary and tertiary benefits: you will not only identify redundant sources/reporting (which can help reduce costs later), but you will also have to decide which source is your source of truth. You may also discover other unexpected goodies some sources provide that you hadn’t thought of. As an example (not necessarily an endorsement), log files will be on your list of necessary data, and perhaps you find that only portions of these files are pumped into Splunk, while the raw log files contain data NOT put into Splunk. In most cases when hunting, the raw data source is preferred. However, by listing both sources, your discovery of this delta may even prompt a modification to the data architecture to allow the extra fields you want to be added to the Splunk repository.
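The “delta” between a raw log and what the SIEM ingests can be surfaced mechanically. A toy sketch, with invented field names rather than a real Splunk schema:

```python
# Illustrative sketch: compare fields present in a raw log record against
# the fields your SIEM actually indexes. Field names here are hypothetical
# examples, not a real log format or Splunk configuration.
raw_log_fields = {"timestamp", "src_ip", "dst_ip", "user",
                  "user_agent", "bytes_out", "session_id"}
indexed_fields = {"timestamp", "src_ip", "dst_ip", "user"}

# Fields a hunter loses by searching only the indexed copy
missing_from_index = sorted(raw_log_fields - indexed_fields)
print(missing_from_index)  # ['bytes_out', 'session_id', 'user_agent']
```

Running this kind of comparison per source is a cheap way to document which hunts require going back to the raw files, and to justify the architecture change mentioned above.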

In other cases, the data you seek is not currently captured, such as successful login attempts to a resource listed in step one, but it could be if someone turned on that logging. Finally, some data/information you’ve listed simply is not something you have access to, such as underground extremist threats against your industry or gang activity in the region of an asset from step one. You still need this information, and listing all possible sources for it usually identifies a need for relationships to be established and/or monitoring of open sources to be created. Another data point that will emerge is potential vendors that market/promise that they have the kind(s) of information you want. These will each require a cost/benefit analysis and a “bake off” between vendors to see who truly provides something that adds value to your program and meets your needs. NOTE: most threat intel feeds are at best industry-specific, not organization- or even region-specific, so be mindful of purchasing “relative” threat intelligence feeds.

6: IDENTIFY DATA/INFORMATION COSTS

The next step in the process, mentioned here https://twitter.com/GRC_Ninja/status/854575731585798144, is identifying the gaps between the data/information you need and what you have. “6) if no to 5, can you buy this information? If yes, what’s your budget? Can you eventually generate it yourself?” It’s not surprising to anyone that sometimes the information we’d like to have is closely held by state and federal agencies. If you’re building this program from the ground up, you will want to establish relationships with these agencies and determine if there’s a cost associated with receiving it. Information Sharing and Analysis Centers (ISACs) for your industry might be a good source, but most of them are not free.

Other information you might be able to generate, but someone else already develops it. In many cases, not only do they develop it, they do it well, it’s useful, and you couldn’t generate it to the quality standards they do unless that was absolutely the only thing on which your team worked. For example, consider Samuel Culper’s Forward Observer https://readfomag.com/. He provides weekly executive summaries and addresses current indicators of:

  • Systems disruption or instability leading to violence
  • An outbreak of global conflict
  • Organized political violence
  • Economic, financial, or monetary instability

All of the above, could be used to cover the tracks of, or spawn a digital (cyber) attack. As an independent threat researcher, this information is something I do not have the time to collect & analyze, and it costs me about the same as grits & bacon once a month at my favorite breakfast place.

In considering our balanced diamond, money/cost is a resource that, if we need a lot of it for one area of our program, usually forces us to give up something else inside that same category, typically manpower or tools, as everyone is pushed to “do more with less”. So how do we prioritize the allocation of funds? Use the ABC prioritization rules: acquire, buy, create. First, see if you can acquire what you need in-house (from another team, an existing tool, a repository, etc.), as this is the cheapest route. If you cannot acquire it, can you buy it? This may be more expensive, but depending on your timeline and the availability of in-house personnel to create it, this is sometimes cheaper than the next option: creating it. Finally, if you cannot acquire it or buy it, then consider creating it. This is probably the most time-consuming and costly option (from a total cost of ownership perspective) when first standing up a program; however, it may be something that goes on a roadmap for later. Creating a source can allow greater flexibility, control, and validation over your threat intelligence data/information.

Whether to choose A, B, or C will depend on your balanced diamond. If time is not a resource you have and the program needs to be stood up quickly, you may take the hit on the cost section of your diamond and buy the data/information from a source. The talent pool from which you have to choose may also affect your decision; the time and cost associated with hiring the talent (if you can’t train someone up) may force your hand into buying instead of creating. In some instances the cost of the data may be prohibitive and you do not have it in-house, so you may have to adjust the time section of your diamond to allow you to hire that resource in. The bottom line is that there is no cookie-cutter “right” answer to how you go about selecting each data resource; one way or another you must select something, and you may need to revise your needs, goals, and long-term objectives.
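The ABC decision order described above can be captured in a few lines. This is a deliberately minimal sketch of the cheapest-first logic, with the two boolean inputs standing in for the real-world assessments (in-house availability, purchasability) you would make per source:

```python
# Minimal sketch of the ABC (Acquire, Buy, Create) prioritization rule
# described above. The boolean inputs are placeholders for the real
# per-source assessments your team would make.
def abc_decision(available_inhouse: bool, purchasable: bool) -> str:
    """Cheapest-first: acquire in-house, else buy, else create it yourself."""
    if available_inhouse:
        return "acquire"
    if purchasable:
        return "buy"
    return "create"

print(abc_decision(True, True))    # acquire
print(abc_decision(False, True))   # buy
print(abc_decision(False, False))  # create
```

In practice the diamond’s time and cost sections can override this order, as the text notes; the function encodes only the default preference.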

 

7: DEFINE YOUR THREAT INTELLIGENCE DEVELOPMENT PROCESSES & PERSONNEL REQUIREMENTS

The next tweet in the series is where we really start to get into the “HOW” of our program

https://twitter.com/GRC_Ninja/status/854576206867566596 “7) Once you get the information, how will you evaluate, analyze & report on it? How much manpower will you need? How will you assess ROI?” There’s a lot packed into this tweet, and the questions build on each other. Beginning with the first question, you’ll be looking at your day-to-day and weekly activities. How will you evaluate the data & information received? Take, for example, an aggregate customized social media feed: will the results need manual review? If so, how often? Will you be receiving threat bulletins from an Information Sharing and Analysis Center (ISAC)? Who’s going to read/take action on them? One key thing to include in your reporting is the WHO, not just the when and how. A great tool for this is a RACI chart.

For each information source you listed in steps 5 & 6, you should have a plan to evaluate, analyze & report on it. You will find that as your team analyzes and evaluates these sources, some of them will become redundant.

The second question in the tweet was “How much manpower will you need?” There are a variety of estimating models, but I urge you to consider basing it on 1) the number of information sources you’ve identified as necessary and 2) the number of employees in your organization. What’s the point of having a source if you don’t have anyone to use/analyze/report on or mine it? Your own employees are sensors; sometimes they’re also an internal threat. Another point to consider is how much of each analysis effort will be manual at first but could later become automated. Remember, you can never fully automate all analyses, because you can never fully predict human behavior, and every threat still has a human behind it.

The third question in the tweet, “How will you assess ROI?”, is critical. Before you begin your program, you want to define HOW you will evaluate it. Will it be based on bad actors found? The number of incoming reports from a source that you read but that tell you nothing new? Remember our balanced diamond: there are finite finances and time that can be invested in the program. As the daily tasks go on, new information and talent needs will emerge, but more importantly, the internal data and information sources will prove to be either noise or niche. Other sources, such as an intel feed or membership in an ISAC, might not prove to be producing valuable information or intelligence. I’d recommend at minimum an annual evaluation (using your pre-defined metrics for qualitative ROI), if not a semi-annual review, of any external/paid sources to ensure they are reliable and providing value. If your team tracks this at least monthly, it’ll be much easier when annual budget reviews convene.
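The monthly tracking suggested above can start as something very small: a per-source tally of actionable reports versus noise. A hedged sketch, with source names and report outcomes invented for illustration:

```python
# Illustrative sketch of monthly source-value tracking: per source, tally
# how many reports were actionable versus noise. Source names and outcomes
# below are hypothetical.
from collections import defaultdict

tally = defaultdict(lambda: {"actionable": 0, "noise": 0})

def record(source: str, actionable: bool) -> None:
    """Log one incoming report's outcome against its source."""
    tally[source]["actionable" if actionable else "noise"] += 1

# A month's worth of (source, was-it-actionable) report outcomes
for source, actionable in [("ISAC feed", True), ("ISAC feed", False),
                           ("paid feed A", False), ("paid feed A", False)]:
    record(source, actionable)

def signal_ratio(source: str) -> float:
    """Fraction of a source's reports that were actionable."""
    t = tally[source]
    return t["actionable"] / (t["actionable"] + t["noise"])

print(signal_ratio("ISAC feed"))    # 0.5
print(signal_ratio("paid feed A"))  # 0.0
```

Twelve months of these ratios, carried into the annual budget review, turn “this feed feels noisy” into a defensible renewal or cancellation decision.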

REMINDER: Defining the metrics for ROI in advance does not mean you cannot add or refine the metrics as the program progresses. I recommend reviewing them every 6 months to determine if they need revising. Also, don’t forget that new information needs will emerge as your program grows. Take them, and go back through steps 5-7 before asking for them.

8: DEFINE SUCCESS AND THE METRICS THAT REFLECT IT

Good advice I’ve heard time and time again is, always begin with the end in mind. The next tweet in the series https://twitter.com/GRC_Ninja/status/854576964065275904 touches on this by asking “8) what will success look like? # of compromises? Thwarted attempts? Time before bad guys detected? Lost revenue? Good/Bad press hits?” Granted 140 characters is not nearly enough to list all of the possible metrics one could use, but the objective of that tweet and this blog are not to list them for you, rather to encourage you to think of your own.

Before you start hunting threats and developing a threat intelligence program, you’ll need a measuring stick for success, for without one how will you know if you’re on the right path or have achieved your goals? As with everything in business, metrics are used to justify budgets and evaluate performance (there’s a buzzword you should become familiar with, key performance indicators (KPIs), also known as status or stoplight reporting: red, yellow, green).

In a very young program, I’d encourage you to include a list of “relationships” you need/want to establish outside vs inside the organization, and the number of them that you do create. You can find other ideas for metrics with this search: https://www.google.com/#q=%22threat+intelligence+metrics%22

 

9: IDENTIFY INTERNAL AND EXTERNAL CONTINUOUS IMPROVEMENT MEASURES

The final tweet in the series https://twitter.com/GRC_Ninja/status/854577542778499072 addresses the three most important things that, in my experience, are heavily overlooked, if not completely forgotten, in most threat intelligence (and InfoSec) programs. Summed up in three questions to fit into the 140 character limit: “9) How can you continue to improve? How will you training & staying current? How will you share lessons learned with the community?”

Addressing them in reverse order, sharing experiences (and threat intelligence) can be likened to your body’s ability to fight off disease. If you’re never exposed to a germ, your body won’t know how to fight it off. If you have an immune deficiency (lack of threat intel and InfoSec knowledge) your body is in a weakened state and you get sick (compromised) more easily. Sharing what you know/learn at local security group meetings, conferences, schools and universities etc. not only helps others it will help you. It pays dividends for years to come. Additionally, people will come to trust you, and will share information with you that you might not get anywhere else except the next news cycle and by then it is too late.

Next, once you’ve designed this awesome threat intelligence program, how are you going to keep this finely tuned machine running at top-notch levels? The answer is simple: invest in your people. Pay for them to attend security conferences, and yes, it is fair to mandate they attend specific talks and provide a knowledge-sharing summary. It is also important to understand that much of the value of attending these events lies in the networking that goes on and the information shared at “lobby-con” and “smoker-con”, where nerds are simply geeking out and allowing their brains to be picked. Additionally, you can find valuable trainings at conferences, sometimes at discounted prices that you won’t find anywhere else. Also, these are great places to find talent if you’re looking to build or expand a team.

Speaking of training, include in your budget funds to send your people to at least one training per year if not more. Of course you want to ensure they stay on after you pay for it so it is understandable if you tie a prorated repayment clause to it. It is easier to create a rock star than it is to hire one.

Finally, how can you continue to improve? The answer for each team will be different, but if you aren’t putting it on your roadmaps and integrating it into your one-on-one sessions with your employees, you’ll quickly become irrelevant and outdated. Sometimes a great idea for improvement pops into your head and then two hours later you cannot remember it. Create a space (virtual or physical) where people can drop ideas that can later be reviewed in a team meeting or a one-on-one session. I find that whiteboard walls are great for this (paint a wall with special paint that allows it to act as a whiteboard). Sometimes an IRC-styled channel, shared doc, or wiki page will work too.

SUMMARY

This blog provides a practical outline for designing a threat intelligence program in the digital realm, also known as cyberspace, and introduces a four-point constraint model: time, money, design/accuracy, and quality.

 

As with any threat intelligence, we must understand the digital landscape and know what it is that must be protected. In order to protect it, we must have good visibility, and simply having more data does not mean we have better visibility or better intelligence. Instead, an abundance of data that isn’t good data (or is redundant) becomes noise. Discussed above was the next critical step in defining the program: identify what we need to know, where we can get the answers and information we need, and how much, if anything, those answers and information will cost. Some programs will run on a shoestring budget while others will be swimming in a sea of money. Either way, reasonable projections and responsible spending are a must.

 

Once the major outlining is done, we start to dig a little deeper into the actual execution of the program: figuring out exactly how we will (or would like to) develop and report the threat intelligence so that you can adequately source/hire the manpower and talent needed to meet these goals. Then we highlighted the all-important task of defining success, for without a starting definition, how can we show whether we are succeeding or failing? Remember to revisit the definition and metrics regularly, at least semi-annually, and refine them as needed.

 

Finally, we close out the program outline by remembering to plan growth into our team. That growth should include training and sharing lessons learned internally and externally. Remember to leverage your local security community social groups and the multifaceted benefits of security conferences, which include networking, knowledge from talks, and knowledge/information gained by collaborating in the social hangout spots.
Thank you for your time. Please share your experiences and constructive commentary below and share this blog on your forums of choice. For consultation inquiries, the fastest way to reach me is via DM on Twitter.

Hacking Critical Infrastructure

Please accept my apologies in advance if you were hoping to read about an actual technical vulnerability in critical infrastructure or the exploitation thereof. Today we discuss a plausible strategic cyb3r threat, and how one might go about hacking our critical infrastructure without going after the plant or the IT team(s) supporting the technologies in it (or at least not at first). Before we get started, we’ll define two terms relevant to the scope of this article:

  1. Strategic cyb3r threat intelligence would be that which is timely (i.e. received before an attack), researched in depth, and provides context to a potential attack scenario
  2. Personally identifiable information (PII) as a piece (or combination) of data that can uniquely identify an individual

Now, let’s take a minute to review a key point of a historical event, the OPM breach (you can brush up on it here http://www.nextgov.com/cybersecurity/2015/06/timeline-what-we-know-about-opm-breach/115603/). According to the information that has been released, the attackers did not originally steal personally identifiable information (PII). What the attackers did make off with was even more critical: manuals, basically the “schematics” to the OPM IT infrastructure. [QUESTION: Are any of you logging access attempts (failed and successful) to your asset inventories, network diagrams, and application architecture documentation? If you are, is anyone reviewing the logs?] Many have forgotten that the first items stolen were manuals, thanks to the media news buzz about “identities stolen” blah blah blah, and chalked it up to just another breach of PII and millions of dollars wasted on identity theft protection. The attackers went after something that was considered by many to be a secondary or tertiary target, something that wasn’t “important”. However, it was a consolidated information resource with phenomenal value.

So, what does this have to do with hacking critical infrastructure? Well, aside from the option of leaving malicious USBs lying around, what if I could compromise MULTIPLE infrastructure companies at once? [Dear LEOs: I have no plans to do this; I’m just creating a hypothetical scenario and hoping it makes someone improve security.] How could I do this? Where could I do this? Who would I try to compromise? If I could get just ONE company, I could have the “blueprints” to components at multiple facilities! *insert evil genius laugh* Muahahahahahah! With those in hand, I could find a vuln they all share and launch a coordinated attack on multiple plants at once, or a targeted attack that causes a domino effect to hide further malicious acts.

Warning InfoSec professionals, grab your headache medicine now…

Where to begin…

First, I’d see if there was a way to get a list of the companies that make the technology used in critical infrastructure, such as boilers, turbines, and generators. In fact, there is such a list, and it is publicly available! YAY for research databases!! Woo hoo! I can even break it down by fuel type: coal, gas, geothermal, hydro, nuclear, oil, and waste. Wait, it gets better: I can determine the commission date, model, and capacity for each. Next, if I found data missing from this awesome resource (I may be an OCD attacker who wants all the details), I’d plan a social engineering attack. I’d bet that for the plants with “missing data” I could call, pretend to be a college student doing research, and be told any of the previously listed data elements, especially if I sent along the link to the public resource that already holds “everyone else’s data.” Although I did not do that, I did collect the manufacturer names for US infrastructure. Admittedly, some entries appear to differ only nominally in naming, depending on who submitted the data, and are thus potential duplicates, but as an attacker I probably wouldn’t care:

  • Aalborg
  • ABB
  • ABB, Asea Brown Boveri
  • Allis Chalmers
  • Alstom
  • American Hydro
  • ASEA
  • Babcock & Wilcox (B&W)
  • Baldwin-Lima-Hamilton (BLH)
  • BBC, Brown Boveri & Cie
  • Brown Boveri & Cie (BBC)
  • Brush
  • Combustion Engineering
  • Deltak
  • Doosan
  • Foster Wheeler
  • GE
  • GE Hydro
  • General Electric
  • Hitachi
  • Hitachi Japan
  • Hitachi Power Systems America
  • Hyundai/Ideal
  • Inepar
  • Kawaskai
  • Leffel
  • Melco
  • Melco Japan
  • MHI
  • MHI Japan
  • Mitsubishi Japan
  • Newport News Ship & Dry Dock
  • Noell
  • Nohab
  • Nooter
  • Nooter/Eriksen
  • Nooter-Erikson
  • Riley Stoker
  • S Morgan Smith (SMS)
  • Siemens
  • SWPC
  • Toshiba
  • TP&M
  • Voest Alpine
  • Vogt Power International Inc.
  • Voith Hydro
  • Westinghouse
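As an aside for anyone curious about the duplication noted above: a few lines of normalization collapse the obvious aliases. The alias table below is illustrative, built only from pairs visible in the list; a real cleanup pass would need a fuller mapping (and fuzzy matching for typos).

```python
# Collapse near-duplicate manufacturer names from crowd-submitted data.
# Alias table is illustrative, built from obvious pairs in the list above.

ALIASES = {
    "abb, asea brown boveri": "ABB",
    "general electric": "GE",
    "bbc, brown boveri & cie": "Brown Boveri & Cie (BBC)",
    "hitachi japan": "Hitachi",
    "melco japan": "Melco",
    "mhi japan": "MHI",
    "nooter-erikson": "Nooter/Eriksen",
}

def canonical(name):
    """Map a raw submitted name to its canonical form (or pass it through)."""
    key = name.strip().lower()
    return ALIASES.get(key, name.strip())

raw = ["ABB", "ABB, Asea Brown Boveri", "GE", "General Electric", "Hitachi", "Hitachi Japan"]
print(sorted(set(canonical(n) for n in raw)))  # six raw names, three real vendors
```

Which is exactly the point: the raw list overstates the diversity of suppliers, and the real (smaller) vendor pool is an even more attractive single point of failure.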

Next, I’d start searching for events that multiple companies would attend. As you can guess, there is yet another OSINT source listing potential gatherings: http://wikicfp.com/cfp/call?conference=energy. This is just one source, but it is such an amazing one that I decided to share it (HINT: if you’re looking for InfoSec conferences, check out the security and technology categories). For a moment, let’s assume this source didn’t yield any promising results. Another option would be to find a single company that lists one or more of these manufacturers as a client, or the technology as its area of expertise. After a simple search for ABB (yeah, had to go pretty far down that list there) we find https://www.turbinepros.com/about/oem-experience. And wouldn’t you know it, they’re hosting some events of their own. A search for ‘turbine generator maintenance’ yields http://www.turbinegenerator.com/, their events tab takes me to http://www.powerservicesgroup.com/events/, and the process continues. If I wanted a “current” status of critical infrastructure, I could pull it from DHS reports and publications at https://www.dhs.gov/publication/daily-open-source-infrastructure-report (granted, it was discontinued in January 2017). I could also go to https://www.dhs.gov/critical-infrastructure-sectors and pull each sector’s plan, which typically identifies the number of plants running and the states in which they are located. The amount of information available to a bad actor in open sources is plentiful, and allows plenty of time to plan an attack. Ironically, I wonder how many companies are doing the same thing to plan FOR the attack?
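To make the triage concrete, here is a toy sketch of how an attacker (or a defender war-gaming one) might filter scraped event listings down to small energy gatherings where vendors cluster. The records, field names, keywords, and size threshold are all made up; a real feed (e.g., WikiCFP’s energy category) would need its own scraper and parser.

```python
# Triage hypothetical scraped conference listings for small energy events
# likely to draw multiple infrastructure vendors.

events = [
    {"name": "Turbine Generator Users Group", "topic": "energy", "attendees": 300},
    {"name": "Regional Boiler Maintenance Workshop", "topic": "energy", "attendees": 120},
    {"name": "National Retail Expo", "topic": "retail", "attendees": 5000},
]

VENDOR_KEYWORDS = ("turbine", "boiler", "generator", "power")

def likely_targets(events, max_size=500):
    """Small-to-medium energy events mentioning vendor keywords: soft targets."""
    return [
        e["name"] for e in events
        if e["topic"] == "energy"
        and e["attendees"] <= max_size
        and any(k in e["name"].lower() for k in VENDOR_KEYWORDS)
    ]

print(likely_targets(events))
```

Three dictionary fields and a keyword list are enough to rank targets; that is how low the bar is for the adversary, and how low it could be for a defender running the same query against events their own staff attend.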

 

So, what’s next? As a bad actor, one wants bang for the buck, so I’d look for a conference that lists its sponsors and speakers (who does that? #sarcasm); this might help me narrow down my target (i.e., the one with the largest collection of key players). I’d also want one that isn’t too large: small-to-medium conferences usually have smaller budgets, so the only real security they put in place is a volunteer with no “security” experience standing at an entrance asking, “Do you have a conference badge?” Also, keep in mind that these are energy conferences in this hypothetical scenario; security, especially cyb3r security, is probably not at the top of their list. Since these are not Information Security conferences (they are not BlackHat or DEFCON), nobody is running around yelling “turn off your Bluetooth, NFC, & WiFi” or “please don’t scan random QR codes.” There’s also probably nobody checking how many mobile access points (or stingrays) popped up before/after the conference, or whether there’s a sniffer on the free conference (or hotel) WiFi. Another thing an adversary might do is chat up the marketing guy, making sure to get his business card, and get him talking about other key leaders (everyone will talk plenty about the guy they dislike the most). Later, that bad actor would send him (or someone else) a spear-phishing email, having captured plenty of topics of interest. A targeted phishing email is far more likely than a mass blast to get the victim to click a link (or not report it) and to avoid detection. The bottom line, from an attacker’s perspective: it is probably much easier to compromise a person at one of these conferences than it is to hack into infrastructure directly.
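Flipping to the defender’s side for a moment: one cheap control against this kind of targeted phish is flagging inbound sender domains that are a near-miss for your own, a classic spear-phishing tell. This is a minimal stdlib sketch; the domain name and distance threshold below are hypothetical, and a production mail filter would check far more than edit distance.

```python
# Flag sender domains one or two characters away from our own domain.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

OUR_DOMAIN = "turbinepros.com"  # hypothetical company domain

def suspicious_sender(sender_domain, threshold=2):
    """True for a near-miss of our domain; identical (internal mail) is fine."""
    d = edit_distance(sender_domain.lower(), OUR_DOMAIN)
    return 0 < d <= threshold

print(suspicious_sender("turb1nepros.com"))  # one character swapped
print(suspicious_sender("gmail.com"))
```

It won’t stop the marketing guy from clicking, but it gives the mail gateway a reason to quarantine the message before he can.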

 

If I were a bad guy, I’d consider this casting a wide net; the key, though, is that I only need to catch one fish. Once I’ve caught one, it is game on. While all of these companies are worried about NERC or ISO compliance, how many are worried about whether a bad actor is accessing IT asset inventories, network diagrams, purchase orders, IT road maps, or archived vulnerability scan reports? One of the security gaps that surprises me the most is the lack of protection surrounding previous penetration test reports. The vendor providing the report(s) may give the documents the highest protections when sending and storing them, and at first the client treats them with great care. However, once they are considered old (usually 12+ months), complacency sets in. The irony is, the greatest frustration I hear from my Red Team friends is “we told them [1-10] years ago to fix this, and it’s still wide open.” Well, not only is it still wide open, the report now sits on an all-company-access shared drive, or worse, a public FTP server, because it’s “old.”
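Finding those forgotten reports on your own shares doesn’t require a product, either. The sketch below walks a directory tree for aging pentest/vulnerability report files; the filename patterns, root path, and one-year cutoff are assumptions you would tune to your environment.

```python
# Walk a share for pentest/vuln reports old enough that complacency has set in.
import os
import re
import time

PATTERN = re.compile(r"(pentest|penetration|vuln(erability)?[_ ]?(scan|report))", re.I)
MAX_AGE_DAYS = 365  # assumed "considered old" threshold

def stale_reports(root):
    """Return paths of report-looking files not modified in MAX_AGE_DAYS."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if PATTERN.search(name) and os.path.getmtime(path) < cutoff:
                findings.append(path)
    return findings

# Example: point it at a (hypothetical) all-company share.
# for path in stale_reports("/mnt/shared"):
#     print("review access controls on:", path)
```

Anything this finds is a candidate for the question nobody asked: who can read this file, and why is it still here?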

Bottom Line – It’s Game On.

Many of you might object to me laying out this attack scenario on a public blog. You would argue that I’m giving the bad guys ideas, and shame on me. I considered that; however, the more likely truth is that they’ve already thought of this, and we have our heads so far up our 4th point of contact, running around screaming about ransomware, malware, hashes, IOCs, and malicious domains, that we, the InfoSec community, do not give 1/100th of our time to thinking about strategic cyb3r threats. We do not plan for attack scenarios beyond device compromise. Blue Teams spend all day fighting a tactical battle, and Red Teams spend all day attacking systems. We rarely stop to think about the person we “let in” through the front door. When do we stop and think about domino effects and strategic cyb3r threat scenarios, so that we can take a harder look at our environments for hints of a strategic attacker and then actually go look for footprints? Most, if not all, of you reading this will say: we don’t ever do that. That is why I’ve written this.

We have to change what we’re doing and start thinking beyond immediate [tactical] cyb3r threats, or we’ll lose the fight not for lack of technology and effort, but for lack of creative and disruptive thinking.

 

FOOD FOR THOUGHT

  1. Look [in your environment] at the sensitive documents listed in this blog (app architecture, network architecture, asset inventory, purchase orders, pentest results, vulnerability reports, etc.). Are you logging who/what has accessed them? Do you see any non-human accounts accessing them? Is every copy/download accounted for?
  2. Are you adequately educating staff who attend conferences on the elevated security risks? When’s the last time you made a forensic image of an executive’s laptop? If you allow BYOD, are you adequately inspecting the devices upon return? What changes to your “conference attendance” procedures could better protect your environment?
  3. Do you have relationships with the local FBI/Police/InfoSec community so that you can learn about potential threats, especially cyb3r threats? Are you sending an InfoSec person to these non-InfoSec conferences with your staff to assess the InfoSec risks/threats?

 

Thank you for taking the time to read this blog; please feel free to leave comments and questions. I will respond as time permits.