Outlining a Threat Intel Program

(estimated read time 27min)

For new readers, welcome, and please take a moment to read a brief message From the Author.

Executive Summary

I recently condensed the high-level basics of setting up a threat intelligence (abbreviated as Threat Intel) program into a 9-tweet thread. It was met with great appreciation, and the feedback I solicited unanimously agreed I should expand on the thread in a blog, so here we go.

This blog elaborates on a nine-step process for creating a Threat Intel program. It is packed full of thought-provoking questions, suggestions, and even a few lessons learned to help you avoid bumps in the road. The concepts shared here aren’t necessarily earth-shattering; in fact they come from military experience, time spent in combat zones, 24/7 shifts in intelligence facilities, information assurance, governance/risk/compliance, and information security (InfoSec) programs in both government and civilian sectors. Additionally, I take every opportunity to pick the brain of someone (anyone) who has been doing threat intel or InfoSec, and occasionally I even sit still long enough to read a book, article, or paper on the topic. Threat Intel isn’t anything new. It has been around since humans have been at odds with each other, involving anything from sending out spies to eavesdropping in a bar, but we seem to struggle with developing a program around it in the digital space. This blog aims to close that gap and provide you a practical outline for designing your own Threat Intel program.


Many of you are used to the long-standing saying “You can have your project: fast, cheap, or right. You’re only allowed to choose two.” But what about quality? I remember when I first learned to drive, my mother gave me $5, told me to be back in 15 minutes, and asked me to bring her some dish detergent. I ran to the store, grabbed the bargain brand, hurried back home, and handed it to her. She looked at it and shrieked, “What’s this!?” I learned more about dish detergent in the 15 minutes that followed than I care to remember. The lesson here is that I had completed the task, on time, under budget, and provided exactly what she required. It was fast, cheap AND right, but it didn’t meet her preferred standard of quality.

Taking this lesson learned, I include a fourth constraint for tasks/projects: quality. Imagine our four factors like a diamond, perfectly balanced, with four equal sections. The rules are simple: if you wish to increase volume in one of the sections, you must decrease volume in another. For this threat intel discussion we label our four sections: time, money, design/accuracy, and quality. Threat intel is rarely, if ever, black and white, so we will use the term ‘accuracy’ instead of ‘right’, as the latter implies binary thinking (‘right or wrong’). As we discuss building out a Threat Intel program in this blog, we’ll refer back to our balanced diamond to help remind us of something Tim Helming so eloquently commented (https://twitter.com/timhelming/status/854775298709012480): at the end of the day, the micro (the 1’s & 0’s of threat hunting) has to translate to the macro (a valuable Threat Intel program that pays the bills).



The first tweet in the series https://twitter.com/GRC_Ninja/status/854573118010122240 starts simply with “list your top 3-5 assets”. This may sound very straightforward; however, I suspect that if you asked each C-level executive individually, you’d probably wind up with a very diverse list. Try to answer: 1) what is it that your organization actually DOES, and 2) what assets do you need to do it?

I’d encourage you to have your top two leadership tiers submit their answers via survey, or host them at a collaborative meeting where all participants come with write-ups of their thoughts, then toss them out on a whiteboard to avoid “group think”. You can have as many as you want, but understand that when hunting threats, you are time-constrained and the quality of data is important. There’s finite value in automation, and at the end of the day threat analysts and threat hunters have “eyes on glass” reading, analyzing, interpreting, and reporting. If your list of “most critical assets” is more than five (three is usually optimal if there’s stark diversity), then the hunting & analysis teams’ efforts will usually be divided proportionally according to the weight of priorities so that they may perform their jobs to the best of their abilities. A large list means you’ll need to invest commensurate amounts of money in staffing to achieve adequate accuracy, quality (and thoroughness) of investigation and analysis, and the level of reporting desired.
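The proportional division of effort described above can be sketched in a few lines. This is an illustrative toy only; the asset names, weights, and total hours are invented, not a prescribed model.

```python
# Hypothetical sketch: split a hunting team's weekly analyst hours across
# prioritized assets in proportion to their priority weights.

def allocate_hours(total_hours, priorities):
    """Divide total_hours proportionally to each asset's weight."""
    total_weight = sum(priorities.values())
    return {asset: round(total_hours * w / total_weight, 1)
            for asset, w in priorities.items()}

# Three critical assets weighted 5/3/2 by the leadership exercise above.
weekly = allocate_hours(120, {"customer-db": 5, "payment-api": 3, "source-repo": 2})
print(weekly)  # {'customer-db': 60.0, 'payment-api': 36.0, 'source-repo': 24.0}
```

Even a crude allocation like this makes the staffing math visible: add a sixth or seventh "critical" asset and watch every other asset's share of hunting time shrink.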


Tweet number two in the series https://twitter.com/GRC_Ninja/status/854573497741430785 calls for an organization to consider “who would kill to have/destroy those assets? (think of what lethal/strategic value they hold to another)”. This is an exercise not only in giving names to the boogeymen that keep you up at night, but also in identifying who is the most feared. This sounds simple enough, right? When asking groups to do this, there are usually three adversaries named: 1) your largest competitor(s), 2) hostile former/current employees, and 3) “hackers”. That third group is a bit too vague for your hunting team to effectively and efficiently execute their duties or provide you a quality threat assessment/intel report. Imagine your threat intelligence report template as “$threat tried to hack/attack us…”, now substitute “hacker” for $threat and read that aloud. [Be honest, you’d probably fire someone for that report.]

Obviously “hacker” needs to be refined. Let’s break that term down into the following groups:

  • advanced persistent threats (APTs): one or more actors who are PERSISTENT, which usually means well funded; they don’t stop, ever, they don’t go find ‘an easier’ target, and they rarely take holidays or sleep, or at least so it seems; they are your nemesis. A nation-state actor (someone working for a foreign country/government) is an APT, but not all APTs are nation states! They ARE all persistent.
  • criminals: entities driven by monetary gain, resorting to anything from phishing & fraud to malware and 0-days
  • hacktivists: a group seeking to promote a political agenda or effect social change, usually not in it for the money
  • script kiddies: usually seek bragging rights for disrupting business

Now, using these groups instead of “hacker”, try to think of someone (or some group) who meets one of these definitions and would go to great lengths to steal/destroy the assets listed in step one. Depending on what services or products your organization provides, your answers will vary. A video game company probably has very different threats than a banker, unless of course the owners or employees bank with the banker. A stationery company will have different threats than a pharmaceutical company. Sometimes, however, threats are target-neutral; these threats would be addressed by your security operations center (SOC) first, then escalated to your threat hunters/analysts if necessary. Remember, your threat intel team can’t chase every boogeyman 24/7.

Another thing you’ll want to do is score the threat actors. There are a number of scoring systems out there, and the specifics of that activity are beyond the scope of this article. However, a matrix may be helpful when trying to prioritize what/who threatens you. For example, on a scale of 1 to 5, 1 being the lowest, what is each threat actor’s:

  1. level of determination
  2. resources
  3. skill
  4. team size
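The four factors above can be combined into a simple prioritization score. This is a minimal sketch only: the actor names, ratings, and the unweighted sum are all hypothetical choices, not a standard scoring system.

```python
# Toy threat-actor scoring matrix using the four 1-5 factors above.
# Actor names and ratings are invented for illustration.

FACTORS = ("determination", "resources", "skill", "team_size")

def score(ratings):
    """Sum the four 1-5 ratings into one prioritization score (max 20)."""
    assert all(1 <= ratings[f] <= 5 for f in FACTORS), "ratings must be 1-5"
    return sum(ratings[f] for f in FACTORS)

actors = {
    "apt-nemesis":   {"determination": 5, "resources": 5, "skill": 4, "team_size": 4},
    "carder-crew":   {"determination": 3, "resources": 3, "skill": 3, "team_size": 2},
    "script-kiddie": {"determination": 2, "resources": 1, "skill": 1, "team_size": 1},
}

ranked = sorted(actors, key=lambda a: score(actors[a]), reverse=True)
print(ranked)  # ['apt-nemesis', 'carder-crew', 'script-kiddie']
```

In practice you might weight the factors unevenly (determination often matters more than team size), but even an unweighted sum forces the prioritization conversation.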


Next in the tweet thread https://twitter.com/GRC_Ninja/status/854574465585487872 I asked “…What [are] the 3-5 most important things to prevent? Physical/Virtual Theft? Destruction? Corruption? Manipulation? Modification?…” You may think of these within any context you wish, and some to consider are data, hosts/nodes, code execution, people & processes.

During a debate over the minimum security requirement for something highly sensitive, an executive said, to paraphrase, that he didn’t care who could READ the documents, just as long as they couldn’t STEAL them. Needless to say, explaining digital thievery left his brain about to explode, and left me with carte blanche authority to deny access to everyone and everything as I saw fit. The takeaway: identify and understand what end state is beyond your acceptable risk threshold; this unacceptable risk is what you MUST stop.

For example, in some cases a breach of a network segment may be undesirable, but it is data exfiltration from that segment that you MUST stop. Another example might be an asset for which destruction is an acceptable risk because you are capable of restoring it quickly; however, that asset being manipulated while remaining live and online might have far greater consequences. Think of a dataset that has Black Friday’s pricing (in our oversimplified and horribly architected system). The data is approved and posted to a drop folder, where a cron job picks it up and pushes price changes to a transactional database, and it’s published to your e-commerce site. If an attacker were to destroy or corrupt the file, you’re not alarmed because there’s an alert that will sound and a backup copy from which you can restore. However, consider a scenario in which an attacker modifies the prices, the “too-good-to-be-true” prices are pushed to the database and website, and it takes two hours to detect this, on Black Friday.
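One mitigation for that manipulation scenario is to make the publish job verify the file it picks up is still the file that was approved. A minimal sketch, assuming a shared secret is available to both steps; the key handling, file contents, and function names here are all hypothetical:

```python
# Sketch: the approval step records an HMAC of the pricing file; the cron
# job recomputes it and refuses to push prices whose digest doesn't match.

import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder, not a real key

def sign_file(data: bytes) -> str:
    """Digest recorded at approval time."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def safe_to_publish(data: bytes, approved_digest: str) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign_file(data), approved_digest)

approved = b"sku-1234,799.99\nsku-5678,449.99\n"
digest = sign_file(approved)               # stored by the approval workflow
tampered = b"sku-1234,7.99\nsku-5678,4.49\n"

print(safe_to_publish(approved, digest))   # True
print(safe_to_publish(tampered, digest))   # False -- alert, don't publish
```

This doesn't stop the attacker who compromises the approval step itself, but it collapses the "two hours on Black Friday" detection window for the drop-folder scenario to zero.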

Perhaps you have something that is a lethal agent; thus you MUST prevent physical theft by ensuring the IoT and networked security controls have five-nines uptime (not down for more than ~5 minutes per year), are never compromised, and that an unauthorized person is never allowed to access or control them. These are just a couple of scenarios to get you thinking, but the real importance lies in ensuring your list of “must stops” is manageable and each objective can be allocated sufficient manpower/support, both when hunting for threats and when your SOC is monitoring events that they’ll escalate to your Threat Intel Team.
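For readers unfamiliar with "nines" arithmetic, the five-minute figure above falls straight out of the availability percentage. A quick sanity check:

```python
# Allowed downtime per year for a given availability percentage.
# "Five nines" (99.999%) works out to roughly 5.26 minutes per year.

def downtime_minutes_per_year(availability_pct: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600
    return minutes_per_year * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    print(nines, round(downtime_minutes_per_year(nines), 2))
# 99.9   -> 525.6  minutes/year
# 99.99  -> 52.56  minutes/year
# 99.999 -> 5.26   minutes/year
```

Each extra nine cuts the allowed downtime by a factor of ten, which is why five-nines controls get expensive fast, something to remember when balancing the money section of the diamond.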

Identifying and understanding the activities that must be prevented will drive and prioritize the corresponding hunting activities your teams will conduct when looking for bad guys who may already be in your systems. Referring back to our balanced diamond, consider that an investment in technologies to support constant monitoring should probably not be part of the budget for your threat intel team, however analytic tools used on the historical outputs from your continuous monitoring systems, security sensors, logs etc. probably would be. Also consider the cost for manpower, time to be spent performing activities in support of these strategic objectives, and how the quality of the investigations and reporting will be affected by available manpower and tools.


Next in the series of tweets comes https://twitter.com/GRC_Ninja/status/854574988153716736

“4) identify the data/information you [would] NEED to have to prevent actions…[from step] 3 (not mitigate to acceptable risk, PREVENT)”

After completing the first three steps we should know 1) what we need to protect, 2) who we believe we’ll be defending against/hunting for, and 3) what we must prevent from happening. So what are the most critical resources needed for us to achieve our goals? Data and information. At this point in the process we are simply making a list. I recommend a brainstorming session to get started. You may be in charge of developing the Threat Intel program, but you can’t run it by yourself. This step in the process is a great way to give your (potential) team members a chance to have some skin in the game and really feel like they own it. Before you consider asking C-levels for input on this, be considerate of their time and only ask those who have relevant experience, e.g., someone who has been a blue/red/purple team member.

Here’s a suggestion to get you started. Gather your security geeks and nerds in a room and make sure everyone understands steps 1-3, then ask them to think of what data/information they believe they would need to successfully thwart attackers. Next, put giant post-it-note sheets on the walls, title them “Network”, “Application”, “Host”, “Malware”, “Databases” and “InfoSec Soup”, give everyone a marker, then give everyone five minutes to run around the room and brain dump information on each sheet (duplication among participants is fine). Whatever doesn’t fit into the first five categories goes on the last one (something like a 3rd-party service provider’s reports of terminated disgruntled employees, so you can actually revoke their credentials in your own system expeditiously). After the five minutes are up, take some time to go over the entries on each sheet, not in detail, just read them off so you make sure you can read them. Allow alibi additions, as something on the list may spark an idea from someone. Then walk away. You may even repeat this exercise with your SOC, NOC, and developers. You’d be surprised how security-minded some of these individuals are (you might even want to recruit them for your Threat Intel team later). If your team is remote, a modified version of this could be a survey.

Come back the next day with fresh eyes, take the note sheets, review and organize them into a list. Follow up with the teams and begin to prioritize the list into that which exists and we NEED versus WANT, and my favorite category, ‘Unicorns and Leprechauns’, better known as a wishlist: things which, as far as we know, do not exist but might be built/created.


Some feedback I received regarding the next tweet https://twitter.com/GRC_Ninja/status/854575357885906944, where I ask if “you [can] get this information from internal sources in sufficient detail to PREVENT items in 3? If not can you get there?”, was that it could be combined with the previous step. Depending on the organization, that’s true. However, I expect that in order to complete the task above, there will be multiple meetings and a few iterations of list revision before the step is complete. From a project management view, having these as separate milestones makes it easier to track progress toward the goal of creating the program. Additionally, seeing another milestone completed has immeasurable positive effects, as it creates a sense of accomplishment. Whether you combine or separate them, once this is complete we have a viable list of information sources we’ve identified as necessary, and we can start working on identifying how we might source the information.

Information is data that has been analyzed and given context. In some cases, we trust the data analysis of a source and are comfortable trusting the information it produces, such as our internal malware reverse engineers, a vetted blacklist provider, or even just “a guy I know” (which ironically sometimes provides the most reliable tips/information out there). In other cases, such as a pew-pew map, we want to see the raw data so that we may perform our own analysis and draw our own conclusions. The challenge in this step, for internal sources, is to identify all the data sources. This has secondary and tertiary benefits: you will not only identify redundant sources/reporting (which can help reduce costs later), you will also have to decide which source is your source of truth. You may also discover other unexpected goodies some sources provide that you hadn’t thought of. As an example (not necessarily an endorsement), log files will be on your list of necessary data, and perhaps you find that only portions of these files are pumped into Splunk, versus the raw log files, which contain data NOT put into Splunk. In most cases when hunting, the raw data source is preferred. However, by listing both sources, discovering this delta may even prompt a modification to the data architecture so that the extra fields you want are added to the Splunk repository.
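Finding that delta can be as simple as diffing field inventories. A hedged sketch; the field names below are invented, and in a real environment you would pull the indexed-field list from your SIEM rather than hard-code it:

```python
# Compare the fields present in a raw log record against the fields the
# SIEM (Splunk in the example above) actually ingests, to surface the gap.

RAW_FIELDS = {"timestamp", "src_ip", "dst_ip", "dst_port", "user_agent",
              "bytes_out", "tls_ja3", "referrer"}
INDEXED_FIELDS = {"timestamp", "src_ip", "dst_ip", "dst_port"}

# Fields hunters could use but can't currently search in the SIEM.
missing_from_siem = sorted(RAW_FIELDS - INDEXED_FIELDS)
print(missing_from_siem)
# ['bytes_out', 'referrer', 'tls_ja3', 'user_agent']
```

Each field in that gap is a candidate for an ingestion change, or a documented reason the hunters go back to the raw logs for certain questions.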

In other cases, the data you seek is not currently captured, such as successful login attempts to a resource listed in step one, but it could be if someone turned on that logging. Finally, some data/information you’ve listed simply is not something you have access to, such as underground extremist threats against your industry or gang activity in the region of an asset from step one. You still need this information, and listing all possible sources for it usually identifies a need for relationships to be established and/or monitoring of open sources to be created. Another data point that will emerge is potential vendors who market/promise that they have the kind(s) of information you want. These will each require a cost/benefit analysis and a “bake off” between vendors to see who truly provides something that adds value to your program and meets your needs. NOTE: most threat intel feeds are at best industry-specific, not organizational or even regionally specific, so be mindful of purchasing “relative” threat intelligence feeds.


The next step in the process, mentioned here https://twitter.com/GRC_Ninja/status/854575731585798144, is identifying gaps between the data/information you need and what you have. “6) if no to 5, can you buy this information? If yes, what’s your budget? Can you eventually generate it yourself?” It’s not surprising that sometimes the information we’d like to have is closely held by state and federal agencies. If you’re building this program from the ground up, you will want to establish relationships with these agencies and determine if there’s a cost associated with receiving their information. Information Sharing and Analysis Centers (ISACs) for your industry might also be a good source, but most of them are not free.

Other information you might be able to generate yourself, but someone else already develops it. In many cases, not only do they develop it, they do it well, it’s useful, and you couldn’t generate it to their quality standards unless it was absolutely the only thing your team worked on. For example, consider Samuel Culper’s Forward Observer https://readfomag.com/. He provides weekly executive summaries and addresses current indicators of:

  • Systems disruption or instability leading to violence
  • An outbreak of global conflict
  • Organized political violence
  • Economic, financial, or monetary instability

All of the above could be used to cover the tracks of, or spawn, a digital (cyber) attack. As an independent threat researcher, this information is something I do not have the time to collect & analyze, and it costs me about the same as grits & bacon once a month at my favorite breakfast place.

In considering our balanced diamond, money/cost is a resource that, if we need a lot of it for one area of our program, usually forces us to give up something else inside that same category, and it is usually manpower or tools, as everyone is pushed to “do more with less”. So how do we prioritize the allocation of funds? Use the ABC prioritization rules: acquire, buy, create. First, see if you can acquire what you need in-house: acquire it from another team, tool, repository, etc., as this is the cheapest route. If you cannot acquire it, can you buy it? This may be more expensive, but depending on your timeline and the availability of in-house personnel to create it, this is sometimes cheaper than the next option, creating it. Finally, if you cannot acquire it or buy it, then consider creating it. This is probably the most time-consuming and costly option (from a total cost of ownership perspective) when first standing up a program; however, it may be something that goes on a roadmap for later. Creating a source can allow greater flexibility, control, and validation over your threat intelligence data/information.
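The ABC rule is just an ordered decision: acquire, then buy, then create. A toy encoding, where the three availability flags are hypothetical inputs you would gather during the sourcing review:

```python
# Toy encoding of the ABC prioritization rule above: acquire, buy, create,
# evaluated in that order for each data/information need.

def abc_decision(available_in_house: bool, purchasable: bool, budget_ok: bool) -> str:
    if available_in_house:
        return "acquire"   # cheapest: reuse an internal team, tool, or repository
    if purchasable and budget_ok:
        return "buy"       # often faster than building, if funds allow
    return "create"        # roadmap item; highest total cost of ownership

print(abc_decision(True,  True,  True))   # acquire
print(abc_decision(False, True,  True))   # buy
print(abc_decision(False, True,  False))  # create
```

Running every item on your needs list through a checklist like this keeps the sourcing debate honest: each "create" answer is an explicit admission you're taking the most expensive route.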

Whether to choose A, B, or C will depend on your balanced diamond. If time is not a resource you have, and the program needs to be stood up quickly, you may take the hit on the cost section of your diamond as you buy the data/information from a source. The talent pool from which you have to choose may also affect your decision; the time and cost associated with hiring the talent (if you can’t train someone up) may force your hand into buying instead of creating. In some instances the cost of the data may be prohibitive and you do not have it in-house, thus you may have to adjust the time section of your diamond to allow you to hire that resource in. The bottom line is that there is no cookie-cutter “right” answer to selecting each data resource; one way or another you must select something, and you may need to revise your needs, goals, and long-term objectives.



The next tweet in the series is where we really start to get into the “HOW” of our program:

https://twitter.com/GRC_Ninja/status/854576206867566596 “7) Once you get the information, how will you evaluate, analyze & report on it? How much manpower will you need? How will you assess ROI?” There’s a lot packed into this tweet, and the questions build on each other. Beginning with the first question, you’ll be looking at your day-to-day and weekly activities. How will you evaluate the data & information received? Take, for example, an aggregate customized social media feed: will the results need manual review? If so, how often? Will you be receiving threat bulletins from an Information Sharing and Analysis Center (ISAC)? Who’s going to read/take action on them? One key thing to include in your reporting is the WHO, not just the when and how. A great tool for this is a RACI chart.
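A RACI chart simply maps each activity to who is Responsible, Accountable, Consulted, and Informed. A tiny sketch of one way to capture it; the activities and role names below are illustrative, not prescriptive:

```python
# Minimal RACI table for intel-handling activities. R = does the work,
# A = owns the outcome, C = consulted, I = kept informed.

RACI = {
    "review ISAC bulletins": {
        "R": "intel analyst", "A": "intel lead",
        "C": "SOC shift lead", "I": "CISO",
    },
    "triage social media feed": {
        "R": "junior analyst", "A": "intel lead",
        "C": "PR team", "I": "SOC",
    },
}

def who_is_responsible(activity: str) -> str:
    """Answer the WHO question for a given activity."""
    return RACI[activity]["R"]

print(who_is_responsible("review ISAC bulletins"))  # intel analyst
```

Whether you keep this in a spreadsheet, a wiki page, or code, the point is that every source from steps 5 & 6 should appear in it: an activity with no "R" is a bulletin nobody reads.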

For each information source you listed in steps 5 & 6, you should have a plan to evaluate, analyze & report on it. You will find, that as your team analyzes and evaluates these sources, some of them will become redundant.

The second question in the tweet was “How much manpower will you need?” There are a variety of estimating models, but I urge you to consider basing yours on 1) the number of information sources you’ve identified as necessary and 2) the number of employees in your organization. What’s the point of having a source if you don’t have anyone to use/analyze/report on or mine it? Your own employees are sensors; sometimes they’re also an internal threat. Another point to consider is how much of each analysis effort will be manual at first, and how much of that can become automated. Remember, you can never fully automate all analyses, because you can never fully predict human behavior, and every threat still has a human behind it.
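An illustrative (and deliberately unvalidated) back-of-the-envelope model built on those two inputs might look like this. Every constant is a placeholder to tune against your own workload data, not an industry benchmark:

```python
# Hypothetical staffing estimate from 1) number of information sources and
# 2) organization headcount, the two inputs suggested above.

import math

def analysts_needed(num_sources, num_employees,
                    hours_per_source_per_week=4,   # manual review burden per source
                    employees_per_analyst=2000,    # "employees as sensors" load
                    analyst_hours_per_week=32):    # leave room for reporting, training
    source_load = num_sources * hours_per_source_per_week / analyst_hours_per_week
    people_load = num_employees / employees_per_analyst
    return math.ceil(source_load + people_load)

print(analysts_needed(num_sources=12, num_employees=5000))  # 4
```

The value of even a crude model is the conversation it forces: if the answer exceeds the headcount you can fund, either the source list shrinks or the diamond's quality section does.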

The third question in the tweet, “How will you assess ROI?”, is critical. Before you begin your program, you want to define HOW you will evaluate this. Will it be based on bad actors found? The number of incoming reports from a source that you read but that tell you nothing new? Remember our balanced diamond: there are finite finances and time that can be invested into the program. As the daily tasks go on, new information and talent needs will emerge, but more importantly, the internal data and information sources will prove to be either noise or niche. Other sources, such as an intel feed or membership in an ISAC, might not prove to be producing valuable information or intelligence. I’d recommend at minimum an annual evaluation (using your pre-defined metrics for your qualitative ROI), if not a semi-annual review, of any external/paid sources to ensure they are reliable and providing value. If your team tracks this at least monthly, it’ll be much easier when annual budget reviews convene.
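That monthly tracking can be lightweight. A hedged sketch of one possible shape for it; the source names, counts, and 5% threshold are all made up for illustration:

```python
# Per-source monthly tracking: how many reports were reviewed vs. how many
# were actionable, then flag low-yield sources ahead of the budget review.

monthly_stats = {
    "paid-feed-A": {"reviewed": 400, "actionable": 2},
    "isac":        {"reviewed": 60,  "actionable": 15},
    "internal-re": {"reviewed": 25,  "actionable": 20},
}

def low_yield(stats, threshold=0.05):
    """Sources where under 5% of reviewed reports led to any action."""
    return sorted(name for name, s in stats.items()
                  if s["actionable"] / s["reviewed"] < threshold)

print(low_yield(monthly_stats))  # ['paid-feed-A']
```

A flagged source isn't automatically cut; it's the prompt for the cost/benefit conversation, since a feed that fires twice a year might still catch the one incident that justifies it.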

REMINDER: Defining the metrics for ROI in advance does not mean you cannot add or refine metrics as the program progresses. I recommend reviewing them every six months to determine if they need revising. Also, don’t forget that new information needs will emerge as your program grows. Take them, and go back through steps 5-7 before asking for them.


Good advice I’ve heard time and time again is, always begin with the end in mind. The next tweet in the series https://twitter.com/GRC_Ninja/status/854576964065275904 touches on this by asking “8) what will success look like? # of compromises? Thwarted attempts? Time before bad guys detected? Lost revenue? Good/Bad press hits?” Granted 140 characters is not nearly enough to list all of the possible metrics one could use, but the objective of that tweet and this blog are not to list them for you, rather to encourage you to think of your own.

Before you start hunting threats and developing a threat intelligence program, you’ll need a measuring stick for success, for without one how will you know if you’re on the right path or have achieved your goals? As with everything in business, metrics are used to justify budgets and evaluate performance (there’s a buzzword you should become familiar with: key performance indicators (KPIs), also known as status or stop-light reporting: red, yellow, green).

In a very young program, I’d encourage you to include a list of “relationships” you need/want to establish, outside vs. inside the organization, and to track the number of them that you do create. You can find other ideas for metrics with this search: https://www.google.com/#q=%22threat+intelligence+metrics%22



The final tweet in the series https://twitter.com/GRC_Ninja/status/854577542778499072 addresses the three most important things that, in my experience, are heavily overlooked, if not completely forgotten, in most threat intelligence (and InfoSec) programs. Summed up in three questions to fit the 140-character limit: “9) How can you continue to improve? How will you training & staying current? How will you share lessons learned with the community?”

Addressing them in reverse order: sharing experiences (and threat intelligence) can be likened to your body’s ability to fight off disease. If you’re never exposed to a germ, your body won’t know how to fight it off. If you have an immune deficiency (lack of threat intel and InfoSec knowledge), your body is in a weakened state and you get sick (compromised) more easily. Sharing what you know/learn at local security group meetings, conferences, schools, universities, etc. not only helps others, it will help you. It pays dividends for years to come. Additionally, people will come to trust you and will share information with you that you might not get anywhere else except the next news cycle, and by then it is too late.

Next, once you’ve designed this awesome threat intelligence program, how are you going to keep this finely tuned machine running at top-notch levels? The answer is simple: invest in your people. Pay for them to attend security conferences, and yes, it is fair to mandate they attend specific talks and provide a knowledge-sharing summary. It is also important to understand that much of the value of attending these events lies in the networking that goes on and the information shared at “lobby-con” and “smoker-con”, where nerds are simply geeking out and allowing their brains to be picked. Additionally, you can find valuable training at conferences, sometimes at discounted prices you won’t find anywhere else. Also, these are great places to find talent if you’re looking to build or expand a team.

Speaking of training, include in your budget funds to send your people to at least one training per year if not more. Of course you want to ensure they stay on after you pay for it so it is understandable if you tie a prorated repayment clause to it. It is easier to create a rock star than it is to hire one.

Finally, how can you continue to improve? The answer for each team will be different, but if you aren’t putting it on your roadmaps and integrating it into your one-on-one sessions with your employees, you’ll quickly become irrelevant and outdated. Sometimes a great idea for improvement pops into your head, and two hours later you cannot remember it. Create a space (virtual or physical) where people can drop ideas that can later be reviewed in a team meeting or a one-on-one session. I find that whiteboard walls are great for this (paint a wall with special paint that allows it to act as a whiteboard). Sometimes an IRC-styled channel, shared doc, or wiki page will work too.


This blog provided a practical outline for designing a threat intelligence program in the digital realm, also known as cyberspace, and introduced a four-point constraint model: time, money, design/accuracy, and quality.


As with any threat intelligence, we must understand the digital landscape and know what it is that must be protected. In order to protect it, we must have good visibility, and simply having more data does not mean we have better visibility or better intelligence. Instead, an abundance of data that isn’t good data (or is redundant) becomes noise. Discussed above was the next critical step in defining the program: identify what we need to know, where we can get the answers and information we need, and how much, if anything, those answers and information will cost. Some programs will run on a shoestring budget while others will be swimming in a sea of money. Either way, reasonable projections and responsible spending are a must.


Once the major outlining is done, we dig a little deeper into the actual execution of the program: figuring out exactly how we will (or would like to) develop and report the threat intelligence so that you can adequately source/hire the manpower and talent needed to meet these goals. Then we highlighted the all-important task of defining success, for without a starting definition, how can we show whether we are succeeding or failing? Remember to revisit the definition and metrics regularly, at least semi-annually, and refine them as needed.


Finally, we close out the program outline by remembering to plan growth into our team. That growth should include training and sharing lessons learned, both internally and externally. Remember to leverage your local security community social groups, and the multi-faceted benefits of security conferences, which include networking, knowledge from talks, and knowledge/information gained by collaborating in the social hangout spots.
Thank you for your time. Please share your experiences and constructive commentary below and share this blog on your forums of choice. For consultation inquiries, the fastest way to reach me is via DM on Twitter.

Hacking Critical Infrastructure

Please accept my apologies in advance if you were hoping to read about an actual technical vulnerability in critical infrastructure or the exploitation thereof. Today we discuss a plausible strategic cyb3r threat, and how one might go about hacking our critical infrastructure without going after the plant or the IT team(s) supporting the technologies in it (or at least not at first). Before we get started, we’ll define two terms relevant to the scope of this article:

  1. Strategic cyb3r threat intelligence would be that which is timely (i.e. received before an attack), researched in depth, and provides context to a potential attack scenario
  2. Personally identifiable information (PII) as a piece (or combination) of data that can uniquely identify an individual

Now, let’s take a minute to review a key point of a historical event, the OPM breach (you can brush up on it here http://www.nextgov.com/cybersecurity/2015/06/timeline-what-we-know-about-opm-breach/115603/). According to the information that has been released, attackers did not originally steal personally identifiable information (PII). What the attackers did make off with was even more critical: manuals, basically the “schematics” of the OPM IT infrastructure. [QUESTION: Are any of you logging access attempts (failed and successful) to your asset inventories, network diagrams, and application architecture documentation? If you are, is anyone reviewing the logs?] Many have forgotten that the first items stolen were manuals, thanks to the media news buzz about “identities stolen” blah blah blah, and chalked it up as just another breach of PII and millions of dollars wasted on identity theft protection. The attackers went after something that was considered by many to be a secondary or tertiary target, something that wasn’t “important”. However, it was a consolidated information resource with phenomenal value.

So, what does this have to do with hacking critical infrastructure?  Well, aside from the option to leave malicious USBs lying around, what if I could compromise MULTIPLE infrastructure companies at once? [dear LEOs, I have no plans to do this, I’m just creating a hypothetical scenario and hoping it makes someone improve security].  How could I do this? Where could I do this? Who would I try to compromise?  If I could get just ONE company, I could have the “blueprints” to components at multiple facilities! *insert evil genius laugh* Muahahahahahah!  If I could get these, then I could find a vuln that they’d all share, and then I could launch a coordinated attack on multiple plants at once, or launch a targeted attack that would cause a domino effect to hide further malicious acts.

Warning InfoSec professionals, grab your headache medicine now…

Where to begin…

First, I’d see if there was a way to get a list of the companies that created the technology used in critical infrastructure, such as boilers, turbines, and generators. In fact, there is a list, and it is publicly available!  YAY for research databases!! Wooo hooo!  I’m even able to break it down into coal, gas, geothermal, hydro, nuclear, oil, & waste.  Wait, it gets better. I can even determine the commission date, model, and capacity for each.  Next, if I found data missing from this awesome resource (I may be an OCD attacker who wants all the details), I’d plan a social engineering attack. I bet that for the plants with “missing data” I could probably call, pretend to be a college student doing research, and they’d tell me any one of the previously listed data elements, especially if I sent them the link to the public resource that already has “everyone else’s data” in it.  Although I did not do that, I did collect the manufacturer names for US infrastructure.  Admittedly, some appear to have nominal naming differences based on who submitted the data, and thus potential duplication, but as an attacker I probably wouldn’t care:

  • Aalborg
  • ABB
  • ABB, Asea Brown Boveri
  • Allis Chalmers
  • Alstom
  • American Hydro
  • ASEA
  • Babcock & Wilcox (B&W)
  • Baldwin-Lima-Hamilton (BLH)
  • BBC, Brown Boveri & Cie
  • Brown Boveri & Cie (BBC)
  • Brush
  • Combustion Engineering
  • Deltak
  • Doosan
  • Foster Wheeler
  • GE
  • GE Hydro
  • General Electric
  • Hitachi
  • Hitachi Japan
  • Hitachi Power Systems America
  • Hyundai/Ideal
  • Inepar
  • Kawaskai
  • Leffel
  • Melco
  • Melco Japan
  • MHI
  • MHI Japan
  • Mitsubishi Japan
  • Newport News Ship & Dry Dock
  • Noell
  • Nohab
  • Nooter
  • Nooter/Eriksen
  • Nooter-Erikson
  • Riley Stoker
  • S Morgan Smith (SMS)
  • Siemens
  • SWPC
  • Toshiba
  • TP&M
  • Voest Alpine
  • Vogt Power International Inc.
  • Voith Hydro
  • Westinghouse
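As an aside, the near-duplicate naming could be flagged programmatically before any manual review. Here is a minimal sketch using Python’s standard-library difflib; the sampled names are pulled from the list above, but the normalization rules and the 0.8 similarity threshold are illustrative assumptions, not part of the original data collection:

```python
import re
from difflib import SequenceMatcher

# A few of the manufacturer names above, including near-duplicates
names = ["Nooter/Eriksen", "Nooter-Erikson", "BBC, Brown Boveri & Cie",
         "Brown Boveri & Cie (BBC)", "GE", "General Electric"]

def normalize(name):
    """Lowercase, drop parenthesized abbreviations, and strip punctuation."""
    name = re.sub(r"\(.*?\)", "", name.lower())   # drop "(BBC)" etc.
    name = re.sub(r"[^a-z0-9 ]", " ", name)       # punctuation -> space
    return " ".join(name.split())

def cluster(names, threshold=0.8):
    """Greedily group names whose normalized forms look alike."""
    clusters = []
    for n in names:
        for c in clusters:
            ratio = SequenceMatcher(None, normalize(n),
                                    normalize(c[0])).ratio()
            if ratio >= threshold:
                c.append(n)
                break
        else:
            clusters.append([n])
    return clusters

for c in cluster(names):
    print(c)
# Note: "GE" vs "General Electric" stays split; string similarity alone
# can't recover abbreviations without domain knowledge.
```

As the final comment notes, this only catches spelling-level duplicates; as the attacker in this scenario, that is probably good enough.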

Next, I’d start searching for events that multiple companies would attend.  As you can guess, there is yet another OSINT source that lists potential gatherings of these individuals: http://wikicfp.com/cfp/call?conference=energy.  This is just one source, but it is such an amazing one I decided to share it (HINT: if you’re looking for InfoSec conferences, check out the security and technology categories).  For a moment, let’s just assume that this source didn’t yield any promising results.  Another option would be to find a single company that lists one or more of these manufacturers as its clients, or the technology as its area of expertise.  After a simple search for ABB (yeah, had to go pretty far down that list there) we find https://www.turbinepros.com/about/oem-experience.  And wouldn’t you know it, they’re hosting some events of their own.  A search for ‘turbine generator maintenance’ yields http://www.turbinegenerator.com/, their events tab takes me to http://www.powerservicesgroup.com/events/, and the process continues.  If I wanted a “current” status of critical infrastructure I could pull it from DHS reports/publications at https://www.dhs.gov/publication/daily-open-source-infrastructure-report (granted, it was discontinued in Jan 2017).  I could also go to https://www.dhs.gov/critical-infrastructure-sectors and pull each sector’s plan, which typically identifies the number of plants running and the states in which they are located.  The amount of information available to a bad actor in open sources is plentiful, and allows them plenty of time to plan their attack.  Ironically, I wonder how many companies are doing the same thing to plan FOR the attack?


So, what’s next? As a bad actor, I want bang for my buck, so I’d find a conference listing the sponsors & speakers (who does that? #sarcasm); hopefully this helps me narrow down my target (i.e. the one with the largest collection of key players, most likely). I also want to find one that isn’t too large; small-to-medium conferences usually have smaller budgets, so the only real security they put in place is some volunteer with no “security” experience at an entrance asking, “Do you have a conference badge?”  Also, keep in mind these are energy conferences in this hypothetical scenario; security, especially cyb3r security, is probably not at the top of their list.  Since these are not Information Security conferences, i.e. they are not BlackHat or DEFCON, nobody is running around yelling “turn off your Bluetooth, NFC, & WiFi” or “please don’t scan random QR codes”.  There’s also probably not anyone checking to see how many mobile access points (or stingrays) popped up before/after the conference, or whether there’s a sniffer on the free conference (or hotel) WiFi.  Another thing an adversary might consider is chatting up the marketing guy, making sure to get his business card, and getting him to talk about other key leaders (everyone will talk plenty about the guy they dislike the most).  Later, that bad actor would send him (or someone else) a spear-phishing email, having captured plenty of topics of interest.  A targeted phishing email is far more likely to get the victim to click a link (or not report it), and to avoid detection, than a mass blast.  The bottom line, from an attacker’s perspective: it is probably much easier to compromise a person from one of these conferences than it is to hack into the infrastructure directly.


If I were a bad guy, I’d consider this casting a wide net; the key, though, is that I only need to catch one fish.  Once I’ve caught one, it is game on.  While all of these companies are worried about NERC or ISO compliance, how many are worried about whether a bad actor is accessing IT asset inventories, network diagrams, purchase orders, IT road maps, or archived vulnerability scan reports?  One of the gaps in security that surprises me the most is the lack of protection surrounding previous penetration test reports.  The vendor providing the report(s) may give the documents the highest protections when sending and storing them, and the client treats them with great care when they first arrive.  However, once they are considered old (usually 12+ months), complacency sets in.  The irony is, the greatest frustration I hear from my Red Team friends is “we told them [1-10] years ago to fix this, and it’s still wide open.”  Well, not only is it wide open, the report now sits on an all-company-access shared drive, or worse a public FTP server, because it’s “old”.

Bottom Line – It’s Game On.

Many of you might have objections to me laying out this attack scenario on a public blog.  You would argue that I’m giving bad guys ideas and shame on me.  I considered that, however, the more likely truth is that they’ve already thought about this, and we have our heads so far up our 4th point of contact running around screaming about ransomware, malware, hashes, IOCs, and malicious domains that we, the InfoSec community, do not give 1/100th of our time to thinking about strategic cyb3r threats.  We do not plan for attack scenarios beyond device compromise.  Blue Teams spend all day fighting a tactical battle and Red Teams spend all day attacking systems. We rarely stop to give thought to the person we “let in” through the front door.   When do we stop and think about domino effects and strategic cyb3r threat scenarios, so that we can take a harder look at our environments for hints of a strategic attacker and then actually go look for footprints?  Most, if not all of you reading this will say, we don’t ever do that.  That is why I’ve written this.

We have to change what we’re doing and start thinking outside of immediate [tactical] cyb3r threats or we’ll lose the fight not for lack of technology and effort, but for lack of creative and disruptive thinking.



  1. Look [in your environment] at the sensitive documents listed in this blog (app architecture, network architecture, asset inventory, purchase orders, pentest results, vulnerability reports etc.). Are you logging who/what has accessed them?  Do you see any non-human accounts accessing them?  Is every copy/download accounted for?
  2. Are you adequately educating your staff who attend conferences on the elevated security risks? When’s the last time you made a forensic image of an executive’s laptop?  If you allow BYOD, are you adequately inspecting the devices upon return? What changes in procedure for “conference attendance” can you make to better protect your environment?
  3. Do you have relationships with the local FBI/Police/InfoSec community so that you can learn about any potential threats, especially cyb3r threats? Are you sending an InfoSec person to these non-InfoSec conferences with your staff to assess the InfoSec risks/threats?
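To make the first question concrete, here is a minimal sketch of the kind of log review it implies. The log format, the “svc-” naming convention for service accounts, and the file paths are all hypothetical; adapt the idea to whatever your file server or SIEM actually records:

```python
import csv
import io

# Hypothetical file-access log; substitute an export from your own auditing
log = io.StringIO("""timestamp,account,path,action
2017-03-01T02:14:00,svc-backup,/shares/it/network-diagrams/core.vsd,read
2017-03-01T09:30:00,jsmith,/shares/it/asset-inventory.xlsx,read
2017-03-01T03:02:00,svc-web,/shares/it/pentest-2015-final.pdf,download
""")

# Substrings marking the sensitive documents called out in this blog
SENSITIVE = ("network-diagram", "asset-inventor", "pentest", "architecture",
             "purchase-order", "vuln")

def suspicious(row):
    """Flag non-human (service) accounts touching sensitive documents."""
    is_service_account = row["account"].startswith("svc-")
    touches_sensitive = any(s in row["path"] for s in SENSITIVE)
    return is_service_account and touches_sensitive

hits = [r for r in csv.DictReader(log) if suspicious(r)]
for r in hits:
    print(r["timestamp"], r["account"], r["path"])
```

Even a crude pass like this surfaces the “non-human account reading the network diagrams at 2 AM” pattern that nobody is currently looking for.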


Thank you for taking time to read the blog, please feel free to leave comments and questions.  I will respond as time permits.



People Search Sites – Erase Me Please

The good folks over at Divine Intel (Twitter @divineintel) asked to borrow a little space on my blog as they are still getting their website set up. They’ve recently tweeted 21 URLs where you can go to submit requests to have your information removed from the people search sites, and in some cases phone numbers too. The tweets are tagged with #eraseme and #privacy to make them easy to find as well.

There are over 60 sites they work with to help remove information from. The list below is not exhaustive, but, as far as they know, it is a complete list of the sites that allow web-based submission of records-removal requests.

Some sites only accept requests by mail, such as PeopleLookUp, US Search, & Zaba Search, which can all be reached at the same address. One site tells you that you can only fax in your request, another says you can only mail it, and one allows you to fax and/or mail it… ironically, they all share overlapping fax or mailing information. So one letter to this address requesting removal from all three sites should help take care of this one. Be sure to check out their opt-out requirements; you have to send supporting documentation, and you’ll need this form too: http://intelius.com/docs/notaryverificationform.pdf. Yes, that form is hosted on the Intelius domain.

Privacy Officer / Records Removals
P.O. Box 4145
Bellevue, WA 98009-4145

Here is the consolidated list of URLs that they tweeted out separately earlier this evening.


Be sure to follow their twitter account for helpful nuggets on deleting your personal information and reducing your digital footprint.

Phishing the Affordable Care Act

Recently, while working on a project I was asked to gather some information on Blue Cross Blue Shield (BCBS) and something scary began to unfold.  I noticed that states have individual BCBS websites, and that there is no real consistency in the URL naming convention.  Then I began imagining the methods an attacker could use to exploit this. This is especially disconcerting since tax season is here and, thanks to the Affordable Care Act, we’ll all be needing forms showing proof of medical coverage, but more on that later. Back to the BCBS domains….

The first thing I noticed was the inconsistent use of the dash (-) character.  For example, if I want to visit Georgia’s BCBS site I can use http://bcbsGA.com, https://bcbsGA.com, http://bcbs-GA.com or https://bcbs-GA.com.  I found that only four other states returned a 200 status for names with the dash (ex: bcbs-$state.com).

  • http://bcbs-vt.com/ is under construction, and the owner listed is BlueCross BlueShield of Vermont
  • http://bcbs-mt.com resolves to https://www.bcbsmt.com/
  • http://bcbs-sc.com and http://bcbs-nc.com are currently parked for free at GoDaddy, and the owner information is not available.

I have not inquired with SC/NC BCBS to determine if they own the domains listed above (the ones with the dash).  I also cannot explain why there is no DNS record resolving each of the Carolina domains above to a primary one, as MT’s does.  It is possible a malicious actor owns the NC/SC domains, although currently that is purely speculation. The final observation that made me decide to script this out, and just see how much room there is for nefarious activity, was finding that some states don’t even use BCBS in the URL, for example www.southcarolinablues.com.

Deciding where to start wasn’t very difficult.  There are many logical names that could be used for a phishing expedition, but I wanted to stay as close as possible to the logical and already known naming conventions. So I opted not to check for domains like “bcbsofGA.com” or iterations with the state spelled out.  I settled on eight different possible combinations.   As seen with the domains for BCBS of GA, the state abbreviation always appears after BCBS, so I checked for domains with the state at the front as well, and both an HTTP and HTTPS response.  I also checked for domains with the dash before and after the state abbreviation.  Math says that 8 combinations (seen below) * 50 states = 400 possible domains.

  •       http://bcbsXX.com
  •       https://bcbsXX.com
  •       http://bcbs-XX.com
  •       https://bcbs-XX.com
  •       http://XXbcbs.com
  •       https://XXbcbs.com
  •       http://XX-bcbs.com
  •       https://XX-bcbs.com

The results were a bit unnerving…

It took ~13.5 minutes, using 18 lines of Python (could be fewer but I was being lazy) on an old, slow laptop, to check the 400 possibilities and learn the following:

  • 200 status = 69 domains
  • 403 status = 2 domains
  • 404 status = 2 domains

Leaving 329 domains available for purchase, and the price for many of them was less than $10.  Keep in mind, I did not verify ownership of the 69 domains, but if I’m a bad guy, I don’t really care who owns them because I’m only looking for what’s available for me to use.
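My original script isn’t reproduced here, but a rough sketch of the approach looks like the following. The state list, the urllib-based check, and the timeout are my reconstruction for illustration, not the original 18 lines:

```python
import urllib.request
import urllib.error

STATES = ["al","ak","az","ar","ca","co","ct","de","fl","ga","hi","id","il",
          "in","ia","ks","ky","la","me","md","ma","mi","mn","ms","mo","mt",
          "ne","nv","nh","nj","nm","ny","nc","nd","oh","ok","or","pa","ri",
          "sc","sd","tn","tx","ut","vt","va","wa","wv","wi","wy"]

# The eight patterns above: bcbs before/after the state abbreviation,
# with and without a dash, over both http and https
PATTERNS = ["{s}://bcbs{st}.com", "{s}://bcbs-{st}.com",
            "{s}://{st}bcbs.com", "{s}://{st}-bcbs.com"]

candidates = [p.format(s=scheme, st=st)
              for st in STATES for p in PATTERNS
              for scheme in ("http", "https")]

def status(url, timeout=5):
    """Return the HTTP status code for url, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # a 403/404 still means the domain resolves
    except (urllib.error.URLError, OSError):
        return None            # no DNS record / connection failure

print(len(candidates))  # 8 combinations * 50 states = 400
# To run the actual scan:
# for url in candidates:
#     print(status(url), url)
```

Keep in mind the 69/2/2 numbers above came from a live scan at the time; statuses will differ whenever you run it.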

Now back to the tax forms I mentioned earlier….

We teach users not to click on links or open emails that they aren’t expecting, so can you blame them if they click on a link in an email that says “click here to download your 2017 proof of medical coverage, IRS form 1095”?  After all, the IRS website even tells us that we will receive them, and that for the B & C forms the “Health insurance providers (for example, health insurance companies) will send Form 1095-B to individuals they cover, with information about who was covered and when.  And, certain employers will send Form 1095-C to certain employees, with information about what coverage the employer offered.”

Remember all that information lost in the Anthem breach a few years ago? Or the Aug 2016 BCBS breach in Kansas? Hrmmm, I wonder how those might play into potential phishing attacks.



How you choose to mitigate this vulnerability is up to you, and the solution(s) you come up with will vary depending on your company size, geographic dispersion of employees, and network architecture, among other things.  Some of you may choose to update your whitelists, blacklists, or both.  Some of you may use this as an opportunity for an educational phishing exercise soon, but whatever your solution is, I hope it includes pro-active messaging and education for your users.

Finally, if you or someone you know works at a healthcare provider and has the ability to influence them to purchase domains that could be used to phish the employees and/or individuals they cover, I strongly encourage you to share this article with them. You can also try convincing management that not only are you preventing a malicious actor from having the domains, you could use them for training. While BCBS is the example used here, they are not the only provider out there, and this problem is not unique to BCBS or its affiliates.  However, if BCBS licenses its affiliates, then enforcing 1) standardized naming conventions for URLs and 2) a requirement to purchase a minimum set of domains to minimize the risk of malicious phishing doesn’t seem unreasonable.  Considering the prudent man rule, I think a prudent man would agree the financial burden of purchasing a few extra domains is easily justified by the impact of the risk reduction.

Thanks for taking time to read, and for those of you with mitigation ideas, please share your knowledge in the comments, and if you’re new to infosec and want to ask a question about mitigations please ask it.  I only require that comments be constructive and helpful, not negative, insulting, derogatory or anything else along those lines.

Specific details for the 1095 forms can be found here: https://www.irs.gov/affordable-care-act/individuals-and-families/gathering-your-health-coverage-documentation-for-the-tax-filing-season

Thank you my dear friends for your proofreading, for the laughs, and most of all your time and support.

Strategic Threat Intelligence in the Digital Realm

Thank you @Ngree_H0bit and @TXVB for your editorials on this blog.

Imagine if someone walked up to your workplace and fired an automatic weapon at the building or detonated a bomb in the lobby. Then the police showed up and the conversation went like this:

LEO: “Did anyone die or get shot?”
Company: No
LEO: “Is there any damage to the facility that can’t be repaired?”
Company: No
LEO: “Ok, that’s all we wanted to know, our work is done here. You can go back to what you were doing.”
Company: “Wait?! Don’t you care who did this?”
LEO: “No, you’re safe now, the threat is contained”
Company: “Aren’t you going to try to figure out WHY they did this?”
LEO: “No, that’s not important, you’re not in danger anymore.”
Company: “How do you know that?”
LEO: “The threat has been contained, the attacker is gone/dead”
Company: “But what if there are more attackers?”
LEO: “Well, you better install some bullet proof glass, wear a Kevlar vest everyday, and hope for the best.”
(2 weeks later)
Company: “All employees are required to buy a Kevlar vest…”
Company: (to the property manager) “We need an upgrade to the building next year if we’re going to renew our lease….”

We would be absolutely oozing with disgust and screaming at the tops of our lungs about how incompetent and dismissive the police were if this happened. Yet, in the InfoSec world we do it all the time. Let’s change the conversation slightly:

SCENARIO: Major digital attack against a company

Management: “Was anyone’s data lost?”
Strategic Threat Hunter: “We’re not sure, but it doesn’t look like it.”
Management: “Is there any damage to the computers that can’t be repaired?”
Strategic Threat Hunter: “No.”
Management: “Ok, that’s all we wanted to know, your work is done here. You can go back to what you were doing.”
Strategic Threat Hunter: “Wait?! Don’t you care who did this?”
Management: “No, we’re safe now, the threat is gone.”
Strategic Threat Hunter: “Aren’t you going to try to figure out WHY they did this?”
Management: “No, that’s not important, we’re not in danger anymore.”
Strategic Threat Hunter: “How do you know that?”
Management: “The threat has been contained, the attacker is gone/malware is blocked.”
Strategic Threat Hunter: “But what if there are more attackers in the group?”
Management: “We should improve security, buy/install a new security widget, and hope for the best. Oh and no, you can’t have any more resources to do this.”

Management comes running when something catastrophic happens yet all they care about is a damage report, the immediate impact. Even when Incident Response teams respond to a major breach, little, if any, time is spent after the event trying to understand why they were targeted or asking any of the questions above. Now don’t get me wrong, I’m not saying NOBODY EVER asks, I’m just saying that more often than not, nobody cares or is asking. Few companies put ANY investment in strategic intelligence efforts that can identify threats. Instead they sit and wait for the FBI to call them to tell them they are about to (or already do) have a problem.

It is this gap that concerns me the most, and it is what the remainder of this blog post will seek to touch on. I dare not say “address,” as that is a goal I doubt I can achieve fully in one blog post.


The inability to gather strategic intelligence and conduct “target development” in the digital space, at the lower echelons of the military or in the civilian sector, is troubling. Nonetheless, it is critical for us to anticipate the adversary, to defend against them, and, frankly, to act offensively or pre-empt their actions.

One of the things we do in the military to prepare is train, train, train – and we train like we fight. If the enemy’s landscape is a desert – train in a desert; if it’s a jungle – train in the tropics; a winter wasteland – train in the arctic, etc. Training like you fight isn’t limited to the environment either; it includes using the tools and weapons available to you in the scenarios you may find yourself in. If your enemy might deploy chemical weapons, you might have to wear full chemical protective gear and fire your weapon to save your life. So you put on all that chemical gear, go to the range, and fire your weapon. You train in the environments, scenarios, and gear you may face; you train to all of it. You train to the point that it is a natural reflex, muscle memory, so you don’t even have to think about it. I can’t tell you how many times I responded to “Gas! Gas! Gas!” and ran full speed ahead, weapon in hand, and dove into a fighting position – “just training.”

Then there are the intelligence teams; what intelligence are they gathering to support that ground troop? Ask them to tell you how they leverage Cyber Command to gather strategic intelligence for the warfighter, and I’ll show you a politician doing the Cotton Eye Joe at warp speed (https://www.youtube.com/watch?v=b8Z4sVwdwp4). They have no idea, the politicians that is. They’ll tell you that’s the NSA’s job, and they’ll still have no idea. I’m not going to go down a list, but there are other agencies, such as DIA, DNI & DHS to name a few, that also have cyber operations and who, to some degree, all suffer the same gap discussed here. What of the civilian companies that support critical infrastructure, or even city, county, and state governments? What about USCERT/DHS & ISACs? After all, isn’t this kind of support *THEIR* job? Ask them and they’ll tell you strategic intelligence is about targeted threats and APTs. *cough-ulzhit* No, I’m not making that up; a C-level executive from a state-level county government and US government officials have actually told me that. They have no idea either.

One of my favorite analogies that explains tactical, operational and strategic leadership came from a Stephen Covey presentation on the levels of leadership, first line managers, middle management, and senior leadership. However, it translates well here as first line managers are tactical, middle managers are at the operational level, and senior leadership are at the strategic level. Tactical intelligence tells you how to eliminate the threats in the jungle where you are working. Operational intelligence tells you where you should be in the jungle, and what kinds of threats are in each area of the jungle. Strategic intelligence is when someone yells “we’re in the wrong jungle!” (his presentation was on his book 7 Habits of Highly Effective People)

In the civilian world, our digital intelligence is heavily tactical; it is overwhelmingly focused on how malware executed or the fact that there is a 0-day in a piece of software. Tactical intelligence is important; it has a place and serves a purpose, but it is focused on winning a battle, not a war. So how do we do this in a digital realm? How do you train to fight there? What does strategic intelligence to support a digital war look like? What does a tactically aggressive vs. a strategically covert attack from the enemy look like in a digital war? What does it take to defend against it? What does “guard duty” look like when you’re defending 1’s and 0’s? Surely it isn’t pacing back and forth with an M16 in front of concertina wire if you’re a soldier. It isn’t going to be a roving watch like the border patrol. If you’re a civilian, is it simply sitting in a SOC staring at a dashboard for 12 hours looking for alerts/waiting for alarms? So just what do passive and active digital reconnaissance look like, and how are they executed?

Strategic intelligence in support of a physical or digital fight isn’t always in your logs, your dashboards, or anything else digital. Target development – predicting what your enemy would do and what you might need to do to win a fight – will almost certainly involve technology; however, more often than not, it is going to focus on gaining a greater understanding of your enemy as a person, a human being with objectives, who needs resources and has motivations, habits, skills, and weaknesses. It will be less concerned with how the malware executed than with the knowledge required to design the malware to execute in the manner it did. Strategic intelligence would be more focused on derived metadata about the attacker that goes toward profiling skill/expertise/training/origins etc. Examples of questions to ask: Does the distribution or content indicate a country of origin? Did the execution require specific knowledge about the affected target’s design that indicates insider knowledge? If yes, maybe your attacker is a former or current employee. If no, did it require knowledge of proprietary information? Let’s assume it did, and everyone is trusted/vetted; are you looking at a possible breach or data loss that hasn’t been detected? Again, we are less concerned with the tactical intelligence surrounding being protected and more concerned with strategic intelligence and understanding the person behind the attack/malware.


Next we’re going to get a 30,000-foot view of what strategic intelligence is with respect to the digital world, because understanding what it is sets the foundation for me to explain, in a future blog post, the kind of person(s) needed on your team and why they are critical to winning the war, not just the battles, that we face as a country and as commercial companies.

Typically, InfoSec people hate the word “cyber”; we consider it as profane as most people would consider the F-word. Because we’re going to be discussing intelligence gathering and analysis in this post, I’d rather say DIGINT, a collective term for digital intelligence, instead of CYINT. DIGINT is not its own intelligence domain; rather, it is a component of all the others. If I were to draw a diagram of the intelligence silos, DIGINT would run horizontally across all of them. Blasphemy, you say? Let me ask you this: can you name a part of your life not affected by technology, something digital? Even a stroll in the park without an iPod or cellphone isn’t sacred, as the cell phone and iPod rely on tens of thousands of lines of code and have multiple RF transmitters. Streetlights are powered by electricity, on a grid managed and monitored by technology, programmed to come on at a specific time or use solar power and light detection. Your walk on a beach with no cellphone and no smart watch – I bet you drove a car to get there that had electronic fuel injection, GPS, or a digital radio. Anyway, you get my point…

DIGINT is best defined as intelligence gathered from digital sources, much like HUMINT is gathered from humans, SIGINT is gathered from “anything that goes through the air,” etc. DIGINT can be found in an open source, in which case it would be digital intelligence from an OSINT source (a book, magazine, the news, the Internet etc). In the case of signals, SIGINT, it could be logs or transmission captures. If the source is human, their behavioral data captured in the apps they use and how they use them, the GPS history in their phone, and their social media posts are all digital intelligence sources that can be leveraged for strategic intelligence gathering missions that support and enrich tactical intelligence operations.

So what exactly is Strategic Threat Intelligence, and how does DIGINT factor in? Let us first understand what Tactical Threat Intelligence (TTI) is in the digital world, as most of us will be able to relate to this much more easily. Tactical Threat Intelligence in the digital world is very similar to that in the tangible world. It is sometimes referred to as intelligence developed from, and in support of, incident response, and is easily likened to fighting fires, playing whack-a-mole, smack-a-RAT, bash-a-bot etc; you may have even heard the term Indications/Indicators of Compromise (IoC). It is the kind of intelligence that supports addressing an immediate threat, one that is right in front of you, either presently attacking/affecting your assets or running rampant in the wild and potentially on your network’s doorstep at any moment. These kinds of threats include malware (viruses, Trojans, RATs, ransomware), DDoS tools/networks, spam etc. TTI is “current” information that allows you to take action to prevent or address these impending threats. It is easily recognizable to anyone who’s defended against an attack or been part of a penetration testing team on the offensive.

To understand what Strategic Threat Intelligence (STI) is and how it translates to the digital space, we also need to understand its characteristics. The easiest way to do this is by reviewing what we know about tactical intelligence, thereby identifying what strategic intelligence is NOT. Below are some examples of TTI vs. STI that commercial companies might need, along with the characteristics of each.

Timely != Current

TTI is “current;” that means it is dealing with the here and now, immediate threat. For those of you who have been to a gun range you might call it “the 50-meter target.” STI, on the other hand, is TIMELY, not necessarily current. This means it is actionable and relevant to the timeline of achieving an objective. Timely does not arbitrarily translate into long range. For instance, you might find that a client is opening a new office or manufacturing plant, or perhaps an agreement of some sort is going to be signed in 3-6 months. Timely in this sense would mean identifying digital threats to one of these targets in a timeframe that allowed identification, detections and/or protections to be developed relating to the event. The artifacts of this research would be considered strategic.

A timely piece of STI in one of those scenarios would be any significant local cultural, religious, educational, or competitor activities scheduled to occur in the same location around the same time. Also helpful would be identifying relatives of key corporate staff, or engineers holding proprietary information, who may be targeted for a phishing or social engineering attack. Taking that a step further, strategic DIGINT could determine whether there is evidence of online activity related to events that could be used to mask a pending attack, for example a distributed denial of service (DDoS). An often-overlooked form of STI is historical activity: in this case, answering questions about what “digital challenges” or “cyber threats” [I feel gross just saying that] the client (or your organization) has faced in the past in this region, or in regions with similar economic/cultural composition. None of these would necessarily help you defend or protect against an immediate attack, but they could all be used to prepare (train) for a future attack, identify risks, and identify information & information sources that could be leveraged to give a company the upper hand against a digital attack.

Deep Analysis != Long Range

STI, much like TTI, involves analysis where you collect data, vet the source and content, assign a value to it, interpret it, and convert it into intelligence. A common misconception is that STI is long-range because it requires deep analysis, and deep analysis takes a very long time, thus is reserved for long-range projects. This is simply not true. Sometimes a raw piece of data, in the right situation, is immediately actionable. The term “deep” is relative to the mission/objective. Deep could mean finding out who really owns/runs a company, especially considering that what is on paper often doesn’t reflect real-world dynamics. This deep analysis could take a couple of hours or it could take a couple of weeks.

Another example: you might learn that a company from a global power (US, Russia, UK, Germany, etc.) is planning a joint venture to build critical infrastructure in another country, and that this project could have a huge economic impact on the cities involved and the host country. If you provide services such as travel, communications, HR, or accounting to this region, to any of the parties involved, or to your customer(s) who do business with them, this might be considered a piece of strategic intelligence. Why? Because this information could help you identify where or what types of threats might emerge to attack the communications, electronic resources, and infrastructure of the parties involved in the deal, thereby also making you a potential target. Just search for “data breach” and you can create your own list of companies that were compromised when an attacker pivoted from a subcontractor’s or partner’s network. While learning of this business venture is raw data, it has immediate value impacting a strategic objective and can result in action being taken, such as focusing the next round of data gathering in a new direction or changing what’s being searched for in logs/telemetry data. The list of responses to this kind of intelligence will vary depending on your organization, the service(s) you provide, and your own objectives, among many other things.

Indicator of Attack != Indicator of Compromise

The acronym IOC (or IoC) is something every TTI analyst or researcher is likely familiar with. An Indicator of Compromise (IOC) is developed from analysis of an event that has already occurred, or malware that has already been discovered. It is a piece of metadata that helps identify a threat hiding in other places where it may not yet have been discovered. STI, by contrast, seeks to identify threats on the horizon: an indication of a future attack, better called an Indicator of Attack (IoA). An IoA simply identifies the fact that a threat is developing and an attack is probable.

Let’s first consider a physical fight and some progressively obvious indicators of a brewing attack. To start, you observe a country suddenly shipping large quantities of equipment, supplies, and troops to an area that is declared a training facility only meant to support a small number of soldiers for a brief time. That might be an indicator that something is developing. Later, you observe these activities occurring outside of any scheduled military training, which might further support a theory that something is about to happen. Finally, you notice missiles loaded, armed, and pointed at your location. That is probably a pretty good indication that an attack is coming. On a smaller scale, if you notice a person snapping pictures, it might be reconnaissance or he/she could just be a tourist. If you notice the same person at the same place over multiple days, maybe even at approximately the same time, snapping pictures, that is a little more suspicious, and it could be argued it more likely indicates reconnaissance, something that usually happens before an attack.

So how do we identify the suspicious person from a DIGINT perspective? A very simple example of an IoA in the digital realm is port scans on your firewall from an IP address that’s never scanned you before. Another, less obvious IoA would be an IP from a strange subnet that pings, scans, or attempts a connection to just a few ports every 12 hours. Maybe this activity occurs only on Sundays or during hours when nobody is working, and it’s been going on for the last six months. Another way you could develop an IoA would be from a human intelligence source in a digital space. In the old days you’d be eavesdropping on conversations at a coffee shop, whereas today it could be something learned from hanging out in a chat room or forum. If you found an archive of the forum or chat logs, it could be argued that this is DIGINT. The tactics and techniques of the old days, such as in-person eavesdropping and reconnaissance, aren’t forgotten or antiquated. This is why the paranoid InfoSec person of today won’t talk about a pending attack or sensitive topics online. Either way (online or in person), you might learn of someone discussing the fact that your client is going to have a really bad day once “their friend” is finished with X activity/development/recon etc. All of these could be considered an indication of an attack that will play out in the digital space. Of course, like any other form of threat intelligence, it needs to be reviewed, assessed, and put into context with other pieces of intelligence from other sources to develop a true threat intelligence report.
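That low-and-slow scanner is detectable if you look for the pattern rather than the volume. Here’s a minimal sketch of the idea, assuming you can extract firewall events as (timestamp, source IP, destination port) tuples; the function name, thresholds, and sample IPs are all illustrative, not a production detection.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_periodic_probers(events, min_hits=4,
                          min_interval=timedelta(hours=1),
                          tolerance=timedelta(minutes=30)):
    """Flag source IPs whose connection attempts recur at a near-constant
    interval -- a possible low-and-slow reconnaissance IoA.
    `events` is an iterable of (timestamp, src_ip, dst_port) tuples."""
    by_src = defaultdict(list)
    for ts, src, _port in events:
        by_src[src].append(ts)

    suspects = []
    for src, times in by_src.items():
        times.sort()
        if len(times) < min_hits:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Slow (gaps above min_interval) AND periodic (gaps roughly equal)
        if gaps[0] >= min_interval and all(abs(g - gaps[0]) <= tolerance for g in gaps):
            suspects.append(src)
    return suspects

# Example: one IP probing every ~12 hours vs. a one-off noisy burst scanner
base = datetime(2016, 3, 1, 2, 0)
events = [(base + timedelta(hours=12 * i), "203.0.113.9", 3389) for i in range(6)]
events += [(base, "198.51.100.7", p) for p in range(1, 40)]  # single burst
print(find_periodic_probers(events))  # ['203.0.113.9']
```

Note that the noisy burst scanner is exactly what signature-based tools already catch; the point of the sketch is that the patient actor only falls out of the data when you ask about regularity, not rate.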


Strategic intelligence is essential for long-range success in any war, whether you are fighting it with boots on the ground or in a digital space. It requires investments of time and money, and it requires leadership that insists on deeper understanding. It means we need to spend time thinking like the enemy, doing target development, and figuring out where the next strike could happen, so that we can look through relevant indicators and develop DIGINT related to that target with a new analytic perspective. We should be going back over the history of attacks we’ve endured at our companies as if they were cold cases that never got solved, and we should be looking at them with a new objective – that of profiling the adversary through his attack. Look at your “crime scene” and ask: what kind of person did this, and why?

I encourage you to start pushing your leadership to ask these higher level questions; insist that you stop simply being victims building yet another/higher wall for the enemy to scale. Start doing some reconnaissance of your own, and look for adversaries in their planning stages so you can foil their plot. Catch the bad guys in their recon stages of your assets and start figuring out what might be on the horizon so that if you do have to defend, at least you’ll know what you’re up against and when it’s coming. I leave you with this final thought: If you keep doing what you’ve always done, you’ll keep getting what you’ve always got.

Stay tuned for a future blog post on what skill sets to look for in potential strategic intelligence team members.

Beyond Whack-a-Mole “Intel”

In recent days I had some conversations with folks regarding the common INFOSEC understanding of threat intelligence and what it really is, and we kept coming back to a marketing buzz phrase: “actionable intel.” My concern is that the definition of “action” seems to be getting diluted these days; at worst it has morphed into “write a signature to prevent X” or “create some hot new technology that uses artificial intelligence to anticipate ABC and block the attack.” Also, everyone wants to be first to blog about the latest threat to hit the landscape. Researchers spend hours trudging through dashboards, PCAPs, and log files, and retro-hunting with YARA rules, looking for that needle in a mountain of needles sitting inside their grandmother’s sewing bench, hoping they don’t prick themselves wasting time on “unrelated” data or false positives. We’re inundated and consumed with tactical execution. Why? Money, and possibly a case of nearsightedness.

Businesses are consumed by needing to show immediate value (nearsighted), and value is usually measured in the number of bad things blocked. Thus the tactical war against malicious actors saturates every aspect of our information security programs, our hiring for INFOSEC roles, the reports we produce, the metrics we pull our hair out trying to develop, and most of all BUDGET – where we spend our money. We are at constant war, just ask any incident response, forensics, malware reverse engineer, threat researcher, or SOC analyst – it is an all out 24/7 war against bad guys, and one thing you need to win a war besides soldiers, beans and bullets? Strategy.

Strategic operations are nothing new to any military organization. Nor is strategy new to any successful CEO trying to position his company for a competitive advantage in market share, but strategic planning and execution on an INFOSEC threat intelligence team seems to be utterly foreign. The concepts of profiling, understanding, and anticipating your enemy so that you can not only win battles but win the war are something I find few people grasp. Make no mistake, I am not saying that the tactical activities mentioned above are without merit; they are 100% critical and vital to protecting assets both tangible and intangible, and even lives. What I am saying is that organizations that have reached a maturity level where they squash malware and phishing attacks with near-surgical precision should be looking to take things to the next level.

I tweeted recently something to the effect that the words “new malware” have a literal Pavlovian effect on threat researchers. Everyone gets excited about the shiny new malware; we all want to rip it apart, see how it works, hopefully find flaws in it, blog about it, and HOPEFULLY share indicators of compromise (IOCs) with the whole world to make the Internet a safer place. (Side rant – if you blog about threats and don’t share IOCs and actionable intel, IMHO you are a douche nozzle being used for an enema.) Back to the topic at hand….. We want to tell everyone how the malware did its backflip, blindfolded, across hot coals and broken glass, shat a peanut that turned into a malware tree, which bloomed ransomware buds whose pollen poisoned the threat landscape, and that’s how we got money to grow on trees. Okay, not exactly, but close enough. But then what? Then we all go back to looking for the next shiny piece of malware, cuz we can never have too many in our collection, right? Well, this all falls into tactical operations, a very instrumental element of protecting and defending our orgs and current customers, not to mention attracting new ones. The race is to be the one that finds it first, blogs first, and makes current and potential customers feel safer – basically, whack the mole the fastest and most accurately. Heaven forbid another organization blogs about some new major threat and you didn’t; your org is destined to get a tsunami of “are we protected?” inquiries. And of course, that’s what the business is worried about – happy customers who feel safe, because that’s what pays the bills. So I ask again: but then what?

In all of this, after the hours spent finding it, ripping it apart, and figuring out which IP or domain it came from so you can write a signature, blacklist and block it, what have you learned about your enemy? Better yet, what have you converted from an observation into codified knowledge that can be used later – that is not an IOC? What do you know about their objectives, short and long term? What do you know about their resource needs, infrastructure, motivations (are they political or financial)? Trying to teach strategic threat research in one blog post is insane, so I’ll try to give an example via an imaginary conversation.

Do you know or understand why *THAT* malware was used against *THAT* organization? NO

What about that domain, have you run down the registrant to see what other domains he/she owns and if there’s any other malware associated with them? YES

Oh really! Well do you know if it’s the same kind of malware? It’s not

It’s not? Well, bad actors are kind of like serial killers: they usually have a modus operandi (method of operation), aka M.O., a habit that they rarely deviate from. So why did your actor change his M.O.? I don’t know

OK, go figure out if something caused your actor to change his M.O or if this indicates multiple actors sharing the same registration information.

Is it on a dedicated/shared IP? SHARED, on an ISP that only owns 200 IPs and only hosts 100 domains, and they’re behind bulletproof hosting

Do you have enough information based on victims to build a potential target profile so we can figure out where/who they might attack next? NO

What vertical was that attack against? Transportation

What org? a trucking company

What geographic region? Timbuktu

Are there any key political figures headed to that region? sporting events coming up? tourist or entertainment events planned in the near future? Yes

Really, hmm what other resources are needed to support (X from previous question)? Catering? Power? Decorating? Air Travel? Yes

Now the scenario above is completely made up, and there is an entire line of questions that could follow. In fact, changing the answer to any one question changes the next round of questions. Nonetheless, I think you get my point. And if you really do get my point, then you’ll understand why a massive “threat intelligence feed” from a company is practically useless. You’re better off just ingesting a black/whitelist from some trusted source with the understanding that you may have false positives, but you’d rather be secure and inconvenienced. It is time the INFOSEC community took threat intelligence to a new level, started looking past the shiny new malware, and actually started trying to understand attackers.
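The answers to a question chain like the one above are worth capturing as structured, reusable knowledge rather than a pile of notes. Here’s a minimal sketch of what codifying an adversary profile beyond IOCs might look like; the class shape, field names, and the sample actor are all made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ActorProfile:
    """Codified knowledge about an adversary -- answers from the question
    chain above, not just IOCs. All field names are illustrative."""
    name: str
    motivation: str = "unknown"          # political, financial, ...
    modus_operandi: list = field(default_factory=list)
    registrant_overlap: list = field(default_factory=list)  # shared whois data
    verticals_hit: set = field(default_factory=set)
    regions_hit: set = field(default_factory=set)

    def note_attack(self, vertical, region, technique):
        """Record an observed attack so M.O. changes become visible."""
        self.verticals_hit.add(vertical)
        self.regions_hit.add(region)
        if technique not in self.modus_operandi:
            self.modus_operandi.append(technique)

# Hypothetical actor from the trucking-company scenario above
actor = ActorProfile("TruckingPhisher", motivation="financial")
actor.note_attack("transportation", "Timbuktu", "credential phishing")
actor.note_attack("transportation", "Timbuktu", "malicious macro")
# Two techniques on record => a possible M.O. change worth investigating
print(actor.modus_operandi)
```

The payoff is the next round of questions: a second technique appearing under one registrant’s infrastructure is the “did your actor change his M.O., or are there multiple actors?” fork from the dialogue, now visible in data instead of memory.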

It kind of reminds me of the sci-fi movie I watched this weekend (I won’t name it because I don’t want to get sued). Basically our planet had been attacked in the past and we defeated the enemy. Then the humans studied the technology left behind from the aliens. They used it to advance the human race and unite the world. But then years later, another alien shows up, without hesitation we blasted it out of the sky, then a bigger alien shows up and threatens the planet again. However, a group of scientists takes the time to study and understand the first alien that showed up those years later. They come to find out the motivation behind that alien, learn from their observations and if they apply the knowledge correctly they can then ultimately defeat the massive alien force that now threatens them.

The key here, is that they took time to study – let me type that out a little more slowly “T H E Y T O O K **T I M E**” to “S T U D Y” – of course it was after they whacked the mole, but they did do the deeper investigation. This is where we all need to be headed. After we’ve honed our skills at quickly finding and annihilating the immediate threat, let’s start adding a new function to our INFOSEC portfolios: teams to do strategic analysis, enemy profiling, and developing threat intelligence that allows us to take proactive measures to prevent attacks or at the very least identify behaviors that indicate a larger (measured by impact not volume) threat on the battlefield.


BTW, Business people – please pick your faces up off the floor, I know, I just said we need to invest time and money into something that has long-term payouts and not immediate ones. Let me know if you need me to pay your co-pay for your hospital visit.

As always, thanks for reading and supporting.

How’d They Know $PrivateDetails ?


Today a friend and colleague of mine shared that he got a really really good gmail login phish purporting to come from his home owners association president. Immediately my brain spins up because this is my friend and I asked some critical questions.

1) How did the phisher know who the HOA President was?
2) How did they get that individual’s email?
3) How did they know my friend was in that specific HOA?
4) How did they know my friend’s personal email?

Of course the list of questions can go on and on, but the plot thickens when he says the email was sent to his gmail address attempting to get his gmail creds, but that he does not use his gmail account to converse with the HOA President, nor does he remember EVER using his gmail to contact him.

And there we have it, a spearphish executed on a non-work resource.


Now my friend is in Information Security, so naturally he avoided the compromise, discarded the email and is going to take the necessary follow-up steps, but do you know what they are?

So YAY, he’s not a victim, but what’s next? Discard the email, of course; however, the train doesn’t stop there, and it shouldn’t. This kind of incident, although on a private email address that was (most likely) accessed from a home computer, still needs to be reported to your Information Security Department, **making sure the information makes it all the way to your SOC & Threat Intelligence Teams**. After all, this was a spearphish, not a generic blast-all phish hoping for a random victim. CONGRATS, YOU ARE A WALKING PIECE OF INTELLIGENCE! Finally, a “reminder to be vigilant” with details on the spearphish should go out to key leaders. Why? Well, let’s play the what-if game.


What if….my friend’s wife had gotten the email, on a shared family computer, fell victim to it, and a key logger (or other malware) was installed? Someone went to great lengths to find all this information out about my friend, they obviously don’t mind investing time into a target.

What if….another, non-security-savvy key leader, who reuses work passwords at home (cuz that never happens), got a similar email on a personal account at home and fell victim to it? Getting the “be vigilant” reminder may have him/her pressing the ZOMG button and, in turn, reporting that s/he got something like that as well.

What if….the email appeared to be from your children’s school, regarding grades/bad behavior/parent event/free ice cream etc. and it was sent to the child?

What if….the email appeared to be from your child/spouse’s $hobby group (that is publicly plastered all over social media)?

Remember, someone went to the lengths of figuring out where my friend lived, the name of the HOA, who its president was, the president’s email address, and my friend’s email address. That is a lot of effort just to get gmail credentials, which likely indicates they’re after something bigger.


Now, something I don’t see or hear a lot of companies doing is holding security awareness training for spouses and family members. People laugh, but I remember when the Iraq war broke out and family members were plastering social media with tons of pictures of everyone gathered in a gym preparing to leave, saying “Gonna miss my husband so much! God bless the $military unit” or “My husband is finally coming home! They should be landing at $YYMMDD:HH:mm:ss.” We would sit back and say, “Dear spouse, please stop helping the bad guys determine our troop strength and travel plans!” There hasn’t been a #facepalm meme invented yet that could accurately depict the military commanders’ reaction.

So for the non-military folks out there, here’s an idea. Put together a “fun day,” invite the spouses and children, and teach them about phishing (and its various specific forms: phone, spear, social media, etc.) and OPSEC! Sure, you might be the security person in the home, and you have your personal firewall tightly locked down, your wifi wrapped up nice with strong passwords, MAC filtering, SSID broadcast shut off, blah blah blah, but what about your spouse’s/children’s phones and their social media activities? Security is a BEHAVIOR, or a STATE OF MIND if you will, not just technology. Educating the family is just as important as educating the employee.


If you are in security you are a target. While my friend holds a significant role at his employer, the risk would be no less if he was a “lowly” systems administrator (I say that with sarcasm cuz pffft it’s only domain admin creds at stake). Your family is a target, ensure you take the time to educate them and grow a security-minded culture at home as well as at work. If you find yourself spearphished, using personal non-work information you need to be asking yourself “How’d they know that?” and possibly have a conversation with the entity that was impersonated or review how much private data you are sharing.

Report spearphishing whether it is at work or at home, you or your family etc. as this is precious intelligence that any intelligence team needs. Finally, DO NOT do work on a non-work computer especially one you share with the family, be vigilant, remind your co-workers regularly to be vigilant, and share what you know with others.

A reader reached out with a question that made me realize I needed to clarify something above. While I did note that you should delete the spearphish email, the implied task was that you captured it (full header and all contents) BEFORE deleting it as the email along with the report of the spearphishing attempt needs to be provided to the Threat Intelligence team. There can be valuable information in the email header that you do not want to destroy. You can accomplish this by attaching the email to a new email and sending it to the proper team/individual etc. or saving it down and attaching it to an incident report.

Large Foot Prints and Loud Noises

So, milling around in some spam while on another research project, I started noticing something strange: many seemingly unrelated domains appearing in the Reply-To addresses of the same spam campaign. I began digging into the domains for multiple campaigns, and I am currently monitoring the behavior and working on mapping the associations of the bad guys. Granted, there’s nothing glorious about discovering spammers or shutting them down, but uncovering a large enterprise of what appears to be individuals working together kind of intrigues me.

Anyway, while working on this, I found someone who sticks out like a sore thumb. Why? Because they are “loud” and have a huge footprint. This individual registered over 4,000 domains in 11 days. I’m sure he/she has reasons, but none that I’m currently interested in hearing. Not surprisingly, the actor is also associated with over 100K other active domains. I’m not sure what you plan to do about it, but in the meantime, I’m blocking these.

For the record, at this present moment I have only tied this actor to spam (in my resources) and malicious sites as indicated by other research sources, but I have not personally tied them to specific malware.  If I do, I will update this blog with those details provided it doesn’t compromise any other OPSEC.

As a general rule, I’d recommend blocking non-standard gTLDs and allowing your users to request exceptions.
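A deny-by-default policy with a per-domain exception list is simple to express. Here’s a minimal sketch of the idea; the allowlist, the exception entry, and the function name are illustrative (a real policy would also need to handle country-code TLDs and multi-label suffixes), not a recommended configuration.

```python
# Sketch of "deny non-standard gTLDs by default, allow requested exceptions".
# STANDARD_TLDS and APPROVED_EXCEPTIONS below are illustrative placeholders.
STANDARD_TLDS = {"com", "net", "org", "edu", "gov", "mil", "int"}
APPROVED_EXCEPTIONS = {"example.download"}  # user-requested, security-reviewed

def allowed(domain):
    """Return True if the domain passes the gTLD policy."""
    domain = domain.lower().rstrip(".")
    if domain in APPROVED_EXCEPTIONS:
        return True
    return domain.rsplit(".", 1)[-1] in STANDARD_TLDS

print(allowed("example.com"))       # True: standard gTLD
print(allowed("cheap.download"))    # False: non-standard gTLD, blocked
print(allowed("example.download"))  # True: approved exception
```

The exception path matters: blocking a whole gTLD will occasionally break something legitimate, so the request/review loop is what makes the blanket rule workable.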

Here’s a link to the blacklist thanks T-byrd for hosting it. https://www.dropbox.com/s/31c2p85naba08wa/blacklist.txt?dl=0

Check back to that link for future updates.

That’s all for now, if this was helpful to you please let me know.


**UPDATE** 2016-03-29 (0446-UTC)

Additional investigation shows the registrant is a Chinese reseller (http://www.wuyumi.com/). I’ve personally linked many of the domains to spam, and others are blacklisted by Domain Tools and other resources. The seller’s page reveals they sell domains for 2.9-3.5 Yuan; take the average and that’s less than 50 cents (3.27 Yuan = 0.50 USD), and pricing is also per month in many cases.

So let’s math a little here (assuming all his domains are “rented” for the next 12 months at the average rate)

100,135 domains
x 50 cents/mo
x 12 month

$600,810/year gross

Now I’m not sure what their cost is, but let’s assume they bought a .download domain from ALPNAMES for the advertised $0.60 for 1 year. They just rented it out for $0.50 x 12 = $6.00; minus the $0.60 cost, that’s a profit of $5.40.

ROI = 5.40/.60 = 9

9 x 100% = 900% return on investment of 60 cents for one domain.
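The back-of-the-envelope math above, reproduced as a quick sanity check (same assumed figures: 100,135 domains rented at $0.50/month for 12 months, and a $0.60 acquisition cost per domain):

```python
# Reproducing the blog's back-of-the-envelope math.
domains = 100_135
rent_per_month = 0.50            # USD, assumed average rental rate
gross = domains * rent_per_month * 12
print(f"${gross:,.0f}/year gross")   # $600,810/year

cost = 0.60                      # advertised 1-year .download price
revenue = rent_per_month * 12    # rented monthly for a full year
profit = revenue - cost          # $5.40 per domain
roi = profit / cost
print(f"ROI: {roi:.0%}")         # 900%
```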

So we have a low cost of entry to do [bad] business (both the folks buying & renting the domain), links to multiple spam campaigns, some with phishing elements, and links to other confirmed spam campaigns.  I don’t care what they are re/selling them for, at that price, nothing good is going to come of it IMHO.  So the list has been made available WITHOUT WARRANTY you may do with it what you wish.

What’s Under that Threshold?

This blog post is meant to be short, sweet and to the point so please forgive the brevity if you were looking for something in depth this time….


Many of us are trained to get the big fish, find the next cutting-edge threat, defend against the big blob of red in the graphic of some ridiculous C-level slide presentation. We sit, eyes locked on some SOC tool, waiting for bells & whistles to go off, the emails to start flying, the lights to flash to wake us up because we’ve fallen asleep from boredom, all because we’ve placed our trust in a tool to tell us where to focus our attention. So, how often do you go digging, or lift the lid on something to peek at what’s inside? What are you doing about the quiet, smart bad guy who’s tiptoeing in just under your alert criteria? You know, the one who isn’t making a lot of noise on your network, the customer doing the dirtiest of deeds just under the thresholds of your automated alarms?


Well, if you know what your thresholds are for automated alerts, why aren’t YOU looking at what lies beneath them? Is it because you think nobody with malicious intent would take the time to do X in such small quantities because it wouldn’t pay off? Is it because your tool is awesome and perfect *cough*cough*cough*cries*grabs water*? If you answered yes to the second or third question, please allow me to share some good ol’ country advice that has served me well: “He who underestimates his enemy has lost the first battle in the war.”


So, without divulging the details of my current research, I’ll share a few things I’ve been noticing lately. First is bad guys doing a little here, a little there when purchasing domains. Instead of buying in bulk, they’re buying a few each day. So, if you’re selling domains, maybe you want to take a look at any customers who are buying in quantities just below your “alarm” threshold and who are NOT buying via your bulk discount programs. I mean, seriously, what does one individual need with a couple hundred domains that he/she wouldn’t want to take advantage of bulk discounts for? They could just be a legit business that doesn’t know any better, but I’m gonna guess not. It might be worth checking those domains out using tools such as OpenDNS, Domain Tools, Threat Grid, and VirusTotal. Are the domains registered, more than 30 days old, and still without websites? What’s the aggregate number of domains purchased in the last 30 days, and how old is the customer account? Does the data on the domain registrations match that on your customer’s account? Does the data on a domain registration match ANOTHER customer account? If you find that your customer’s domains are popping hot, ya just might want to take a leeeetle-bit closer look at their activities.
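That “just below the daily alarm, big in aggregate” pattern is easy to surface if you roll purchases up over a window. A minimal sketch, assuming you can export purchases as (customer, date, count) records; the function name and every threshold here are made-up illustrations, not tuned values.

```python
from collections import defaultdict
from datetime import date, timedelta

def under_threshold_buyers(purchases, daily_alarm=20,
                           window_days=30, window_total=100):
    """Flag customers who never trip the per-day alarm but whose aggregate
    purchases over `window_days` exceed `window_total`. All thresholds are
    illustrative. `purchases` is an iterable of (customer, date, count)."""
    daily = defaultdict(lambda: defaultdict(int))
    for cust, day, count in purchases:
        daily[cust][day] += count

    cutoff = max(d for days in daily.values() for d in days) - timedelta(days=window_days)
    flagged = []
    for cust, days in daily.items():
        recent = {d: n for d, n in days.items() if d > cutoff}
        # Quiet every single day, yet loud in aggregate
        if recent and max(recent.values()) < daily_alarm and sum(recent.values()) > window_total:
            flagged.append(cust)
    return flagged

# A buyer taking 15 domains a day for 10 days: never alarms, 150 total
purchases = [("cust-42", date(2016, 3, 1) + timedelta(days=i), 15) for i in range(10)]
purchases.append(("cust-7", date(2016, 3, 5), 3))  # normal buyer
print(under_threshold_buyers(purchases))  # ['cust-42']
```

The same rollup is where the follow-up questions plug in: account age, registration data mismatches, and no bulk-discount enrollment are all extra columns on the flagged rows.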

Let’s look at another OSINT source you have: customer access logs. The second thing I’ve been noticing is bad guys creating DNS entries a little here, a little there. So you found a guy flying below the radar (could be a girl, but just go with me here) with the daily number of domains purchased under your alarm level. Maybe you provide infrastructure, not domains, so you offer DNS, and you have a customer flying below the radar making lots of DNS records. Do your tools alert you when a customer logs into his/her account from multiple ASNs, or ASNs in different countries? I mean, if a guy logs in for <5 minutes, makes DNS records, and logs out, from Romania on Sunday, Russia on Monday, Great Britain on Tuesday, etc., either he’s racking up some serious frequent flyer miles or he might be up to no good. AGAIN, there COULD be a perfectly legitimate explanation (none come to mind immediately), but you won’t even know unless you go looking. If you’re providing website hosting, do you have a customer with hundreds of completely unrelated domains pointing to a single IP? I once found a guy with over 900 malicious domains all pointed at a single IP. I wanted to say to the provider, “Seriously, you don’t notice?”

*SUMMARY* So the point of today’s topic: start looking BELOW your automated thresholds for the really bad guys. Be proactive; stop waiting for bad guys to wave, shake your hand, and say hello. Thanks again for taking time to read the blog, and feel free to share comments, DM me on twitter, or just tag and say hi!
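The Romania-Sunday, Russia-Monday login pattern above can be flagged with a simple sliding-window check over access logs. A minimal sketch, assuming you can geolocate login source IPs to country codes; the function name, the 7-day window, the 2-country limit, and the sample accounts are all illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def multi_country_logins(logins, window=timedelta(days=7), max_countries=2):
    """Flag accounts seen logging in from more than `max_countries` distinct
    countries within any `window`-sized span. Thresholds are illustrative.
    `logins` is an iterable of (account, timestamp, country_code) tuples."""
    by_acct = defaultdict(list)
    for acct, ts, country in logins:
        by_acct[acct].append((ts, country))

    flagged = set()
    for acct, events in by_acct.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            # Distinct countries seen within `window` of this login
            countries = {c for t, c in events[i:] if t - start <= window}
            if len(countries) > max_countries:
                flagged.add(acct)
                break
    return flagged

logins = [
    ("dns-guy", datetime(2016, 3, 6, 2), "RO"),   # Sunday, Romania
    ("dns-guy", datetime(2016, 3, 7, 2), "RU"),   # Monday, Russia
    ("dns-guy", datetime(2016, 3, 8, 2), "GB"),   # Tuesday, Great Britain
    ("normal-user", datetime(2016, 3, 6, 9), "US"),
]
print(multi_country_logins(logins))  # {'dns-guy'}
```

A frequent business traveler could trip this too, which is exactly why the output is a lead for a human to review, not an automatic block.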

Stop Having Sex for the First Time – part 2

In the first part of this article, I gave some various examples of how InfoSec teams are structured to fail or at the very least function very inefficiently. Next we’ll talk about how to achieve a more effective *INTEL* team – and how it will enable the development of intelligence in the organization.

FIRST: Specialization Without Division –
So, here’s where experience in the bedroom really pans out in this InfoSecsy relationship. You want to get lots of smart people who each excel at one thing but know a little bit about a lot of related things.

Both InfoSec & Intel teams will benefit from this structure; the caveat is that you must also have people with the right personality (nobody likes selfishness in the sheets). In addition to the right mix of talent, you need people who respect each other’s abilities, aren’t afraid to ask for help, and are willing, even eager, to share what they find. You don’t need a bunch of multipurpose rock stars; rather, you want people who excel at things such as malware reverse engineering, pcap analysis, social engineering, development, data analysis, and even specific application software. You also want them to have foundational knowledge in other security realms.

The second part of this is that they are ONE TEAM. They are not divided into divisions with Directors and VPs over specific areas; rather, they are outside hires or even the internal elite from the network security team, the security operations center, the devops team, etc. They will likely have liaison relationships with these functional areas and access to the data from them as well.

In some cases it may make sense to have multiple teams distributed across the country; in some cases the company size may support co-locating them in one physical space. Nonetheless, the bottom line is that they are all ONE Team. They are your version of a special forces troop: everyone has a job, yet they all help each other and are willing to learn what they can about another area to be as effective and helpful as possible when needed.
SECOND: In Failure and Success, in Sickness and in Health ’til Termination Do We Part

This is an InfoSecsy partnership whether you like it or not. If an attack on your organization succeeds or fails, you share the responsibility. If you build something and it doesn’t work, you share the failure; when it does work, you share the success. If you have an idea and it leads nowhere, you mark it off as something tried and eliminated. If you have an idea, try it; if it fails, tell everyone WHY/HOW it failed so they don’t waste resources trying the same thing, then move on. If you try something and it succeeds, share so everyone knows WHY/HOW it worked and can repeat it, enhance it, and also succeed. [Ask @Ben0xA for his preso on FIAL – it’s awesome]
THIRD: share, Share, SHare, SHAre, SHARe, SHARE, SHARE!!!!!

Sharing InfoSecsy knowledge, skills, experience, and ideas will only enhance your Intel team and your company’s security posture. For example, the other day someone told me that an Exchange team was unable to help us identify who clicked on a link while accessing OWA, because everyone shared a generic login on the shared workstation. Having experience in a related area, I was able to offer a suggestion to the Exchange team and the SOC analyst that allowed the proper syslogs to be identified in their repository, and the Exchange team then liaised with the Windows IIS team to pull the data that was later analyzed. Neither of these areas was my responsibility or expertise, but because of their willingness to share the problem and brainstorm, solutions emerged.
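To make the OWA anecdote concrete: even when users share a generic login, the web server’s own logs still record each request’s client IP. The sketch below is purely illustrative, assuming IIS logs in the default W3C Extended format (the `#Fields:` directive and field names like `c-ip` and `cs-uri-stem` are IIS defaults; the sample log lines and URI are invented for the example and your deployment may log different fields).

```python
# Illustrative sketch: correlate an OWA link click with a client IP
# using IIS W3C Extended logs. Field names (date, time, c-ip,
# cs-uri-stem) are IIS defaults; adjust to your log configuration.

def parse_iis_log(lines):
    """Yield one dict per log entry, mapping W3C field names to values."""
    fields = []
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # field list follows the directive
            continue
        if line.startswith("#") or not line.strip():
            continue                    # skip other directives / blanks
        yield dict(zip(fields, line.split()))

def who_requested(lines, uri_fragment):
    """Return (timestamp, client IP) pairs for requests matching a URI fragment."""
    hits = []
    for entry in parse_iis_log(lines):
        if uri_fragment in entry.get("cs-uri-stem", ""):
            hits.append((f"{entry['date']} {entry['time']}", entry["c-ip"]))
    return hits

# Hypothetical log excerpt for demonstration only.
sample = [
    "#Software: Microsoft Internet Information Services",
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2015-06-01 14:02:11 10.0.0.5 GET /owa/redir.aspx 200",
    "2015-06-01 14:02:45 10.0.0.9 GET /owa/ 200",
]
print(who_requested(sample, "redir.aspx"))
# -> [('2015-06-01 14:02:11', '10.0.0.5')]
```

The client IP alone doesn’t name the person, but paired with DHCP leases, badge records, or workstation assignment data it narrows a shared login down to an individual, which is exactly the kind of cross-team correlation the anecdote describes.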
Another example: when we had a host that couldn’t be located, I got the NOC, SOC, and Help Desk all talking, and we collectively came up with a non-traditional way to protect the network and find the asset. While I didn’t know the topology, I was able to ask questions that spawned conversations that resulted in solutions.

Sometimes the person with the LEAST knowledge in a subject area can ask the simplest question that lights a much-needed fire, simply because of how they processed the information. The bottom line is: get your people together regularly to discuss what has happened, what is happening, what is known, and what is yet to be figured out, and collectively, ideas and solutions will emerge.
FINALLY: Recycle & Re-Use

For this final note, I’ll use a hypothetical incident as an example. A Sales Engineer (SE) gets an email from an individual purportedly representing one of his clients. The individual is asking for assistance in collecting network and netflow data to help him tune his SIEM, a seemingly harmless request. As the conversation progresses, the SE thinks the guy is sketchy, so he contacts the SOC. The SOC runs a number of checks on the accounts involved and looks for any relationship to known incidents; nothing is found. The guidance given is to limit the scope of information provided to the individual per company guidelines. So what’s next? Well, if we abide by the third rule, this information gets shared with the Intel team, and then the fourth rule takes effect: the information is recycled. The Intel team runs through it with a different filter and begins to identify that the individual is not only sketchy, he is possibly even an imposter executing a very crafty social engineering attack. So what’s next? Recycle & Re-Use again. Contact the customer the individual claims to represent and pass the information to them. Let them look at it with a different filter. You never know what puzzle someone else is putting together, and what appears to be “nothing to see here” might be a critical piece of information that ties everything together for someone else.

The first part of this article discussed how the traditional, rigid, corporate sandboxes of responsibility that define various IT functions within an InfoSec program tend to hinder effectiveness when it comes to security. The second part provided ideas and examples on how to restructure and build teams, as well as when and how to share information across specialties. There are a few takeaways I’d like to leave you with:

1. The only right structure is the one that maximizes and encourages information sharing and meets the organizational needs for security AND intelligence within resource constraints

2. Embrace failures – they are the stepping stones that lead to the door of success

3. Bring your teams (worker-bee level) from all disciplines together regularly to discuss the security concerns and issues everyone is experiencing – and most of all, encourage them to SHARE ideas and experience.

4. Recycle data on security incidents, even concerns about a possible incident. Ensure it is passed among your teams via a process that works for your organization, with the end goal of everyone getting a say-so/review of it.

So go forth, do great things, and enjoy the InfoSecsy side of security not just the InfoFail side.

Thank you once again for taking time to read OSINT Heaven’s Blog.