Category Archives: Planning

When and How to Hire a Threat Intelligence Analyst

WHEN…

Threat Intelligence has become the latest marketing buzzword, often abused and misused in an effort to impress a customer base. So, when do you need threat intelligence, and when is the right time to hire someone to “provide customers” with threat intelligence? Well, you should never hire someone specifically to provide customers with threat intelligence unless that is the product you are specifically in business to produce. You can read more about this in the blog “Three Myths about Threat Intelligence.”

Typically, you would be ready to hire a threat intelligence analyst once you’ve established mature security practices for your organization. This is not to say that a Threat Intelligence team cannot be set up and designed to grow as the company grows; however, it is typically a strategic investment where the Threat Intelligence team’s first role is to serve internally, supporting decision makers. It also serves to strengthen the security posture and proactively detect, deter, and destroy/avoid threats. While start-ups would benefit from understanding threats to their products, people, facilities, and customer data, they do not typically plan for the capital investment to support threat intelligence efforts. Additionally, Threat Intelligence teams do not normally generate products for revenue; rather, they serve to inform decision makers about potential threats on the horizon, protect the organization from internal and external threats to people, property, and assets, and in rare instances provide competitive advantage. In short, you are probably ready to hire once you are ready to make a strategic investment and take a proactive approach to security and threat detection, deterrence, and avoidance.

Below is a brief checklist of things an organization should achieve before being ready to hire a threat intelligence analyst.

  • Mature security processes and culture in place
  • Obtained CEO, CFO, CIO support and buy-in from Legal, Marketing, Physical & Information Security
  • Structured the Director of Threat Intelligence and his/her team to report directly to a C-level officer, optimally the Chief Security Officer
  • Completed a threat intelligence program charter and program outline
  • Defined the immediate intelligence requirements
  • Defined communications plans for intelligence dissemination internally and externally

ONE PERSON CANNOT EFFECTIVELY SERVE TWO MASTERS

Once you’ve completed the tasks above, you should be ready for the next phase – hiring in preparation for collection and analysis. You should not have started any intelligence collection aside from what may already be generated inside individual departments: network logs, market reports, incident reports, etc.

Your first hire should be a managerial role that will oversee the persons performing collection and analysis. While it will be immensely beneficial to hire someone who has experience within the intelligence community, it is not a requirement. Someone skilled in managing “geeks” or “nerds,” however, is a minimum requirement.

When under tight budget constraints, companies often try to cut corners and hire someone skilled in both collection and analysis, having them perform both full-time roles, i.e. two masters. This does not scale and is not sustainable. While it may work initially, you will quickly learn that time spent serving the first master Collection & Processing (collecting intelligence, developing tools, and tuning collectors) is time that cannot be spent serving the second master Analysis and Reporting (doing robust analysis of the threat data that has been collected). The individual cannot serve two masters (do both jobs) indefinitely.

At a minimum, you should plan on having a developer focus on developing, integrating, and tuning intelligence collection tools. This person will also work with analysts to develop tools and processes for converting the collected data into formats the analysts can use, a phase known as intelligence processing. The team/person responsible for developing the tools will have an intimate relationship with the analysts consuming the data/information that has been collected and processed. Whether you hire the threat intelligence analyst or the developer first is not important; what is important is that they can communicate effectively with each other and have a solid understanding of what the other does.

HOW…

Know the traits you need in a threat intelligence analyst and realize a great analyst may not have “analyst” in their previous job titles. More importantly, a person’s mindset and character often make the difference between a good and great analyst, not their years on the job. A good threat intelligence analyst, while unique in their own way, shares many characteristics with analysts from other disciplines. So, what traits and skills should they possess?

First, they should be able to WRITE CONCISELY. This is a skill commonly found in journalists, historians, and researchers. Look for someone who has experience in public affairs, school newspapers, or blogging. If an analyst cannot communicate the importance of a threat in a short, concise manner, decision makers will likely not find value in their reporting. If an analyst cannot show value, leaders can (and often do quickly) form the opinion that threat intelligence is a useless money pit.

Second, a good analyst is a professional tin-foil-hat model, never trusting an analysis without knowing what methods and data were used to generate the report and how the data was collected. They are skeptical, ask lots of questions, and think outside the box.

Third, they should be humble, admit their mistakes, and learn from them. Sometimes an analysis can go horribly wrong, and when it does, it makes front page news. This doesn’t necessarily mean the analyst is a bad analyst, at least as long as they learn from it. It may be they were pressured to provide a report based on insufficient or corrupted source data and didn’t push back for more time to consider other explanations of the data, or maybe they were unaware of their own bias. Whatever the cause, a good analyst can identify where the analysis went wrong and learn from the error(s).

Fourth, a threat intelligence analyst needs to have comprehensive knowledge on the subject or be able to quickly ramp up. For example, an analyst with one year of security experience who also has in-depth knowledge of religious and cultural practices from a geographic region where your biggest threats reside can be just as valuable to a threat intelligence team as someone with ten years of security experience and no relevant geographical or religious knowledge or experience.

Fifth, they know the tools and data resources available for collecting intelligence. Often, the hardest part of collecting intelligence is knowing where it is, how to get it, and how to find new sources.

Sixth, a good analyst has refined technical skills with respect to understanding how data is/was collected and processed, as well as knowing when data is missing and being able to explain why it is missing. This helps them know when to question the collection results and how to work with the collection team to tune the methods, techniques and processes. Additionally, they should have advanced skills when it comes to collating data points for analysis in order to identify relationships and trends.

Finally, they should have a solid understanding of and experience in developing and testing hypotheses, to include communicating the methods used, assumptions made, data that is missing, and potential biases.

GOOD TO GREAT

A great analyst is one who is willing to review someone else’s hypothesis, theory, model, etc., and then, if the data supports it, admit that while his/her own assessment may differ, both are viable. Many times, the best analysis is a hybrid of theories from different individuals who had very different starting points, combining the best of each analysis to create the final product. Additionally, when a theory or hypothesis is disproven, or the data doesn’t support it, they need a “no-quit” mentality, continuing to chip away until they have a theory that is supported by the data.

In addition to willingly accepting others’ evaluations and assessments, a great analyst is also cognizant of his/her own bias. For example, a 50-year-old male analyst from Ohio who grew up in a Christian home and never traveled more than 250 miles from home is probably going to have a very different set of biases influencing his analyses than a 50-year-old male analyst from Mississippi who spent 20 years in the military and is an atheist. The ability to admit one’s own bias is often found in someone who can have academic discussions, being able to say, “I understand your argument, I just don’t agree with it.” Being self-aware and able to admit one’s own bias is a trait often overlooked in the interview process.

IT’S ALL ABOUT THE BIAS…

So, which of all the things discussed above is the most critical characteristic of a threat intelligence analyst? The last one: the ability to admit one’s own bias. You certainly hope to find a threat intelligence analyst who embodies all of the listed traits that constitute a good and great analyst; however, at the end of it all, the ability to admit one’s own bias turns out to be the foundation upon which most of the other traits sit.

Finally, and most important, they are willing to admit when they are wrong, and even more importantly, when someone else is correct.

Three Myths About Threat Intelligence

Word Count: 678
Estimated Reading Time: 3-4 minutes

 

  1. Threat intelligence is something you should provide your customers

If threat intelligence products are not your flagship product or primary business function, then threat intelligence is not something you should provide as a product or service directly to your customers. Threat intelligence is more than just blogs about the latest malware; it is a full scope business function that serves the organization strategically, operationally, and tactically. While threat intelligence may direct/influence the actions taken at the tactical level (i.e. to protect internal assets such as networks, intellectual property, and (customer) data), the intelligence itself and methods by which it is developed should not be released to your customer base as a product. In some rare instances, corporations have full teams dedicated to developing threat intelligence, which in turn is disseminated internally; these are usually organizations with very mature security practices and processes. While they may eventually publish what they learn via a corporate blog, the team’s function is to serve the organization, not provide a product to the customer.

NOTE: This should not be interpreted to mean that intelligence should never be shared or disseminated to customers. That is a discussion that goes beyond this article’s scope.

NOTE: In short, if you have not mastered the art of developing threat intelligence in-house, you should not be offering it as a service or product.

 

  2. Threat intelligence is nothing more than advanced information security or “googling”

Threat intelligence itself is a proactive approach to security, while an information security practice (or department) is a consumer of the details generated from threat intelligence. A true threat intelligence program consists of governance and compliance, data/intelligence collection, processing, analysis, reporting, and dissemination. A Threat Intelligence team combines data from the information (“cyber”) security domain with data from multiple domains and disciplines, such as history, economics, political science, education, religion, industry/market-specific trends, and cultural studies, to define the threat. An information security department, being a target itself, often generates data (e.g. incident post-mortems) that may be synthesized with various other sources in order to generate a holistic threat picture. While the Information Security team may generate threat data points at a tactical/operational level, such as details about the latest denial-of-service attack or phishing campaign, they are not generating actual intelligence; in other words, they are not defining the threat.

 

  3. Threat Intelligence is a “cyber” thing

While threat intelligence has many faces and a fully-fledged Threat Intelligence Program serves multiple departments, its primary mission is to support C-suite decision making by educating decision makers so they can make well-informed decisions with as much available information as possible. Supporting other departments is a secondary role, albeit still important. The Marketing department benefits from information about threats to the corporate brand and works with the Legal department to thwart them. The Legal department benefits from information about threats posed to copyrights or trademarks, by specific individuals or business partners, and anything exposing the company to potential litigation. The Human Resources department benefits from information about threats posed by personnel, especially for mission-essential roles. Pretty much any department that works with sensitive strategic information, plans, projections, forecasts, or highly sensitive data such as intellectual property, customer data, or security-related information can benefit from threat intelligence.

Risk Management and Threat Scoring: a qualitative, quantitative, consistent and repeatable method

For new readers, welcome, and please take a moment to read a brief message From the Author.

Executive Summary

Threat Intelligence teams and Information Security teams often struggle to communicate to leadership why a specific vulnerability should be taken seriously or given more precedence, especially when the CVSS score is low. This blog is a brief explanation of how to use a threat scoring matrix to consistently evaluate threat actors, then combine that score with a CVSS score to communicate the true risk posed to your organization. It helps provide a measurable score that can easily be integrated with CVSS scoring systems, thereby also facilitating risk scoring automation. It also allows leadership to make a well-informed decision, as a vulnerability is considered within the context of a comprehensive profile of the threat actor seeking to exploit it. We discuss the example matrix metrics below, and you may download the Threat Actor Impact Scoring Matrix PDF from here [0]. It uses a five-point scale to score a threat actor based on nine criteria: determination, motivation, technical resources, financial resources, intelligence resources, skills/expertise, time they went undetected, team size, and time available to dedicate to malicious activities. It then scores the target (presumably you) on three criteria: probability of a successful attack, stage of any technical exploit, and the attacker’s focus.

PART 1: Threat Actor Profile / Characteristics

Once upon a time, I went and got this thing called a CISSP because some super smart cyb3r entity decided that a cert meant I was smrt…
There was an entire section dedicated to risk management, and it discussed in depth both qualitative and quantitative methods for calculating risk. It taught you how to calculate asset values six ways to Sunday, and how to score vulnerabilities. There is even a magical formula for calculating risk:

Threat x Vulnerability x Asset value = Total risk

And there’s this other magical formula…

Total risk – Countermeasures = Residual risk
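The two formulas can be sketched directly in code. A minimal illustration in Python; the sample numbers (a 1-5 threat/vulnerability scale and dollar asset values) are my own assumptions for the example, not part of any CISSP material:

```python
# Hypothetical illustration of the two formulas above. The scales
# (a 1-5 threat/vulnerability score, dollar asset values) are my
# own assumptions for the example.

def total_risk(threat: float, vulnerability: float, asset_value: float) -> float:
    """Threat x Vulnerability x Asset value = Total risk."""
    return threat * vulnerability * asset_value

def residual_risk(total: float, countermeasures: float) -> float:
    """Total risk - Countermeasures = Residual risk."""
    return total - countermeasures

risk = total_risk(threat=4, vulnerability=3, asset_value=50_000)
print(risk)                                          # 600000
print(residual_risk(risk, countermeasures=200_000))  # 400000
```

Whatever units you choose, keep them consistent across assets so residual risk scores are comparable.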

Despite telling you to use a Threat (score) factor, they never teach you how to calculate/score a threat or do threat modeling. So, I’ve tried to bridge that gap with the example matrix; the explanations below should help you understand how to grade each metric and integrate it with your vulnerability scores for a better measure of a threat.

DETERMINATION is the measure of the threat actor’s courage or boldness. This score ranges from easily scared off to brazen and hell-bent on compromising you.

MOTIVATION is also understood from the threat actor’s perspective as “What’s in it for me,” aka the WIIFM. What is it that they are really after? Are they adolescents just screwing around in their free time, ready to show their friends at the lunch table what they did to your website? Or are they a bigger threat, motivated by money or, even worse, seeking to completely destroy you?

TECHNICAL RESOURCES refers to the kinds of tools they appear to be using. This is also closely related to the next criterion, Financial Backing. Are they using only free/open-source tools, indicating very low financial backing? Are they using tools that require purchase, licensing, or subscription, or that indicate they’ve done some in-house customization? Did they leverage something that had never been seen before, sometimes referred to as an 0day (zero day)?

FINANCIAL BACKING is a little trickier to evaluate. The threat actor is unlikely to publish their banking records for you, but you can interpret other factors, such as the one above, to help make a selection in the matrix. If you find that an attack occurs only M-F from 5pm-10pm, they probably have a regular full-time job and only do this in the evenings, indicating moderate to low financial backing. However, if the attack runs M-F 8am-6pm, this is probably their full-time job. If it is only on the weekends… well, you get my point. Then there are the tools, and the costs associated with them, that you can consider as well.

INTELLIGENCE RESOURCES indicates your level of exposure. How could the attacker have known what they knew in order to launch the attack? Did they use knowledge that only someone inside could have known? This doesn’t mean it is an insider threat, but it could indicate that someone is loose-lipped in chatrooms, on social media, or at their local InfoSec meet-up and ignorantly disclosed something. Did they go after future marketing and merger information? Did they go after intellectual property for designs or your secret sauce? Or did they use something publicly available on a pastebin or at shodan.io or censys.io (that maybe shouldn’t be there in the first place)? Just because you think nobody should know it doesn’t mean someone didn’t screw up and share it accidentally. Be very careful before making a selection here, as it can skew your overall score.

SKILLS AND EXPERIENCE is captured to some degree in the CVSS v3 scoring system under Attack Complexity (AC), but unfortunately the vulnerability evaluator is given a binary option of Low or High. Rarely in life is anything as simple as “easy” or “hard,” so to more effectively create a profile for the threat actor we consider their skills and experience. It may help to think of this metric in terms of the amount of forensic evidence left behind. Does it appear (or do you know) that the attacker has ever executed an attack such as this before? Was it messy; in other words, did they leave behind a trove of forensic evidence, or is it extremely difficult to track their movements and activities because they cleaned up logs extremely well?

TIME UNDETECTED is fairly straightforward: how long were they in your network, accessing an employee’s email, siphoning data, etc., before you found them? Perhaps it was not your asset they were in; if you are scoring a threat that has been published and you recognize some similar characteristics in your data, you can use the published information as a reference point and make your own assumptions, provided you document those assumptions, as they may need to be changed in the future. Side note: if you document what you’ve assumed, and you change it later because you find your assumption was inaccurate, it helps keep the organization from making the same mistake later, once you are gone.

TEAM SIZE is again something that can be calculated a number of different ways; how you define the scale is up to you, but the example gives you more than just head counts, as it takes the “tech savvy” level into account. It is kind of like saying, in a sports draft, I can get one rock star for x dollars or three solid players for x dollars. Raw team sizes are straightforward numbers, so there should be some wiggle room in the scoring to account for skill level.

TIME is the final characteristic metric, and it represents how many hours per week the attacker(s) are putting into this. It is not a specific measurement of how many hours they spent launching a ping sweep; rather, it is how much time they had to put into planning, reconnaissance, and execution, as well as actual attack design, launch, and monitoring.
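The nine characteristic metrics above reduce to a single actor score. Here is a minimal sketch in Python, assuming each criterion is scored 1-5 per the matrix; the field names and the example actor are illustrative, not taken from the linked PDF:

```python
from dataclasses import dataclass, astuple

@dataclass
class ActorProfile:
    """Nine characteristics, each scored 1-5 per the matrix."""
    determination: int
    motivation: int
    technical_resources: int
    financial_backing: int
    intelligence_resources: int
    skills_experience: int
    time_undetected: int
    team_size: int
    time_available: int

    def score(self) -> int:
        values = astuple(self)
        if not all(1 <= v <= 5 for v in values):
            raise ValueError("each criterion must be scored 1-5")
        return sum(values)  # maximum possible: 45

# A well-funded, full-time crew that stayed hidden for months:
apt = ActorProfile(5, 4, 5, 5, 4, 5, 5, 3, 5)
print(apt.score())  # 41 (of a possible 45)
```

Keeping the profile in a structured object also makes it easy to store and re-score each actor at your quarterly review.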

———————————-

PART 2: Threat Target Score

This takes into account the impact of the attacker attacking you, whether or not there is an exploit (and its stage), and who the actor is targeting.

Probability of Successful Actor Attack measures how likely they are to succeed in compromising you. If they are highly skilled, well funded, and malicious activity is their full-time job, they are probably more likely to succeed than fail. In some cases, you might determine this score mathematically, based on the average of the scores above. Another alternative is to calculate it from the CVSS score and the threat actor’s score above. However you choose to do it, document your decision, and provide instructions if it is to be derived mathematically from the other factors.

Technical Exploit measures the stage of an exploit. In the CVSS v3 scoring system, this is captured to some degree in the temporal scoring metrics, but those are based on whether the exploit code works in most versus all situations. In this matrix, we are concerned with how an exploit is being used once it exists, because we are scoring the threat, not the exploit or the vulnerability. The top two tiers of this metric measure whether or not the exploit is in the wild and is/isn’t organic to your technical ecosystem.

Non-technical is a metric for measuring who is being attacked. Is the attacker using a spray-and-pray approach, indiscriminate, not seeming to care who is affected so long as someone is? Or is the threat more focused on a geographic region or country, your industry (tech, travel, biomedical, manufacturing, power plants, etc.), or your immediate peers?
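The three target metrics can be scored the same way. A hypothetical sketch, again assuming the matrix’s five-point scale; the example scores are illustrative:

```python
def target_score(probability: int, technical_exploit: int, non_technical: int) -> int:
    """Sum of the three Threat Target metrics, each scored 1-5 (max 15)."""
    for v in (probability, technical_exploit, non_technical):
        if not 1 <= v <= 5:
            raise ValueError("each metric must be scored 1-5")
    return probability + technical_exploit + non_technical

# Likely-to-succeed actor, in-the-wild exploit, focused on your industry:
print(target_score(4, 5, 4))  # 13 (of a possible 15)
```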

Using the Matrix Wisely

This threat actor matrix should be carefully analyzed before you choose to implement it in your organization, as what is represented here may not fit your organization’s risk tolerance. You may need to tighten it down, making it more stringent; for example, your organization may consider someone motivated by monetary gain to be a threat level higher than someone motivated to induce change. The point is, don’t just take this and start using it; first make sure you understand and have considered the definition of each characteristic. At the bottom of the matrix you will find one way to use the Threat Actor Impact Score and Threat Target Score to create an Overall Threat Score. You can then combine this with the CVSS score ascribed to a vulnerability (I recommend using the one provided by the vendor whenever possible, as they know their software best). Together, you can present a holistic risk score to your leadership that represents the vulnerability’s severity within the context of a specific threat actor. You will probably be surprised to find that low- and moderate-scored CVSS vulnerabilities warrant more attention than you realize.

Because Math…

If you’re a math whiz, you probably noticed that I have “divide by 60” to calculate the overall threat score, then when using it with CVSS I turned around and multiplied by 10… Well, Smarty Pants 😊, good for you for noticing that I could have just divided by 6 in the first place. But this was built on the Keep It Simple, Stupid (KISS) principle, kind of like developers putting lots of comments in their code so that other people can understand what they were thinking or trying to accomplish.

If you choose not to use one of the metrics and eliminate it altogether, make sure that, at the bottom, you reduce the “Out of ##” to reflect the maximum possible points remaining so that your score doesn’t get skewed.
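Putting the arithmetic together, here is a sketch of the overall score and one possible CVSS blend. The divide-by-60 normalization reflects the matrix’s twelve 5-point metrics; averaging the result with CVSS is just one option (document whichever combination you choose), and the max_points parameter mirrors the “Out of ##” adjustment described above:

```python
def overall_threat_score(actor_total: int, target_total: int,
                         max_points: int = 60) -> float:
    """Normalize the combined matrix score to the 0-1 range.

    If you eliminate a metric, lower max_points accordingly
    (e.g. 55 after dropping one 5-point criterion) so the
    score is not skewed.
    """
    return (actor_total + target_total) / max_points

def risk_with_cvss(threat_score: float, cvss: float) -> float:
    """One possible blend: scale the threat score to 0-10, average with CVSS."""
    return (threat_score * 10 + cvss) / 2

threat = overall_threat_score(actor_total=41, target_total=13)  # 54/60 = 0.9
print(round(risk_with_cvss(threat, cvss=4.3), 2))  # 6.65 -- a "moderate" CVSS now reads as serious
```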

Finally, this is not a one-and-done process. You will need to review the scores for your threat actor profiles regularly (I recommend quarterly, semi-annually at a minimum). Small changes in how you do business can have unexpected consequences (both good and bad) that will impact how you score these threats. They may also change the priorities of asset values and what is deemed a critical asset.

Thank you all for reading. With the holidays, I haven’t asked anyone for a proofreading, so if you find typos, errors, or needed corrections, please email me at 447cc8c9 \at\ opayq.com.
Merry Christmas and Happy New Year to all of you.

[0] https://github.com/grcninja/Blog_Docs/blob/master/THREAT%20ACTOR%20IMPACT%20SCORING.pdf

Outlining a Threat Intel Program

(estimated read time 27min)

For new readers, welcome, and please take a moment to read a brief message From the Author.

Executive Summary

I recently crunched the high-level basics of setting up a threat intelligence (Threat Intel) program into a 9-tweet thread, which was met with great appreciation; the feedback I solicited unanimously agreed that I should expand on the thread in a blog, so here we go.

This blog elaborates on a nine-step process for creating a Threat Intel program. It is packed full of thought-provoking questions, suggestions, and even a few lessons learned to help you avoid bumps in the road. The concepts shared here aren’t necessarily earth-shattering; in fact, they come from military experience, time spent in combat zones, 24/7 shifts in intelligence facilities, information assurance, governance/risk/compliance, and information security (InfoSec) programs in both government and civilian sectors. Additionally, I take every opportunity to pick the brain of anyone who has been doing threat intel or InfoSec, and occasionally I even sit still long enough to read a book, article, or paper on the topic. Threat Intel isn’t anything new. It has been around as long as humans have been at odds with each other, covering anything from sending out spies to eavesdropping in a bar, but we seem to struggle with developing a program around it in the digital space. This blog aims to close that gap and provide you a practical outline for designing your own Threat Intel program.

Introduction

Many of you are used to the long-standing saying, “You can have your project fast, cheap, or right. You’re only allowed to choose two.” But what about quality? I remember when I first learned to drive: my mother gave me $5, told me to be back in 15 minutes, and to bring her some dish detergent. I ran to the store, grabbed the bargain brand, hurried back home, and handed it to her. She looked and shrieked, “What’s this!?” I learned more about dish detergent in the 15 minutes that followed than I care to remember. The lesson here is that I had completed the task on time, under budget, and provided exactly what she required. It was fast, cheap, AND right, but it didn’t meet her preferred standard of quality.

Taking this lesson learned, I include a fourth constraint for tasks/projects: quality. Imagine our four factors like a diamond, perfectly balanced, with four equal sections. The rules are simple: if you wish to increase volume in one of the sections, you must decrease volume in another. For this threat intel discussion we label our four sections time, money, design/accuracy, and quality. Threat intel is rarely, if ever, black and white; therefore we will use the term ‘accuracy’ instead of ‘right’, as ‘right’ implies binary thinking, right or wrong. As we discuss building out a Threat Intel program in this blog, we’ll refer back to our balanced diamond to help remind us of something Tim Helming so eloquently commented (https://twitter.com/timhelming/status/854775298709012480): at the end of the day, the micro (1’s & 0’s of threat hunting) has to translate to the macro (a valuable Threat Intel program that pays the bills).

 

1: WHAT ARE WE PROTECTING?

The first tweet in the series (https://twitter.com/GRC_Ninja/status/854573118010122240) starts simply with “list your top 3-5 assets.” This may sound very straightforward; however, I suspect that if you individually asked each C-level executive, you’d probably wind up with a very diverse list. Try to answer: 1) what is it that your organization actually DOES, and 2) what assets do you need to do it?

I’d encourage you to have your top two leadership tiers submit their answers via survey, or host them at a collaborative meeting where all participants come with write-ups of their thoughts, then toss them out on a whiteboard to avoid groupthink. You can have as many as you want, but understand that when hunting threats you are time constrained, and the quality of data is important. There’s a finite value in automation; at the end of the day, threat analysts and threat hunters have “eyes on glass” reading, analyzing, interpreting, and reporting. If your list of “most critical assets” is more than five (three is usually optimal if there’s stark diversity), then the hunting and analysis teams’ efforts will usually be divided proportionally according to the weight of priorities so that they may perform their jobs to the best of their abilities. A large list will mean you’ll need to invest commensurate amounts of money in staffing to achieve adequate accuracy, quality (and thoroughness) of investigation, analysis, and the level of reporting desired.

2: IDENTIFY YOUR THREATS

Tweet number two in the series (https://twitter.com/GRC_Ninja/status/854573497741430785) calls for an organization to consider “who would kill to have/destroy those assets? (think of what lethal/strategic value they hold to another).” This is an exercise not only in giving names to the boogeymen that keep you up at night, but also in identifying who is the most feared. This sounds simple enough, right? When asking groups to do this, there are usually three adversaries named: 1) your largest competitor(s), 2) hostile former/current employees, and 3) “hackers.” That third group is a bit too vague for your hunting team to effectively and efficiently execute their duties or provide you a quality threat assessment/intel report. Imagine your threat intelligence report template as “$threat tried to hack/attack us…”, now substitute “hacker” for $threat and read that aloud. [Be honest, you’d probably fire someone for that report.]

Obviously “hacker” needs to be refined. Let’s break that term down into the following groups:

  • advanced persistent threats (APT): one or more actors who are PERSISTENT, which usually means well funded; they don’t stop, ever, they don’t go find ‘an easier’ target, and they rarely take holidays or sleep, or at least so it seems; they are your nemesis. A nation-state actor (someone working for a foreign country/government) is an APT, but not all APTs are nation states! They ARE all persistent.
  • criminals: entities driven by monetary gain, resorting to anything from phishing & fraud to malware and 0-days
  • hacktivists: a group seeking to promote a political agenda or effect social change, usually not in it for the money
  • script kiddies: usually seek bragging rights for disrupting business

Now, using these groups instead of “hacker,” try to think of someone (or some group) who meets one of these definitions and would go to great lengths to steal/destroy the assets listed in step one. Depending on what services or products your organization provides, your answers will vary. A video game company probably has very different threats than a bank, unless of course the owners or employees bank with that bank. A stationery company will have different threats than a pharmaceutical company. Sometimes, however, threats are target-neutral; these threats would be addressed by your security operations center (SOC) first, then escalated to your threat hunters/analysts if necessary. Remember, your threat intel team can’t chase every boogeyman 24/7.

Another thing you’ll want to do is score the threat actors. There are a number of scoring systems out there, and the specifics of that activity are beyond the scope of this article. However, a simple matrix may be helpful when trying to prioritize what/who threatens you. For example, on a scale of 1 to 5, 1 being the lowest, what is each threat actor’s:

  1. level of determination
  2. resources
  3. skill
  4. team size
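To make the matrix concrete, here is a minimal sketch of that scoring exercise. The actor names, ratings, and the simple unweighted sum are hypothetical illustrations, not recommendations; your own ratings will come from the adversary exercise above.

```python
# Hypothetical threat actor scoring matrix using the four attributes above,
# each rated 1 (lowest) to 5 (highest).
ATTRIBUTES = ["determination", "resources", "skill", "team_size"]

def actor_score(ratings):
    """Sum the 1-5 ratings into a single priority score (higher = bigger threat)."""
    return sum(ratings[attr] for attr in ATTRIBUTES)

# Example ratings -- replace with your own assessments.
actors = {
    "APT (nation state)": {"determination": 5, "resources": 5, "skill": 5, "team_size": 4},
    "criminal group":     {"determination": 3, "resources": 3, "skill": 4, "team_size": 3},
    "hacktivists":        {"determination": 4, "resources": 2, "skill": 3, "team_size": 3},
    "script kiddies":     {"determination": 2, "resources": 1, "skill": 1, "team_size": 1},
}

# Rank actors by total score, highest threat first.
for name, ratings in sorted(actors.items(), key=lambda kv: actor_score(kv[1]), reverse=True):
    print(f"{name}: {actor_score(ratings)}")
```

You could also weight the attributes (for example, doubling “determination”) if one matters more to your organization than the others.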

3: WHAT MUST WE STOP?

Next in the tweet thread https://twitter.com/GRC_Ninja/status/854574465585487872 I asked “…What [are] the 3-5 most important things to prevent? Physical/Virtual Theft? Destruction? Corruption? Manipulation? Modification?…” You may think of these within any context you wish, and some to consider are data, hosts/nodes, code execution, people & processes.

During a debate over the minimum security requirement for something highly sensitive, an executive said, to paraphrase, that he didn’t care who could READ the documents, just as long as they couldn’t STEAL them. Needless to say, explaining digital thievery left his brain about to explode and me with carte blanche authority to deny access to everyone and everything as I saw fit. The takeaway is: identify and understand what end state is beyond your acceptable risk threshold; this unacceptable risk is what you MUST stop.

For example, in some cases a breach of a network segment may be undesirable, but it is data exfiltration from that segment that you MUST stop. Another example might be an asset for which destruction is an acceptable risk because you are capable of restoring it quickly; however, that asset being manipulated while remaining live and online might have far-reaching consequences. Think of a dataset that has Black Friday’s pricing (in our oversimplified and horribly architected system). The data is approved and posted to a drop folder where a cron job picks it up, pushes price changes to a transactional database, and it’s published to your e-commerce site. If an attacker were to destroy or corrupt the file, you’re not alarmed because there’s an alert that will sound and a backup copy from which you can restore. However, consider a scenario in which an attacker modifies the prices, the “too-good-to-be-true” prices are pushed to the database and website, and it takes two hours to detect this, on Black Friday.
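One hedged sketch of a control for that manipulation scenario: have the cron job verify the pricing file’s hash against the hash recorded at approval time before pushing anything. The file contents and function names below are hypothetical examples, not the pipeline described above.

```python
# Minimal sketch: refuse to publish a pricing file whose content no longer
# matches what was approved. All data below is illustrative.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the file content."""
    return hashlib.sha256(data).hexdigest()

def safe_to_publish(file_bytes: bytes, approved_hash: str) -> bool:
    """True only if the file is byte-for-byte what was approved."""
    return sha256_of(file_bytes) == approved_hash

# At approval time, record the hash of the approved file.
approved = b"sku123,499.99\nsku456,89.99\n"
approved_hash = sha256_of(approved)

# Attacker-modified "too-good-to-be-true" prices fail the check.
tampered = b"sku123,4.99\nsku456,0.89\n"

print(safe_to_publish(approved, approved_hash))   # original file passes
print(safe_to_publish(tampered, approved_hash))   # tampered file is rejected
```

The point is not this specific mechanism but that “manipulation” needs its own detection, separate from the destruction/corruption alerting already in place.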

Perhaps you have something that is a lethal agent, thus you MUST prevent physical theft by ensuring the IoT and networked security controls have five-nines (99.999%) uptime (down no more than about five minutes per year), are never compromised, and that an unauthorized person is never allowed to access or control it. These are just a couple of scenarios to get you thinking, but the real importance lies in ensuring your list of “must stops” is manageable and that each objective can be allocated sufficient manpower/support when your teams are hunting for threats and your SOC is monitoring events that they’ll escalate to your Threat Intel Team.

Identifying and understanding the activities that must be prevented will drive and prioritize the corresponding hunting activities your teams will conduct when looking for bad guys who may already be in your systems. Referring back to our balanced diamond, consider that an investment in technologies to support constant monitoring should probably not be part of the budget for your threat intel team; however, analytic tools used on the historical outputs from your continuous monitoring systems, security sensors, logs, etc. probably would be. Also consider the cost of manpower, the time to be spent performing activities in support of these strategic objectives, and how the quality of the investigations and reporting will be affected by available manpower and tools.

4: IDENTIFY DATA/INFORMATION NEEDS/REQUIREMENTS

Next in the series of tweets comes https://twitter.com/GRC_Ninja/status/854574988153716736

“4) identify the data/information you [would] NEED to have to prevent actions…[from step] 3 (not mitigate to acceptable risk, PREVENT)”

After completing the first three steps we should know 1) what we need to protect, 2) who we believe we’ll be defending against/hunting for, and 3) what we must prevent from happening. So what are the most critical resources needed for us to achieve our goals? Data and information. At this point in the process we are simply making a list. I recommend a brainstorming session to get started. You may be in charge of developing the Threat Intel program, but you can’t run it by yourself. This step in the process is a great way to give your (potential) team members a chance to have some skin in the game and really feel like they own it. Before you consider asking C-levels for input on this, be considerate of their time and only ask those who have relevant experience, such as someone who has been a blue/red/purple team member.

Here’s a suggestion to get you started. Gather your security geeks and nerds in a room, make sure everyone understands steps 1-3, then ask them to think of what data/information they believe they would need to successfully thwart attackers. Next, put giant post-it-note sheets on the walls, title them “Network”, “Application”, “Host”, “Malware”, “Databases” and “InfoSec Soup”, give each person a marker, then give everyone five minutes to run around the room and brain dump information on each sheet (duplication among participants is fine). Whatever doesn’t fit into the first five categories goes on the last one (something like third-party service provider reports of disgruntled employee terminations so you can actually revoke their credentials in your own system expeditiously). After the five minutes are up, take some time to go over the entries on each sheet, not in detail, just read them off so you make sure you can read them. Allow alibi additions, as something on the list may spark an idea from someone. Then walk away. You may even repeat this exercise with your SOC, NOC, and developers. You’d be surprised how security-minded some of these individuals are (you might even want to recruit them for your Threat Intel team later). If your team is remote, a modified version of this could be a survey.

Come back the next day with fresh eyes, take the note sheets, and review and organize them into a list. Follow up with the teams and begin to prioritize the list into that which exists and we NEED versus WANT, plus my favorite category, ‘Unicorns and Leprechauns’, better known as a wishlist: things which, as far as we know, do not exist but might be built/created.

5: IDENTIFY DATA/INFORMATION RESOURCES

Some feedback I received regarding the next tweet https://twitter.com/GRC_Ninja/status/854575357885906944 where I ask if “you [can] get this information from internal sources in sufficient detail to PREVENT items in 3? If not can you get there?” was that it could be combined with the previous step. Depending on the organization, this is a true statement. However, I expect that in order to complete the task above, there will be multiple meetings and a few iterations of list revision before the step is complete. From a project management view, having these as separate milestones makes it easier to track progress toward the goal of creating the program. Additionally, seeing another milestone complete has immeasurable positive effects, as it creates a sense of accomplishment. Whether you combine or separate them, once this step is complete you have a viable list of information sources identified as necessary, and you can start working on identifying how you might source the information.

Information is data that has been analyzed and given context. In some cases, we trust the data analysis of a source, and we are comfortable trusting the information it produces, such as our internal malware reverse engineers, a vetted blacklist provider, or even just “a guy I know” (which ironically sometimes provides the most reliable tips/information out there). In other cases, such as a pew-pew map, we want to see the raw data so that we may perform our own analysis and draw our own conclusions. The challenge in this step, for internal sources, is to identify all the data sources. This will have secondary and tertiary benefits: you will not only identify redundant sources/reporting (which can help reduce costs later), you will also have to decide which source is your source of truth. You may also discover other unexpected goodies some sources provide that you hadn’t thought of. As an example (not necessarily an endorsement), log files will be on your list of necessary data, and perhaps you find that only portions of these files are pumped into Splunk, whereas the raw log files contain data NOT put into Splunk. In most cases when hunting, the raw data source is preferred. However, by listing both sources, your discovery of this delta may even prompt a modification to the data architecture so the extra fields you want are added to the Splunk repository.
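That raw-versus-indexed delta check can be made explicit. The following is a hedged sketch with hypothetical field names; in practice you would pull the raw log schema and the set of indexed fields from your own systems.

```python
# Minimal sketch: find fields present in the raw log source but missing from
# the downstream index. All field names are hypothetical examples.

raw_log_fields = {"timestamp", "src_ip", "dst_ip", "user", "user_agent", "bytes_out"}
indexed_fields = {"timestamp", "src_ip", "dst_ip", "user"}

# Fields hunters can only get by going back to the raw source.
missing_from_index = raw_log_fields - indexed_fields

print(sorted(missing_from_index))
```

A non-empty delta is exactly the finding that might justify changing the data architecture so the indexed copy carries everything the hunters need.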

In other cases, the data you seek is not currently captured, such as successful login attempts to a resource listed in step one, but it could be if someone turned on that logging. Finally, some data/information you’ve listed simply is not something you have access to, such as underground extremist threats against your industry or gang activity in the region of an asset from step one. You still need this information, and listing all possible sources for it usually identifies a need for relationships to be established and/or monitoring of open sources to be created. Another thing that will emerge is potential vendors that market/promise that they have the kind(s) of information you want. These will each require a cost/benefit analysis and a “bake off” between vendors to see who truly provides something that is value-added to your program and meets your needs. NOTE: most threat intel feeds are at best industry-specific, not organization- or even region-specific, so be mindful of purchasing “relative” threat intelligence feeds.

6: IDENTIFY DATA/INFORMATION COSTS

The next step in the process, mentioned here https://twitter.com/GRC_Ninja/status/854575731585798144, is identifying the gaps: the data/information you need but don’t have. “6) if no to 5, can you buy this information? If yes, what’s your budget? Can you eventually generate it yourself?” It’s not surprising to anyone that sometimes the information we’d like to have is closely held by state and federal agencies. If you’re building this program from the ground up, you will want to establish relationships with these agencies and determine if there’s a cost associated with receiving their information. As mentioned earlier, ISACs for your industry might be a good source, but most of them are not free.

Other information you might be able to generate yourself, but someone else already develops it. In many cases, not only do they develop it, they do it well; it’s useful, and you couldn’t generate it to their quality standards unless it was absolutely the only thing your team worked on. For example, consider Samuel Culper’s Forward Observer https://readfomag.com/. He provides weekly executive summaries and addresses current indicators of:

  • Systems disruption or instability leading to violence
  • An outbreak of global conflict
  • Organized political violence
  • Economic, financial, or monetary instability

All of the above, could be used to cover the tracks of, or spawn a digital (cyber) attack. As an independent threat researcher, this information is something I do not have the time to collect & analyze, and it costs me about the same as grits & bacon once a month at my favorite breakfast place.

In considering our balanced diamond, money/cost is a resource that, if we need a lot of it for one area of our program, usually forces us to give up something else inside that same category, typically manpower or tools, as everyone is pushed to “do more with less”. So how do we prioritize the allocation of funds? Use the ABC prioritization rules: acquire, buy, create. First, see if you can acquire what you need in-house, from another team, tool, repository, etc., as this is the cheapest route. If you cannot acquire it, can you buy it? This may be more expensive, but depending on your timeline and the availability of personnel in-house to create it, this is sometimes cheaper than the next option, creating it. Finally, if you cannot acquire it or buy it, then consider creating it. This is probably the most time-consuming and costly option (from a total cost of ownership perspective) when first standing up a program; however, it may be something that goes on a roadmap for later. Creating a source can allow greater flexibility, control, and validation over your threat intelligence data/information.
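The ABC rule reduces to a simple decision order, which can be sketched as follows. The function and its inputs are a hypothetical illustration of the prioritization described above, not a substitute for the cost/benefit analysis each choice still requires.

```python
# Minimal sketch of the ABC (Acquire, Buy, Create) prioritization rule:
# cheapest viable option first.

def abc_decision(available_in_house: bool, purchasable: bool) -> str:
    """Return which route to take for a given data/information source."""
    if available_in_house:
        return "acquire"   # cheapest: get it from another team, tool, or repository
    if purchasable:
        return "buy"       # costs money, but often faster than building
    return "create"        # most expensive in total cost of ownership; roadmap item

print(abc_decision(True, True))    # already exists internally
print(abc_decision(False, True))   # only available from a vendor
print(abc_decision(False, False))  # must be built
```

As the next paragraph notes, the balanced diamond can override this order, for example when time pressure forces a buy even though a create was feasible.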

Whether to choose A, B, or C will depend on your balanced diamond. If time is not a resource you have, and the program needs to be stood up quickly, you may take the hit on the cost section of your diamond as you buy the data/information from a source. The talent pool from which you have to choose may also affect your decision; the time and cost associated with hiring the talent (if you can’t train someone up) may force your hand into buying instead of creating. In some instances the cost of the data may be prohibitive and you do not have it in-house, so you may have to adjust the time section of your diamond to allow you to hire that resource in. The bottom line is that there is no cookie-cutter “right” answer to how you go about selecting each data resource; one way or another you must select something, and you may need to revise your needs, goals, and long-term objectives.

 

7: DEFINE YOUR THREAT INTELLIGENCE DEVELOPMENT PROCESSES & PERSONNEL REQUIREMENTS

The next tweet in the series is where we really start to get into the “HOW” of our program

https://twitter.com/GRC_Ninja/status/854576206867566596 “7) Once you get the information, how will you evaluate, analyze & report on it? How much manpower will you need? How will you assess ROI?” There’s a lot packed into this tweet and the questions build on each other. Beginning with the first question, you’ll be looking at your day-to-day and weekly activities. How will you evaluate the data & information received? Take, for example, an aggregate customized social media feed: will the results need manual review? If so, how often? Will you be receiving threat bulletins from an Information Sharing and Analysis Center (ISAC)? Who’s going to read/take action on them? One key thing to include in your reporting is the WHO, not just the when and how. A great tool for this is a RACI chart.
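A RACI chart is just structured data mapping tasks to who is Responsible, Accountable, Consulted, and Informed, so it can live in a wiki or a repo. Here is a minimal sketch; the tasks and role names are hypothetical examples, not a prescribed org structure.

```python
# Hypothetical RACI chart for the reporting questions above.
# R = Responsible (does the work), A = Accountable (owns the outcome),
# C = Consulted, I = Informed.
raci = {
    "Review ISAC bulletins":        {"R": "TI Analyst", "A": "TI Lead", "C": "SOC", "I": "CISO"},
    "Triage social media feed":     {"R": "TI Analyst", "A": "TI Lead", "C": "PR",  "I": "SOC"},
    "Publish weekly threat report": {"R": "TI Lead",    "A": "CISO",    "C": "SOC", "I": "Execs"},
}

# Quick view of who does and who owns each task.
for task, roles in raci.items():
    print(f"{task}: Responsible={roles['R']}, Accountable={roles['A']}")
```

Keeping the chart as data also lets you sanity-check it, for example confirming every task has exactly one Accountable party.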

For each information source you listed in steps 5 & 6, you should have a plan to evaluate, analyze & report on it. You will find that, as your team analyzes and evaluates these sources, some of them will become redundant.

The second question in the tweet was “How much manpower will you need?” There are a variety of estimating models, but I urge you to consider basing yours on 1) the number of information sources you’ve identified as necessary and 2) the number of employees in your organization. What’s the point of having a source if you don’t have anyone to use/analyze/report on or mine it? Your own employees are sensors; sometimes they’re also an internal threat. Another point to consider is how much of each analysis effort will be manual at first, and how much can later be automated. Remember, you can never fully automate all analyses, because you can never fully predict human behavior, and every threat still has a human behind it.

The third question in the tweet, “How will you assess ROI?”, is critical. Before you begin your program, you want to define HOW you will evaluate it. Will it be based on bad actors found? The number of incoming reports from a source that you read but that tell you nothing new? Remember our balanced diamond: there are finite finances and time that can be invested into the program. As the daily tasks go on, new information and talent needs will emerge, but more importantly, the internal data and information sources will prove to be either noise or niche. Other sources, such as an intel feed or membership in an ISAC, might not prove to be producing valuable information or intelligence. I’d recommend at minimum an annual evaluation (using your pre-defined metrics for your qualitative ROI), if not a semi-annual review, of any external/paid sources to ensure they are reliable and providing value. If your team tracks this at least monthly, it’ll be much easier when annual budget reviews convene.
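The monthly tracking suggested above can be as lightweight as a running tally per source. This sketch uses hypothetical metrics and numbers; the point is that by review time the "noise or niche" verdict falls out of data you already have.

```python
# Hedged sketch: monthly source-value tracking feeding the annual ROI review.
# Sources, metric names, and numbers are hypothetical.
monthly = [
    {"source": "ISAC feed", "reports_read": 40, "actionable": 2},
    {"source": "ISAC feed", "reports_read": 35, "actionable": 0},
    {"source": "Vendor X",  "reports_read": 12, "actionable": 5},
]

def actionable_rate(source: str) -> float:
    """Fraction of reports from a source that led to action (one rough ROI metric)."""
    rows = [m for m in monthly if m["source"] == source]
    read = sum(m["reports_read"] for m in rows)
    hits = sum(m["actionable"] for m in rows)
    return hits / read if read else 0.0

# A persistently low rate makes a paid source a candidate for the chopping block.
print(f"ISAC feed: {actionable_rate('ISAC feed'):.1%}")
print(f"Vendor X:  {actionable_rate('Vendor X'):.1%}")
```

Actionable rate is only one possible metric; step 8 below is where you decide which metrics actually define success for your program.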

REMINDER: Defining the metrics for ROI in advance does not mean you cannot add or refine the metrics as the program progresses. I recommend reviewing them every 6 months to determine if they need revising. Also, don’t forget that new information needs will emerge as your program grows. Take them, and go back through steps 5-7 before asking for them.

8: DEFINE SUCCESS AND THE METRICS THAT REFLECT IT

Good advice I’ve heard time and time again is: always begin with the end in mind. The next tweet in the series https://twitter.com/GRC_Ninja/status/854576964065275904 touches on this by asking “8) what will success look like? # of compromises? Thwarted attempts? Time before bad guys detected? Lost revenue? Good/Bad press hits?” Granted, 140 characters is not nearly enough to list all of the possible metrics one could use, but the objective of that tweet and this blog is not to list them for you, rather to encourage you to think of your own.

Before you start hunting threats and developing a threat intelligence program, you’ll need a measuring stick for success, for without one how will you know if you’re on the right path or have achieved your goals? As with everything in business, metrics are used to justify budgets and evaluate performance (become familiar with the buzzword key performance indicators, or KPIs, also known as status or stoplight reporting: red, yellow, green).

In a very young program, I’d encourage you to include a list of “relationships” you need/want to establish outside vs inside the organization, and the number of them that you do create. You can find other ideas for metrics with this search: https://www.google.com/#q=%22threat+intelligence+metrics%22

 

9: IDENTIFY INTERNAL AND EXTERNAL CONTINUOUS IMPROVEMENT MEASURES

The final tweet in the series https://twitter.com/GRC_Ninja/status/854577542778499072 addresses the three most important things that, in my experience, are heavily overlooked, if not completely forgotten, in most threat intelligence (and InfoSec) programs. Summed up in three questions to fit into the 140-character limit: “9) How can you continue to improve? How will you training & staying current? How will you share lessons learned with the community?”

Addressing them in reverse order: sharing experiences (and threat intelligence) can be likened to your body’s ability to fight off disease. If you’re never exposed to a germ, your body won’t know how to fight it off. If you have an immune deficiency (a lack of threat intel and InfoSec knowledge), your body is in a weakened state and you get sick (compromised) more easily. Sharing what you know/learn at local security group meetings, conferences, schools and universities, etc. not only helps others, it helps you. It pays dividends for years to come. Additionally, people will come to trust you and will share information with you that you might not get anywhere else except the next news cycle, and by then it is too late.

Next, once you’ve designed this awesome threat intelligence program, how are you going to keep this finely tuned machine running at top-notch levels? The answer is simple: invest in your people. Pay for them to attend security conferences, and yes, it is fair to mandate they attend specific talks and provide a knowledge-sharing summary. It is also important to understand that much of the value of attending these events lies in the networking that goes on and the information shared at “lobby-con” and “smoker-con”, where nerds are simply geeking out and allowing their brains to be picked. Additionally, you can find valuable training at conferences, sometimes at discounted prices that you won’t find anywhere else. These are also great places to find talent if you’re looking to build or expand a team.

Speaking of training, include in your budget funds to send your people to at least one training per year if not more. Of course you want to ensure they stay on after you pay for it so it is understandable if you tie a prorated repayment clause to it. It is easier to create a rock star than it is to hire one.

Finally, how can you continue to improve? The answer for each team will be different, but if you aren’t putting it on your roadmaps and integrating it into your one-on-one sessions with your employees, you’ll quickly become irrelevant and outdated. Sometimes a great idea for improvement pops into your head and then two hours later you cannot remember it. Create a space (virtual or physical) where people can drop ideas that can later be reviewed in a team meeting or a one-on-one session. I find that whiteboard walls are great for this (paint a wall with special paint that allows it to act as a whiteboard). Sometimes an IRC-styled channel, shared doc, or wiki page will work too.

SUMMARY

This blog provides a practical outline for designing a threat intelligence program in the digital realm, also known as cyberspace, and introduces a four-point constraint model: time, money, design/accuracy, and quality.

 

As with any threat intelligence, we must understand the digital landscape and know what it is that must be protected. In order to protect it, we must have good visibility, and simply having more data does not mean we have better visibility or better intelligence. Instead, an abundance of data that isn’t good data (or is redundant) becomes noise. Discussed above was the next critical step in defining the program: identifying what we need to know, where we can get the answers and information we need, and how much, if anything, those answers and information will cost. Some programs will run on a shoestring budget while others will be swimming in a sea of money. Either way, reasonable projections and responsible spending are a must.

 

Once the major outlining is done, we start to dig a little deeper into the actual execution of the program. We discussed figuring out exactly how we will (or would like to) develop and report the threat intelligence so that you can adequately source/hire the manpower and talent needed to meet these goals. Then we highlighted the all-important task of defining success, for without a starting definition, how can we show whether we are succeeding or failing? Remember to revisit the definition and metrics regularly, at least semi-annually, and refine them as needed.

 

Finally, we close out the program outline by remembering to plan growth into our team. That growth should include training and sharing lessons learned internally and externally. Remember to leverage your local security community social groups, and the multi-faceted benefits of security conferences, which include networking, knowledge from talks, and knowledge/information gained by collaborating in the social hangout spots.
Thank you for your time. Please share your experiences and constructive commentary below and share this blog on your forums of choice. For consultation inquiries, the fastest way to reach me is via DM on Twitter.