Mirror Mirror On The Wall

Security updates for Monday

Mon Dec 17 15:22:00 2018
lwn.net

Security updates have been issued by Debian (php5, poppler, and samba), Fedora (firefox, mbedtls, nbdkit, pdns-recursor, php, php-symfony, php-symfony3, and php-symfony4), Gentoo (CouchDB, scala, and spamassassin), Mageia (firefox, libwpd, nss, and thunderbird), openSUSE (Chromium, cups, ghostscript, kernel, openvswitch, phpMyAdmin, qemu, and tcpdump), Red Hat (RHGS WA), and SUSE (ansible, openldap2, openvswitch, qemu, and tcpdump).

#categories

4.20-rc7 and stable kernels

Mon Dec 17 09:24:00 2018
lwn.net

Linus has released 4.20-rc7, saying: "The plan remains the same: if everything continues normally, I'll release 4.20 just before christmas, and then just have a more leisurely merge window than normal."

On the stable side, 4.19.10, 4.14.89, and 4.9.146 are out with a new set of important fixes.

#categories

[$] Relief for retpoline pain

Fri Dec 14 22:27:00 2018
lwn.net

Indirect function calls — calls to a function whose address is stored in a pointer variable — have never been blindingly fast, but the Spectre hardware vulnerabilities have made things far worse. The indirect branch predictor used to speed up indirect calls in the CPU can no longer be used, and performance has suffered accordingly. The "retpoline" mechanism was a brilliant hack that proved faster than the hardware-based solutions that were tried at the beginning. While retpolines took a lot of the pain out of Spectre mitigation, experience over the last year has made it clear that they still hurt. It is thus not surprising that developers have been looking for alternatives to retpolines; several of them have shown up on the kernel lists recently.

#categories

Security updates for Friday

Fri Dec 14 15:55:00 2018
lwn.net

Security updates have been issued by CentOS (ghostscript, git, java-1.7.0-openjdk, java-11-openjdk, kernel, NetworkManager, python-paramiko, ruby, sos-collector, thunderbird, and xorg-x11-server), Debian (gcc-4.9), and SUSE (amanda, ntfs-3g_ntfsprogs, and tiff).

#categories

The Origin of the Term Indicators of Compromise (IOCs)

Fri Dec 14 10:01:00 2018
taosecurity.blogspot.com

I am an historian. I practice digital security, but I earned a bachelor of science degree in history from the United States Air Force Academy. (1)

Historians create products by analyzing artifacts, among which the most significant is the written word.

In my last post, I talked about IOCs, or indicators of compromise. Do you know the origin of the term? I thought I did, but I wanted to rely on my historian's methodology to invalidate or confirm my understanding.

I became aware of the term "indicator" as an element of indications and warning (I&W) when I attended Air Force Intelligence Officer's school in 1996-1997. I will return to this shortly, but I did not see the term "indicator" used in a digital security context until I encountered the work of Kevin Mandia.

In August 2001, shortly after its publication, I read Incident Response: Investigating Computer Crime, by Kevin Mandia, Chris Prosise, and Matt Pepe (Osborne/McGraw-Hill). I was so impressed by this work that I managed to secure a job with their company, Foundstone, by April 2002. I joined the Foundstone incident response team, which was led by Kevin and consisted of Matt Pepe, Keith Jones, Julie Darmstadt, and me.

I Tweeted earlier today that Kevin invented the term "indicator" (in the IR context) in that 2001 edition, but a quick review of the hard copy in my library does not show its usage, at least not prominently. I believe we were using the term in the office but that it had not appeared in the 2001 book. Documentation would seem to confirm that, as Kevin was working on the second edition of the IR book (to which I contributed), and that version, published in 2003, features the term "indicator" in multiple locations.

In fact, the earliest use of the term "indicators of compromise" in print, in a digital security context, appears on page 280 of Incident Response & Computer Forensics, 2nd Edition.


From other uses of the term "indicators" in that IR book, you can observe that IOC wasn't a formal, independent concept at this point, in 2003. In the same excerpt above you see "indicators of attack" mentioned.

The first citation of the term "indicators" in the 2003 book shows it is meant as an investigative lead or tip:


Did I just give up my search at this point? Of course not.

If you do time-limited Google searches for "indicators of compromise," after weeding out patent filings that reference later work (from FireEye, in 2013), you might find this document, which concludes with this statement:

Indicators of compromise are from Lynn Fischer, "Looking for the Unexpected," Security Awareness Bulletin, 3-96, 1996. Richmond, VA: DoD Security Institute.

Here the context is the compromise of a person with a security clearance.

In the same spirit, the earliest reference to "indicator" in a security-specific, detection-oriented context appears in the patent Method and system for reducing the rate of infection of a communications network by a software worm (6 Dec 2002). Stuart Staniford is the lead author; he was later chief scientist at FireEye, although he left before FireEye acquired Mandiant (and me).

While Kevin et al. were publishing the second edition of their IR book in 2003, I was writing my first book, The Tao of Network Security Monitoring. I began chapter two with a discussion of indicators, inspired by my Air Force intelligence officer training in I&W and Kevin's use of the term at Foundstone.

You can find chapter two in its entirety online. In the chapter I also used the term "indicators of compromise," in the spirit Kevin used it; but again, it was not yet a formal, independent term.

My book was published in 2004, followed by two more in rapid succession.

The term "indicators" didn't really make a splash until 2009, when Mike Cloppert published a series on threat intelligence and the cyber kill chain. The most impactful in my opinion was Security Intelligence: Attacking the Cyber Kill Chain. Mike wrote:


I remember very much enjoying these posts, but the Cyber Kill Chain was the aspect that had the biggest impact on the security community. Mike does not say "IOC" in the post. Where he does say "compromise," he's using it to describe a victimized computer.

The stage is now set for seeing indicators of compromise in a modern context. Drum roll, please!

The first documented appearance of the term indicators of compromise, or IOCs, in the modern context, occurred in basically two places simultaneously, with ultimate credit going to the same organization: Mandiant.

The first Mandiant M-Trends report, published on 25 Jan 2010, provides the following description of IOCs on page 9:


The next day, 26 Jan 2010, Matt Frazier published Combat the APT by Sharing Indicators of Compromise to the Mandiant blog. Matt wrote to introduce an XML-based instantiation of IOCs, which could be read and created using free Mandiant tools.


Note how complicated Matt's IOC example is. It's not a file hash (alone), or a file name (alone), or an IP address, etc. It's a Boolean expression of many elements. You can read in the text that this original IOC definition rejects what some commonly consider "IOCs" to be. Matt wrote:

Historically, compromise data has been exchanged in CSV or PDFs laden with tables of "known bad" malware information - name, size, MD5 hash values and paragraphs of imprecise descriptions... (emphasis added)
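
To make that point concrete, here is a minimal sketch of how a composite, Boolean-style indicator might be evaluated against host artifacts. This is not Mandiant's OpenIOC schema; the field names, hash value, file path, and registry key are assumptions for illustration only.

def matches_ioc(host):
    """Return True when the host's artifacts satisfy the composite indicator."""
    file_hit = any(
        f["md5"] == "0123456789abcdef0123456789abcdef"          # placeholder hash
        or (f["name"].lower() == "svchost.exe"
            and not f["path"].lower().startswith(r"c:\windows\system32"))
        for f in host.get("files", [])
    )
    registry_hit = any(
        r"\currentversion\run" in key["path"].lower()
        and "updater.exe" in key["value"].lower()
        for key in host.get("registry", [])
    )
    # The indicator fires only when file AND registry evidence are both present,
    # not on a single hash, name, or address alone.
    return file_hit and registry_hit

example_host = {
    "files": [{"name": "svchost.exe",
               "path": r"C:\Users\Public\svchost.exe",
               "md5": "ffffffffffffffffffffffffffffffff"}],
    "registry": [{"path": r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run",
                  "value": r"C:\Users\Public\updater.exe"}],
}
print(matches_ioc(example_host))   # prints True

A bare list of MD5 hashes, by contrast, is a single condition with no structure, which is exactly what the passage above argues against.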

On a related note, I looked for early citations of work on defining IOCs, and found a paper by Simson Garfinkel, a well-respected forensic analyst. He gave credit to Matt Frazier and Mandiant, writing in 2011:

Frazier (2010) of MANDIANT developed Indicators of Compromise (IOCs), an XML-based language designed to express signatures of malware such as files with a particular MD5 hash value, file length, or the existence of particular registry entries. There is a free editor for manipulating the XML. MANDIANT has a tool that can use these IOCs to scan for malware and the so-called “Advanced Persistent Threat.”

Starting in 2010, the debate focused on the format for IOCs, and how to produce and consume them. We can see in this written evidence from 2010, however, a definition of indicators of compromise and IOCs that contains all the elements that would be recognized in current usage.

tl;dr Mandiant invented the term indicators of compromise, or IOCs, in 2010, building off the term "indicator," introduced widely in a detection context by Kevin Mandia, no later than his 2003 incident response book.

(1) Yes, a BS, not a BA -- thank you USAFA for 14 mandatory STEM classes.

#categories

Even More on Threat Hunting

Fri Dec 14 10:01:00 2018
taosecurity.blogspot.com

In response to my post More on Threat Hunting, Rob Lee asked:

[D]o you consider detection through ID’ing/“matching” TTPs not hunting?

To answer this question, we must begin by clarifying "TTPs." Most readers know TTPs to mean tactics, techniques and procedures, defined by David Bianco in his Pyramid of Pain post as:

How the adversary goes about accomplishing their mission, from reconnaissance all the way through data exfiltration and at every step in between.

In case you've forgotten David's pyramid, it looks like this.


It's important to recognize that the pyramid consists of indicators of compromise (IOCs). David uses the term "indicator" in his original post, but his follow-up post from his time at Sqrrl makes this clear:

There are a wide variety of IoCs ranging from basic file hashes to hacking Tactics, Techniques and Procedures (TTPs). Sqrrl Security Architect, David Bianco, uses a concept called the Pyramid of Pain to categorize IoCs. 

At this point it should be clear that I consider TTPs to be one form of IOC.

In The Practice of Network Security Monitoring, I included the following workflow:

You can see in the second column that I define hunting as "IOC-free analysis." On page 193 of the book I wrote:

Analysis is the process of identifying and validating normal, suspicious, and malicious activity. IOCs expedite this process. Formally, IOCs are manifestations of observable or discernible adversary actions. Informally, IOCs are ways to codify adversary activity so that technical systems can find intruders in digital evidence...

I refer to relying on IOCs to find intruders as IOC-centric analysis, or matching. Analysts match IOCs to evidence to identify suspicious or malicious activity, and then validate their findings.

Matching is not the only way to find intruders. More advanced NSM operations also pursue IOC-free analysis, or hunting. In the mid-2000s, the US Air Force popularized the term hunter-killer in the digital world. Security experts performed friendly force projection on their networks, examining data and sometimes occupying the systems themselves in order to find advanced threats. 

Today, NSM professionals like David Bianco and Aaron Wade promote network “hunting trips,” during which a senior investigator with a novel way to detect intruders guides junior analysts through data and systems looking for signs of the adversary. 

Upon validating the technique (and responding to any enemy actions), the hunters incorporate the new detection method into a CIRT’s IOC-centric operations. (emphasis added)

Let's consider Chris Sanders' blog post titled Threat Hunting for HTTP User Agents as an example of my definition of hunting. 

I will build a "hunting profile" via excerpts (in italics) from his post:

Assumption: "Attackers frequently use HTTP to facilitate malicious network communication."

Hypothesis: If I find an unusual user agent string in HTTP traffic, I may have discovered an attacker.

Question: “Did any system on my network communicate over HTTP using a suspicious or unknown user agent?”

Method: "This question can be answered with a simple aggregation wherein the user agent field in all HTTP traffic for a set time is analyzed. I’ve done this using Sqrrl Query Language here:

SELECT COUNT(*),user_agent FROM HTTPProxy GROUP BY user_agent ORDER BY COUNT(*) ASC LIMIT 20

This query selects the user_agent field from the HTTPProxy data source and groups and counts all unique entries for that field. The results are sorted by the count, with the least frequent occurrences at the top."

Results: Chris offers advice on how to interpret the various user agent strings produced by the query.
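
For readers who want to reproduce the aggregation outside Sqrrl, here is a minimal Python sketch of the same least-frequent-user-agent query. The log file name and its format (one JSON object per line with a user_agent field) are assumptions, not part of Chris's post.

import json
from collections import Counter

# Count user agents seen in an exported HTTP proxy log.
counts = Counter()
with open("http_proxy.log") as log:
    for line in log:
        try:
            record = json.loads(line)
        except ValueError:
            continue                      # skip malformed lines
        agent = record.get("user_agent")
        if agent:
            counts[agent] += 1

# Least frequent first, limited to 20, mirroring the Sqrrl query above.
for agent, count in sorted(counts.items(), key=lambda item: item[1])[:20]:
    print(f"{count:8d}  {agent}")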

This is the critical part: Chris did not say "look for *this user agent*." He offered the reader an assumption, a hypothesis, a question, and a method. It is up to the defender to investigate the results. This, for me, is true hunting.

If Chris had instead referred users to this list of malware user agents (for example) and said look for "Mazilla/4.0", then I consider that manual (human) matching. If I created a Snort or Suricata rule to look for that user agent, then I consider that automated (machine) matching.
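
For contrast, here is the matching version of the same work, driven by a known-bad list rather than an open-ended question. The list contents and log format are assumptions; in practice this logic would live in a Snort/Suricata rule or a SIEM query rather than a script.

import json

# Known-bad user agents drive an automated lookup. List contents, file name,
# and log format are illustrative assumptions.
KNOWN_BAD_USER_AGENTS = {
    "Mazilla/4.0",                                   # the misspelled agent cited above
    "Mozilla/4.0 (compatible; MSIE 6.0; Win32)",     # hypothetical additional entry
}

with open("http_proxy.log") as log:
    for line in log:
        try:
            record = json.loads(line)
        except ValueError:
            continue
        if record.get("user_agent") in KNOWN_BAD_USER_AGENTS:
            # The machine flags the hit; an analyst still validates the finding.
            print(f"match: {record.get('src_ip', 'unknown source')} used {record['user_agent']}")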

This is where my threat hunting definition likely diverges from modern practice. Analyst Z sees the results of Chris' hunt and thinks "Chris found user agent XXXX to be malicious, so I should go look for it." Analyst Z queries his or her data and does or does not find evidence of user agent XXXX.

I do not consider analyst Z's actions to be hunting. I consider it matching. There is nothing wrong with this. In fact, one of the purposes of hunting is to provide new inputs to the matching process, so that future hunting trips can explore new assumptions, hypotheses, questions, and methods, and let the machines do the matching on IOCs already found to be suggestive of adversary activity. This is why I wrote in my 2013 book "Upon validating the technique (and responding to any enemy actions), the hunters incorporate the new detection method into a CIRT’s IOC-centric operations."

The term "hunting" is a victim of its own success, with emotional baggage. We defenders have finally found a way to make "blue team" work appealing to the wider security community. Vendors love this new way to market their products. "If you're not hunting, are you doing anything useful?" one might ask.

Compared to "I'm threat hunting!" (insert chest beating), the alternative, "I'm matching!" (womp womp), seems sad. 

Nevertheless, we must remember that threat hunting methodologies were invented to find adversary activity for which there were no IOCs. Hunting was IOC-free analysis because we didn't know what to look for. Once you know what to look for, you are matching. Both forms of detection require analysis to validate adversary activity, of course. Let's not forget that.

I'm also very thankful, however it's defined or packaged, that people are excited to search for adversary activity in their environment, whether via matching or hunting. It's a big step from the mindset of 10 years ago, which had a "prevention works" milieu.

tl;dr Because TTPs are a form of IOC, detection via matching IOCs is a form of matching, and not hunting.

#categories

More on Threat Hunting

Fri Dec 14 10:01:00 2018
taosecurity.blogspot.com

Earlier this week hellor00t asked via Twitter:

Where would you place your security researchers/hunt team?

I replied:

For me, "hunt" is just a form of detection. I don't see the need to build a "hunt" team. IR teams detect intruders using two major modes: matching and hunting. Junior people spend more time matching. Senior people spend more time hunting. Both can and should do both functions.

This inspired Rob Lee to blog a response, from which I extract his core argument:

[Hunting] really isn’t, to me, about detecting threats...

Hunting is a hypothesis-led approach to testing your environment for threats. The purpose, to me, is not in finding threats but in determining what gaps you have in your ability to detect and respond to them...

In short, hunting, to me, is a way to assess your security (people, process, and technology) against threats while extending your automation footprint to better be prepared in the future. Or simply stated, it’s incident response without the incident that’s done with a purpose and contributes something. 

As background for my answer, I recommend my March 2017 post The Origin of Threat Hunting, which cites my article "Become a Hunter," published in the July-August 2011 issue of Information Security Magazine. I wrote it in the spring of 2011, when I was director of incident response for GE-CIRT.

For the term "hunting," I give credit to briefers from the Air Force and NSA who, in the mid-2000s, briefed "hunter-killer" missions to the Red Team/Blue Team Symposium at the Johns Hopkins University Applied Physics Lab in Laurel, MD.

As a comment to that post, Tony Sager, who ran NSA VAO at the time I was briefed at ReBl, described hunting thus:

[Hunting] was an active and sustained search for Attackers...

For us, "Hunt" meant a very planned and sustained search, taking advantage of the existing infrastructure of Red/Blue Teams and COMSEC Monitoring, as well as intelligence information to guide the search. 

For the practice of hunting, as I experienced it, I give credit to our GE-CIRT incident handlers -- David Bianco,  Ken Bradley, Tim Crothers, Tyler Hudak, Bamm Visscher, and Aaron Wade -- who took junior analysts on "hunting trips," starting in 2008-2009.

It is very clear, to me, that hunting has always been associated with detecting an adversary, not "determining what gaps you have in your ability to detect and respond to them," as characterized by Rob.

For me, Rob is describing the job of an enterprise visibility architect, which I described in a 2007 post:

[W]e are stuck with numerous platforms, operating systems, applications, and data (POAD) for which we have zero visibility. 

I suggest that enterprises consider hiring or assigning a new role -- Enterprise Visibility Architect. The role of the EVA is to identify visibility deficiencies in existing and future POAD and design solutions to instrument these resources.

A primary reason to hire an enterprise visibility architect is to build visibility in, which I described in several posts, including this one from 2009 titled Build Visibility In. As a proponent of the "monitor first" school, I will always agree that it is important to identify and address visibility gaps.

So where do we go from here?

Tony Sager, as one of my wise men, offers sage advice at the conclusion of his comment:

"Hunt" emerged as part of a unifying mission model for my Group in the Information Assurance Directorate at NSA (the defensive mission) in the mid-late 2000's. But it was also a way to unify the relationship between IA and the SIGINT mission - intelligence as the driver for Hunting. The marketplace, of course, has now brought its own meaning to the term, but I just wanted to share some history. 

In my younger days I might have expressed much more energy and emotion when encountering a different viewpoint. At this point in my career, I'm more comfortable with other points of view, so long as they do not result in harm, or a waste of my taxpayer dollars, or other clearly negative consequences. I also appreciate the kind words Rob offered toward my point of view.

tl;dr I believe the definition and practice of hunting has always been tied to adversaries, and that Rob describes the work of an enterprise visibility architect when he focuses on visibility gaps rather than adversary activity.

Update 1: If in the course of conducting a hunt you identify a visibility or resistance deficiency, that is indeed beneficial. The benefit, however, is derivative. You hunt to find adversaries. Identifying gaps is secondary although welcome.

The same would be true of hunting and discovering misconfigured systems, or previously unidentified assets, or unpatched software, or any of the other myriad facts on the ground that manifest when one applies Clausewitz's directed telescope to one's computing environment.

#categories

Have Network, Need Network Security Monitoring

Fri Dec 14 10:01:00 2018
taosecurity.blogspot.com

I have been associated with network security monitoring my entire cybersecurity career, so I am obviously biased towards network-centric security strategies and technologies. I also work for a network security monitoring company (Corelight), but I am not writing this post in any corporate capacity.

There is a tendency in many aspects of the security operations community to shy away from network-centric approaches. The rise of encryption and cloud platforms, the argument goes, makes methodologies like NSM less relevant. The natural response seems to be migration towards the endpoint, because it is still possible to deploy agents on general purpose computing devices in order to instrument and interdict on the endpoint itself.

It occurred to me this morning that this tendency ignores the fact that the trend in computing is toward closed computing devices. Mobile platforms, especially those running Apple's iOS, are not friendly to introducing third party code for the purpose of "security." In fact, one could argue that iOS is one of, if not the, most secure platforms, thanks to this architectural decision. (Timely and regular updates, a policed applications store, and other choices are undoubtedly part of the security success of iOS, to be sure.)

How is the endpoint-centric security strategy going to work when security teams are no longer able to install third party endpoint agents? The answer is -- it will not. What will security teams be left with?

The answer is probably application logging, i.e., usage and activity reports from the software with which users interact. Most of this will likely be hosted in the cloud. Therefore, security teams responsible for protecting work-anywhere, remote-intensive users accessing cloud-hosted assets will really have only cloud-provided data to analyze and escalate.

It's possible that the endpoint providers themselves might assume a greater security role. In other words, Apple and other manufacturers provide security information directly to users. This could be like Chase asking if I really made a purchase. This model tends to break down when one is using a potentially compromised asset to ask the user if that asset is compromised.

In any case, this vision of the future ignores the fact that someone will still be providing network services. My contention is that if you are responsible for a network, you are responsible for monitoring it.

It is negligent to provide network services but ignore abuse of those services.

If you disagree and cite the "common carrier" exception, I would agree to a certain extent. However, one cannot easily fall back on that defense in an age where Facebook, Twitter, and other platforms are being told to police their infrastructure or face ever more government regulation.

At the end of the day, using modern Internet services means, by definition, using someone's network. Whoever is providing that network will need to instrument it, if only to avoid the liability associated with misuse. Therefore, anyone operating a network would do well to continue to deploy and operate network security monitoring capabilities.

We may be in a golden age of endpoint visibility, but closure of those platforms will end the endpoint's viability as a source of security logging. So long as there are networks, we will need network security monitoring.

#categories

Firewalls and the Need for Speed

Fri Dec 14 10:14:00 2018
taosecurity.blogspot.com

I was looking for resources on campus network design and found these slides (pdf) from a 2011 Network Startup Resource Center presentation. These two caught my attention:

This bothered me, so I Tweeted about it.

This started some discussion, and prompted me to see what NSRC suggests for architecture these days. You can find the latest, from April 2018, here. Here is the bottom line for their suggested architecture:

What do you think of this architecture?

My Tweet has attracted some attention from the high speed network researcher community, some of whom assume I must be a junior security apprentice who equates "firewall" with "security." Long-time blog readers will laugh at that, like I did. So what was my problem with the original recommendation, and what problems do I have (if any) with the 2018 version?

First, let's be clear that I have always differentiated between visibility and control. A firewall is a poor visibility tool, but it is a control tool. It controls inbound or outbound activity according to its ability to perform in-line traffic inspection. This inline inspection comes at a cost, which is the major concern of those responding to my Tweet.

Notice how the presentation author thinks about firewalls. In the slides above, from the 2018 version, he says "firewalls don't protect users from getting viruses" because "clicked links while browsing" and "email attachments" are "both encrypted and firewalls won't help." Therefore, "since firewalls don't really protect users from viruses, let's focus on protecting critical server assets," because "some campuses can't develop the political backing to remove firewalls for the majority of the campus."

The author is arguing that firewalls are an inbound control mechanism, and they are ill-suited for the most prevalent threat vectors for users, in his opinion: "viruses," delivered via email attachment, or "clicked links."

Mail administrators can protect users from many malicious attachments. Desktop anti-virus can protect users from many malicious downloads delivered via "clicked links." If that is your worldview, of course firewalls are not important.

His argument for firewalls protecting servers is, implicitly, that servers may offer services that should not be exposed to the Internet. Rather than disabling those services, or limiting access via identity or local address restrictions, he says a firewall can provide that inbound control.

These arguments completely miss the point that firewalls are, in my opinion, more effective as an outbound control mechanism. For example, a firewall helps restrict adversary access to his victims when they reach outbound to establish post-exploitation command and control. This relies on the firewall identifying the attempted C2 as being malicious. To the extent intruders encrypt their C2 (and sites fail to inspect it) or use covert mechanisms (e.g., C2 over Twitter), firewalls will be less effective.

The previous argument assumes admins rely on the firewall to identify and block malicious outbound activity. Admins might alternatively identify the activity themselves, and direct the firewall to block outbound activity from designated compromised assets or to designated adversary infrastructure.
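
As a deliberately simplified illustration of that outbound-control idea, the sketch below turns a list of designated compromised assets and known adversary infrastructure into egress-blocking rules. The addresses are placeholders and the iptables syntax assumes a Linux-based firewall; none of this comes from the NSRC material.

# Generate egress-blocking rules from analyst findings. Addresses are
# placeholders (RFC 1918 / RFC 5737 ranges).
compromised_assets = ["10.1.2.3", "10.1.2.7"]                   # internal hosts flagged by the CIRT
adversary_infrastructure = ["203.0.113.10", "198.51.100.25"]    # identified C2 endpoints

rules = []
for src in compromised_assets:
    rules.append(f"iptables -A FORWARD -s {src} -j DROP")       # cut all outbound from the asset
for dst in adversary_infrastructure:
    rules.append(f"iptables -A FORWARD -d {dst} -j DROP")       # block traffic toward known C2

for rule in rules:
    print(rule)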

As some Twitter responders said, it's possible to do some or all of this without using a stateful firewall. I'm aware of the cool tricks one can play with routing to control traffic. Ken Meyers and I wrote about some of these approaches in 2005 in my book Extrusion Detection. See chapter 5, "Layer 3 Network Access Control."

Implementing these non-firewall-based security choices requires a high degree of diligence, which in turn requires visibility. I did not see this emphasized in the NSRC presentation. For example:


These are fine goals, but I don't equate "manageability" with visibility or security. I don't think "problems and viruses" captures the magnitude of the threat to research networks.

The core of the reaction to my original Tweet is that I don't appreciate the need for speed in research networks. I understand that. However, I can't understand the requirement for "full bandwidth, un-filtered access to the Internet." That is a recipe for disaster.

On the other hand, if you define partner specific networks, and allow essentially site-to-site connectivity with exquisite network security monitoring methods and operations, then I do not have a problem with eliminating firewalls from the architecture. I do have a problem with unrestricted access to adversary infrastructure.

I understand that security doesn't exist to serve itself. Security exists to enable an organizational mission. Security must be a partner in network architecture design. It would be better to emphasize enhanced monitoring for the networks discussed above, and to think carefully about enabling speed without restrictions. The NSRC resources on the science DMZ merit consideration in this case.

#categories

Defining Counterintelligence

Fri Dec 14 10:14:00 2018
taosecurity.blogspot.com

I've written about counterintelligence (CI) before, but I realized today that some of my writing, and the writing of others, may be confused as to exactly what CI means.

The authoritative place to find an American definition for CI is the United States National Counterintelligence and Security Center. I am more familiar with the old name of this organization, the  Office of the National Counterintelligence Executive (ONCIX).

The 2016 National Counterintelligence Strategy cites Executive Order 12333 (as amended) for its definition of CI:

Counterintelligence – Information gathered and activities conducted to identify, deceive, exploit, disrupt, or protect against espionage, other intelligence activities, sabotage, or assassinations conducted for or on behalf of foreign powers, organizations, or persons, or their agents, or international terrorist organizations or activities. (emphasis added)

The strict interpretation of this definition is countering foreign nation state intelligence activities, such as those conducted by China's Ministry of State Security (MSS), the Foreign Intelligence Service of the Russian Federation (SVR RF), Iran's Ministry of Intelligence, or the military intelligence services of those countries and others.

In other words, counterintelligence is countering foreign intelligence. The focus is on the party doing the bad things, and less on what the bad thing is.

The definition, however, is loose enough to encompass others; "organizations," "persons," and "international terrorist organizations" are in scope, according to the definition. This is just about everyone, although criminals are explicitly not mentioned.

The definition is also slightly unbounded by moving beyond "espionage, or other intelligence activities," to include "sabotage, or assassinations." In those cases, the assumption is that foreign intelligence agencies and their proxies are the parties likely to be conducting sabotage or assassinations. In the course of their CI work, paying attention to foreign intelligence agents, the CI team may encounter plans for activities beyond collection.

The bottom line for this post is a cautionary message. It's not appropriate to call all intelligence activities "counterintelligence." It's more appropriate to call countering adversary intelligence activities counterintelligence.

You may use similar or the same approaches as counterintelligence agents when performing your cyber threat intelligence function. For example, you may recruit a source inside a carding forum, or you may plant your own source in a carding forum. This is similar to turning a foreign intelligence agent, or inserting your own agent in a foreign intelligence service. However, activities directed against a carding forum are not counterintelligence. Activities directed against a foreign intelligence service are counterintelligence.

The nature and target of your intelligence activities, not necessarily the methods you use, determine whether they are counterintelligence. Again, this is in keeping with the stricter definition, and not becoming a victim of scope creep.


#categories
