There’s plenty of crossover between violent crime and computer crime. Probably more than most people would expect, particularly when criminals see themselves as outsiders with nothing left to lose. There’s not a whole lot stopping cyber criminals from doing things that aren’t rational. In most countries, you’re much better off getting a legitimate computer job with your technical skills, unless you just can’t get along with people or you’re maladjusted. In general, becoming a cyber criminal is not a very good career path.
Cyber criminals who make money from their activities will often become involved with organised crime for the purposes of money laundering. Other cyber criminals, involved in the kinds of activities that would be life-ruining if they were ever uncovered, simply don’t have a lot to lose. They may be willing to go to great lengths to cover up their crimes, lengths which may spill over into the real world. State-sponsored actors and Advanced Persistent Threats (APTs) might be the most dangerous of all. Few would want to test the lengths that powerful authoritarian governments will go to in order to protect their intelligence and financial interests.
In short, investigating cyber criminals can be more dangerous than many security investigators realise.
This post is about how to protect your identity and cover your tracks when conducting security investigations. The recommendations here are part of an operational security (opsec) approach: conducting investigations in a way that denies your targets information about you and your activities and, ultimately, helps to keep you, and others, safe.
One big issue we saw a lot while operating our phishing defence product is the way that people interact with phishing sites. Phishing is probably the #1 activity most defensive security people encounter that brings them into contact with criminals. Here are some common mistakes:
Accessing phishing sites from enterprise networks or corporate proxies. This is not great because corporate IP ranges are easy to attribute, so criminals can tell exactly who is looking at them.
Neglecting to strip personally identifiable information from phishing URLs. Quite often security people will submit URLs to phishing reporting sites, share them around, or include them in takedowns with the full URL intact, including everything embedded in it. These URLs often link back to the phone number or email address of the kind staff member or other good Samaritan who reported the phishing site, which is pretty grim. Please don’t do that.
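To make this concrete, here’s a minimal sketch (the URL is made up) of stripping the query string and fragment, which is where reporter details usually end up, before a URL is shared:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_phishing_url(url: str) -> str:
    """Drop the query string and fragment, which often carry the
    reporter's email address or phone number, before sharing a URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# e.g. a reported URL with the reporter's email embedded:
reported = "https://evil.example/login.php?email=jane.doe%40bank.com&id=42"
print(strip_phishing_url(reported))
# → https://evil.example/login.php
```

PII can also appear in the path itself, so a quick manual check is still worthwhile before sharing.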
Using browsers with tracking included, letting phishers link you across other sites. Don’t allow phishers to track you. Distributions like Tails or browsers like Brave are well suited to hiding your identity and don’t persist data. You may stick out as a privacy-conscious user, but that won’t really matter most of the time.
Regularly resetting the virtual machines you use for analysis is a must. Make use of the snapshot features!
Sending takedowns which include personal contact information. Once, as part of a phishing attack simulation, I received an abuse report from the hosting provider (as their customer) that included the takedown provider’s full original email: the name, contact details, and headers of the individual processing the takedown. Send takedowns from a ‘Security Operations Centre’, not from ‘Bill Smith, and here’s my personal mobile number’. Act as if attackers will have access to your takedown reports.
In terms of handling malware, the common opsec mistakes that I see are:
Uploading samples to public sandboxes, and/or VirusTotal, without adequate thought. I’ve seen two different documents on public sandboxes relating to active police investigations (as in, the coordination of those investigations, where people were staying, and so on), because someone thought the document might be a virus and uploaded it. Don’t upload things carelessly. Besides anything else, you tip your hand if it’s a sophisticated attack: if you’re the only victim of that particular sample of malware, then someone now knows that you’ve found it.
Uploading internal red team malware to public sandboxes. I’ve been involved in a number of cases where I’ve been contacted by red teams saying “Hey, can you please remove this file? Because some idiot SOC person uploaded it and it’s actually our internally developed malware that we use for testing. Can you get rid of that?”
Oopsies when copying files on a corporate desktop. For example, someone is trying to take a malware sample and do something with it, then sneezes, double clicks, and runs the sample on their machine instead of uploading it somewhere. I’ve seen that a few times, and people just hope that nothing bad happened. Don’t keep live virus samples on your corporate Windows desktop; it’s a bad idea. Keep malicious files off your enterprise network, ideally in a lab set up especially for handling malware and incident response.
People opening documents that have tracking in them. Tracked documents often beacon back to someone, telling them the document has been opened, and by whom. Open documents in an offline environment like a sandbox initially so you can see if they try to originate outgoing traffic.
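One cheap offline check, sketched below, is to scan a document’s internals for external URLs before opening it at all: a .docx is just a zip of XML parts, and tracking beacons usually show up as remote-template or image references. All the names and URLs here are illustrative:

```python
import io
import re
import zipfile

# Crude pattern for external references inside the document's XML parts.
TRACKER_RE = re.compile(rb"https?://[^\s\"'<>]+")

def external_urls_in_docx(data: bytes) -> list[bytes]:
    """Scan a .docx (a zip of XML parts) for embedded external URLs,
    e.g. remote-template references used as open-tracking beacons."""
    urls = []
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for name in zf.namelist():
            urls += TRACKER_RE.findall(zf.read(name))
    return urls

# Build a minimal stand-in 'document' to demonstrate the idea:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/_rels/settings.xml.rels",
                '<Relationship Target="http://tracker.example/t.dotm"/>')
print(external_urls_in_docx(buf.getvalue()))
# → [b'http://tracker.example/t.dotm']
```

This is a triage aid, not a substitute for detonating the document in an offline sandbox: beacons can also hide in macros and other places a URL regex won’t find.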
Scanning the assets of criminals from your company’s public infrastructure. In general, people doing investigations into criminals are doing something that isn’t technically part of their job, particularly when they’re young and keen. It’s definitely a risk that security managers need to think about. It’s always best to use a VPN when scanning criminal assets so that your activity can’t be easily attributed back to your company.
Know that malware may run differently in your malware sandbox. “Execution guardrails” is a term coined by FireEye. It refers to malware authors adding logic to stop the malware running in places it shouldn’t run, or to change the way it runs in certain environments. Some of it is incredibly smart, like what the APT41 guys did: if you embed the volume serial ID of the Windows partition and use it as an encryption key, the malware will only run on that machine, or will run differently on other machines. There are a lot of these out there. The more common use case for this type of thing is making malware run differently on machines that are being used by malware defenders, sandboxes especially. Sandbox detection is super common. Not running on systems with a Cyrillic character set is also very common, to avoid Eastern European criminals falling foul of local law enforcement.
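As an illustration of this kind of environmental keying, here’s a toy sketch (a XOR cipher and a made-up serial number, not any real sample’s scheme) of deriving the decryption key from a machine identifier, so the payload only decodes correctly on the intended target:

```python
import hashlib

def derive_key(volume_serial: str) -> bytes:
    # The key material comes from the environment, not the binary
    # itself, so the payload is opaque everywhere else.
    return hashlib.sha256(volume_serial.encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration; real guardrails use AES etc.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"run_payload()"
target_serial = "1A2B-3C4D"  # hypothetical target machine's serial
blob = xor_crypt(payload, derive_key(target_serial))

# On the target machine the payload decrypts correctly...
assert xor_crypt(blob, derive_key("1A2B-3C4D")) == payload
# ...on an analyst's sandbox with a different serial, it does not.
assert xor_crypt(blob, derive_key("FFFF-0000")) != payload
```

On a real Windows target the serial would be read at runtime (e.g. via GetVolumeInformation) rather than hard-coded, but the effect is the same: analysis on any other machine yields garbage.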
In general, know that malware doesn’t always run the same way in every environment. NCC Group, a really advanced security firm, have a repository where, as part of their red teaming tools, they use the image size from the person’s broadband router as the encryption key to extract the malware, so it only works if you’re on Virgin broadband. Because you don’t know what the key should be when you actually do the decryption, it will only work from one place. That might mean you expect a certain behaviour based on your analysis, and then when you go to deploy your protection in real life, in your environment, something particularly prickly happens because the malware doesn’t work the same way it does when you’re running a debugger on it. I’ve never seen anyone use this stuff maliciously, but I’ve seen these tricks used to make it harder to pull malware apart. It certainly could be used to, say, alter the way that a certain function works.
Beware of the potential for unintended consequences to your actions when dealing with malware. Marcus Hutchins (a.k.a. MalwareTech) was arrested and charged with authoring the Kronos malware, but he’s best known as the guy who stopped WannaCry. As told to TechCrunch, he looked into WannaCry, saw an unregistered domain name, and thought “Hey, I wonder what they’re using this for.” He bought the domain, without knowing exactly what that would do. He was fortunate in that it stopped the spread of WannaCry, but it could have gone very differently: there’s no reason there couldn’t have been some other consequence, unintended by him, where triggering the killswitch wipes everyone’s machine. He was lucky. You’ve got to be aware of the impact you might have globally if you register certain sinkholes, and you might get victimized by the criminals. Hutchins’ life could have been very different if he had accidentally wiped hundreds of NHS computers instead of saving them.
Don’t actively log into compromised systems the criminals are currently using and look around yourself, because they will most likely see you logging in and looking at them. This can tip your hand. The criminals might just panic and delete everything. If you are a cyber criminal operating an intrusion and you know that you’ve been blown, usually the quickest and easiest way to limit damage is to trigger a wiper and delete all the boxes on the network. That way you destroy the evidence about you and you’re giving the defenders something else to worry about. Instead, use existing EDR tooling and telemetry. If you don’t have any in place then Sysmon and the WMI tools provided by Microsoft give you a lot of functionality for free.
Don’t use legal names in accounts used for investigation. Doing so is standard operating practice at a number of incident response companies, but I hate it. If I can tell who you are from your accounts, so can all the criminals, or the state-sponsored actors (more likely), that you’re operating against. Have role accounts if you need to, but don’t put your full legal name in the accounts you use to investigate complicated financial crime and malware.
Be thoughtful about naming names. Just as it may be wise to avoid associating your legal name with your activity as an investigator, it is worth being thoughtful about whether you disclose the legal names or aliases of attackers in threat actor reports. Doing so has potential benefits as well as potential consequences. There isn’t a blanket rule for this situation; my only advice is to give the point due consideration. There are many instances of people being publicly accused of being cyber criminals due to misattribution; in some cases, something as simple as a researcher’s eye slipping down a row in a spreadsheet has been enough to ruin someone’s reputation.
Don’t use attributable IPs. We see this a lot, and it’s why we made our VPN for security investigators, Smokeproxy. Sharing addresses with corporate assets is always bad. A lot of people who want an ADSL tail connected at their office have to use whoever the corporate provider is, who will give them a business plan, and the business plan has a static IP address. So even though it sort of blends in, it never changes, and someone will eventually realise it’s them.
Don’t cross over IPs. If you are doing work using an unattributable IP address (such as a VPN) make sure you have safeguards in place so you don’t end up connecting from your regular IP address if the VPN is disconnected for some reason. This can be as simple as a firewall rule to only allow traffic exiting your network to connect to the VPN server. Likewise, ensure that all DNS lookups and other traffic come out of your VPN connection rather than using a local resolver that could give you away.
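A firewall rule is the strongest safeguard, but a fail-closed check in your tooling helps too. This sketch (all addresses are hypothetical) refuses to proceed if either the egress address or the DNS resolver in use would attribute the traffic back to you:

```python
# Hypothetical values for illustration only.
HOME_IPS = {"203.0.113.7"}   # your attributable static address(es)
VPN_RESOLVER = "10.8.0.1"    # the resolver pushed by the VPN

def safe_to_investigate(egress_ip: str, resolver_ip: str) -> bool:
    """Fail closed: refuse to run if either the exit address or the
    DNS resolver would attribute the traffic back to you."""
    if egress_ip in HOME_IPS:
        return False  # tunnel dropped; the default route would leak
    if resolver_ip != VPN_RESOLVER:
        return False  # DNS lookups would bypass the tunnel
    return True

assert safe_to_investigate("198.51.100.42", "10.8.0.1")      # VPN exit: OK
assert not safe_to_investigate("203.0.113.7", "10.8.0.1")    # leak: abort
assert not safe_to_investigate("198.51.100.42", "192.168.1.1")  # DNS leak
```

In practice the egress IP would come from a “what is my IP” service queried over the tunnel, and the resolver from your system’s DNS configuration; the point is to check both before any investigation traffic leaves the box.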
Check that reverse domain resolution can’t be used to identify you or your company. Make sure that reverse domain resolution doesn’t include your full company name in the DNS. (And check again every now and again, as I’ve seen a case where reverse domain resolution which had previously been kept purposely anonymous-looking was ‘fixed’ to include the full company name, unbeknownst to the SOC team).
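A periodic check like the sketch below (the company terms and hostnames are made up) can catch that kind of silent ‘fix’; run it from cron against your known egress addresses:

```python
import socket

def hostname_leaks(hostname: str, company_terms: tuple[str, ...]) -> bool:
    """True if a PTR hostname would reveal who owns the address."""
    name = hostname.lower()
    return any(term in name for term in company_terms)

def check_egress_ip(ip: str, company_terms: tuple[str, ...]) -> bool:
    """Look up the PTR record for `ip` and test it for identity leaks."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False  # no PTR record at all: nothing to leak
    return hostname_leaks(hostname, company_terms)

# The kind of helpful 'fix' described above would trip this check:
assert hostname_leaks("gw1.examplebank.com", ("examplebank",))
assert not hostname_leaks("static-198-51-100-42.isp.example.net",
                          ("examplebank",))
```

`check_egress_ip` does a live lookup, so alert on any change in its result rather than just the first failure.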
Know that phishers may blacklist your IP if they get suspicious. In cases where people think a phishing site they reported got taken down unusually fast, it’s often because it’s got awful PHP like this on it (pictured above). Either the reverse DNS gets matched, in which case they get served a fake 404 page, or there’s a massive list of IP address masks. The phishers are using what look like IP addresses as regular expressions, so they’re matching a bunch of addresses they don’t intend to, but it still works: they’re blocking more people than they mean to, and anyone on their list gets served a 404 even though the page is still up. This came through the other week from a live site. They’re idiots, but it works. You don’t have to be smart to make this stuff work; they just copy it from each other, IP addresses included, off forums.
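The over-blocking happens because an unescaped dotted quad is itself a valid regular expression, in which each ‘.’ matches any character. A quick Python illustration (the PHP regex engine behaves the same way; the address is made up):

```python
import re

blocked = "66.102.7.104"  # hypothetical entry copied off a forum

# Used unescaped as a regex, each '.' matches ANY character, so the
# pattern blocks far more addresses than the one the phisher meant:
assert re.match(blocked, "66.102.7.104")   # the intended block...
assert re.match(blocked, "662102272104")   # ...but also this
assert not re.match(blocked, "10.0.0.1")

# Escaping the dots gives the match they actually intended:
assert re.match(re.escape(blocked), "66.102.7.104")
assert not re.match(re.escape(blocked), "662102272104")
```

The practical upshot for an investigator is the same either way: once any address resembling yours lands on such a list, that exit point is burned.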
The best way to mitigate this is to use a VPN for your security investigations. As much as possible, you want to look like a phishing victim. This means using an IP address and network that is realistic for the intended victims of a phish (usually your customers).
Be aware that, in addition to simple blacklists, some phishers are using more sophisticated methods to block certain IPs. This is one (above) that came from a Wells Fargo phish. They are using an external service to look up who IPs belong to or whether they’re on a blacklist, and also maintaining their own internal blacklist of whether someone’s visited the site before. So you can’t keep visiting the site from the same place, which is interesting. It’s a smarter way to crowdsource the blocking of security researchers from phishing sites.
Assuming you are already using a VPN when investigating phishing sites, ideally, each new connection you make should originate from a different IP address. Investigators are much more likely than victims to visit a phishing website more than once, and this gives phishers an easy way to automate serving you a 404, even when the phishing site is still online.
Often people are doing investigations under their own steam. They’re keen and want to do something to help, and all power to them. But investigations conducted in your personal time come with particularly high risks.
Do not trust domain privacy protections. It is relatively easy to get them removed if someone’s willing to tell the right story.
Don’t buy domains that can be traced back to you. For example, don’t use the same name server you used to register the domain name for your daughter’s pony club.
Keep the various ‘hats’ that you wear totally separate. Anything that crosses over between your personal life, hobby projects, and something you registered to look into crime stuff, is usually easily attributable back to you. Same with email addresses and that kind of thing.
Assume that every mailing list you’re a part of has leaked. Every list leaks. Criminals probably have access to every mailing list with more than 200 people on it, because a list member’s system was compromised at some point. I know quite a number of people who receive spam at an email address they only use for private security mailing lists, so they’ve definitely had their email address harvested, if nothing else.
Don’t trust that companies are going to do the right thing with your information. Internet providers and phone providers maybe haven’t been compromised, but they might have staff members who are morally compromised and will happily look someone up for a few bucks. It’s happened before, and it will continue to happen.
When investigators reveal their identity to attackers it’s usually the result of a mistake or series of mistakes they made out of carelessness; they didn’t take the threat seriously. In a world where cyber crime and violent crime are increasingly interlinked, investigators of cyber crime who don’t protect their identities could potentially face bigger risks than doxxing.
I hope the tips here give you some ideas on how to keep yourself safe while conducting security investigations. The most important thing I hope you’ll take away from this is that, often, the cyber criminals you are dealing with don’t have a lot left to lose. Act accordingly.