LAPSUS$ seems to be the new focus after their recent activity surrounding Microsoft and Okta. If you’ve not seen it, here is Microsoft’s blog.
I always find these types of articles interesting, but I do often think that the cool aspect of “how they hacked” pulls focus away from some of the key takeaways. I’m not saying that’s a bad thing, but instead of rushing into swapping push notifications for OTP, it might be more beneficial to ask yourself: “How exposed are you?” Any answer you give off the top of your head is just an opinion; to get the true answer, you need to start digging.
Let’s use the article as a guide and break it down, starting with the Analysis:
The first section of this paragraph depends on how public your company is and what your employees share online. This can come in many forms. Just think of social media, advertising, Intranet, company events and anything that may give internal information away. Let’s start with LinkedIn.
Using Okta as an example, we can see that 2 people from my school work there and that 6k employees have LinkedIn profiles.
Here we can filter on keywords and find out more about the organization.
For this example, searching for the keyword “Support”, I find support engineers.
Now this doesn’t look particularly like a threat; however, there is a reason companies don’t put their internal structure on their websites. It lets attackers know how they function and even ties names to roles. Looking into this further, what if these employees posted about tools or solutions they use internally? What if they shared too much personal information on here? What if they shared pictures or screenshots that contain too much?
For certain organizations, I’ve seen emails and direct phone numbers being shared on profiles. This gives an attacker direct access for social engineering attacks. Names, phone numbers, images, email addresses: basically everything can be searched for nowadays.
Reverse image lookup is often a favorite, as when a person has a high-quality display photo, they often reuse it elsewhere. This can create a “path” which leads to personal accounts, which in turn give greater information to be used against them. This is all quite long-winded, though, so unless you are being specifically targeted, Googling email formats can be a quicker route. Knowing the names of the employees and their email format can help create a phishing list within minutes.
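To show how quickly names plus a known email format turn into a phishing list, here’s a minimal sketch. The names and the “first.last” format are made up for illustration; real formats vary per company.

```python
# Build candidate email addresses from public names and a known format.
# The names and the "first.last@example.com" format are hypothetical.
def build_phishing_list(names, fmt="{first}.{last}@example.com"):
    emails = []
    for full_name in names:
        parts = full_name.lower().split()
        if len(parts) < 2:
            continue  # skip names we can't split into first/last
        emails.append(fmt.format(first=parts[0], last=parts[-1]))
    return emails

employees = ["Alice Smith", "Bob Jones", "Carol Ann Lee"]
print(build_phishing_list(employees))
```

A few lines of code and a public employee list, and an attacker has a ready-made target set.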
This method is a low-effort, high-reward situation, as attackers can target the masses and only need one bite. It’s not always direct email or phone calls, though. Actors go after public-facing “contact” forms as well.
Here is a quick exercise.
Load Google, and search for: inurl:[yourcompanyname] “contact”.
Here you should see contact methods that Google has Indexed.
This is just a simple Google dork; however, similar searches can uncover a lot more. It’s not just contact information either, as Google dorks can help find files, web apps, intranet sites, VPNs, remote support portals: basically anything the threat actor can think of, provided it exists. The GHDB on ExploitDB can take a lot of the thought process out of it, so if you’re curious enough, take a look: ExploitDB
intitle:"database" "backup" filetype:sql
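If you want to run several dorks against your own domain, a small helper can generate the queries to paste into Google. The patterns below are common illustrative examples, not an exhaustive or authoritative list:

```python
# Generate Google dork queries for a given domain.
# The patterns are illustrative; the GHDB has far more.
DORK_PATTERNS = [
    'site:{domain} inurl:"contact"',
    'site:{domain} filetype:sql "backup"',
    'site:{domain} intitle:"index of"',
    'site:{domain} inurl:login',
]

def build_dorks(domain):
    return [p.format(domain=domain) for p in DORK_PATTERNS]

for query in build_dorks("yourcompany.com"):
    print(query)
```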
Going back to the top…
Looking at the second half, we see it mentions spamming MFA. Now this technique is really clever when you think about it. MFA is designed to keep that account secure by having an additional verification method. If that verification method just annoys the user to the point that they just accept, the control becomes somewhat useless.
The MFA annoyance prompt has been talked about a lot recently. It’s something that will make an organization’s life harder.
Whatever method you do decide on, remember that the education piece is also key. Explaining and teaching users can often be universal, so if they understand the risk, or even just that they need to report unusual MFA behavior, you might be able to prevent these types of attacks. Remember, OTP is great; however, LAPSUS$ also paid for MFA prompts to simply be accepted. An OTP can just as easily be shared in exchange for money. They just don’t make it as easy…
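One practical detection angle for MFA spamming is counting repeated denied prompts per user in a short window. Here’s a rough sketch over hypothetical log records; the event shape and field names are assumptions, not any vendor’s actual schema:

```python
from collections import Counter

# Flag users with many denied MFA prompts in a batch of events.
# The event shape ({"user": ..., "result": ...}) is hypothetical.
def flag_mfa_fatigue(events, threshold=5):
    denials = Counter(e["user"] for e in events if e["result"] == "denied")
    return sorted(u for u, n in denials.items() if n >= threshold)

sample = [{"user": "alice", "result": "denied"}] * 6 + \
         [{"user": "bob", "result": "approved"}]
print(flag_mfa_fatigue(sample))
```

A burst of denials followed by a single approval is exactly the pattern fatigue attacks leave behind, so this kind of count is worth alerting on.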
So… What’s this got to do with the perimeter?
Well, actually, a lot. Enabling MFA on all internet-facing systems is great; however, IMO it brings an unwelcome side effect. I feel it can often create a sense of false confidence: “If we are MFA’d, they can’t get in”.
This of course is not true!
For those accounts you do have MFA’d, there are still weaknesses.
Firstly, the above (social engineering), and secondly, cookie stealing. Another point is that MFA is only as good as the configuration behind it.
Let me give you some examples.
Something that we are finally seeing less of… legacy auth. While companies enabled MFA on all accounts, they unknowingly left the legacy protocols enabled, which somewhat made MFA redundant. Attackers could use tools such as MailSniper to abuse the legacy protocols and brute-force credentials. MFA isn’t supported on these protocols, so they simply allowed single-factor access. Oops…
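If you export your sign-in logs, spotting legacy-protocol attempts can be as simple as filtering on the client app field. A sketch follows; the field name `clientAppUsed` matches Azure AD sign-in logs, but treat the sample records as illustrative:

```python
# Filter exported sign-in events for legacy (non-MFA-capable) protocols.
LEGACY_CLIENTS = {"IMAP4", "POP3", "Authenticated SMTP",
                  "Exchange ActiveSync", "Other clients"}

def legacy_signins(events):
    return [e for e in events if e.get("clientAppUsed") in LEGACY_CLIENTS]

sample = [
    {"user": "alice", "clientAppUsed": "Browser"},
    {"user": "svc-backup", "clientAppUsed": "IMAP4"},
]
print(legacy_signins(sample))
```

Any hits here are accounts an attacker can brute-force without ever seeing an MFA prompt.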
“That happens less nowadays”… Ok, fair point… Let’s say legacy protocols are disabled, and we enforce MFA on all accounts.
That’s great, however did you also set the auth method for every active account you have? If not, when logging in, the attacker can often just set it up for you and log in. On Azure AD, going to https://aka.ms/mfasetup will allow them to do so (if credentials are brute-forced or otherwise compromised).
What comes to mind: service accounts, shared accounts, guests, “test” accounts, basically anything not heavily used by a person. If you don’t tailor your rules to simply block these accounts, or apply criteria to help reduce the surface, one of them will eventually be used as an entry point into your organization.
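Auditing for exactly these accounts is worth scripting. Here’s a minimal sketch over a hypothetical account export; the field names are made up, so map them to whatever your directory export actually produces:

```python
# Find enabled accounts with no registered MFA method - prime
# candidates for an attacker to self-enrol at first login.
def accounts_missing_mfa(accounts):
    return sorted(a["name"] for a in accounts
                  if a.get("enabled") and not a.get("mfa_methods"))

inventory = [
    {"name": "alice", "enabled": True, "mfa_methods": ["push"]},
    {"name": "svc-report", "enabled": True, "mfa_methods": []},
    {"name": "old-test", "enabled": False, "mfa_methods": []},
]
print(accounts_missing_mfa(inventory))
```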
Managing accounts has always been a struggle, but what about the services themselves? Finding these exposed services is another matter, and perhaps you don’t have a clear picture of what’s out there. This can be as simple as an Excel spreadsheet listing all the company-owned public IPs/URLs.
If you’re not doing it, the attackers might be, so it’s at least worth a thought.
If you’re still thinking “how do they find these sites?”, let’s go further into it.
Simple techniques include using Google and subdomain enumeration.
We often find ourselves using common names for certain solutions.
For example, the use of intranet.company.com.
Other examples: citrix.yourcompany.com, vpn.yourcompany.com, remote.yourcompany.com and support.yourcompany.com. These keywords can be queried and used against you.
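Turning that observation into a check is trivial: build candidate hostnames from common service names, then resolve each with your DNS tool of choice. The wordlist below is just a starter set:

```python
# Common service-name subdomains attackers will try first.
COMMON_NAMES = ["intranet", "vpn", "citrix", "remote", "support",
                "mail", "portal", "dev", "test"]

def candidate_hosts(domain):
    return [f"{name}.{domain}" for name in COMMON_NAMES]

for host in candidate_hosts("yourcompany.com"):
    print(host)
```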
For those not obvious, subdomain enumeration can be used. If interested, there are a few tools which I cover a bit more here…
If you didn’t want to use those tools, here is a simple exercise that you can run within your browser.
Visit: https://virustotal.com, click URL and search your company name.
If found, click the relationship tab and have a look at the subdomains:
Second exercise: Load https://crt.sh and search your company name:
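crt.sh also exposes JSON (append &output=json to the query URL), so the second exercise can be scripted. Here’s a sketch that dedupes subdomains from a response; the sample payload mimics crt.sh’s `name_value` field, which can hold several newline-separated names, but treat the exact shape as an assumption and check it against a live response:

```python
# Extract unique subdomains from a crt.sh JSON response.
# crt.sh's "name_value" can contain multiple newline-separated names.
def unique_subdomains(records):
    names = set()
    for record in records:
        for name in record.get("name_value", "").splitlines():
            names.add(name.strip().lstrip("*.").lower())
    return sorted(n for n in names if n)

sample_response = [
    {"name_value": "vpn.example.com\nremote.example.com"},
    {"name_value": "*.example.com"},
    {"name_value": "VPN.example.com"},
]
print(unique_subdomains(sample_response))
```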
Here are just two examples of subdomain enumeration that can be used to find additional services online. If you didn’t before, hopefully by now you are seeing that if you put it on the internet, there is a footprint somewhere.
It doesn’t matter if you randomize the name or don’t shout about it, somewhere, it will be indexed or recorded.
As I say, the above exercise is a very simple run-through, and if you have found new sites, do move further into the tools. These tools may look scary but come with good documentation and are free (minus API integrations).
Now that you’ve found your sites, make sure you do a posture check to ensure that they are as you would like, and that you have the necessary logs to spot an attack. I always feel the latter is the most important, as at some stage you need to be acting as if you are breached. LAPSUS$ echoes this, as they got in despite several layers of security.
You can secure all you want, but if they pay the person at the door to let them in, how do you respond? If, however, you have a list of everyone who entered and how, you at least have some breadcrumbs to investigate. The worst place to be is mid-breach with zero logs or clues.
When reviewing your list, make sure that you look for misconfigurations and have an understanding of how the technology works.
The first point is pretty simple, as accidents or a “temp change” can often be left behind. Review how it’s running and ask yourself: is that how it’s meant to be in production? If not, make sure it’s corrected. For anything dev or test, we often go leaner on the settings, so if allowed, take it off the internet or restrict it by location/IPs. We often don’t run tests 365 days a year, so maybe only lift it as and when required.
For my second point, it comes down to knowledge of the system. Attackers are clever, and the difference is that they have more time on their hands than you. Let’s face it, you would be pretty secure if you could run a fine-tooth comb over each solution; but you can’t. Instead, you need to understand the tech as best you can and remove anything that could be abused.
The cloud is a perfect example of this. As AWS boomed, how many S3 buckets were exposed because the customer didn’t know?
For Azure, how many customers know that, by default, all users have read access and can log in via the portal, Graph or PowerShell? Yes, any active account can log in, dump your Azure AD and walk away.
For SFTP services, the admin portals are often accessed via /admin, and for those I’ve seen, the logging isn’t great by default. Another example of this is orgs lifting Outlook OWA and not realizing that the admin access is also lifted.
Problems like this, fall across vulnerabilities, misconfiguration and bad practice, however at the heart, it’s down to not knowing.
This can expand further than just what was lifted by accident. For example, you have a file upload page to allow your customers to upload documents. This isn’t an accident; it’s by design. Now ask yourself: is the file format limited? Could someone upload malicious files? When uploaded, does the file land on a production server sat next to other production servers? Can a user then map to their file using /file=filename?
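Those questions translate directly into server-side checks. Here’s a minimal sketch of filename validation with an extension allow-list and a path-traversal guard; the allowed set is just an example, and real uploads also need content-type and size checks:

```python
import os

ALLOWED_EXTENSIONS = {".pdf", ".png", ".jpg"}  # example allow-list

def is_safe_upload(filename):
    # Reject path traversal and anything outside the allow-list.
    base = os.path.basename(filename)
    if base != filename or base.startswith("."):
        return False
    ext = os.path.splitext(base)[1].lower()
    return ext in ALLOWED_EXTENSIONS

print(is_safe_upload("invoice.pdf"))       # True
print(is_safe_upload("../../etc/passwd"))  # False
print(is_safe_upload("shell.php"))         # False
```

Even then, store uploads outside the web root and off production servers, so a file that slips through can’t be mapped to and executed.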
Weaknesses like this will not always be flagged by normal means. Here’s another example to further my point.
You’ve allowed Citrix to be externally facing, but you’ve reduced your attack surface by only presenting a browser. This browser loads the page of your application. I agree, this is much better than RDP, as the operating system layer is removed; however, ask yourself: can someone enter the URL of other applications and reach them (think intranet)?
Browsers can view local files, so can they map to C:\? If they can do both, could they use this as a method of exfiltration?
I could go on, but I’m just trying to get across that you can’t take things at face value. Something that is put in to do A can also be abused to do B. It just takes someone who’s curious. Depending on tools is great, but scanners may not pick up on these weaknesses.
You have to go looking.