- Aloha and welcome to Bundles of Joy. I’ll talk about breaking macOS via Subverted Application Bundles.
00:12 - My name is Patrick Wardle, I am the creator of the Mac Security Tool Suite and Security Website, Objective-See.
00:20 - Also the organizer of the Mac Security Conference, “Objective by the Sea,” and also the author of the “Art of Mac Malware” analysis book.
00:31 - So today, we’re gonna be talking about an interesting flaw that affected all recent versions of macOS.
00:37 - We’re gonna start by looking at various anti-infection mechanisms that the flaw was able to sidestep.
00:44 - We’ll then dive into the flaw, performing a root cause analysis.
00:50 - We’ll then talk about the discovery of it being exploited as a zero-day in the wild to distribute malware.
00:58 - And we’ll then dive into protections and detection mechanisms that we deployed while awaiting Apple’s patch.
01:05 - And then finally, we’ll wrap up by analyzing Apple’s patch to see how they ultimately addressed and fixed the flaw.
01:16 - So first, some background: the main way that Mac users are infected with malware is via user-assisted techniques.
01:29 - These are methods that require user assistance, user interaction.
01:34 - I’m sure many of us here are familiar with these, but a brief review.
01:40 - We’ve probably all heard about malicious websites that display popups, for example, claiming your flash player is out of date.
01:48 - If you download and run, what you believe is perhaps the required flash update, your system may become infected with malware.
01:56 - Adversaries also do things like poisoning search results and infecting popular websites that users may browse to in order to distribute malware.
02:08 - Finally, hackers are very fond of pirating applications, but then injecting malicious code, trojanizing these applications, so when users download and run them, they will be infected with malware.
02:20 - But really the main takeaway that kind of ties all these approaches together is there is explicit user interaction required.
02:28 - And in some sense, the users are ultimately infecting themselves.
02:33 - Now, as Macs become ever more popular, ever more prevalent, so do these attacks; there is just more Mac malware and adware than ever.
02:43 - And Apple rightfully realized that, as we just mentioned, the majority of the ways Macs were getting infected was via user-assisted, user-interaction-based malware attacks and infection vectors, and they really decided: we need to do something to protect the users from themselves.
03:04 - And you know, I’m a critic of Apple, but in this case, yeah, I think that was definitely the right approach.
03:10 - So we’re briefly gonna look at three technologies, three anti-infection mechanisms.
03:15 - File quarantine, gatekeeper and notarization requirements.
03:19 - And note that these are all aimed at protecting the user from infecting themselves.
03:23 - The goal is, if the user is tricked or coerced into running something, the operating system will first intercept that launch, that application launched, that process execution and examine it to make sure that it’s not malware.
03:40 - Now, we first need to talk about the quarantine attribute.
03:44 - The quarantine attribute is something that is added to essentially all downloaded items, either by the application that’s downloading them, for example the browser, or by the operating system.
03:56 - And it is an extended attribute that basically tells the operating system, Hey, this item is from the internet.
04:03 - When the user then goes to launch the item, for example, launch the application that they’ve just downloaded, the operating system will see if the quarantine attribute has been added.
04:14 - And if so, it will then perform a variety of checks, gatekeeper, notarization, file quarantine checks on that item.
04:21 - So the quarantine attribute is actually kind of the catalyst for those checks.
04:25 - You can examine whether a file has a quarantine attribute via the “xattr” command; as you can see on the slide, I downloaded some malware from the internet and, as expected, the browser and the operating system slapped that quarantine attribute on the item.
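As an aside, the quarantine attribute’s value is a short semicolon-separated string: hex flags, a hex download timestamp, the downloading agent, and an event UUID. Here’s a minimal Python sketch of parsing one; the sample value below is made up for illustration, not taken from the slide:

```python
# Sketch: parse a com.apple.quarantine extended-attribute value.
# Format (per common observation): flags (hex); download timestamp
# (hex seconds since the epoch); downloading agent; event UUID.
from datetime import datetime, timezone

def parse_quarantine(value: str) -> dict:
    flags, timestamp, agent, *rest = value.split(";")
    return {
        "flags": int(flags, 16),
        "downloaded": datetime.fromtimestamp(int(timestamp, 16), tz=timezone.utc),
        "agent": agent,
        "uuid": rest[0] if rest else None,
    }

# Hypothetical sample value:
info = parse_quarantine("0083;60f1a2b3;Safari;F6E3A1B2-0000-0000-0000-000000000000")
print(info["agent"])  # Safari
```

On a real system you would read the raw value with `xattr -p com.apple.quarantine <file>` and feed it to this parser.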
04:42 - So if we were to go launch this application, the operating system would perform all its anti-infection mechanisms, its checks.
04:51 - So the first anti-infection mechanism that Apple introduced was all the way back in 2007.
05:00 - Yeah. Way, way back. (chuckles) And this technology was… it’s named “File Quarantine” and in a nutshell, it basically will display a prompt to the user saying two things, first, Hey, the item you’re about to launch is from the internet.
05:16 - And two, it is an executable application. This is important because a number of malware specimens attempt to masquerade as benign file types.
05:28 - On the slide, we have an example: this is WindTail, which was distributed as a malicious application but used its application icon to masquerade as a PowerPoint document.
05:40 - The idea, from the malware author’s point of view, is that users might be tricked into launching this because they thought it was a benign PowerPoint document.
05:49 - However, a file quarantine would jump in the way and say, Hey, wait a minute, user, just to make you aware of this, just to make sure you know that this is actually an application and if you run it, you might infect yourself.
06:02 - So a good warning, but the problem, as is the case with most warnings, is that users would simply click “Open,” thus still infecting themselves.
06:11 - So Apple had to take the next step. And that next step was gatekeeper, which was introduced in 2012.
06:19 - In a nutshell, gatekeeper will block unsigned applications from running, basically, when the user launches an item that they’ve downloaded from the internet, the operating system will intercept that and gatekeeper will check to see if it’s validly signed.
06:33 - If it’s not, it will block that. That was a good approach because at the time, the majority of Mac malware was unsigned.
06:42 - Of course the shortcoming was that malware authors simply began signing their malware.
06:46 - It’s pretty easy to fraudulently obtain or steal a legitimate code-signing developer ID, which would then allow you to sign your malware, which would then allow you to bypass or sidestep gatekeeper.
07:01 - So Apple had to respond yet again, which they did in 2019 with the introduction of notarization.
07:08 - Notarization will block any application that has not been explicitly verified, scanned and approved by Apple proper.
07:17 - So on this slide, we see kind of a conceptual overview.
07:20 - Imagine you’re a developer creating an application: you compile it, and you now have to submit that application to Apple, where they will scan and verify it.
07:30 - If they don’t detect any malicious code, they will give it their stamp of approval; they will notarize it.
07:37 - At runtime, then, when you distribute this now-notarized application, it will be allowed to run.
07:42 - The idea is malware authors will either, A, not submit their applications to Apple for verification, or, B, if they do, Apple will detect that they contain malicious code and will not notarize them.
07:56 - This means that even then, if the malware authors successfully trick the users into attempting to run their malicious code, the operating system will be like, whoa, wait a minute, this is not notarized, I will block it.
08:09 - And in reality, this does work very well. We can see on the slide, some hackers slid into my DMs bemoaning the fact that notarization had essentially ruined their entire operation.
08:23 - Let’s now talk about an interesting flaw, though, that was able to very neatly sidestep, to fully bypass, all of these anti-infection mechanisms.
08:34 - And as it was a logic flaw, it did so in a hundred percent reliable manner.
08:42 - First, I wanna give credit to Cedric Owens who uncovered this vulnerability.
08:47 - He wasn’t a hundred percent sure of the root cause, so he pinged me describing what he had found.
08:55 - We have a nice proof of concept on the slide that demonstrates the power of the vulnerability and aligns to what Cedric observed, which was we can download a malicious application from the internet that is not signed, not notarized.
09:14 - We can, you know, masquerade as for example, a resume, a PDF document or really anything else.
09:21 - And when launched, neither file quarantine, nor gatekeeper, nor the operating system’s notarization checks appear to come into play.
09:30 - And as Cedric notes, no prompts from the operating system at all.
09:36 - This is a very powerful vulnerability because what it means is that, in theory, malware authors could go back to their old tricks of infecting users across the globe and not have to worry about any of macOS’s recent anti-infection mechanisms.
09:55 - So let’s take a look at what’s going on. We have this proof of concept, and the first thing I wanted to check was: was the quarantine attribute being correctly set? Because, as mentioned before, the quarantine attribute is the indicator that tells the operating system to perform its various anti-infection checks: file quarantine, notarization, and gatekeeper.
10:20 - Well, as we can see, the proof of concept is not signed, which also means it’s not notarized, but indeed it does have the quarantine attribute set.
10:29 - We can confirm that via the xattr command. So this immediately shows us it’s not an issue with the quarantine attribute being missing, but this is almost more intriguing, right? We have an unsigned application that can bypass file quarantine, gatekeeper, and notarization requirements.
10:44 - How? It was insane. So, a closer look: what’s going on? Well, if we look at the contents of the application, we notice two very interesting things, and I’ll point these out because you might not be super familiar with application bundles.
11:03 - The first thing is, if we look at the application’s .app, which is really a special directory structure, we see that it only contains three things: a Contents directory, a MacOS subdirectory in that Contents directory, and then a file named PoC in that MacOS subdirectory.
11:23 - Now, if you’re familiar with normal application bundles, you’ll be like, wait a minute, where is the Info.plist file? The Info.plist file is a metadata file that describes information about the application, and it is always present in normal applications; I thought it was required, but apparently not.
11:43 - The other interesting thing about this proof of concept application is that the main executable component, named PoC, was not a Mach-O executable, which is the standard executable file format on macOS, but rather a POSIX shell script, a bash script.
12:01 - Now, rather interestingly, there is a popular developer script on GitHub that will package up applications in exactly this manner.
12:10 - The idea is if you have a script that you want to distribute to Mac users, if you package it up as an application, it’s way easier to both distribute and for users to run, they can just double click on it and the operating system will take care of it.
12:27 - The sad, or laughable, thing about all of this is that this Appify developer script would actually package up applications in this manner, which inadvertently would trigger this same flaw.
12:39 - So looking for bugs in macOS sometimes all you have to do is use open source developer packaging tools, insane.
12:49 - Okay, so we have this bare-bones script-based application: no Info.plist file, and its main executable component is a script. And we’ll see these are both prerequisites for triggering the flaw.
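The two prerequisites can be captured in a small check. This is a sketch, assuming the standard `<app>.app/Contents/MacOS/<executable>` layout and treating anything that starts with a shebang (`#!`) as a script:

```python
# Sketch: check the two prerequisites that trigger the flaw.
# Assumes the on-disk layout <app>.app/Contents/MacOS/<executable>;
# a "script" is detected by a leading shebang rather than Mach-O magic.
import os

def is_barebones_script_app(app_path: str) -> bool:
    contents = os.path.join(app_path, "Contents")
    # Prerequisite 1: no Info.plist in the bundle's Contents directory.
    if os.path.exists(os.path.join(contents, "Info.plist")):
        return False
    macos_dir = os.path.join(contents, "MacOS")
    if not os.path.isdir(macos_dir):
        return False
    # Prerequisite 2: the main executable is a script (starts with "#!").
    for name in os.listdir(macos_dir):
        path = os.path.join(macos_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            if f.read(2) == b"#!":
                return True
    return False
```

This same heuristic resurfaces later when hunting for in-the-wild exploitation.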
13:03 - As we also saw in that proof of concept, when we download and run it, there are no prompts, as there should be, because the quarantine attribute has been set and this malicious proof of concept application is unsigned, from the internet, and non-notarized; so there’s some flaw in the operating system.
13:24 - My interest was piqued; I wanted to figure out exactly what was going on.
13:27 - Where was the flaw? The problem, though, was that when you launch an application, when you double-click an application, there are no fewer than half a dozen applications, system daemons, and the kernel which all get involved with parsing, launching, and classifying the application.
13:46 - It’s incredible, I gave a talk about this at ShmooCon a while back talking about another gatekeeper flaw, but you can see there is a myriad of apps and daemons and frameworks that are all working together.
13:59 - And again, this is problematic because the flaw is somewhere in here, but you know, there’s a lot going on; where do we even begin? So my idea was, I’m gonna start by looking at log messages to see if there is some interesting log message that can point me at least toward the right application, daemon, framework, or kernel code where this vulnerability might lie.
14:28 - And what I decided to do was launch three applications and basically diff their log messages to hopefully point me in the right direction of this flaw.
14:37 - So the three apps were all from the internet, all unsigned.
14:41 - The first one was a standard application, meaning its executable was a Mach-O executable.
14:47 - It also had the common Info.plist file in its application bundle.
14:53 - The second application was a script-based application, so its executable component was a bash script, but it still had that Info.plist file.
15:03 - And then finally, we had our proof of concept, which is script-based, but is also missing the Info.plist file.
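The log-diffing approach can be sketched with Python’s difflib; the log lines below are illustrative placeholders, not actual syspolicyd output:

```python
# Sketch: diff the (illustrative) log output of two application launches
# to surface the lines that differ. Real syspolicyd messages would be
# captured via the `log` command; these strings are stand-ins.
import difflib

standard_app = [
    "GK evaluateScanResult: 0",
    "GK eval - was allowed: 0, show prompt: 1",
]
poc_app = [
    "GK evaluateScanResult: 2",
    "GK scan complete",
]

diff = difflib.unified_diff(standard_app, poc_app, lineterm="")
# Keep only added/removed lines, dropping the "---"/"+++" headers.
changed = [line for line in diff
           if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
for line in changed:
    print(line)
```

Running the real thing over three launches and eyeballing the changed lines is exactly how the subtle evaluation-type difference surfaces.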
15:12 - Now, before we can look at the log messages, we have to enable private logging.
15:17 - Recent versions of macOS suppress a lot of information from the logs, which is not helpful when we are digging into the internals of the operating system in an attempt to find a flaw.
15:30 - Long story short, we can install a profile which turns on private logging; I posted a link in the slides if you’re interested in this, but once this is installed, all data will be logged, which is great.
15:45 - So now, let’s run the three apps and basically diff their log output.
15:49 - Starting with the standard application, the Mach-O-based application that contains the Info.plist file.
15:56 - Two things pop out. First and foremost, we can quickly identify that the syspolicyd binary, the syspolicy daemon, is the component of the operating system that is ultimately responsible for evaluating and classifying applications and binaries from the internet, ultimately saying: should they be allowed or not? It is the arbiter.
16:20 - So we can assume, and as we’ll see correctly assume that this binary is where the logic flaw resided.
16:30 - There are a lot of interesting log messages here.
16:31 - I’ve highlighted what I think is the most indicative, and that is the result of the GK, or gatekeeper, check.
16:40 - And we can see there’s a variety of numbers and the path of the item, but interestingly, at the bottom it says: gk eval - was allowed: 0 (false), show prompt: 1, and then a log message that the prompt was shown.
16:54 - This is what we expected as this is an unsigned application from the internet, so the log messages correspond to what we see that is a prompt being shown to the user saying, this application is not allowed.
17:07 - We execute the second application, the script-based application with the Info.plist file, and still we see almost the exact same log messages.
17:16 - However, there is an addition of a script evaluation log message, which indicates there is another code path to handle applications that contain a script as their executable component, and we’ll see, this is important as well.
17:33 - Finally, we execute our proof of concept, the bare-bones script-based application without the Info.plist file.
17:40 - We see it goes down the same script-based evaluation code path, and also the scan results are printed.
17:48 - Interestingly, though, there are no messages about the app being blocked nor a prompt being shown, which is also what we saw when we launched the proof of concept: it was not blocked.
17:58 - There were no alerts, no prompts. So now let’s kind of diff these three sets of log messages and really point out the very subtle, but very indicative, differences.
18:10 - So the only differences are actually in the scan results, specifically in the GK evaluate scan result message.
18:17 - For the applications that contain the Info.plist file, we can see an evaluation result of zero.
18:25 - Whereas for our bare-bones script-based application that did not have the Info.plist file, we can see that the scan result was a two.
18:35 - Also, as we can see in the log message, the system identified it as not a bundle.
18:42 - So to summarize: an evaluation type of zero will result in a prompt and the application being blocked, whereas an evaluation type of two will result in the application being allowed with no prompts.
18:55 - Interesting. So now let’s look into both how and why this type two is returned, and ultimately what it means. What it actually means is that the application will be allowed; at least we saw that through our experiments, so now let’s look at the code to confirm that this is really the case.
19:17 - So we’re gonna reverse engineer syspolicyd, and this is the daemon that’s responsible for making decisions about whether an application should be allowed or blocked.
19:25 - If we reverse engineer the evaluate scan result method, we can see that it explicitly checks the evaluation type.
19:34 - And if the evaluation type is set to two, it does two things.
19:39 - First, it invokes a setAllowed method to set something to be allowed.
19:45 - This is a flag saying: yeah, this item is allowed.
19:48 - And then it returns, skipping all the logic that would present the prompt to the user and block the application.
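In rough Python, the decompiled logic of evaluateScanResult boils down to something like the following; the names mirror the disassembly, but this is an approximation reconstructed from the talk, not Apple’s code:

```python
# Sketch of syspolicyd's evaluateScanResult logic, reconstructed from
# the disassembly as described. Names and structure are approximations.
class ScanResult:
    def __init__(self, evaluation_type: int):
        self.evaluation_type = evaluation_type
        self.allowed = False
        self.would_prompt = False

def evaluate_scan_result(result: ScanResult) -> ScanResult:
    if result.evaluation_type == 2:
        # Type 2: mark as allowed and return early, skipping all of the
        # prompt/block logic below -- the heart of the bypass.
        result.allowed = True
        return result
    # Type 0 (and others): fall through to the logic that may show a
    # prompt and block the launch.
    result.would_prompt = True
    return result
```

With a type of 2, `allowed` ends up true and `would_prompt` false, matching the instance-variable values observed in the debugger.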
19:58 - Now, we can see this in the static analysis of the binary in the disassembly, but we can also confirm this in a debugger.
20:06 - So I was debugging the syspolicy daemon and set some breakpoints, and we can see that after this code is executed on our proof of concept application, which is allowed, we can print out the value of the allowed instance variable and see it’s set to true.
20:20 - We can also print out the value of the would prompt flag and see that it’s no, which means as we saw our application is allowed with no prompts.
20:30 - So we’ve confirmed that what we saw experimentally is realized in code, but I still wanted to know why was this evaluation of type two assigned to our proof of concept, clearly incorrectly.
20:44 - So where does it come from; what returns it? Well, if we look back in the code, still in syspolicyd, we see a method named “determineGatekeeperEvaluationTypeForTarget.” And there are various methods that are called upon the application bundle that’s about to be launched, for example, our proof of concept.
21:06 - So first there is a method, isUserApproved, and since we’re not yet approved, code execution enters this if statement, so we continue within it.
21:16 - There’s then another method that’s called, which is isScript.
21:21 - Since our proof of concept application is a script-based application, this method returns true, meaning we then go again into the next code block within that if statement.
21:34 - We then see two things happen. First, we see the r15 register set to 2, the evaluation type of two.
21:40 - Okay, cool. This is what we’re looking for.
21:42 - And then we see a third method call, to a method called “isBundled,” and if that returns false, it exits.
21:51 - Now, as you can see in the debugger output, this method returns NO, or false, for our proof of concept application, which means we’re going to jump to that leave label.
22:01 - If we look at what that leave label does, it simply moves the r15 register into the rax register and then returns that.
22:09 - So now we understand where that evaluation type two is being set, and it looks like it’s being returned because of our application not being classified as a bundle.
22:23 - Which is strange, but let’s look into that a little deeper.
22:27 - First, we take a peek at this isBundled method; all it does is return the isBundled instance variable flag.
22:34 - So that’s not really that helpful, but what we can do is we can look back in the code to figure out where this instance variable, where this flag is set.
22:44 - So we find that within a method named “evaluateCodeForUser,” and specifically what it does is call an unnamed subroutine, passing in the path of the application that’s about to be launched, for example, our proof of concept application.
22:58 - And then the return value from that unnamed subroutine is passed to the set isBundled method, which sets or updates the isBundled instance variable flag.
23:10 - So obviously, we’re interested in that unnamed subroutine because that is the one that is ultimately classifying the item as a bundle or not, which will then determine if the evaluation type is set to two or not.
23:22 - So it turns out this unnamed subroutine is fairly straightforward; as I mentioned, it’s attempting to determine whether something is a bundle or not.
23:33 - And as we can see on the slide, the way it does this is by looking for an Info.plist file.
23:41 - And as I’ve said about 20 times already, our proof of concept application does not have this Info.plist file.
23:48 - The application is still allowed to run, even though it doesn’t have this file.
23:53 - However, the classification logic here treats that file as indicative of a bundle, so if an item does not have an Info.plist file, it is not classified as a bundle, which we saw was problematic.
24:05 - We can confirm this in the debugger by stepping over this code and then looking at the value of the isBundled instance variable, the flag.
24:15 - And we can see that, in fact, it is set to NO, for false.
24:18 - So the system has basically said: you don’t have an Info.plist file, you are not a bundle.
24:24 - And this is, as I mentioned, problematic, because if you don’t have an Info.plist file and you have an executable that is a script, you will be classified as not a bundle, your evaluation type will be set to two, which will then, as we saw, skip all the logic that deals with prompting and blocking the application.
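Putting the whole classification chain together, a hedged Python approximation of determineGatekeeperEvaluationTypeForTarget might look like this; the method names come from the disassembly, but the structure is a sketch:

```python
# Sketch of the classification chain in syspolicyd's
# determineGatekeeperEvaluationTypeForTarget, as reconstructed in the
# talk. Python is an approximation; only the method names are real.
def determine_evaluation_type(user_approved: bool,
                              is_script: bool,
                              has_info_plist: bool) -> int:
    # isBundled is derived (incorrectly, for the PoC) from the mere
    # presence of an Info.plist file.
    is_bundled = has_info_plist
    if not user_approved and is_script and not is_bundled:
        return 2   # allowed, no prompt -- the flaw
    return 0       # normal path: prompt and/or block

# The bare-bones script-based PoC: a script with no Info.plist -> type 2.
print(determine_evaluation_type(False, True, False))  # 2
```

Add the Info.plist back, or swap the script for a Mach-O, and the function falls through to type 0, the prompt-and-block path.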
24:47 - So we have just neatly sidestepped, gatekeeper, notarization requirements and file quarantine.
24:54 - And that’s a pretty brief overview of reverse engineering syspolicyd; if you’re interested in more of the details of that reverse engineering effort, check out the detailed blog post that I linked on this slide.
25:09 - All right, so now we know the cause of the vulnerability.
25:12 - Just to reiterate: a script-based application with no Info.plist file will get misclassified as not being a bundle and will be allowed to run, bypassing all of Apple’s anti-infection mechanisms.
25:25 - Sweet! So next up was me thinking: Hey, is it possible that attackers have independently found the same vulnerability and are actively exploiting it in the wild? The search was pretty simple.
25:41 - Basically, we looked for an application that does not have an Info.plist file and whose executable component is a script.
25:50 - And I pinged my former colleagues at Jamf and asked them to poke around and see if they could uncover any applications that matched this search criteria.
26:00 - And they actually came back and said, Hey, we have an application that seems to match what you requested.
26:07 - So they kindly sent me the candidate application, an application named “1302.app.” As we can see on the slide, if we look at its application bundle contents, we can see that indeed it is missing an Info.plist file, and moreover, its executable component is a script.
26:28 - Moreover, it is also unsigned and un-notarized.
26:31 - So this seems to be a very promising candidate.
26:35 - I popped it into a virtual machine and executed it, even though it had the file quarantine bit set and gatekeeper, notarization, and everything else enabled, as they are by default on macOS.
26:49 - The application was allowed to run without any prompts.
26:54 - And as we can see in the output from the process monitor, not only was it allowed to execute, it was able to reach out and download and install its second stage payload, which installed a bunch of malware and adware on the infected machine.
27:09 - Yikes! I pinged Jamf, and we were able to uncover the initial infection vector.
27:17 - It turns out attackers had targeted popular Google search queries and poisoned the results, and had also infected sites that would show up in these results, to serve up malware.
27:29 - So for example, if you Googled “Alexa” and “Disney” and clicked on the second link, it would take you to a site that would serve up an application that exploited this vulnerability.
27:39 - Jamf published a lot more information on this, so if you’re interested, check out their post on that.
27:44 - But again, the takeaway here is that this application exploited the same flaw, so if the user clicked and launched it, none of Apple’s anti-infection mechanisms would even come into play.
27:59 - That sucks! So while awaiting a patch from Apple, I thought it’d be interesting to dig into methods of protecting Mac users.
28:11 - And my idea was pretty simple. First, the observation: none of the applications that are exploiting this vulnerability are gonna be notarized.
28:20 - So why don’t I simply detect and block the execution of any downloaded code that has not been notarized, again while waiting for an official patch from Apple?
28:35 - So I thought I could do this in, basically four steps.
28:38 - First, detect whenever a new process was launched.
28:42 - Secondly, once I detected this process launch, classify it as coming from the internet and being launched by the user; this was important because I wanted local items to be able to run, and also, if there was something already installed that was downloading updates, I didn’t wanna get in the way.
29:01 - So I basically said, I only wanna focus on applications that are from the internet that the user has launched.
29:07 - And then, because macOS has this flaw and we can’t rely on its anti-infection checks and its notarization logic, can we then explicitly check if that item is notarized, meaning it’s been scanned and approved by Apple, which this malware obviously won’t be?
29:23 - And if it’s not notarized, I’ll simply block it.
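Those steps reduce to a small decision function. In this sketch the predicates are placeholders standing in for the real Endpoint Security, translocation, and notarization checks described next:

```python
# Sketch of the blocking logic: given a process-launch event, allow it
# unless it is a user-launched, internet-sourced item that is not
# notarized. The fields are placeholders for the real Endpoint
# Security / SecTranslocate / Security.framework queries.
from dataclasses import dataclass

@dataclass
class LaunchEvent:
    path: str
    is_translocated: bool   # user-launched *and* from the internet
    is_notarized: bool

def should_block(event: LaunchEvent) -> bool:
    # Local items and already-installed software updating itself are
    # not translocated, so let them through.
    if not event.is_translocated:
        return False
    # From the internet and launched by the user: require notarization.
    return not event.is_notarized

print(should_block(LaunchEvent("/Volumes/dmg/1302.app", True, False)))  # True
```

The next few minutes of the talk fill in how each placeholder is actually computed on macOS.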
29:28 - It turns out, this was actually pretty easy to do.
29:30 - So first, we can leverage Apple’s endpoint security framework, the ESF and this is a really powerful user mode framework that allows us to register for operating system events, such as process launches.
29:46 - So here’s a snippet of the code on the slide.
29:49 - We can see we’re registering a new endpoint security client, and we’re telling it we’re interested in the AUTH EXEC event.
29:56 - The AUTH EXEC event tells the operating system: Hey, please invoke my callback anytime a process is about to be launched, and I will tell you whether it’s authorized or not.
30:07 - So it allows you to be the arbiter. I’ve blogged more about the endpoint security framework; I posted a link on the slide if you are interested.
30:17 - Okay, so now we have a callback that’s gonna be invoked by the operating system every time a new process is launched.
30:24 - So the first thing we want to do is check if this is an item, for example an application, that the user has launched from the internet.
30:33 - And there’s a variety of ways to do this, but the easiest way is simply to check its app translocation status.
30:39 - App translocation is another security mechanism built into macOS that was in direct response to research I published and presented at DEFCON 15, which involved dylib hijack attacks.
30:52 - The idea is that when the user downloads something from the internet and launches it, Apple takes the application bundle, copies it to a randomized read-only mount, and executes it from there.
31:05 - So no external libraries can be injected or hijacked into it.
31:11 - It’s a pretty good security mechanism. So what we can do is, when an application is launched, query and see: Hey, was it translocated? And if the answer is yes, we know, A, it’s from the internet, and B, it was launched by the user.
31:24 - Cool, which is exactly what you wanna know.
31:27 - Unfortunately, there are no public APIs to do this, but there’s a very powerful private API, SecTranslocateIsTranslocatedURL; you just invoke it with a path and it will give you a result: whether that item is translocated or not.
31:41 - So we can leverage that, it’s perfect for our needs.
31:44 - And then finally, we need to see if this user launched application from the internet is notarized or not.
31:51 - Apple provides a public API to do this, the “SecStaticCodeCheckValidity” API.
31:56 - You can invoke this API with a requirement, so what we do is initialize a notarization requirement and then make this API call, and it will set a flag indicating whether the item we are examining is notarized or not.
32:11 - So we put this all together, and I did, within an application I wrote called “BlockBlock”; it’s fully open-sourced and available on GitHub.
32:19 - We can now generically prevent the execution of applications, even ones exploiting this vulnerability as a zero-day.
32:28 - In the screenshot on the slide, we double-clicked this executable, this malicious 1302 application that was exploiting the vulnerability as a zero-day; the system intercepts the application launch because we registered with the endpoint security framework, and invokes our callback.
32:46 - We see this item has been app-translocated, because the user is launching it and it’s from the internet, and we can see that it’s not notarized.
32:54 - We then alert the user saying, Hey, just to let you know, blah, blah, blah, blah, essentially blocking the execution of the exploit.
33:04 - Great! This is great. So we have a good way to protect against this, again, while awaiting a patch from Cupertino.
33:12 - But I also wanted to figure out: was there an easy way for us to examine a system to ascertain, to determine, had it been infected or not; you know, answer the question, was I exploited? So I kept analyzing syspolicyd and looking at the log messages, and there was an interesting log message, which we can see on the slide, that basically says “updating flags,” and then has the path of the item that was analyzed and classified, and then a number.
33:42 - And so what syspolicyd does, we mentioned it’s the arbiter, it’s the one making the decisions about whether an application should be allowed or blocked.
33:52 - And then, apparently, it looks like it saves, or logs out, the results of this evaluation.
33:59 - So I then ran a file monitor, fs_usage, while executing the proof of concept and various other applications.
34:07 - And I could see that once syspolicyd had classified the application, either as “should be blocked,” or inadvertently “should be allowed,” or legitimately “should be allowed” if it was a legitimately notarized application, it would update an undocumented database called “ExecPolicy.” On the slide, there’s a screenshot of the database; we can see there is a volume UUID, something that says object ID, an FS name, and then also things like the flags, basically the result of the evaluation from syspolicyd.
34:44 - So all right, we have this undocumented database \ where syspolicyd is writing out the results of its evaluations for everything that the user launches.
34:53 - This is seems like a good path to go down. Unfortunately, and looking at the values in this table, there’s nothing that immediately pops-out that points to the path of the item that was run, which is ultimately what we want.
35:13 - We want to know, “Hey, was I exploited? And where’s the item that triggered it, where’s the malicious application?” Well, it turns out that object ID value in this undocumented ExecPolicy database is actually an inode, a file inode, and we can confirm that. We can see on the slide, we take the value of the one that starts with 23d, and if we execute the stat command on the proof of concept application that I downloaded, it matches.
35:41 - We can then also take that inode ID, query the database, and we can see that it does appear to be the same application.
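The same inode comparison that the stat command gives us can be sketched in a few lines of Python, since `os.stat` exposes the inode as `st_ino`, which is what the database’s object ID can be checked against:

```python
import os

def inode_of(path):
    """Return the file-system inode for a path (what `stat -f %i` prints)."""
    return os.stat(path).st_ino

def matches_object_id(path, object_id):
    """Check whether a database object_id appears to refer to this file."""
    return inode_of(path) == object_id
```

For example, `matches_object_id("/path/to/PoC.app", 0x23d...)` confirming a hit is exactly the manual slide comparison, just automated.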
35:54 - So what I then did is, I needed to figure out a way to parse all the rows in that database, in that specific table, and then for each of those, take the volume ID and the file inode, and then from that, get a full path to the item.
36:10 - And yes, you can do that via the stat command, but that’s really rather slow.
36:15 - Well, it turns out there is a Foundation API you can invoke, getResourceValue(forKey:), that, given a path that starts with /.vol followed by the volume ID and then the file inode, will actually return to you the canonical path of the item.
36:33 - So it’s kind of mapping of file inode to path, which is exactly what we want.
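The idea can be sketched as building a path into macOS’s virtual /.vol file system and then asking the OS to canonicalize it (for example via NSURL’s getResourceValue(forKey:) with the canonical-path key, through PyObjC). The helper below only builds the /.vol path string; the resolution step is macOS-specific and left as a comment:

```python
def vol_path(volume_id, inode):
    """Build a path into macOS's virtual /.vol file system, which
    addresses a file by volume ID and inode instead of by name.

    On macOS, wrapping this path in an NSURL and asking for its
    canonical path (Foundation's getResourceValue:forKey:, e.g. via
    PyObjC) yields the real, human-readable location of the file.
    """
    return f"/.vol/{volume_id}/{inode}"
```

So `vol_path("16777220", 12345)` gives `"/.vol/16777220/12345"`, the form of path the Foundation API expects.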
36:39 - So I implemented this in a basic Python script; the link to the Python script is on the slide. Pretty simple: it basically parses this undocumented ExecPolicy database, and for each item in that table, it first resolves the path from the file inode, and then it also checks if the item is an application that is missing an Info.plist file and whose executable is a script.
37:06 - So in other words, it’s basically just looking for these bare-bones script-based applications: no Info.plist file, executable is a script.
37:15 - And this is important because in this database, there’s a lot of other legitimate items, standalone scripts, legitimate applications that have been run, so it’s important to kind of filter out those results.
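That filtering heuristic might look something like the sketch below (an approximation of what the script checks, not its exact code): flag a bundle only if it has no Contents/Info.plist and its main executable begins with a `#!` shebang, i.e. is a script rather than a Mach-O binary.

```python
import os

def is_barebones_script_app(bundle_path):
    """Flag app bundles matching the exploit pattern: no
    Contents/Info.plist, and a main executable that is a script
    (its first two bytes are a '#!' shebang)."""
    contents = os.path.join(bundle_path, "Contents")
    # A present Info.plist means a normally-packaged app; skip it.
    if os.path.exists(os.path.join(contents, "Info.plist")):
        return False
    macos_dir = os.path.join(contents, "MacOS")
    if not os.path.isdir(macos_dir):
        return False
    # Any script in Contents/MacOS makes this bundle suspicious.
    for name in os.listdir(macos_dir):
        exe = os.path.join(macos_dir, name)
        if os.path.isfile(exe):
            with open(exe, "rb") as f:
                if f.read(2) == b"#!":
                    return True
    return False
```

Running a check like this over every resolved path from the database filters out standalone scripts and normal applications, leaving only the exploit-shaped bundles.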
37:26 - I ran this then on a system where I had run the malicious application and we can see that the Python script was able to identify and pull it up, so it’s kinda cool.
37:38 - Apple did finally release a patch. So let’s end by kind of looking at their patch, reverse engineering it to figure out how they ultimately fixed this flaw.
37:50 - So the patch was released in macOS 11.3, and it was assigned CVE-2021-30657.
37:59 - Again, giving credit to Cedric Owens for ultimately, uncovering and reporting this lovely vulnerability to Apple.
38:08 - So usually, when you look for what was changed in a patch, if you know the specific details about the vulnerability, it’s a lot simpler.
38:17 - So we don’t have to diff the entire patch, we know we can actually just diff the updated syspolicyd and moreover, since we identified the root cause of the vulnerability, we can assume that that’s where the patch details ultimately lie.
38:32 - So we can start there and confirm whether our assumptions are valid or correct.
38:37 - So recall the crux of the flaw was the misclassification of an application bundle, one that was missing an Info.plist, and this was realized in an unnamed subroutine in the syspolicyd daemon.
38:52 - So what I did was I diffed that subroutine between an unpatched system and a patched system, and as we can see on the slide, the unpatched code had 26 unique control-flow blocks, whereas almost 10 more had been added in the patched system.
39:10 - So this is a really good sign that the majority of the patch was within this subroutine, so we’ll start our reverse engineering there.
39:20 - If we analyze the updated syspolicyd, specifically this unnamed subroutine that implements the isBundle algorithm, we can see that the classification algorithm has been greatly improved and expanded.
39:32 - Specifically, there was addition of two new comprehensive checks.
39:36 - The first is checking if the item’s path extension is .app.
39:41 - This is important because if an item is not named .app, when the user double-clicks it, it’s likely not to be launched by Finder.
39:48 - So this is almost a prerequisite to get an application to be launched, so it makes sense to check for this.
39:55 - So if we look at the disassembly and also the pseudocode, we can see it’s basically just getting the path extension, and then checking if that path extension is “app,” for application.
40:06 - If it is, it then classifies that item as a bundle.
40:13 - Check two: it also checks if the item contains Contents/MacOS, so even if it doesn’t have that .app extension, it looks for this directory structure.
40:23 - Again, this is a very important check because this is required.
40:26 - An Info.plist file is apparently optional, but this directory structure is what defines an application bundle’s structure.
40:35 - So again, we can see the disassembly and then the decompilation at the bottom.
40:40 - It’s essentially building this path and checking if the bundle contains that.
40:45 - And if the item does contain that, it now says, yes, you are a bundle.
40:51 - So in summary, the patch added two checks. First, it checks if the item has an application (.app) file extension, and secondly, it also checks if it contains Contents/MacOS.
41:04 - And if either of these conditions are true, it says, yes, you are a bundle.
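Putting the two checks together, the patched classification logic can be approximated as follows (a sketch reconstructed from the reverse engineering above, not Apple’s actual code):

```python
import os

def is_bundle(path):
    """Approximation of the patched isBundle classification:
    an item is a bundle if it has a .app extension OR it contains
    the Contents/MacOS directory structure."""
    # Check one: does the path extension say it's an application?
    if os.path.splitext(path)[1].lower() == ".app":
        return True
    # Check two: does it have the defining bundle directory structure?
    return os.path.isdir(os.path.join(path, "Contents", "MacOS"))
```

Under this logic, the proof-of-concept bundle (no Info.plist, but named .app and containing Contents/MacOS) satisfies both checks and is classified as a bundle.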
41:09 - So with this new algorithm, with this improved isBundle check, if we run the proof of concept application, we can see that macOS now correctly classifies it as a bundle, which means its evaluation type will not be two, it will be zero, which then triggers the rest of the notarization, Gatekeeper, and file quarantine checks, which, unsurprisingly, now block the application because it’s from the internet, unsigned, and non-notarized.
41:43 - Let’s briefly wrap this up with some conclusions.
41:47 - First, a key takeaway, which I really want to reiterate.
41:51 - And that is hopefully, this illustrates that macOS still has a ton of shallow bugs.
41:57 - You know, we talked about the fact that there was a very popular developer packaging script on GitHub that would inadvertently trigger the flaw.
42:06 - So, you know, you didn’t have to do some crazy fuzzing or reverse engineering, if you actually just packaged up your application, your script with this, it would trigger this flaw and bypass all of Apple’s anti-infection mechanisms.
42:20 - And we see this time and time again, and really, to me, it illustrates that large components of macOS have never been audited.
42:28 - And there’s a lot of very low hanging fruit that still can be found and these vulnerabilities while shallow, are still very impactful.
42:37 - Being able to bypass all of Apple’s anti-infection mechanisms, that’s huge.
42:42 - And again, as a logic flaw, it works a hundred percent reliably.
42:47 - We also, in this talk, talked about the root cause analysis of the vulnerability, talked about how macOS does application classification, and how it did so incorrectly.
42:59 - We showed that unfortunately attackers were abusing this flaw as a zero-day in the wild, but luckily we were able to provide some protections and detection strategies while we were awaiting a patch from Apple, and by reverse engineering their patch, it does seem that they comprehensively addressed this flaw.
43:17 - I also hope this talk inspired you or gave you some ideas, some tools, some techniques for you to go out and do your own spelunking around the operating system, your own reverse engineering, malware analysis, or even security tool development.
43:30 - And if you’re interested in these topics, there’s some more resources I briefly wanted to share.
43:36 - As I mentioned, I’m the author of the Art of Mac Malware.
43:39 - It is a book on analyzing Mac malware; it’s free online.
43:43 - So if you wanna learn more about Mac malware and how to become a proficient Mac malware analyst, check it out; again, it’s free and open source.
43:51 - I also organize a Mac security conference that’s coming up at the end of September, a lot of really amazing speakers talking about Mac and iOS security topics.
44:01 - So if you’re interested, check that out. Finally, I wanna thank first and foremost, you, for attending my talk either virtually or in person.
44:11 - I just wanna thank the organizers of DEFCON for putting together this conference, especially in these trying times.
44:19 - And then finally, I wanna thank the companies who support my research and my tools, allowing me to release open source tools and share my research with the world.
44:29 - So again, thank you so much for attending my talk, stay safe and see you next time.