DEF CON 29 - Rion Carter - Why does my security camera scream like a Banshee?

Aug 5, 2021 17:39 · 6989 words · 33 minute read

- Hi, my name’s Rion Carter, and today I’ll be presenting on why my security camera screams like a banshee.

00:10 - The talk is on signal analysis and reverse engineering of an audio encoding protocol.

00:16 - A little bit about myself, I’m a software developer, a security engineer.

00:19 - I love to code, love to automate, love to solve problems.

00:24 - I like to employ the hacker mindset. Like to break things in cool and unexpected ways to learn more about the system and hopefully drive an improvement that makes it better for everybody.

00:35 - I love food. Love cooking, love baking.

00:39 - Recipe hacking is a passion of mine, and when I can get a delicious result, that really makes my day.

00:45 - And then of course the standard disclaimer applies here.

00:48 - All opinions are my own. Don’t reflect positions or thoughts of anybody else or any current or previous employers.

00:55 - So, let’s get to it. Got a few different sections to cover.

01:02 - We’re going to touch on what it is that we’re actually doing here.

01:05 - The signal analysis piece, application analysis, hacking the signal, and, if all goes well, we’ll get to a demo.

01:14 - So, what are we doing here? And why are we talking about wireless security cameras? So, my original goal, before I even had the idea to submit a DEF CON talk, was to use an inexpensive wireless camera to monitor my garden.

01:32 - And this is the inexpensive camera I selected.

01:36 - It’s got an antenna suitable for outdoor use.

01:39 - This one’s kind of interesting in that it has a microphone and a speaker, so you could have two-way communication if you wanted it.

01:47 - The nice thing about this is that it was cheap.

01:50 - And it seemed like it would do the job.

01:53 - This sounds fairly easy and straightforward, so what’s the catch here? I discovered after purchasing the camera, unboxing it and examining it, that it requires a cloud application in order to enable and pair the camera.

02:11 - There’s no way to set up the camera yourself. There’s no ad hoc wireless network.

02:17 - It doesn’t show up with a Bluetooth connection.

02:21 - When you plug in the USB cable, there are no signals there whatsoever.

02:25 - Also, there’s no documentation online about this camera to any real technical depth.

02:31 - Not that I was expecting much from a $30 camera.

02:34 - And then of course, what brings us here today is the bespoke protocol that the vendor application uses to communicate with and configure the wireless camera.

02:48 - So take a listen to this. This is what really piqued my interest and sent me down the path of trying to do a DEF CON presentation.

03:00 - (electronic noise) So that’s the sound that this vendor application makes to interface with the camera and configure it to connect to a wireless network.

03:24 - I have to say I was not expecting that. That’s not usually how you configure things like security cameras.

03:32 - So, my new goal after finding out that it uses a sound wave signal to configure the camera, was to find out what was going on in the camera set up, and see if I can’t hack on it and replicate it, and if possible, cast off the shackles of the proprietary cloud enabled app that the vendor supplies.

03:57 - So, let’s investigate. First thing you want to investigate is the hardware.

04:02 - And as I mentioned before, it does have a USB cable.

04:06 - This connector though only supplies power. When I trace the leads, there’s no activity on the data pins.

04:14 - Other investigative angles, of course, check for Bluetooth, check for ad hoc wifi.

04:21 - And unfortunately, after many hours of trying all sorts of different permutations of things, pressing the reset button, holding the reset button, scanning with wireless scanners, et cetera.

04:34 - Nothing was advertising. So that left me to investigate the software in a little bit more detail.

04:44 - This is the vendor application that comes with the camera.

04:47 - It’s called JAWA. And it’s used to configure the cloud camera.

04:54 - However, like I mentioned before, I wasn’t really a fan of having to use this proprietary cloud locked application.

05:01 - JAWA requires an internet connection. It also requires a username and password to be configured with this cloud set up.

05:11 - So that made me a little frustrated, and incentivized me to poke around some more.

05:18 - Now, in order to analyze the vendor application, I needed a test device.

05:25 - I didn’t want to run this on my primary phone just being a security paranoid person that I am.

05:30 - I don’t really trust applications that come from dubious sources, like the manufacturer of a $30 cloud enabled camera.

05:41 - And as I searched online for information either about the camera or the application, you’re probably not too surprised to hear that there weren’t very many, if any, results to be found.

05:55 - I did uncover a few other camera models that seemed to use the audio wave signal approach to configure the camera for a Wi-Fi network.

06:04 - I don’t have any of those though, and I list them here more as an interesting aside.

06:11 - There are some cheap cameras though which leverage, in my opinion, a far superior approach to pairing the camera to a wireless network.

06:19 - And that’s having the app show a QR code that you then scan with the camera.

06:26 - I mean the camera has, well a camera, and scanning a QR code is a fairly straightforward thing to do in 2021.

06:36 - So I doubt, or I should say, I wonder if there’ll be many, if any, more cameras out there, which leverage this audio coded approach.

06:47 - So, now that we’ve taken a quick pass at the hardware and the software, let’s think about this signal a little bit more.

06:55 - See what we can identify and figure out. And along the way, let’s consider what are some things that we can look for as we analyze the signal.

07:10 - Of course, the first thing is we’ll want to capture and visualize the signal.

07:14 - We’ll be looking for things like repetition, the variation in replay.

07:20 - And if possible, we’ll try to fuzz and simulate the signal in a way that can hopefully track with a valid encoding.

07:31 - This is the raw view of the signal as captured by Audacity, and visualized in the spectrographic view.

07:42 - Just taking a quick look at this. It’s pretty clear that there are distinct tones, and there appears to be steps.

07:51 - This isn’t a continuous wave form that gets transmitted.

07:55 - There’s individual tones, which are given certain slices of time, that are transmitted for a certain amount of time, and then other tones are played after that.

08:04 - Taking a look, it seems like a lot of the signals are centering at least here around 3,500 Hertz with a few outliers on the low end of the frequency range, and the high end as well.

08:18 - So that is something worth noting as we go about analyzing the signal.
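To put numbers behind the spectrographic view, the tone-by-tone analysis can be sketched in a few lines of Python. This is a minimal sketch using NumPy, assuming a mono capture sliced into roughly tone-length windows; the 50 ms window and 44.1 kHz rate are illustrative choices, not values from the vendor app.

```python
import numpy as np

def dominant_freq(samples, rate):
    """Return the strongest frequency (Hz) in one window of samples."""
    windowed = samples * np.hanning(len(samples))      # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

# Example: a synthetic 3,500 Hz tone, 50 ms at 44.1 kHz
rate = 44100
t = np.arange(int(0.05 * rate)) / rate
tone = np.sin(2 * np.pi * 3500 * t)
print(dominant_freq(tone, rate))  # close to 3500
```

Sliding such a window across the captured signal gives one frequency estimate per time slice, which is essentially an automated version of the labeling done by hand below.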

08:25 - Now, let’s see. One thing that I thought as I was looking at the signal is, is this similar to a modem signal? It’s been an awful long time since I’ve heard a modem, and obviously modems transmit information using an audio signal.

08:50 - So I did a quick comparison against a recording of 56K dial-up modem establishing a connection.

09:00 - And just by looking at these wave forms, it’s pretty apparent that it’s not a 56K modem.

09:09 - The spectrographs are substantially different.

09:12 - And this protocol, this audio protocol that they’re using to configure the camera it’s bespoke in the sense that you can’t find information about it easily, and it doesn’t track with other common audio protocols that you might think of like modem or fax.

09:30 - So, looking a little closer at this with our eyes.

09:36 - I marked out a few sections that appeared interesting, just really highlighting the signals that appear extraneous, or that don’t really track with what the rest of the signal offers.

09:50 - And on the left of this slide here, I put together, I guess what I’m calling a collapsed spectrograph view, where I basically took all of the tones and I slid them all over to my left, and just lined them up to see which tones and frequencies were represented.

10:10 - And you can see that it does center around 3,500 Hertz.

10:14 - There’s a small gap above 4,000 Hertz. And then there appeared to be some things at the higher end, at the higher register range.

10:24 - Now, a picture is nice, and it helps us understand maybe how the signal is structured.

10:31 - But a picture can only take us so far. We’d like to get more precise and better understand what is actually encoded in this signal, and kind of what the protocol is for actually encoding data into the signal.

10:49 - With a manual approach, we can keep using a tool such as Audacity or other audio editing tools that are out there.

10:57 - With Audacity, though, you can use this functionality called labeling.

11:02 - You position the cursor over each one of those sections where there appears to be a distinct tone.

11:10 - You press Control + B and it will cause Audacity to label that time slice, and mark the frequency that’s detected at that point in time.

11:21 - And so you can see just in the signal in this picture here, it might be a little smaller.

11:26 - A little hard to see, but I’ve got a bunch of labels on each one of these tones.

11:31 - This next view here is the Audacity view where you can view the labels that you’ve taken.

11:40 - You can go to edit labels, edit labels, and you can export them to a text file, which you could run through some other type of automated analysis, or plug it into a spreadsheet, or what have you.

11:53 - Let’s take a little closer look at this. And you can see that Audacity is mapping a low and a high frequency that it detects at that time slice.

12:06 - These frequencies are a little variable. So to me it looks like this puts us in the ballpark for what each of the target tones are.

12:17 - I don’t imagine the vendor application is really putting out 5,101.89 Hertz.

12:23 - It’s probably something a bit more round. We’ll figure out more about that as we go along in this process.
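If you export those labels to a text file, a small script can pull the frequencies back out. This sketch assumes Audacity’s tab-separated label format, where a spectral selection adds a continuation line beginning with a backslash that carries the low and high frequencies; check your own export against that layout before relying on it.

```python
def parse_labels(text):
    """Parse an Audacity label export into (start, end, low_hz, high_hz) tuples."""
    entries = []
    for line in text.splitlines():
        fields = line.split("\t")
        if fields[0] == "\\":  # frequency continuation line for the previous label
            low, high = float(fields[1]), float(fields[2])
            start, end = entries[-1][:2]
            entries[-1] = (start, end, low, high)
        else:
            entries.append((float(fields[0]), float(fields[1]), None, None))
    return entries

# A hypothetical two-line export entry: label span, then its frequency range
sample = "2.15\t2.20\ttone\n\\\t3437.5\t3562.5\n"
print(parse_labels(sample))  # [(2.15, 2.2, 3437.5, 3562.5)]
```

From there the tuples can be fed into a spreadsheet or further scripted analysis, as described next.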

12:33 - What do we know now from doing our quick manual analysis? We can see that there is encoding going on.

12:43 - There’s a digitized signal. But the signal isn’t binary, it’s not like it’s just two tones, one and zero.

12:50 - There’s a range of frequencies represented here.

12:54 - So there’s some type of digital encoding going on.

12:58 - The frequencies seem to be centered in the three to five kilohertz range.

13:02 - And my suspicion is that the signals that are outliers at the top or bottom are control signals, and that they warrant a closer look for investigating how the signal is put together.

13:15 - And we see that there’s repetition. I noticed that in my analysis of the vendor application and the pairing tones that it produces.

13:24 - The complete sequence repeats itself multiple times, at least three times.

13:29 - And then finally, we can see that this is not a 56K modem or fax signal.

13:35 - The spectral analyses just do not match. So at this point, we have to ask ourselves, is there really much further that we can go in manual mode? And the answer there is yes, but with a set of caveats.

13:52 - There’s variability whenever you play back the audio signal.

13:56 - I found that each time I played back even the same signal from the vendor application, the Audacity analysis would slightly vary in terms of which frequencies it shows when you do the labeling process.

14:12 - And of course, manually going through the process of playing a signal from an application, recording it into an audio editor, and doing that over and over again.

14:22 - It’s very time consuming. Since again, the app repeats the same signal multiple times.

14:27 - So even after you get a complete signal captured, you have to wait for the app to finish its full cycle before you can kick off another test permutation.

14:36 - And, just to be clear, the only options we have to configure in this vendor application are the SSID and the passphrase for the wireless network.

14:46 - So there’s not a whole lot of things that you can vary for the input.

14:51 - Then, one thing I noticed is that there’s no readily apparent API to leverage the frequency detection portion of Audacity.

15:01 - There’s no CLI option. There’s no readily available API option.

15:04 - And while I could have dug deeper into the Audacity code base to better understand how that’s put together and hook into it, that really wasn’t what I was trying to go for.

15:15 - That would be more of an aside as opposed to helping me on my main journey to reverse engineer and better understand this audio signal.

15:24 - So with manual mode, we can do black box signal reversing.

15:28 - We can try to brute force reproduce the tones.

15:31 - We can attempt to match generated tones with spectrographic views.

15:34 - And then of course, just fuzzing generating permutations until we find a match.

15:40 - This is a very tedious and a time consuming process though.

15:43 - So I was looking for a better way to leverage what I have and what I know in order to improve this process.

15:53 - So really the next step here is to do an analysis of the Android application, since the Android application is what generates the audio signals.

16:06 - And, let’s take a closer look at this vendor application.

16:11 - So. How do we go about analysis of a software artifact? We could do things like executing it and logging the results in a sandbox or a test environment.

16:26 - We can decompile the package. We can look for strings, anything that might relate to the audio or sound or SSIDs and passwords, things of that nature.

16:40 - We can do a key method search, since Android uses a higher level language, or at least I should say, this APK is written in a higher level language.

16:51 - And even though vendors can obfuscate their code, it’s a lot harder to obfuscate the underlying library functions that you use as a vendor.

17:00 - So you could do a search for Android system calls or Android libraries that provide methods that you might need when dealing with audio and audio encoding.

17:11 - Once we figured out these code paths, we can attempt to do high speed fuzzing.

17:15 - And then of course, if we identify something that has been obfuscated, we can try to go and deobfuscate it and attribute the classes, the methods, the properties, some other identifiers, which makes more sense to humans.

17:29 - And it helps us better reason about the code to really figure out how this all works.

17:35 - Now, let’s talk a little bit about preparation.

17:38 - You’ll need to prepare your computer to pull the APK off of your test device.

17:44 - If you’ve done any of this, if you’ve worked with Android before, you’re probably already familiar with this.

17:50 - And you need to make sure your developer mode is enabled, that you’ve allowed USB debugging.

17:55 - Make sure that you have Android Studio installed, and that your version of ADB is correctly placed in your path.

18:03 - So that way you can leverage it for the purposes of this.

18:08 - You’ll want to extract the Android package.

18:12 - And here I show a few commands that you can use if you want to follow along afterwards and try this.

18:19 - You’ll want to make sure that you take the output of each step and feed it into the next step.

18:24 - Since what I have here is really only applicable to a BlackBerry Priv, ‘cause this is the test device that I had lying around after all these years to do this analysis on.
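For following along, the extraction steps can be scripted. This is a sketch using Python’s subprocess module, assuming adb is on your PATH and USB debugging is authorized on the device; the package name shown is a placeholder, not the vendor’s actual package.

```python
import subprocess

def adb(*args):
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True,
                          text=True, check=True).stdout

def apk_path(pm_output):
    """Strip the 'package:' prefix that `pm path` prints before the APK path."""
    return pm_output.strip().removeprefix("package:")

def pull_apk(package, dest="vendor.apk"):
    """Locate an installed package on the device and copy its APK locally."""
    path = apk_path(adb("shell", "pm", "path", package))
    adb("pull", path, dest)
    return dest

# pull_apk("com.example.vendorapp")  # placeholder package name
print(apk_path("package:/data/app/base.apk\n"))  # /data/app/base.apk
```

The key chain is the same as the slide’s commands: `pm path` to find the on-device APK location, then `adb pull` to copy it off, feeding the output of each step into the next.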

18:38 - Once you have the APK, you can use a tool to decompile it.

18:42 - I leveraged jadx, you can go to the GitHub page, pull the latest release, and then it’s very simple to decompile the code.

18:50 - Just a quick one liner. You will probably note that it’ll show finished with errors.

18:59 - I found that the errors did not negatively impact my analysis of the package.

19:04 - And I was not impeded in my journey. Once you have the decompiled sources, you’ll want to open up a new Android Studio project.

19:14 - Open the decompiled sources from jadx, and then click a little button in the lower right-hand corner that says configure the Android framework.

19:26 - By configuring the Android framework, it enables you to do things like find usages and go to definition.

19:34 - Just all the goodness that you’d expect from a modern IDE.

19:40 - Once it’s loaded, you’ll see a bunch of classes on the side.

19:44 - The one that I have highlighted there is U.ALI, which is clearly obfuscated.

19:51 - As you drill into there, there’s a bunch of obfuscated classes and methods.

19:56 - Now, quick note on obfuscated code. What is obfuscation? Sometimes software makers want to hide their implementations.

20:05 - They want to impede you from figuring out how they work, and from reverse engineering it to better understand what the underlying mechanisms of its operation are.

20:18 - With higher level languages, you get terse, randomly generated identifiers.

20:23 - You might have a class named lowercase A. You might have a method named F999, or just whatever the case may be.

20:32 - It’s harder to obfuscate the use of system libraries in a higher level language.

20:37 - Since those decompile cleanly back to the base libraries.

20:44 - Why do we use Android Studio? Or I should say, what’s the advantage of using Android Studio in your manual deobfuscation process?

20:53 - It’s a very slick IDE, it’s free, it’s readily available.

20:57 - It receives a lot of support, a lot of people use it.

21:00 - And then of course you get all the classic IDE functionality like find usages, go to declarations, things like that.

21:07 - With Android in particular, you get a Logcat window, which lets you search.

21:14 - You can also target specific applications that are running on a phone to reduce the verbosity of the messages that you see, and better help you tailor your analysis.

21:25 - Let’s take a look at what we can do with this application.

21:31 - So live log analysis. This is one of the first things I try because being a developer myself, I know that oftentimes the debug logs will contain a wealth of information.

21:44 - And a regular user of the phone or of the application is not going to see the debug output.

21:52 - So if you’re rushing a release out the door, and you don’t disable your debug output, somebody like me is gonna come along and hook up the Android phone to LogCat and investigate for messages if we’re curious about what’s going on.

22:09 - Now, let’s take a look at what logs we get as we start this application.

22:16 - Here’s the login screen. Here’s a little capture from LogCat.

22:23 - And we can see that there’s some interesting information in there, there appears to be some kind of an encoded payload.

22:30 - There’s some interesting strings in there. And we appear to be getting both informational and debug output.

22:37 - So, there’s a URL there, ap.jawalife.net. Go JAWAs! And then as we kind of continue scrolling through the screen, there’s a lot more messages like this.

22:52 - When you try the camera pairing process, you have to enter in the SSID and the password.

23:00 - And at this stage, we see that there’s log output which logs the SSID, the password, and then what appears to be some kind of a randomly generated token.

23:12 - And in this log output, I know it’s really hard to see here, but there’s a class that we can start to investigate.

23:19 - And then there’s what appears to be an HTTP helper class, which is what helps us send and receive messages back from the cloud server.

23:29 - Let’s try to pair to a camera and see what we get.

23:33 - So there’s a button that says click to send the sound wave.

23:38 - I just love it, it makes me smile when I see that.

23:41 - And when we send the sound wave, we get some additional information.

23:45 - And it may not look like much, but there are a few strings here which can help in the analysis.

23:54 - Just to recap what we’ve found so far.

23:59 - We found distinctive characters. We found URLs.

24:02 - We found a class to investigate, this BindDeviceNewActivity.

24:06 - That sounds particularly fitting given that we are trying to enable and configure a new camera device.

24:14 - So, where does this lead us? We can continue our search by taking those strings that we found in the log outputs, and searching for them within Android Studio.

24:24 - And as I searched through the decompile output, I found a few things.

24:31 - It looks like the number one is used to delimit fields.

24:36 - The randomly generated code, they call it a SmartCode.

24:41 - Then, there’s a string “1” that’s appended at the end of this little message block.

24:50 - And even though Android Studio is calling this message DB notify reached, I kind of wonder if this isn’t a decompilation artifact of some kind, because it really is just the string of the character one.

25:05 - So, what is the SmartCode thing? I noticed that each time I tried to pair via the camera to the cloud app, this SmartCode would change.

25:15 - It would be different every time. And I could see by looking at this boot up code that, yes, every time you attempt to pair the camera, you get six characters of letters and numbers, and that constitutes the SmartCode.

25:33 - But the question still remains, like, what is this thing? And just after having gone through this entire analysis process, and seen it change with every single time that I attempt to pair, and noticing that whenever I paired the camera, a message was sent from the application up to the cloud server that included the random code.

25:56 - I can only presume that the backend cloud service uses this random code to tie this camera to my user account in the cloud.

26:08 - Since, how else is the camera going to identify that it belongs to my account? So that’s the best case that I have for what this code is used for.
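As a sketch of what such a pairing code generator might look like: the app’s actual alphabet and generation logic are not confirmed, only that the observed codes were six characters of letters and numbers.

```python
import secrets
import string

# Assumed alphabet; the vendor app's actual character set may differ.
ALPHABET = string.ascii_uppercase + string.digits

def make_smartcode(length=6):
    """Generate a random pairing code like the observed six-character SmartCode."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

code = make_smartcode()
print(code)
```

A fresh code per pairing attempt matches the observed behavior, and sending it to the cloud alongside the account session would let the backend tie the camera to the account, as presumed above.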

26:21 - As we continue looking through the strings, we can see other strings which guide us to processes, sorry, to functions, methods that warrant further investigation like run and play voice.

26:35 - Both of those sound, they sound good. Let’s take a closer look and do an extractive analysis.

26:45 - At this point, we’ve uncovered a lot of functions, a lot of methods, static constants in the code base.

26:58 - And we want to take the key sections out of the vendor application, put them in a clean project so that way we can perform an analysis.

27:08 - Just a couple of notes on setting up the clean application.

27:15 - If you’re looking at another application which, like this application here, leverages native libraries, you’ll need to manually create a jniLibs folder, put all those compiled libraries in the jniLibs directory, and then you’ll need to make the Java class match; the package structure has to be the same.

27:38 - So this thing is called com.ithink.voice in the vendor application.

27:44 - I can’t call it com.test.reverseengineer, I have to name the package structure the same.

27:51 - Because the way that JNI works, it requires those two things to match up.

27:57 - And once you have your sample test project set up, you’re able to perform black box analysis of the code that’s used to generate the signal.

28:08 - And along this way, one of the questions that I had was, well, what are the exact tones that are being generated by the application to pair and bind with the camera? Well, there’s a class called VCodeTable.

28:24 - And as I ran it in this extracted project, it produced a mapping of all of the tones.

28:32 - All the tones along with the characters that they map to.

28:38 - So this is what the characters map to. We have from zero to 4875 Hertz.

28:44 - And there are 16 states, so this is a hexadecimal-style encoding here.
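One plausible reading of a 16-state table spanning zero to 4,875 Hertz is an even spacing of 325 Hertz per hex digit. That spacing is an assumption for illustration; the real VCodeTable values come from running the extracted class.

```python
# Assumption: 16 evenly spaced tones spanning 0-4875 Hz (step = 4875 / 15 = 325 Hz).
# The real VCodeTable mapping may differ; this only illustrates the nibble-to-tone idea.
STEP_HZ = 325

def tone_for_nibble(nibble):
    """Map a hex digit (0-15) to its carrier frequency in Hz."""
    if not 0 <= nibble <= 15:
        raise ValueError("nibble out of range")
    return nibble * STEP_HZ

table = {format(n, "X"): tone_for_nibble(n) for n in range(16)}
print(table["0"], table["F"])  # 0 4875
```

Encoding a byte then just means playing the tone for its high nibble followed by the tone for its low nibble.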

28:54 - Now, looking at what else we found here, there’s a lot of findings.

28:58 - We know that the application uses Android’s AudioTrack to play the signal.

29:04 - We’ve identified how it creates the payload, as far as the SSID, the password, the random code, and then the delimiters between those fields.

29:14 - We’ve identified control tones, like a frequency begin and a frequency end, that are just static constants.

29:19 - There’s also a space tone, which is used when the same tone plays twice back to back.

29:26 - There’s a little space tone that pops in, and that’ll be better visualized in a later slide.

29:33 - There’s methods which play the characters, and there’s the use of CRC values to help the camera know if it’s received a complete signal or not.

29:42 - So, there’s been a wealth of information that we’ve uncovered through this process.
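Putting those findings together, the payload construction might be sketched like this. The field order, the CRC width, and the CRC variant here are assumptions for illustration; only the “1” delimiter, the hexified fields, and the presence of a CRC were actually observed.

```python
import binascii

def hexify(s):
    """Encode a string's bytes as uppercase hex digits (one tone per nibble)."""
    return s.encode("utf-8").hex().upper()

def build_payload(ssid, passphrase, smartcode):
    """Sketch of a '1'-delimited message body plus a trailing CRC.
    Field order and CRC variant are assumptions, not confirmed values."""
    body = "1".join([ssid, passphrase, smartcode]) + "1"
    crc = binascii.crc32(body.encode("utf-8")) & 0xFFFF  # placeholder CRC width
    return hexify(body) + format(crc, "04X")

payload = build_payload("MyWifi", "hunter2", "A1B2C3")
print(payload)
```

Each hex digit of the resulting string would then map to one tone in the transmitted sequence.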

29:47 - So, what do we know now? We can reconstruct all of section one and section two of the signal.

29:54 - ‘Cause each signal consists of three sections.

29:58 - And now that we can reconstruct section one and section two, really that just leaves section three.

30:06 - And I’ve highlighted in this image the part of the code which is, sorry, the part of the signal which is elusive at this stage in the analysis.

30:16 - This tone appears to be some type of error correction code.

30:20 - It doesn’t exactly track with the CRC process that the rest of the code base uses, though, which left me wondering.

30:27 - And since this is generated by code that’s in a native library, it means that I need binary analysis to dig deeper and try to figure out what’s going on here.

30:40 - My tool of choice is Ghidra. I don’t know how to pronounce that.

30:44 - It’s a free tool. It’s very capable.

30:47 - And it does the job here. So to get set up with Ghidra, you’ll want to visit their GitHub page.

30:54 - Pull the latest release for your platform, and then follow the installation guide.

30:59 - Once you have Ghidra installed, create a new project, fill out all the wizard boxes.

31:05 - I just took basically all the defaults and gave it a project name.

31:09 - Click the dragon icon. Import the native library that you want to analyze.

31:15 - In my case, I just went with the x86-64 library, since I am a little bit more comfortable with x86 than I am with ARM libraries at the moment.

31:26 - When you click the yes button, it’ll go through and it’ll do an analysis of this compiled library, which you can then navigate in the UI.

31:35 - So, reverse engineering with Ghidra. We need to know what we’re looking at here.

31:40 - So, you want to go to your Android Studio project, make sure that you identify which functions, which methods in the higher level language map to functions in the compiled library.

31:56 - Once you know that, you can look in the symbol tree, and you can see here that there’s a number of Java_com entries.

32:06 - So JNI interfaces here in this native library.

32:09 - The methods that we’re looking for are the getVoiseStruct functions that are listed towards the bottom of the screen.

32:17 - And here’s a closer view on what you would see in Ghidra as you do this analysis.

32:23 - So, now we just need to pick one of the functions and dig in.

32:28 - I focused on this intuitively named function called “getVoiseStructGoke2.” So I love the spelling of voice, and I don’t know what Goke2 means.

32:38 - This is the function though that generates the section two and section three output for the audio signal.

32:45 - One thing that I noticed as I was doing this analysis is that on the Java side, you pass in eight parameters to this native function.

32:56 - Yet on the compiled side, when we look at the function signature in Ghidra, there were 10 parameters here.

33:04 - So, it seems a little odd, but then doing a little bit of reading I found that JNI call in conventions add two parameters.

33:15 - There is a, yeah, let’s talk about the note on JNI.

33:20 - There’s a JNI environment pointer, and then there’s an object pointer.

33:24 - And these two parameters are front loaded to the function signature.

33:28 - So those first two are just the environment and the object.

33:33 - So this top picture is the raw decompiled view.

33:38 - Just with all the generated identifiers that don’t really make a lot of sense.

33:43 - The bottom picture shows it refactored in Ghidra, to indicate that the first two parameters are JNI related.

33:54 - Now let’s continue the analysis. Okay, so inside of Ghidra, there’s a function decompiler window.

34:01 - And the nice thing about Ghidra, it’s like most other IDEs that I’ve worked with.

34:07 - You can right click on an identifier, you can rename it, you can highlight it.

34:12 - You can do things that’ll help you analyze the flow of how a particular parameter is used and manipulated.

34:20 - So, this function, this getVoiseStructGoke2.

34:27 - It calls another function that leverages the inputs that are passed into this function.

34:34 - What I do when I do this type of analysis is for each screen that I’m on, I try to rename and refactor the parameters and the methods, the functions, to names that actually make some degree of human sense.

34:49 - So that’s what I’ll be doing here. This is the cleaned up view.

34:54 - And I know it’s small, but the picture shows that each of those parameters are named to reflect what value they represent from the Android side.

35:05 - And then, I go from there, I check the usages.

35:09 - Since this is decompiled, sometimes it doesn’t exactly make the most sense.

35:17 - Like I noticed that input parameters are copied to local variables, and then those local variables are then used elsewhere.

35:25 - So, in the analysis, just keep in mind what you’re looking at, track the flow through any type of intermediate steps that it goes through, to see where it winds up being manipulated.

35:38 - Now, this is the raw view of that nested function.

35:45 - Fortunately for me, and almost conveniently so for this demo, this is a very small function.

35:51 - There’s only about 58. Yeah, actually about 56 lines long.

35:57 - So it makes it pretty easy to analyze. Again, since the identifiers are all terse and auto-generated, I need to refactor those into something that I can use.

36:08 - So, start with what you know, find a good starting point.

36:12 - Even if you can’t get all the names to something human readable, just do what you know, and as you reason through the code, you’ll find that the rest of the pieces can fall into place sometimes if you enter what you know.

36:27 - As I went through this and did all of the renaming, I found that the critical section, the critical operation that I needed to apply in my reverse engineering project to replicate the signal three, it just came down to a shift.

36:44 - So, this is the line. It takes the CRC of the SSID, and then it shifts it to the right.

36:53 - So, that’s a very simple operation for me to perform in my replicated Android project.

37:00 - It is not something that I was able to figure out just by reasoning through the Java, or by passing in inputs to the library function and fuzzing the output.

37:10 - I think with enough time, I probably would have figured it out, but I get a little impatient, and when I can go explore a little deeper and more fully understand how something works, I’ll take that opportunity.

37:23 - So, a shift, that’s all I got to do to replicate section three.
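The replicated operation can be expressed in one line. The shift amount shown is illustrative; the actual value is whatever the decompiled native code uses.

```python
def section_three_crc(crc_ssid, shift=1):
    """Reproduce the native library's key operation: a right shift of the SSID CRC.
    The shift amount here is a placeholder; the decompiled code fixes the real one."""
    return crc_ssid >> shift

print(hex(section_three_crc(0xABCD)))  # 0x55e6
```

Once this value matches what the native library emits, section three can be built with the same tone-encoding path as sections one and two.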

37:27 - Now, let’s think about hacking the signal. How can we recreate this and manipulate it to serve our purposes? So, let’s look again at what we know.

37:37 - This is the spectrographic wave form of a complete pairing cycle.

37:45 - The wave form is comprised of three sections of hexified data.

37:50 - Each section is prefixed and suffixed by control codes and section identifiers.

37:55 - We know that when the same tone would be used twice in a row, a space tone is inserted in between them to help the camera better differentiate and identify distinct signals.

38:08 - The duration of each tone, as I measured it, is about 50 to 60 milliseconds.

38:12 - And we know the structure of each wave form section.

38:16 - Let’s look at section one. This one’s a long one.

38:19 - It’s got frequency begin tones. It’s got delimiters, the SSID, passphrase and random code digits.

38:28 - It has a CRC of a bunch of data put together.

38:32 - And then it’s got end tones. Section two is incredibly simple by comparison.

38:38 - All it’s about is the SmartCode, and just making sure that there’s a proper error correction on that randomly generated code.

38:48 - So that’s very terse, very short, very easy to reason through.

38:52 - Section three. Yeah, this one’s a little bit longer as well.

38:55 - We have some CRC codes in there. We have another kind of like mutilated version of the SmartCode.

39:04 - There’s a passphrase by itself, another CRC, and then this thing wraps up.

39:09 - So, we can reproduce the signal now. We know every aspect of every part of the signal, and we are able to recreate it as a result.
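The tone-sequencing rule described above, where each hex digit maps to a tone and a space tone separates repeated digits, can be sketched like this in Java. The base frequency, step size, and space-tone frequency are placeholder values, not the camera’s real ones.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: expand hexified payload data into a tone sequence, inserting a
// "space" tone between two identical consecutive digits so the receiver
// can tell them apart. Frequencies below are assumptions for illustration.
public class ToneSequencer {
    static final double BASE_HZ  = 1000.0; // assumed frequency for digit 0
    static final double STEP_HZ  = 100.0;  // assumed spacing between digits
    static final double SPACE_HZ = 900.0;  // assumed space-tone frequency

    static double toneFor(char hexDigit) {
        return BASE_HZ + STEP_HZ * Character.digit(hexDigit, 16);
    }

    // Each tone would be played for ~50-60 ms, per the signal analysis.
    static List<Double> sequence(String hex) {
        List<Double> tones = new ArrayList<>();
        char prev = 0;
        for (char c : hex.toCharArray()) {
            if (c == prev) tones.add(SPACE_HZ); // repeated digit: space tone
            tones.add(toneFor(c));
            prev = c;
        }
        return tones;
    }
}
```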

39:20 - So, that’s where the demo comes into play here.

39:24 - I created an application which can be used to pair this wireless camera to a wireless network without having to use the cloud application.

39:37 - This enables the camera to be further analyzed using more traditional network-style investigation techniques.

39:46 - So with that, let’s go ahead and let’s take a look at the demo.

40:03 - In this demo, we’ll be pairing the wireless camera with a wireless network that’s hosted on this laptop running hostapd, advertising a Defcon 29 SSID.
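For reference, a laptop access point like the one in the demo can be stood up with a minimal hostapd.conf along these lines. The interface name, channel, and passphrase here are placeholders, not the values used in the talk.

```
# Minimal hostapd.conf sketch for the demo access point
interface=wlan0
driver=nl80211
ssid=Defcon29
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
# Placeholder passphrase -- must match the one encoded into the pairing signal
wpa_passphrase=ChangeMe123
```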

40:21 - To do the pairing, we will leverage the application that I created as part of this reverse engineering process.

40:35 - Where I’ve configured the SSID and passphrase.

40:40 - Now, to get this camera to pair, we need to wait for it to get into setup mode.

40:48 - After I plug it in, we’ll want to wait for the flashing light.

40:53 - And at that point, the camera should be susceptible to our suggestion that it pair to a specific network.

41:01 - So, I’ll plug the camera into the power bank, and start it up.

41:11 - On boot the camera shows a solid green light to indicate that it has power.

41:19 - After it goes through its setup sequence, whatever that entails, I haven’t been able to really probe that.

41:26 - It’ll go into a flashing light mode where we can pass it along our message.

41:32 - So, let’s give this a try. (electronic noise) All right.

41:46 - With that tone, it should indicate that the camera has received our pairing message.

41:57 - And in the Wireshark capture, you will see that the camera is communicating with the network, and that it’s paired.

42:14 - So that is looking good. Let’s take another look at the pairing, this time from the screen recording that shows the Wireshark output of our packet capture.

42:34 - As the camera goes through its initialization sequence and receives our pairing code, it should show up requesting an address.

42:44 - Which in this case, I’ve targeted to be a specific one in advance.

42:50 - You can see here that it receives an IP address on the local demo network, and then it proceeds to call home and do the cloud configuration bit.

43:17 - We’re going to try connecting to the camera’s video now.

43:22 - One thing I do want to note about this camera is that the video connection can be a little bit iffy.

43:29 - It doesn’t always work and can require three, four, sometimes upwards of five different attempts to get the video signal to work.

43:41 - Here I’m showing an attempt to connect to the camera using VLC, and surprise surprise.

43:49 - It fires right up. So, go figure. Let’s go ahead and wrap this up now.

44:04 - There are a few limitations that are worth noting.

44:07 - It’s not easy to discover the device’s administrative password.

44:10 - It is six hexadecimal characters. And the password changes each time the camera is reset.

44:18 - It doesn’t seem to be tied to the MAC or serial number.

44:20 - So just kind of brute forcing your way through it might be one decent option.
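Since the password is six hexadecimal characters, the keyspace is 16^6 = 16,777,216 candidates, which is why brute force is plausible. A minimal Java sketch of that enumeration follows; `tryPassword` is a hypothetical stand-in for whatever authentication probe you would actually run against the camera.

```java
import java.util.function.Predicate;

// Sketch of brute-forcing a six-hex-character password.
// The probe itself (tryPassword) is hypothetical and supplied by the caller.
public class HexBrute {
    static final int KEYSPACE = 1 << 24; // 16^6 = 16,777,216 candidates

    // i-th candidate, zero-padded to six lowercase hex digits.
    static String candidate(int i) {
        return String.format("%06x", i);
    }

    // Walk the keyspace; return the first password the probe accepts.
    static String bruteForce(Predicate<String> tryPassword) {
        for (int i = 0; i < KEYSPACE; i++) {
            String pw = candidate(i);
            if (tryPassword.test(pw)) return pw;
        }
        return null; // exhausted without a hit
    }
}
```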

44:27 - The easiest option is just to have it pair once to the cloud and pull the password off of that.

44:35 - That is not the approach that I would prefer if at all possible though.

44:41 - So, it’s not possible, or not really very easy I should say, to decipher the camera-to-cloud communication, based on some of the code that I’ve seen in the application and what I’ve intercepted between the camera and the cloud servers.

44:59 - The camera has a local RSA key pair that changes on reset or potentially between each request.

45:06 - The payloads are encrypted and sent over to the server.

45:11 - So even though you can intercept the payloads by setting up a self-signed man-in-the-middle server, you can’t really make sense of what the payloads are saying.

45:22 - So, could be worth some additional investigation.

45:27 - You also get what you pay for, even if you know the password, it doesn’t always connect.

45:31 - VLC will sometimes connect and sometimes it will not.

45:35 - So, just keep that in mind if you want to economize and save a buck or two on a cheap wireless camera.

45:43 - So, thank you very much for attending my Defcon talk.

45:47 - It’s been a real pleasure to spend this time with you today.

45:50 - So thanks.