This Week in Security: iPhone Unpowered, Python Unsandboxed, and Wizard Spider Unmasked
As conspiracy theories go, one of the more plausible is that a cell phone could be running malicious firmware on its baseband processor, and be listening and transmitting data even when powered off. Nowadays, this sort of behavior is called a feature, at least if your phone is made by Apple, with their Find My functionality. Even with the phone off, the Bluetooth chip runs happily in a low-power state, making these features work. The problem is that this chip doesn’t do signed firmware. All it takes is root-level access to the phone’s primary OS to load a potentially malicious firmware image to the Bluetooth chip.
Researchers at TU Darmstadt in Germany demonstrated the approach, writing up a great paper on their work (PDF). There are a few really interesting possibilities this research suggests. The simplest is hijacking Apple’s Find My system to track someone with a powered-down phone. The greater danger is that this could be used to keep surveillance malware on a device even through power cycles. Devices tend to be secured reasonably well against attacks from the outside network, and hardly at all from attacks originating on the chips themselves. Unfortunately, since unsigned firmware is a hardware limitation, a security update can’t do much to mitigate this, other than the normal efforts to prevent attackers from compromising the OS.
Bluetooth Low Energy
It’s yet another Bluetooth related problem, this time concerning Bluetooth Low Energy (BLE) used as an authentication token. You’ve probably seen this idea in one form or another, like the Android option to remain unlocked whenever connected to your BLE earbuds. Various vehicles use it as well, unlocking once the appropriate phone is within BLE range.
It’s always been sort-of a bad idea to use BLE for this sort of authentication, because BLE is susceptible to in-flight relay attacks. One half of the attack is next to your phone, acting like the car’s BLE chip, and the other is next to the car, spoofing your phone. Connect the two spoofing devices, and the car thinks the authorized phone is right there. To make this “secure”, vendors have added encryption features, as well as signal timing analysis to try to catch spoofing.
The real innovation in the hack here is to use dedicated hardware that is sniffing and replaying at the link layer. This avoids the encryption problem, as the signal is just passed on unmolested. It also speeds up the process enough that latencies stay low enough even when relaying over the internet, hundreds of miles away. It’s likely that the next iteration of this technique could simply use Software Defined Radios to replay the signals at an even lower level. The solution is either to prompt the user for authorization before unlocking the vehicle, or to embed location information in the encrypted payload.
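The core of a relay attack is almost embarrassingly simple: the attacker never decrypts anything, just shuttles bytes between two radios. Here’s a minimal sketch of that idea in Python, using local sockets to stand in for the two radio endpoints — purely an illustration of the shape of the attack, not actual BLE tooling.

```python
import socket
import threading

def relay(a: socket.socket, b: socket.socket) -> None:
    """Blindly forward raw bytes from a to b, touching nothing."""
    while True:
        data = a.recv(4096)
        if not data:
            break
        b.sendall(data)  # ciphertext passes through unmolested

# Simulated endpoints: one attacker device sits near the phone,
# one near the car, bridged by any fast-enough network link.
phone, near_phone = socket.socketpair()
near_car, car = socket.socketpair()

threading.Thread(target=relay, args=(near_phone, near_car), daemon=True).start()

phone.sendall(b"\x17encrypted-unlock-token")  # opaque to the attacker
received = car.recv(4096)
print(received)
```

Since the relay never interprets the traffic, link-layer encryption does nothing to stop it — which is why the real fixes are timing bounds, user prompts, or location data inside the encrypted payload.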
Python Buffer Blown
This is one of those issues that isn’t a big deal, and yet could be a problem in certain situations. It all started in 2012, when it was observed that the Python memoryview object could crash a program when it pointed to a memory location that was no longer valid. A memoryview is essentially a pointer to the underlying C buffer, and doesn’t get quite the same automatic reference counting as a normal Python object. De-allocate the object the memoryview points at, then dereference this “pointer” for some C-style undefined behavior. (Here we don’t mean cursed code, but more garden-variety UB: dereferencing a pointer that’s no longer valid.) A bit of memory manipulation can pretty much control what the raw pointer value will be, and setting it to NULL predictably crashes the interpreter.
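For most objects, CPython’s buffer protocol guards against exactly this: while a memoryview holds an export on a buffer, the interpreter refuses to free or move it. A quick sketch of the safe path, for contrast with the buggy objects that slipped through:

```python
buf = bytearray(b"hello")
view = memoryview(buf)  # exports a raw pointer into buf's C storage

# While the export is live, CPython refuses to resize (and thus
# potentially reallocate and free) the underlying buffer.
blocked = False
try:
    buf += b" world"
except BufferError:
    blocked = True

view.release()    # drop the export...
buf += b" world"  # ...and now the resize goes through fine
print(blocked, buf)
```

The 2012 bug was a hole in this bookkeeping for certain objects, leaving the memoryview pointing at freed memory.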
This is actually a read and write primitive. Snoop around Python’s memory, find the ELF headers, and then figure out where glibc’s system function is sitting via the procedure linkage table. Find it, use the memory corruption bug to jump to the appropriate location in memory, and boom, you’ve popped a shell from Python!
The more astute among you are surely already thinking: gee, that’s a convoluted way to call os.system(). And yes, as an exploit, it’s quite unimpressive. [kn32], our tour guide into this quirk of Python, points out that it could be used to escape a Python sandbox, but that is a very niche use case. Even if we conclude that this isn’t really an exploit, it’s a great learning tool, and some fun hackery.
Wizard Spider Unmasked
What happens when a group of intelligent and highly motivated researchers, like the folks at PRODAFT, set their sights on a big ransomware gang? Well first, they have to come up with a catchy name. They decided to call this Conti-slinging malware gang Wizard Spider — getting some strong D&D vibes from that one.
The PDF report details the findings, and they are impressive. The investigation mapped out WS’s tools of choice, as well as some of their infrastructure, like the web of WireGuard tunnels they use to proxy their actions. Most interesting was the discovery of a backup server, believed to be in Russia, that also contained backups corresponding to REvil attacks. Theories abound as to what exactly that finding indicates. There’s another version of the report that was handed over to law enforcement, probably including more identifying information.
There are a few notable techniques discussed here, including a machine learning engine that looks at writing, and tries to determine the author’s native language. There are tells for this, such as leaving out articles like “the”, and using the wrong verb tense. Some odd-looking English phrases are literal word-for-word translations of common expressions in the native tongue. In a conclusion that surprised no one, PRODAFT determined that the official spokesperson of WS was a native Russian speaker. Hopefully the rest of the story behind the extraction of this trove of information can be shared. It promises to be quite the tale of hacking the hackers, and maybe some old-fashioned tradecraft, as well.
Revealing the Parallels Hack
During Pwn2Own 2021, [Jack Dates] of RET2 Systems managed to break the Parallels VM. To our delight, he has written up the exploit process for our education. A series of bugs in the guest additions code allows for a chain to escape the guest. The first bug used is an information leak, where 0x20 bytes are written to a 0x90-sized buffer, and then the whole buffer is exposed to the guest. That’s 0x70 bytes of host VM heap memory that can be read at a time, just enough to work out some base addresses.
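The shape of that leak is easy to simulate in pure Python — all names here are hypothetical, just to illustrate the bug pattern of partially initializing a reused buffer and handing the whole thing back:

```python
SECRET = b"stale-host-heap-data"  # pretend leftovers, e.g. a heap address

# A 0x90-byte "heap chunk" whose tail still holds data from a previous use.
heap_chunk = bytearray(b"\xaa" * 0x20) + bytearray(SECRET.ljust(0x70, b"\xaa"))

def buggy_handler(request: bytes) -> bytes:
    # Bug: only the first 0x20 bytes are (re)initialized from the request...
    heap_chunk[:0x20] = request[:0x20].ljust(0x20, b"\x00")
    # ...but all 0x90 bytes get copied back to the guest.
    return bytes(heap_chunk)

leak = buggy_handler(b"innocuous request")
stale = leak[0x20:]  # 0x70 bytes of whatever was left on the host heap
```

Repeat the request while nudging the host into leaving interesting pointers in that chunk, and those stale bytes become the base addresses needed for the rest of the chain.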
The next bug is a buffer overflow in the drag-and-drop handling code. The struct passed to the host contains a string intended to be null-terminated, and skipping the null allows for a buffer overflow onto the stack. This overflow can be used to break the exception handling of the guest addition code running on the host. A third bug, a so-called “megasmash”, doesn’t seem very useful at first, as it overflows an integer to trigger a massive buffer overflow. The problem with using this one is that it tries to write 0xffffffff bytes over the program’s memory. The chain does use it to modify a callback pointer to point at malicious code. However, some of that memory is guaranteed to be read-only, triggering an exception.
The key there is that the exception handling has been tampered with, so when the exception is triggered, the handling code immediately faults and hangs, preventing the normal program cleanup. Other threads can then hit the tampered-with function pointer, leading to code execution. The discovered bugs were all fixed late last year, and [Jack] made a nice $40,000 for the exploit chain. Enjoy!