Last week, the NY Times published an article - "It’s Possible to Hack a Phone With Sound Waves, Researchers Show" - that has since gained a lot of attention and been echoed across other media outlets. It’s not just about smartphones, as the title might suggest: any system incorporating a particular class of sensors (accelerometers, in this case) can, under certain conditions, allow a knowledgeable person to influence, control or disable the device that incorporates it. Now, think about this: countless accelerometers reside inside smartphones, automobiles, medical devices, anti-theft devices, drones, IoT devices, etc. Worried yet?
Phreaking: frequency hacking / phone hacking
When I read the NYT article and then the research paper these guys published, I almost immediately made the connection to one of the oldest hacking stories I ever read - the famous Cap’n Crunch hack.
The story goes like this: a blind boy, Joe Engressia, accidentally discovered that whistling a certain note imitated the internal-use 2600 Hz tone AT&T used to signal the end of a call. Being able to do this allowed him (and others, later) to avail themselves of (illegal) free long-distance calls. Why the Cap’n Crunch hack? Because the whistles offered in those cereal boxes could do the trick even more easily: sounding a long whistle would reset the line, and if you knew the proper groups of whistles you could dial numbers.
Thus, phreaking was born.
According to Wikipedia "phreaking is a slang term coined to describe the activity of a culture of people who study, experiment with, or explore, telecommunication systems, such as equipment and systems connected to public telephone networks. The term phreak is a sensational spelling of the word freak with the ph- from phone, and may also refer to the use of various audio frequencies to manipulate a phone system."
Very avant-garde and chic in its heyday, phreaking became obsolete with the advent of the Internet in the 90s and was reduced to a mere curiosity of hacking history; there was no benefit in it, and there were other, more interesting avenues to pursue. Obsolete, done.
Except, it turns out it isn’t.
Consider this list of documented information leakages (as compiled in Trippel et al. 2017):
- gyroscopes and accelerometers can leak personal information
- gyroscopes in smart-phones can be used as a microphone to eavesdrop on conversations
- smart-phone accelerometers leak enough information to infer keystrokes from a nearby keyboard
- smart-phone accelerometer information leakage can be leveraged to infer user touchscreen gestures and key presses, leaking passwords and the PIN code used to unlock the phone
- process variation in accelerometers yields a unique fingerprint that can uniquely identify a device
Let me ask again: worried yet?
The research on analog cybersecurity at the University of Michigan
At the beginning of this youtube video, Kevin Fu explains that his lab is interested in future threats to cybersecurity: autonomous vehicles, medical devices, IoT. We are used to thinking - perhaps too much - in terms of software vulnerabilities; however, hardware can also be open to exploits. While we have a plethora of tools to evaluate and counteract software threats, we are much less prepared when it comes to analog cybersecurity.
Demonstration of how you can fool a stationary Fitbit into recording any number of fake steps;
~ screenshot ~
The video then gives a short presentation of how the research group uncovered a range of vulnerabilities in the ubiquitous accelerometer. Devices that incorporate accelerometers can have their output intentionally biased, or even controlled, by a third party. According to their test data, based on 20 models of capacitive MEMS accelerometers from 5 different manufacturers, no fewer than 75% are vulnerable to output biasing, and 65% are vulnerable to output control!
Is it difficult? Is it expensive? Not at all: a $5 speaker in the right hands can inject false steps into a Fitbit, and a subtly altered music file played from a smartphone speaker can take control of the on-board MEMS (i.e. microelectromechanical systems) accelerometer of a toy radio-controlled car. The full technical analysis, along with an evaluation of potential software solutions, is discussed in their paper and on this dedicated webpage.
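To build some intuition for why this works, here is a toy simulation - my own sketch, not the researchers' code, with made-up frequencies for illustration - of the core trick: an acoustic tone near the sensing mass's resonant frequency rings the sensor at kilohertz rates, and when the device samples that ringing at its much lower output data rate, aliasing folds it down into a slow, plausible-looking "acceleration" signal that the attacker fully controls.

```python
import numpy as np

# Hypothetical numbers for illustration only.
f_attack = 2003.0   # Hz, injected tone near the sensor's resonant frequency
f_sample = 100.0    # Hz, accelerometer output data rate
duration = 2.0      # seconds of simulated output

t = np.arange(0, duration, 1.0 / f_sample)   # uniform sample instants
ringing = np.sin(2 * np.pi * f_attack * t)   # what the sampler actually sees

# At a 100 Hz rate, the 2003 Hz tone folds down to |2003 - 20*100| = 3 Hz,
# so the logged samples are indistinguishable from genuine 3 Hz motion.
f_alias = abs(f_attack - round(f_attack / f_sample) * f_sample)
expected = np.sin(2 * np.pi * f_alias * t)

print(f"alias frequency: {f_alias} Hz")
print("max deviation from a true 3 Hz wave:",
      np.max(np.abs(ringing - expected)))
```

The attacker never has to produce a low-frequency force at all; the sensor's own sampling does the translation for free.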
The app developers: where do they come in?
Obviously, sensors should be "hardened" by the manufacturers. But it would be folly to think that this can happen overnight. Until they figure out ways of limiting sensor exposure to acoustic interference (e.g. surrounding sensors with acoustic dampening foam, as the researchers suggested), developers need to start thinking a bit more paranoidly about input coming from sensors. In other words, stop assuming 100% trustworthiness.
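What does "stop assuming 100% trustworthiness" look like in practice? At minimum, treat sensor readings like any other untrusted input. Here is a minimal sketch (the function name and threshold are my own illustrative choices, not from the paper) that rejects physically implausible accelerometer samples before the app acts on them:

```python
# Illustrative bound: readings beyond this are implausible for this
# hypothetical device's use case and are treated as suspect.
MAX_PLAUSIBLE_G = 16.0

def sanitize_accel(samples):
    """Partition raw accelerometer readings into (clean, rejected).

    Rejects out-of-range values and NaNs instead of trusting the
    sensor blindly; a real app might also log or alert on rejects.
    """
    clean, rejected = [], []
    for g in samples:
        if g != g or abs(g) > MAX_PLAUSIBLE_G:  # g != g detects NaN
            rejected.append(g)
        else:
            clean.append(g)
    return clean, rejected

clean, rejected = sanitize_accel([0.1, 0.9, 250.0, float("nan"), -1.2])
print(clean)     # [0.1, 0.9, -1.2]
print(rejected)  # [250.0, nan]
```

Range checks alone will not stop a carefully shaped injection, of course, but they are the cheapest first layer, and they make the stronger defenses below easier to bolt on.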
A summary of the effectiveness of various defense mechanisms against acoustic attacks;
from Trippel et al (2017) paper
The approach suggested by the research team is to implement "data processing algorithms that attempt to reject abnormal acceleration signals, especially those with frequency components around the resonant frequency of the MEMS sensor". Randomized sampling is one software defense mechanism that could be implemented; another is 180° out-of-phase sampling. Both solutions assume the device has control over the sampling regime of its sensors, and both tested well in their research. For more information, consult the cited paper.
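The out-of-phase idea can be sketched numerically. In this simplified model (assumed frequencies, not the authors' implementation), each output sample is the average of two reads spaced half a resonant period apart: a tone at the resonant frequency flips sign between the two reads and cancels, while genuine motion, which barely changes over that quarter of a millisecond, passes through almost untouched.

```python
import numpy as np

f_res = 2003.0              # Hz, assumed sensor resonant frequency
half_period = 0.5 / f_res   # ~250 microseconds between the paired reads
f_out = 100.0               # Hz, output data rate

t = np.arange(0, 1.0, 1.0 / f_out)
true_signal = 0.5 * np.sin(2 * np.pi * 2.0 * t)   # genuine 2 Hz motion

def read(ts):
    """Model one ADC read: true motion plus an injected resonant tone."""
    return 0.5 * np.sin(2 * np.pi * 2.0 * ts) + np.sin(2 * np.pi * f_res * ts)

naive = read(t)                                   # single-read sampling
paired = 0.5 * (read(t) + read(t + half_period))  # out-of-phase average

print("naive sampling error: ", np.max(np.abs(naive - true_signal)))
print("paired sampling error:", np.max(np.abs(paired - true_signal)))
```

With naive sampling the injected tone aliases into a full-amplitude false signal, while the paired reads cancel it almost completely; the catch, as the authors note, is that this only works if the firmware can actually schedule reads at that resolution, and if the attacker cannot shift the tone away from the assumed resonant frequency.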
Conclusions: we need to add analog cybersecurity to digital cybersecurity
Phreaking, albeit in a different guise, is back. More and more systems will incorporate sensors and actuators that can be exploited. Any forward-thinking app developer needs to shift gears and start adding analog cybersecurity to digital cybersecurity. Acoustic injection attacks are merely one example of the many types of attacks that have already been documented, as we have already mentioned in our blog on the challenges posed by mobile device sensors.
As Pasteur famously quipped: luck favours the prepared mind. Unless your app developers are keeping in touch with these developments, you are unlucky.
You just don’t know it yet.