Hidden Hazards: The Computers Inside
This article is the first in a series on emerging privacy vulnerabilities that are often overlooked in planning for PII and PHI security.
More than three million people worldwide are estimated to have pacemakers, with about 600,000 more implanted annually. An estimated 300,000 people use insulin pumps worldwide. Implanted medical devices like these are improving and saving lives, but they also present a looming privacy risk. The medical industry as a whole is concerned about the potential for malicious tampering with the operation of these devices, with potentially life-threatening consequences. A secondary and more subtle danger is that many of these devices transmit personal health information that could be used to steal, exploit, or tamper with patients’ health records.
Hacked to Death?
Each implanted medical device contains a computer, in some cases running an industry-standard operating system, with wireless transmitters and all the features of any other computer. In a pacemaker, the computer controls delivery of the electrical pulses that help regulate heart rhythms; in an insulin pump, it controls delivery of insulin; and computers in other implantable devices may simply monitor vital signs and transmit reports that alert caregivers to developing issues before they become life-threatening. The problem is that any computer can be hacked, and recent experiments have proved that some of the computers inside medical devices can be hacked quickly and cheaply, with dire consequences. (As one expert drily observed, the term “blue screen of death” could take on a whole new meaning.)
In a much-reported live presentation at a security conference in 2011, security researcher Jerome Radcliffe hacked into his own insulin pump and was able to change doses and disable the device using nothing more than a $20 used radio frequency transmitter, an easy-to-obtain device serial number, and the small amount of research needed to crack the communication protocol of the pump. In another experiment, researchers were able to remotely tamper with the functioning of a pacemaker. As a result of these tests, the Government Accountability Office released a report in August 2012 identifying information security issues associated with medical devices and advising the Food and Drug Administration to accelerate efforts to address them. The GAO outlined a number of security risks, several of which also pertain to PHI vulnerability, including:
- Remote access capability
- Continuous use of wireless communication, creating a point of entry for unauthorized users
- Unencrypted data transfer
- Limited or nonexistent authentication and authorization mechanisms
- Inability to update or install security patches
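To make the authentication and encryption gaps above concrete, here is a minimal sketch, in Python, of the kind of safeguard the GAO found missing: an HMAC challenge-response check that a device controller could require before accepting a command over its wireless link. All names and the command format here are hypothetical, and a real implanted device operates under far tighter power and memory constraints than this illustration suggests.

```python
import hashlib
import hmac
import os

# Hypothetical shared secret; in a real device it would be
# provisioned at manufacture, not generated at runtime.
SHARED_KEY = os.urandom(32)

def issue_challenge() -> bytes:
    """Device side: a fresh random nonce per session, so old
    messages cannot simply be replayed."""
    return os.urandom(16)

def sign_command(key: bytes, challenge: bytes, command: bytes) -> bytes:
    """Programmer side: bind the command to this session's challenge."""
    return hmac.new(key, challenge + command, hashlib.sha256).digest()

def verify_command(key: bytes, challenge: bytes,
                   command: bytes, tag: bytes) -> bool:
    """Device side: recompute the tag and compare in constant time,
    rejecting forged or altered commands."""
    expected = hmac.new(key, challenge + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# A command signed with the right key verifies; a tampered one does not.
challenge = issue_challenge()
tag = sign_command(SHARED_KEY, challenge, b"SET_BASAL_RATE 0.8")
assert verify_command(SHARED_KEY, challenge, b"SET_BASAL_RATE 0.8", tag)
assert not verify_command(SHARED_KEY, challenge, b"SET_BASAL_RATE 9.9", tag)
```

Even a lightweight check like this would have blocked the serial-number-plus-radio attacks described below; the harder engineering problem, as the article notes later, is doing such cryptography within an implant's battery budget.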
While the FDA examines the immediate health risks of medical devices, healthcare insurers and providers also need to consider the privacy risks of devices capable of transmitting patient information. Experiments have shown that the communication range of some insulin pumps, for example, extends as far as 45 meters from the patient, giving potential data thieves plenty of room to maneuver undetected. Patients often download health information from their devices to unsecured personal computers and transmit the records to healthcare providers over unsecured public networks. These days, even the pedometers issued through employee health programs can upload data to personal computers to be shared with insurers.
It is up to the FDA and device manufacturers to address the security issues of medical devices. Some of the solutions are simple. Jerome Radcliffe, the researcher who hacked his own insulin pump, said the device manufacturer had not even implemented a simple password or other authentication scheme to prevent hacking. Some device manufacturers are already using authentication or proprietary communication mechanisms to protect their devices from tampering. There are technical challenges—for example, the computing power needed to encrypt data transmissions could shorten a device’s battery life—but solutions will be found because otherwise, as a recent headline in Wired magazine said, the world’s health data is just patiently awaiting “the inevitable hack.”
Healthcare providers cannot control how medical devices are built, but they can take steps to limit the PHI security risks of implantable devices and the information they transmit. First, they can do their homework and choose the most secure devices available. No incident has yet tested a device manufacturer's potential obligations under HIPAA/HITECH regulations, which could leave the provider responsible for privacy breaches caused by a device, and no one wants to be the test case in that scenario. Second, providers can include implantable and other special-purpose medical devices in their ongoing risk assessments, evaluating the whole communication path between the device, the patient's environment and personal computer, public networks, and internal networks and systems. Proactive assessment can help providers minimize risks to their patients and their business, and help ensure that they are in compliance should "the inevitable" happen.
It’s a brave new world, and not just in healthcare. Microcomputers are being used to monitor, control, and report on everything from the operation of manufacturing plants and data centers to the use of electricity in private homes and gasoline in private cars. An onboard computer in a car can tell insurers about the driving habits of their customers or can dial emergency services in case of an accident. All of these devices hold the promise of a healthier, more efficient world, but unless we extend our privacy practices from the data center and the desktop to the unseen computers inside our homes, our vehicles, and our bodies, it will not necessarily be a safer world.