Financial Institutions Win the Data Breach Game with RADAR 4.0

by Mahmood Sher-jan

The great football coach Vince Lombardi said, “You don’t do things right once in a while…you do them right all the time.”

RADAR is an enterprise software solution for managing security and privacy incident response. It provides data breach guidance and operational workflow to simplify compliance and reduce the risk of future breaches.

RADAR™ 4.0 is incident response management software (IRMS) that helps financial institutions and businesses that hold sensitive customer information do the right thing “all the time” when it comes to mitigating risks and managing security and privacy incidents involving customer data.

Consistency and efficiency are critical for financial companies that handle millions of transactions daily using customers’ sensitive information. Trying to keep that information private while complying with myriad federal and ever-changing state regulations puts financial companies at increased risk for fines, lawsuits, lost business and reputation damage.

RADAR 4.0, with its Breach Guidance Engine™, reduces those risks and takes the complexity out of incident assessment and compliance with federal and all state breach notification rules.  It enables these organizations to become proactive in managing and reducing data breach risks. For example, CNO Financial Group, Phytel, and Seattle Children’s Hospital rely on RADAR to simplify and automate their incident response management process.

An innovative coach knows how to design and call winning plays. Likewise, the RADAR team has been granted two patents by the U.S. Patent and Trademark Office. The first covers RADAR’s process of assessing security incidents involving sensitive data under both federal and state data breach regulations and automatically generating guidance. The second covers RADAR’s analytical modeling for optimizing incident response management.

RADAR 4.0 has made great strides to simplify the complexities of incident response and reduce risks for organizations in financial services, healthcare, and insurance—those responsible for maintaining and securing sensitive customer and regulated data. That’s a win for everybody.

About the Author

Mahmood Sher-jan

Mahmood is the lead inventor of ID Experts RADAR, award-winning, patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

Hey, You on the Phone!

by Heather Noonan

How decisions you make today will impact your world tomorrow.

Have you noticed we don’t go anywhere without our phones? We rarely go to bed without them; we walk, run, and drive everywhere with them. A phone seems to be a third limb for some people. This phenomenon struck me about a year ago when I went outside and noticed that every person had a phone attached to them. I realized we can’t go outside, or go anywhere for that matter, without our phones, for fear of missing something important.

Take a look around while driving. At the next traffic light, you will notice that every other person has their head down, checking emails, texts, phone calls, ESPN, MSN, downloading apps, you name it. The real shocker came a couple of months back when my husband and I went out for the evening. We walked into a restaurant to see five people at a table, all on their phones. Not one person was speaking to or looking at the others. What? Is this really the world we live in? And is this world connecting us a little too much?

I stopped and thought about all of the pictures and messages we post online, each and every day. This can’t be a good thing. I’m nervous that in 10 years, our privacy, security, and livelihood will be completely different than it is today. Better make it five years.

Look at the things your family, your kids, and your neighbors post online, whether on Instagram, LinkedIn, Tumblr, or Facebook. I can quickly tell when my neighbors are out of town by checking Facebook. On the flip side, I can also tell you when and where my friends had their children. These bits of information used to be private, personal information. Remember the security questions “Where were you born?” and “What is your birthday?” If I’m a hacker, BINGO! All of that “private” information is now public.

Tell me this: if I were a hacker, what would stop me from finding out when someone is or isn’t home and breaking into their house? Or from learning when a high-powered executive is meeting off-site or attending a conference, so that I could access her or his office and information? It doesn’t take much to know where a person is on a daily and weekly basis.

Now think about how we program our heat and electricity just from a simple app or how we pay our bills from a touch of a button. It wouldn’t take much for a hacker to log into your phone, obtain your banking information, your address, learn when you are home, know who your friends and family are, and ultimately, take over your life. Extreme? I don’t think so.

Through forensics, articles, and numerous reports, I’ve been watching hackers become more and more savvy, and it’s only a matter of time before they hack into large social networks and begin to “control” people’s lives.

With all that said, be careful what you post and what personal information you share with the world, such as your children’s names and dates of birth or the travel itinerary that details your week in Hawaii. You may say, “Oh, it’s just my friends,” but really, it’s large corporations, advertising firms, and the internet. That’s a lot of “friends.”

About the Author

Heather Noonan

Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices, and is the primary point of contact for client communications and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. She holds a Bachelor of Science specializing in Business Communication and has over 15 years of experience in client customer service, including 10 years in project management.

Snail-Slow Security Implementation Despite Reports of Increased Risks

by Mahmood Sher-jan

If you knew thieves would break into your house, you would take immediate steps to secure it, right? That’s not the case for organizations facing security risks, according to the Ponemon Institute’s recent report, The State of Data Centric Security, which was covered in an SC Magazine article.

Even though 72 percent of companies had a data breach in the last year, “only 51 percent of 1,587 IT executives surveyed…gave high priority status to securing confidential data,” according to the article. Even worse, 58 percent said the breaches their companies had were avoidable.

“Most respondents recognize the very significant business risk facing their organizations as a result of insecure data assets,” Larry Ponemon told SC Magazine. “Despite this recognition, many respondents acknowledge they do not have the people, process and technology to curtail this serious risk.”

In a recent ID Experts’ Data Breach Examiner article, James Christiansen, vice president of information risk management at Accuvant, puts it this way: “There are over 700 security technologies out there, and there are millions of potential threat actors around the world. Now it's not a question of if but when and where we will be breached. For the CISO, it's a shift in focus from securing the data to managing the inevitable risk to information, lots of which is outside the organization and going to customers, third parties, and others.”

Ponemon recommends a “data-centric approach to security” that creates “a holistic framework that helps organizations cope with massive increases in both structured and unstructured data.” Between Ponemon and Christiansen, recommendations include:

  1. Organizations should “determine the location of information assets and the control practices that exist to protect it.”
  2. Set up a governance process that ranks data according to “its importance or risk to the company.” This process then applies rules and policies regarding the use and dissemination of the data.
  3. “To be effective, risk management needs to become core DNA for the organization. You have to move past just being compliant to managing the unique threats facing your organization.”
  4. “Invest in technologies that help IT and IT security practitioners to gain visibility over the information lifecycle (i.e., creation, collection, use, sharing and retention of information assets) and associated regulatory implications in the event of a security incident or data breach."
  5. “Establish metrics for success to ensure that the above steps are reducing the risk of data loss or theft.”
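Recommendation 2 above (ranking data by importance or risk, then applying rules to its use) can be sketched in code. The asset names, scoring scale, and policy tiers below are illustrative assumptions, not anything prescribed by Ponemon or Christiansen; real programs would use a richer classification model:

```python
# Hypothetical sketch of a data-governance ranking step: score each data
# asset by sensitivity and exposure, then derive a handling policy.
# All names and thresholds here are illustrative assumptions.

ASSETS = [
    {"name": "customer_pii",   "sensitivity": 3, "exposure": 3},
    {"name": "payroll_w2",     "sensitivity": 3, "exposure": 2},
    {"name": "marketing_copy", "sensitivity": 1, "exposure": 1},
]

def risk_score(asset):
    """Simple multiplicative risk score: how sensitive x how exposed."""
    return asset["sensitivity"] * asset["exposure"]

def policy(asset):
    """Map a risk score to an illustrative handling policy tier."""
    score = risk_score(asset)
    if score >= 6:
        return "encrypt at rest, restrict access, log all use"
    if score >= 3:
        return "restrict access, periodic review"
    return "standard handling"

# Rank assets so remediation spending follows risk, per the recommendations.
ranked = sorted(ASSETS, key=risk_score, reverse=True)
```

The point of the sketch is the ordering step: once assets are scored consistently, the governance process and the "metrics for success" in recommendation 5 both have something concrete to measure against.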

It is important to use technologies and best practices for preventing and detecting security incidents. However, no amount of preventive effort is foolproof, so you need to respond effectively to incidents that escape your preventive controls. ID Experts RADAR is patented incident management software that helps organizations comply with regulatory obligations in the event of a security incident. It also provides insights and metrics for measuring the effectiveness of security controls. For instance, RADAR can track the root causes of security incidents, so organizations can take appropriate steps to address those risks and gauge whether those security measures were effective.

If you don’t lock the doors and turn on the security system, you may as well be inviting thieves to steal the silver. Similarly, companies must actually take steps to protect “business-critical information assets,” according to SC Magazine, or they could pay dearly “in terms of customer churn, diminished reputation and legal actions,” Ponemon says.


Managing Risks of Any Size, For Entities of All Sizes: 2014 Cyber Liability Forum in Review

by Jeremy Henley

As usual, it was great to meet up in Philadelphia for this annual conference, to see old friends and make many new ones. As a long-time attendee and speaker, I can confirm that it is a good time to catch up on the latest and greatest in the cyber insurance space.

Catching up on the latest and greatest from year to year is always interesting, since some folks come back bigger and better and others do not come back at all. It is a litmus test for those who will make it in the data breach risk prevention, response, and insurance coverage space.

I was given the opportunity again this year to participate in an interesting panel-style discussion on smaller entities, how they handle data breaches, and the insurance available to those entities.  There was a lot of great discussion around how some of these smaller organizations do not anticipate their data breach risks, like their larger counterparts.

Top Issues for Small Businesses

One of the top issues for small businesses is not having policies and procedures in place for privacy and security. Another hot topic was how these smaller entities often do not carefully evaluate their contractual relationships. This can lead to exposures that put them at risk, often going unnoticed until it is too late and they need to scramble.

During the session, one of the audience members asked about the underwriting process for small entities interested in binding cyber or data breach insurance, and whether the panelists thought it was thorough enough. The response from one of the underwriters seemed to surprise the audience: he explained that most insurers are taking on these risks based on very limited knowledge of the insured.

He said that they rely mostly on revenue size, industry class, and really just a best guess at the total number of records exposed in a total loss. Many insurers have a reactive approach to underwriting the risk, for example avoiding retailers for a while after a large retail breach such as Target’s. Another panelist suggested just binding the policies and dealing with the risk later, because it is a mad dash to get these businesses in the door, and insurers that are too selective will be left behind.

What Can Be Done to Reduce Risks?

Naturally, the next question from the audience centered on what could be done to limit these risks after binding. Since the panel was educational and not sales-focused, I couldn’t discuss all the ways ID Experts can help manage these risks efficiently, but I did explain my belief that one key to limiting the risk of a data breach is for the insurance industry to move past risk management services that are designed to market and sell an insurance policy, only to be put on a shelf and rarely used once the selling is over. As penetration levels continue to increase, awareness of what a breach incident looks like grows, and regulators continue to increase the frequency of their fines, I think we will see a significant rise in claims made to insurers.

Most insurers, though not all, are very happy with their current loss ratio, which translates to a profitable book of business. Once the tide shifts, which everyone agrees is coming, how well an insurer can manage the risk in its book will be critical to its overall performance and profitability. Making the claims management process as efficient as possible will be important as well, so stay tuned for my next post on that topic.

Managing Risks of Every Size For Small or Large Entities

How does ID Experts manage an entity’s risk, regardless of size? Since I am not on stage, I can share that the first step is solved with the ID Experts Virtual Privacy Expert™ (VPE) service, designed to be built into an insurance policy for the benefit of the insured. We provide valuable proprietary tools to help insureds reduce their risk of a data breach, and we can provide valuable underwriting information to the insurance community as well, to help them with the future of their books.

About the Author

Jeremy Henley

Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for Healthcare Privacy and Compliance and brings 11 years of sales and leadership experience to the ID Experts team.

Florida in the Forefront: How Florida’s Data Breach Law is Paving the Way for Change

by Heather Noonan

To be honest, I am impressed with Florida’s new Information Protection Act of 2014, also referred to as FIPA. FIPA went into effect July 1, 2014, and starts a new wave of positive change for Florida.

Many United States data breach laws are similar in nature, but Florida has reduced the notification timeframe to 30 days, expanded the definition of personal information, and stipulated only 10 days for a third party to notify a covered entity.

Florida, the Sunshine State, has also been called the “Identity Theft Capital” of the U.S., a title I don’t think Florida is proud of, and it looks like the state is eager to do something about it.

With FIPA now in effect, Florida has repealed its existing breach notification law and replaced it with many new changes.

Here is a quick summary of the changes:

  • Personal information now includes medical information, health insurance information, user names, and e-mail addresses
  • The notification period is reduced from 45 days to no later than 30 days after the determination of a breach, or reason to believe a breach occurred, unless the breach qualifies for an exception
  • Exceptions to notification include circumstances where information was released during an ongoing criminal investigation, or where the covered entity determines, after consultation with law enforcement, that the breach has not and will not likely result in identity theft or other financial harm. The latter exception must be documented in writing and maintained for 5 years.
  • The Department of Legal Affairs must be notified of any breach affecting more than 500 people, no later than 30 days after determination
  • Guidelines specify what must be included in the notification to the Department of Legal Affairs
  • Guidelines specify what must be included in the individual notification letter
  • A third party (such as a business associate or vendor) must notify the covered entity no later than 10 days after discovery
  • If a third party notifies individuals on a covered entity’s behalf and fails to provide proper notice, the covered entity is deemed to have violated the law
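The deadlines and thresholds above lend themselves to simple date arithmetic. As a minimal sketch (my own illustration, not FIPA's statutory text or any compliance tool), the key timelines might be encoded like this:

```python
from datetime import date, timedelta

# Illustrative sketch of FIPA's timelines as described in this article.
# The constants mirror the bullet list; function names are my own.

INDIVIDUAL_NOTICE_DAYS = 30   # covered entity -> affected individuals
THIRD_PARTY_NOTICE_DAYS = 10  # vendor/business associate -> covered entity
REGULATOR_THRESHOLD = 500     # notify Dept. of Legal Affairs above this count
REGULATOR_NOTICE_DAYS = 30

def individual_notice_deadline(determination: date) -> date:
    """Latest date to notify affected individuals after breach determination."""
    return determination + timedelta(days=INDIVIDUAL_NOTICE_DAYS)

def third_party_notice_deadline(discovery: date) -> date:
    """Latest date for a third party to notify the covered entity."""
    return discovery + timedelta(days=THIRD_PARTY_NOTICE_DAYS)

def regulator_notice_required(affected: int) -> bool:
    """Department of Legal Affairs must be notified for breaches over 500 people."""
    return affected > REGULATOR_THRESHOLD
```

For example, a breach determined on July 1 would require individual notice by July 31 and, if a vendor discovered it, notice to the covered entity by July 11. Actual compliance decisions, of course, turn on the statute's exceptions and counsel's review, not a calendar helper.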

More power to you, Florida! I think more and more states will follow your path for change.

Florida Information Protection Act of 2014 - http://www.flsenate.gov/Session/Bill/2014/1524/BillText/er/HTML


Florida Repeals Old Law & Passes New Data Breach Law

by Mahmood Sher-jan

States typically amend their breach notification laws, but Florida repealed its law and passed a more comprehensive one. With the newly signed Florida Information Protection Act of 2014, which will take effect on July 1, 2014, Florida joins the ranks of states that require businesses to safeguard individuals’ health information. The law extends its definition of personal information (PI) to any information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional, as well as health insurance policy numbers, subscriber identification numbers, and any unique identifier used by a health insurer to identify the individual. Consistent with most states, the new law provides for a “harm” test to determine whether an incident is a data breach, based on an appropriate investigation and consultation with relevant federal, state, and local agencies responsible for law enforcement. Under the new law, the impacted business must document the outcome of its investigation and maintain that documentation for 5 years.

The key issue for any entity doing business in Florida is to know the new law’s notification timelines, thresholds, and content requirements for individuals, the Department of Legal Affairs, and credit agencies. Government agencies and third-party service providers also have notification obligations under the new law. How do you know if your current practices are in compliance?

The newest version of ID Experts’ patented incident management software, RADAR 4.0, has built-in knowledge of this latest Florida law as well as all other state data breach notification laws, including the recently amended Iowa and Maryland laws and Kentucky’s soon-to-be-effective new breach law.

Keeping up with changing federal and state regulations is a real challenge for organizations of any size and industry that manage personal information and protected health information. RADAR ensures that you are current with the latest breach laws, and it takes the complexity out of incident risk assessment and the unique compliance requirements mandated in these laws. With its “baked-in” regulatory knowledge, RADAR 4.0 is the only software solution that effectively simplifies the risk assessment and management of security incidents involving regulated data.


How “Near Misses” Can Inform your Security Strategy and Reduce Data Breach Risk

by Doug Pollack

Organizations across industries are facing increased public attention and regulatory scrutiny in light of high-profile data breach incidents. While the Targets of the world get more publicity than they’d ever hoped for, what lies beneath all of this is that for every very public Target-style security incident, there are hundreds, if not thousands, of security incidents involving regulated data (specifically personal information) that are “near misses”: the ones that happened but, due to good fortune or effective efforts, did not result in an incident categorized as a “data breach,” which would then require a public disclosure for all to see.

Learn More: RADAR: Privacy & Security Incident Management

In order to manage the risks of data breach, organizations typically turn to two people: their chief information security officer (CISO) and their chief privacy officer (CPO). While typically in different functional organizations, they operate almost as a team in the assessment and management of data security incidents, and in many organizations there may be many dozens of these every month. These two individuals typically bring together both technical and legal expertise, since determining whether a “security incident” is a “data breach” has both technical and legal aspects that must be considered.

While it is the (hopefully rare) public data breach that garners all of the attention, there is a lot to be learned from the near misses: those security incidents involving regulated data that, for whatever reason, did not result in the public “compromise” of the data. What I think deserves greater attention is that the near misses themselves provide an excellent base of information to help the CISO see where the organization’s greatest security vulnerabilities may lie.

In a recent article in CIO Insight titled Security Strategies Must Be Integrated [Steve Durbin, June 2, 2014], the author notes that one of “security’s primary aims is to prevent negative incidents,” since it is “almost impossible for organizations to avoid such events.” He goes on to note that without a proper analysis of negative incidents, the “near misses” I discuss here, an organization may “not spend money where it’s most needed to reduce the odds of a major data breach or other security incident.” And that is exactly the point. To reduce data breach risks, you need to look at the incidents that might have been data breaches.

I believe that the current evolution of the information security role from a compliance-based approach to a risk-based approach will help focus attention on security incidents that involve regulated data: personally identifiable information (PII) for general personal and financial information, and protected health information (PHI), which often includes PII along with health information. The more they read about the Target breach and the University of Pittsburgh Medical Center (UPMC) breach in the news, the more CISOs and their privacy counterparts will become laser-focused on the lessons that can be learned from their “near misses.”

About the Author

Doug Pollack

CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

Healthcare Organizations Prepare for Upcoming OCR HIPAA Audit Season

by ID Experts

Healthcare organizations and business associates may soon be hearing from the Health and Human Services (HHS) Office for Civil Rights (OCR), as the agency prepares to conduct a new phase of audits. Scheduled to begin in the fall, OCR Audits Phase 2 will be conducted by the OCR itself and will focus on high-risk areas and enforcement. To address the upcoming increase in oversight, ID Experts will host a webinar on Tuesday, June 10, 2014, “Get Your Ducks in a Row: The OCR Audit Season is About to Begin.” Rebecca L. Williams, RN, JD, co-chair of the health information practice and partner at Davis Wright Tremaine, and Mahmood Sher-Jan, CHPC, vice president and general manager, RADAR Products Unit at ID Experts, will discuss lessons learned from past audits, the OCR Phase 2 audit scope and timeline, the potential impact of being the subject of an audit, and how to prepare using a risk-based approach.

What: A free webinar by ID Experts: “Get Your Ducks in a Row: The OCR Audit Season is About to Begin.”

When: Tuesday, June 10, 2014; 10:00 a.m. – 11:00 a.m. PST

Website and registration: To register for the free webinar, please visit http://bit.ly/UfPiqR


Singing the Blues

by Christine Arevalo

The 2014 Blue National Summit is the premier BCBSA conference of the year, assembling Blue Cross Blue Shield professionals from across the multi-faceted network to share best practices, gather insights and information, as well as speak to current trends. This conference is the only all-Blue event that brings together professionals from all 37 Blue Plans from virtually every discipline.
 
ID Experts MIDAS was proudly in attendance, and if you made the time to stop by and say hello, we really appreciate it. You might have noticed our new booth (which, yes, was a pain to set up!) as well as the new MIDAS messaging on display. We couldn’t hide our excitement in sharing the Gartner Cool Vendor award, which also gave us something timely and “cool” to share with everyone! The award was unexpected, but it is a very distinguished honor for us and our team.
 
The event was a great one overall for ID Experts, and the medical identity theft message was well received. I think the recent publicity surrounding large-scale breach incidents (remember Target and eBay) has heightened consumer awareness of fraud and the importance of addressing breaches responsibly. While we strive to educate our industry on the importance of breach preparedness and response, the real issue is finding a solution for early detection. This is why we are so excited about MIDAS!
 
It was loud and clear to me as I wandered from session to session in Orlando: people are hungry for solutions involving consumer engagement, strategies and tactics for continued evolution in the wake of healthcare transformation, and combatting the ever-present threat of fraud involving protected health information.

About the Author

Christine Arevalo

Christine is a founding employee of ID Experts and leads industry initiatives around healthcare identity management. She has experience managing risk assessments, complex crisis communication strategies, and data breach response for ID Experts clients.

Is It Really That Bad?

by Heather Noonan

Is a data breach really that bad? Well, yes and no. It’s terrible that a breach occurred, but how you respond is what counts. It’s just like any problem: your solution and response can turn everything around, or your lack of response can create a bigger mess down the road, causing you to rethink your initial response. We have all seen it. (As consumers, we ask, why didn’t you just tell us what happened? Be honest.)

I hear a lot of people say, “Well, at least it wasn’t that 3.1 million-record breach that happened last year.” True, but what if you were one of those 3.1 million individuals affected? Your breach might not seem that bad, but it is to the individuals whose lives and livelihoods have been affected.

As a consumer, when your personal and financial information is stolen and used, it not only affects you, it affects your home life, your work, your marriage, and your family. It probably even determines what you can buy at the grocery store, whether you are late to work, and whether you can go out to dinner on Friday night. As an organization faced with a breach, try to remember that when only 50 people are affected, those are 50 individuals whose lives are impacted.

With all that said, a teammate of ours wrote a good article about what we have learned from data breaches, what we haven’t learned, and how to better prepare ourselves. It’s not only good because I was interviewed (!), but it also goes into depth about the evolution of data breaches and how some things are getting better. And some things are getting worse.

Take a look if you have a couple of minutes - Breach Evolution: The Good, The Bad, and the Scary


ID Experts Announces Issuance of U.S. Patent for RADAR

by ID Experts

On April 22, 2014, the U.S. Patent and Trademark Office granted ID Experts U.S. Patent No. 8,707,445, “Systems and Methods for Managing Data Incidents,” for our industry-leading incident management software, RADAR. The patent covers systems and methods for managing a data incident, including data breach data that comprises information corresponding to a suspected data breach incident. It further covers automatically generating a risk assessment from a comparison of data breach data to privacy rules, including defined requirements associated with data breach notification laws, and providing the risk assessment outcome to the user’s display device. The patent is in force over its full lifetime, to February 14, 2032.
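The core idea the patent describes, comparing the facts of an incident against privacy rules to generate a risk assessment, can be illustrated in miniature. This is a toy sketch of that general pattern, not the patented RADAR implementation; the rule set, field names, and predicates are all invented for illustration:

```python
# Toy illustration of a rules-comparison risk assessment (NOT RADAR itself):
# each rule pairs a jurisdiction and a predicate over incident facts with
# the obligation it triggers. All rules here are simplified assumptions.

RULES = [
    ("Federal/HIPAA",
     lambda i: i["data_type"] == "PHI",
     "assess under the HIPAA Breach Notification Rule"),
    ("State",
     lambda i: i["data_type"] in ("PII", "PHI") and not i["encrypted"],
     "individual notification may be required"),
]

def assess(incident):
    """Return the (jurisdiction, obligation) pairs triggered by an incident."""
    return [(jurisdiction, obligation)
            for jurisdiction, predicate, obligation in RULES
            if predicate(incident)]

# Unencrypted PHI trips both the federal and the state rule.
incident = {"data_type": "PHI", "encrypted": False, "records": 1200}
findings = assess(incident)
```

A production system would of course need the actual text of dozens of state laws, harm-test logic, and legal review; the sketch only shows why encoding the rules once yields consistent assessments across incidents.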

All technical jargon aside, the receipt of this patent further validates the unique value and innovative approach RADAR provides to our customers as they manage the complexities of compliance with federal and state breach laws. RADAR was designed around an easy-to-use, efficient, and collaborative workflow, coupled with a data disclosure engine, that results in consistent assessments and decision-making.

Click here to learn more about RADAR, our patented incident management software.


A Year of Rampant Tax Fraud

by Doug Pollack

There must be something in the water this year. There has been an epidemic of tax fraud affecting employees of several healthcare organizations, and it now seems that cyber security experts are getting to the root cause of how it was done.

In the news lately was a data breach at the University of Pittsburgh Medical Center (UPMC). After weeks of what seemed like agonizing analysis and discovery, officials there determined that around 27,000 of their employees were impacted by a cyber attack, as noted in an article by the Pittsburgh Post-Gazette. To make things worse, a lawsuit seeking class action status on behalf of the 62,000 UPMC employees has been filed against UPMC by Michael Kraemer, a Pittsburgh attorney.

In a recent article by KrebsOnSecurity, Mr. Krebs delved into how cybercriminals may have succeeded in breaching numerous healthcare organizations and acquiring information including names, Social Security numbers, birthdates, and pay information of employees. While this may not have been the means by which UPMC was breached, he discovered that hackers were able to obtain valid credentials from several healthcare organizations for their third-party payroll and HR management system, called UltiPro.

Organizations potentially impacted by this scheme include several that were listed in “a Web-based control panel that an organized criminal gang has been using to track bogus tax returns filed on behalf of employees at hacked companies whose HR departments had been relieved of W-2 forms for all employees.” [Tax Fraud Gang Targeted Healthcare Firms, KrebsOnSecurity, April 14, 2014]. Among the organizations listed in that panel were Plaintree, Inc. and Griffin Faculty Practice Plan, as well as senior living facilities including SL Bella Terra LLC and Swan Home Health LLC.

This cybercrime scenario exposes several interesting twists in the challenge of maintaining the privacy of patients’ personal information. The UPMC attack highlights the intrinsic value of personal information, in this case to perpetrate tax fraud. It may also indicate that cyber criminals consider healthcare organizations the “slowest antelope” in the cyberjungle, a perspective seemingly validated by the significant growth in malicious breaches in healthcare this past year [Ponemon Institute, Fourth Annual Benchmark Study on Patient Privacy and Data Security, March 2014].

In this study, they drew the conclusion that “criminal attacks on healthcare organizations increased 100%.” Sadly, it also noted that “nearly 70% of respondents believe the Affordable Care Act has increased or significantly increased the risk to millions of patients, because of inadequate security.”

The UltiPro hack also illustrates that information in the “cloud,” or in other managed applications and systems outside the healthcare organization itself, is vulnerable, and that the vulnerability can lie with the system’s user in the HR department of the healthcare provider, whose valid access credentials are compromised. Acquiring such credentials can happen in many ways, among them phishing attacks and other socially engineered approaches to implanting malware on the user’s computer.

So this illustrates that information entrusted to the business associates of healthcare organizations, such as payroll processors and a wide variety of outside data and application services, can be compromised not by attacking the third-party service itself but by targeting its users within healthcare providers. This is a threat that providers need to address specifically within their security risk analysis process. As is often said, people, not technology, represent the weakest link in security.

About the Author

Doug Pollack

CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

Patient Data Security in the Face of Advanced Persistent Threats!

by Mahmood Sher-jan

The Patient Privacy Network’s second annual conference, held in Anaheim, California, on April 10, 2014, was a great opportunity for experts and healthcare industry participants to share perspectives and learn about the latest security challenges facing the industry. Participants included healthcare chief information security officers (CISOs), a representative of the California Attorney General’s office, legal experts, the Internet Security Alliance (ISA), and healthcare providers and payers.

LEARN MORE: RADAR Software for Privacy Incident Assessment & Management

The consulting firm PwC says, “the New Health Economy represents the most significant re-engineering of our health system since employers began covering workers in the 1930s. Change will hit everyone, too: hospitals, insurers, pharmaceutical companies, employers, retailers, patients and IT vendors of all stripes. When the dust settles, those able to adapt to what analyst firm IDC calls ‘technology’s third platform’ - social media, big data, cloud computing and mobility - will stand tall. Everyone else will fall.” This is why protecting sensitive patient and consumer information should be designed into these platforms from the start, not as an afterthought.

Ironically, the event coincided with news of the Heartbleed cyber risk alert, which reinforced the timeliness of the escalating cyber security threat data presented during the conference and the value of the practical advice from the panel of healthcare security experts. You can download the presenters’ conference material here. In this blog, I want to share a few select items that highlight the cyber security challenges facing the healthcare industry and what is needed to manage them effectively. For example, Larry Clinton, CEO of ISA, whose organization’s mission is to integrate advanced technology with economics and public policy to create a sustainable system of cyber security, stressed that digitization is changing everything, from how our brains work to how we think about security and about healthcare. Criminal attacks on healthcare have risen over 100% since 2010. It is no surprise that 94% of healthcare organizations have been victims of cyber attacks, with California faring the worst. Larry referenced a PricewaterhouseCoopers study concluding that the “healthcare industry is among the most vulnerable of all industry sectors.” Another disturbing study he cited, from Johns Hopkins, concluded that “healthcare is the industry with the least regard, understanding, respect for cyber security…routine failure to fix aging tech & a culture where MDs, & Healthcare workers sidestep basic security in favor of convenience.”

As I sat listening to Larry paint a gloomy picture of the state of cyber security in healthcare, it was easy to get discouraged. He pointed out how well organized cyber attackers are: not just a bunch of ragtag hackers but sophisticated, organized, even state-supported. My co-presenter, Dr. Cris Ewell, CISO of Seattle Children’s Hospital, reinforced this point from his experience dealing with a daily deluge of cyber attacks at the University of Washington and Seattle Children’s Hospital. It doesn’t help that some of the most ubiquitous technologies used in the industry contain serious flaws: on the heels of Heartbleed, we learned about an Internet Explorer (IE) vulnerability dubbed “Operation Clandestine Fox.” Even though IE’s market share (IE 9, 10, and 11) is about 26%, the majority of healthcare organizations use IE as the default browser. For CIOs and CISOs, these are troubling times indeed. FireEye, the firm that discovered the vulnerability, explained that it chose the term “clandestine” because the hackers lure computer users to malicious web code, like a fox that lures prey to a watering hole and then moves in for the kill.

Today we are facing Advanced Persistent Threats (APTs) to the security of our patient data. Conventional information security defenses (anti-virus, intrusion detection/prevention, etc.) are not effective because attackers can evade them. It would be discouraging to leave you with such a doom-and-gloom perspective, but all is not lost. There are healthcare organizations that are leading the way culturally and operationally. These leaders have an overall information security plan. Their CISOs report to the board and top leadership. They continuously measure and review the types of security events they see. They are in the minority today, but they offer the best path forward for the rest of the industry. I encourage you to read Dr. Cris Ewell’s presentation from the 2014 PPN event as an example of a healthcare organization leading the way.

About the Author

Mahmood Sher-jan

Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

In The Data Breach Regulatory Derby – Kentucky Loses Out to Iowa

by Mahmood Sher-jan

On the first Saturday of May, the nation turned its attention to the Bluegrass State to see California Chrome race to victory at the 140th Kentucky Derby. Now that the spectacle has come and gone, we’re boxing up our hats but still keeping our eye on Kentucky to see how it fares in another kind of derby. Earlier this spring, Kentucky entered the “data breach regulatory derby,” becoming the 47th state to enact a data breach notification law. If we were to handicap the race, this latest entrant is going to lose to Iowa’s recently amended breach law, SB 2252. Next to Iowa’s new law, Kentucky is looking a bit coltish, if you will. Here’s why:

LEARN MORE: RADAR Software for Privacy Incident Assessment & Management

Kentucky’s law sticks mostly to the bare bones of data breach regulation established years ago by other states. Under HB 232, which Kentucky Governor Steve Beshear signed into law on April 10th, entities that transact business in the state are required to notify individuals if a breach of computerized personal information occurs. HB 232 further rests on the most rudimentary pillars of data breach regulation: it ambiguously stipulates that post-breach notification must be delivered in “the most expeditious time possible”; requires that credit reporting agencies be notified if the breach involves the information of 1,000 or more individuals; and adopts a particularly limited definition of personally identifiable information. Moreover, Kentucky lacks any provisions addressing PHI and any requirement to notify the Attorney General, both of which Iowa recently amended its data breach law to include. Iowa’s amendment also extends coverage to paper-based data in addition to electronic data. In this latest running of the “data breach regulatory derby,” unfortunately for Kentucky residents, the newcomer is off to a fairly weak start. We’re putting our money on Iowa.

Iowa and Kentucky are similar in that both can be considered new entrants. So how does Kentucky fare against laws that have been around the track a few times? Again, Kentucky is an underdog, and we don’t need to look far past its borders to learn why. Neighboring Ohio bests Kentucky on notification timing, setting a hard and clear deadline of 45 days after breach discovery.
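For readers keeping score, the points of comparison discussed in this post can be tallied in a simple structure. This Python sketch covers only the attributes this post mentions (None marks points the post does not address); it is not a legal analysis:

```python
# Attributes of the state breach laws as discussed in this post; None
# marks points the post does not address. Not a full legal analysis.
state_breach_laws = {
    "Kentucky (HB 232)": {
        "covers_paper_records": False,
        "attorney_general_notice": False,
        "deadline": "most expeditious time possible",
        "credit_agency_threshold": 1000,
        "student_cloud_provisions": True,
    },
    "Iowa (SB 2252)": {
        "covers_paper_records": True,
        "attorney_general_notice": True,
        "deadline": None,
        "credit_agency_threshold": None,
        "student_cloud_provisions": None,
    },
    "Ohio": {
        "covers_paper_records": None,
        "attorney_general_notice": None,
        "deadline": "45 days after discovery",
        "credit_agency_threshold": None,
        "student_cloud_provisions": None,
    },
}

# Which of the discussed laws require notifying the attorney general?
requires_ag = [state for state, attrs in state_breach_laws.items()
               if attrs["attorney_general_notice"]]
print(requires_ag)  # prints ['Iowa (SB 2252)']
```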

While Kentucky’s laggard law lacks some punch, it has one wild-card element in the special attention it pays towards student data.

Law Aims to Protect Student Data

As noted, the law extends special provisions aimed at protecting student data held in the cloud by both public and private educational institutions. The National Law Review reports that, for the purposes of the law, student data is defined as

any information or material, in any medium or format, that concerns a student and is created or provided by the student in the course of the student’s use of cloud computing services, or by an agent or employee of the educational institution in connection with the cloud computing services. Student data includes the student’s name, email address, email messages, postal address, phone number, and any documents, photos, or unique identifiers relating to the student.

The article goes on to suggest that the law’s concern with student data may have been spurred by a report issued late last year by Fordham Law School.

The report, titled “Privacy and Cloud Computing in Public Schools,” found that an increasing number of schools are outsourcing data handling to cloud providers for their cost-effective, time-saving services. Of course there’s a flip side: these third-party providers’ privacy policies are often poorly understood by school districts, contain weak privacy provisions, and lack parental approval clauses, among other flaws. Altogether, it’s quite the perfect storm for data security and all the problems that follow, from identity theft to ruined credit and beyond. It seems likely that Kentucky lawmakers were trying to counter the perils the increasingly ubiquitous cloud presents to student data when they wrote this part of the law.

Protecting our Data

We know that despite our best efforts, breaches are ultimately inevitable and can happen to anyone. That’s why we have laws to minimize the damage one can wreak. We’re optimistic about Kentucky’s student data protection efforts; even if the rest of the law isn’t leading in the final stretch, it’s a step in the right direction.

About the Author

Mahmood Sher-jan

Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

The “HIPAA-cratic” Oath: Keep Sensitive Health Information Private

by Rick Kam

I was thinking back on the keynote speech that Michael Josephson, president and founder of the Josephson Institute, gave at the HCCA conference in San Diego in April. He spoke on the role of ethics in compliance. It prompted me to Google the Hippocratic oath that physicians and other health care professionals take to uphold their professional ethical standards. What I found was this text from the original Greek version of the Hippocratic oath:

Ὄμνυμι Ἀπόλλωνα ἰητρὸν καὶ Ἀσκληπιὸν καὶ Ὑγείαν καὶ Πανάκειαν καὶ θεοὺς πάντας τε καὶ πάσας, ἵστορας ποιεύμενος, ἐπιτελέα ποιήσειν κατὰ δύναμιν καὶ κρίσιν ἐμὴν ὅρκον τόνδε καὶ συγγραφὴν τήνδε‧

ἡγήσεσθαι μὲν τὸν διδάξαντά με τὴν τέχνην ταύτην ἴσα γενέτῃσιν ἐμοῖς, καὶ βίου κοινώσεσθαι, καὶ χρεῶν χρηί̈ζοντι μετάδοσιν ποιήσεσθαι, καὶ γένος τὸ ἐξ αὐτοῦ ἀδελφοῖς ἴσον ἐπικρινεῖν ἄρρεσι, καὶ διδάξειν τὴν τέχνην ταύτην, ἢν χρηί̈ζωσι μανθάνειν, ἄνευ μισθοῦ καὶ συγγραφῆς, παραγγελίης τε καὶ ἀκροήσιος καὶ τῆς λοίπης ἁπάσης μαθήσιος μετάδοσιν ποιήσεσθαι υἱοῖς τε ἐμοῖς καὶ τοῖς τοῦ ἐμὲ διδάξαντος, καὶ μαθητῇσι συγγεγραμμένοις τε καὶ ὡρκισμένοις νόμῳ ἰητρικῷ, ἄλλῳ δὲ οὐδενί.

διαιτήμασί τε χρήσομαι ἐπ' ὠφελείῃ καμνόντων κατὰ δύναμιν καὶ κρίσιν ἐμήν, ἐπὶ δηλήσει δὲ καὶ ἀδικίῃ εἴρξειν.

οὐ δώσω δὲ οὐδὲ φάρμακον οὐδενὶ αἰτηθεὶς θανάσιμον, οὐδὲ ὑφηγήσομαι συμβουλίην τοιήνδε‧ ὁμοίως δὲ οὐδὲ γυναικὶ πεσσὸν φθόριον δώσω. ἁγνῶς δὲ καὶ ὁσίως διατηρήσω βίον τὸν ἐμὸν καὶ τέχνην τὴν ἐμήν.

οὐ τεμέω δὲ οὐδὲ μὴν λιθιῶντας, ἐκχωρήσω δὲ ἐργάτῃσι ἀνδράσι πρήξιος τῆσδε.

ἐς οἰκίας δὲ ὁκόσας ἂν ἐσίω, ἐσελεύσομαι ἐπ' ὠφελείῃ καμνόντων, ἐκτὸς ἐὼν πάσης ἀδικίης ἑκουσίης καὶ φθορίης, τῆς τε ἄλλης καὶ ἀφροδισίων ἔργων ἐπί τε γυναικείων σωμάτων καὶ ἀνδρῴων, ἐλευθέρων τε καὶ δούλων.

ἃ δ' ἂν ἐν θεραπείῃ ἢ ἴδω ἢ ἀκούσω, ἢ καὶ ἄνευ θεραπείης κατὰ βίον ἀνθρώπων, ἃ μὴ χρή ποτε ἐκλαλεῖσθαι ἔξω, σιγήσομαι, ἄρρητα ἡγεύμενος εἶναι τὰ τοιαῦτα.

ὅρκον μὲν οὖν μοι τόνδε ἐπιτελέα ποιέοντι, καὶ μὴ συγχέοντι, εἴη ἐπαύρασθαι καὶ βίου καὶ τέχνης δοξαζομένῳ παρὰ πᾶσιν ἀνθρώποις ἐς τὸν αἰεὶ χρόνον‧ παραβαίνοντι δὲ καὶ ἐπιορκέοντι, τἀναντία τούτων.

The English translation of two paragraphs of the oath had special meaning for me as a privacy professional. They read:

“I will prescribe regimens for the good of my patients according to my ability and my judgment and never do harm to anyone.”

“All that may come to my knowledge in the exercise of my profession or in daily commerce with men, which ought not to be spread abroad, I will keep secret and will never reveal.”

Like the Hippocratic oath, HIPAA requires healthcare entities and their business associates to protect the privacy of their patients’ sensitive health information and, if that information is accidentally disclosed in a security breach, to “do no harm.” Today, under the final omnibus breach notification rule, the burden of proof is on the disclosing entity to perform a risk assessment.
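That omnibus-rule risk assessment weighs four factors: the nature and extent of the PHI involved, the unauthorized recipient, whether the PHI was actually acquired or viewed, and the extent of mitigation. Here is a toy sketch of the decision in Python; the 0/1 scoring scheme is my own simplification, not HHS guidance:

```python
# Toy sketch of the four-factor breach risk assessment under the omnibus
# rule; the 0/1 scoring scheme is an invented simplification, not HHS guidance.
FACTORS = [
    "nature and extent of the PHI involved",
    "the unauthorized person who used or received the PHI",
    "whether the PHI was actually acquired or viewed",
    "the extent to which the risk has been mitigated",
]

def low_probability_of_compromise(scores: dict) -> bool:
    """Notification is required unless the entity can demonstrate a low
    probability that the PHI was compromised (0 = low risk, 1 = high risk)."""
    assert set(scores) == set(FACTORS), "all four factors must be assessed"
    return sum(scores.values()) == 0

scores = {factor: 0 for factor in FACTORS}
scores["whether the PHI was actually acquired or viewed"] = 1
print("Notify" if not low_probability_of_compromise(scores) else "Document and close")
# prints: Notify
```

The real assessment is qualitative and must be documented; no scoring formula appears in the rule itself.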

Michael Josephson did a masterful job of laying out for the HCCA audience how ethics plays a significant role in compliance. He described ethics as core to understanding right from wrong and to guiding the behavior of people working within our healthcare ecosystem.

As it relates to HIPAA, implementing the privacy, security, and breach notification rules and “doing the right thing” would go a long way toward improving patient data privacy and security. Josephson talked about people taking actions that were legal but not necessarily morally right.

It concerns me that organizations struggle with this issue when they discover a breach of sensitive personal health information. While it may be “legal” for health care organizations not to notify consumers about a breach, is it the right thing to do for their patients? If patients don’t have the opportunity to take actions to protect themselves, they are at elevated risk of medical identity theft, especially when sensitive personal health information (e.g., personal health records and health insurance information) is disclosed in a data breach.

A recent Javelin study indicated that roughly 30 percent of people whose SSN is disclosed in a data breach will fall victim to identity fraud within the year. That is a significantly higher risk than the 5.6 percent who fell victim to identity crimes not caused by a data breach.
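Taking those reported figures at face value, the implied risk multiple is simple arithmetic:

```python
# Relative risk implied by the Javelin figures cited above.
breach_victim_rate = 0.30   # SSN disclosed in a breach: fraud within a year
baseline_rate = 0.056       # identity-crime rate absent a data breach

relative_risk = breach_victim_rate / baseline_rate
print(f"Roughly {relative_risk:.1f}x the baseline risk")
# prints: Roughly 5.4x the baseline risk
```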

ID Experts released RADAR 4.0 this past week, which will help privacy professionals effectively manage regulated data disclosures. It will help them honor the “HIPAA-cratic” oath of protecting sensitive personal information: performing the required risk assessments, documenting regulated data disclosures, and reporting them to executive management and to HHS/OCR when they perform audits and investigations.

About the Author

Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He chairs the ANSI PHI Project, Identity Management Standards Panel, and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and a member of the Research Planning Committee for the Center for Identity at the University of Texas at Austin.

In Experian We Trust?

by Doug Pollack

The folks at Experian have been receiving a great deal of well-deserved attention recently. Just this week, the attorneys general of North Carolina and Iowa joined those of Connecticut and Illinois in investigating Experian over alleged access to a database of records on approximately 200 million Americans, including Social Security numbers, dates of birth, and email addresses, among other sensitive personal data, by an identity theft service masquerading as a private investigator. That Experian is able to sell Americans’ personal data without explicit consent seems counterintuitive, given that, as a credit bureau, it is in a “unique position” relative to the regulated manner in which it collects such information. The fact that Experian is also frequently trusted by organizations that have themselves had a data breach to care for the affected consumers is downright perplexing.

LEARN MORE: RADAR Software for Privacy Incident Assessment & Management

So why are several state AGs investigating Experian? The Wall Street Journal provides a pretty clear-cut answer. In 2012, Experian purchased a company named Court Ventures. Court Ventures, via a partnership with another company, U.S. Info Search, had the ability to sell personal information, including Social Security numbers, on around 200 million Americans. Hieu Minh Ngo, who recently pleaded guilty to operating a business for fraudsters, was a customer of Court Ventures (he admittedly used a false identity to open his account). So effectively, given this chain of facts, Experian, through its subsidiary Court Ventures, was providing access to a database of sensitive information on 200 million Americans to a Vietnamese man who was selling that information to criminals perpetrating identity theft.

In addition to the state investigations, as noted by Reuters, Experian also provided testimony to the Senate Commerce Committee, whose membership includes U.S. Senator Claire McCaskill from Missouri. Experian SVP Tony Hadley told Senator McCaskill that “we know who they [the Americans whose personal information was accessed] are and are going to make sure we are going to protect them.” And it doesn’t appear that Experian actually followed through on that promise.

As noted this week by KrebsOnSecurity, Experian seems to have done an about-face on helping the folks affected by this data breach. In fact, in the same committee testimony, Mr. Hadley goes on to state that “there’s been no allegation that any harm has come, thankfully, in this scam.” And per KrebsOnSecurity, “Experian has declined to answer questions about whether it has lifted a finger to help consumers impacted by this scheme, or to clarify its apparently conflicting statements about whether it believes anyone has been harmed by its (in)action.”

To further this point, U.S. Info Search CEO Marc Martin stated that he has “cooperated and assisted the authorities in their investigation and from the onset have urged Experian to make timely notifications….Experian never notified us of the breach as required by state statute, and to date has not cooperated with our investigation, nor provided us with the queries the suspect ran.”

I would encourage you to read Mr. Krebs’s post, which fact-checks the Experian talking points. He brings to the discussion a clear dissection of the facts, along with an illuminating sense of perspective on how Experian operates within the credit sphere of our financial ecosystem.

So clearly it hasn’t been a good week for Experian, although I suspect this data breach would have received much more attention over the last several months had it not been for the small public “distraction” that was the Target breach. I do wonder, though, whether organizations will be as inclined to turn to Experian for help when they have a data breach in the future.

About the Author

Doug Pollack

CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

FTC Explores Privacy Issues of Big Data at IAPP

by Doug Pollack

Julie Brill, Commissioner of the FTC, answered questions at the IAPP Global Conference last month regarding the FTC’s interest in consumer privacy issues associated with “big data” and the Internet of things (“IoT”). The information security and privacy aspects of these major computing trends have obviously garnered the attention of the FTC.

In her session, Ms. Brill noted that the FTC is looking at mobile, mobile payments and mobile billings, as well as the Internet of things, on which they recently held a workshop, and importantly on data brokers. She mentioned the 6(b) study on data brokers (named after the section of the FTC Act that provides for such studies) that the FTC carried out looking at 9 data brokers last year, and went on to explain why big data and data brokers are a focus for the FTC.

Her explanation was that because data brokers do not have a consumer-facing presence, it is problematic to use notice and consent as a means of providing consumers with privacy information and choice.

“We do have issues with notice and choice, but with data brokers and IoT, there is often no opportunity. So consumers may not understand what it means to be connected and the implications [on their privacy].”

In this context, she mentioned the recent controversy over how Target used a pregnancy-prediction model to assess the probability that a customer is pregnant. The New York Times article “How Companies Learn Your Secrets” describes in enormous detail how consumer goods companies and retailers use “predictive analytics” to better target the needs of their customers.
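The mechanics described in the article amount to weighted purchase signals. Here is a deliberately simplified sketch; the signals and weights are invented for illustration and are not Target’s actual model:

```python
# Illustrative purchase-signal scoring; the signals and weights below are
# invented for illustration and are not Target's actual model.
SIGNAL_WEIGHTS = {
    "unscented_lotion": 0.3,
    "prenatal_vitamins": 0.5,
    "cotton_balls_bulk": 0.2,
}

def pregnancy_score(basket: set) -> float:
    """Sum the weights of predictive signals present in a shopping basket."""
    return sum(weight for item, weight in SIGNAL_WEIGHTS.items()
               if item in basket)

basket = {"unscented_lotion", "prenatal_vitamins"}
print(round(pregnancy_score(basket), 2))  # prints 0.8
```

The privacy point is that a shopper never opted in to any of this: the score is derived entirely from behavior, with no notice or consent.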

However, marketing to customers who have not self-disclosed certain personal circumstances, such as a pregnancy, can be tricky. As Andrew Pole at Target describes:

“If we send someone a catalog and say, ‘Congratulations on your first child!’ and they’ve never told us they’re pregnant, that’s going to make some people uncomfortable. We are very conservative about compliance with all privacy laws. But even if you’re following the law, you can do things where people get queasy.”

And it is this point that shines a light on the privacy issue Brill noted. The “pregnancy-predicted” Target shopper never really had the opportunity to be notified of her privacy rights as they relate to any special conditions that might apply to her, nor to consent or opt out of marketing or other activities affecting those rights. Therein lies the rub with big data and predictive analytics, at least as I interpret Ms. Brill’s comments.

She encourages companies to think more deeply about the ethics of what they are doing and about appropriate use of the data they can (and do) collect. In the case of Target, it appears the company collects quite a bit of information on its “guests”: not only everything you buy, but also your age, where you live, your estimated salary, whether you are married and have kids, and so on.

Ms. Brill would like to see a market-based solution rather than waiting for legislation, but she also sees the need for legislation in the longer term. HIPAA-covered entities have clear rules, but that isn’t so much the case for organizations outside of health information. The FTC will provide a baseline and ground rules for dealing with data.

And just to make things more difficult, Ms. Brill notes that “as we move to mobile, security is an even more difficult issue.” So we all have a lot to look forward to.

About the Author

Doug Pollack

CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

BYOD: Beware of Your Own Devices—and the People Who Carry Them

by Doug Pollack

Mobile devices allow you to do amazing things: play Candy Crush Saga in a boring meeting, download your alma mater’s fight song, get the calorie count of the Costco-sized pizza you bought. They also have the potential to do amazing damage to sensitive patient information—especially if you don’t take care to secure your phone or tablet. 

The Ponemon Institute’s Fourth Annual Benchmark Study on Patient Privacy and Data Security found that 88 percent of organizations allow their workforce to use their own smartphones or tablets to connect to networks or enterprise systems, such as email. Yet more than half of organizations are not confident that these devices are secure. Small wonder, then, that the Ponemon study found unsecured mobile devices to be a top threat, as the Wall Street Journal reported.

This lack of confidence fails to surprise me, since few organizations mandate common-sense security precautions for bring-your-own devices:

  • Only 23 percent require anti-virus/anti-malware software on mobile devices before connecting.
  • Only 22 percent scan these devices for viruses and malware prior to connection.
  • Only 14 percent scan devices and remove mobile apps that present a security threat before connecting.

A key word in these statistics troubles me: “prior” or “before.” These are proactive security measures that, if followed, could avoid exponentially more cost and headache later on. It’s a lot easier to stamp out a campfire than to battle a forest fire.
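Those “before connecting” measures amount to an admission-control gate: check the device, then grant or deny access. A minimal sketch, with hypothetical checks standing in for a real MDM or network access control product:

```python
# Hypothetical pre-connection checks for a BYOD device; a real MDM or NAC
# product would query the device itself rather than trust these fields.
from dataclasses import dataclass, field

@dataclass
class Device:
    owner: str
    has_antivirus: bool        # anti-virus/anti-malware installed (the 23%)
    malware_scan_clean: bool   # scanned before connecting (the 22%)
    flagged_apps: list = field(default_factory=list)  # risky apps (the 14%)

def may_connect(device: Device) -> bool:
    """Grant network access only if every proactive check passes."""
    return (device.has_antivirus
            and device.malware_scan_clean
            and not device.flagged_apps)

phone = Device(owner="dr_smith", has_antivirus=True, malware_scan_clean=True)
print(may_connect(phone))  # prints True
```

The point of the gate is exactly the “before” in those statistics: a device that fails any check never touches the network in the first place.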

Workforce Negligence and BYOD (Bring Your Own Device): A Flame to Tinder

An article in SearchHealthIT, “BYOD in healthcare brings new mobile device security strategies,” quotes the CIO of NCH Healthcare System in Naples, Fla.: “…all of our processes will be electronic. It is part of our journey to the digital hospital. There will be no more paper.” This is a bold statement, given that a mere 100 of the 650 physicians affiliated with the two NCH-managed hospitals are employees.

The Ponemon study connected the concerns over employee negligence and BYOD: seventy-five percent of organizations in the study view employee negligence as their biggest security risk. What about the risk of non-employees who may access sensitive patient data with their insecure mobile devices?

Dousing the Risk of BYOD

The SearchHealthIT article goes on to discuss Beth Israel Deaconess Medical Center in Boston, an organization tightening its security controls over mobile devices to better comply with changing regulations. Some of these controls, listed below, are among the same ones that security experts recommend, the article says.

  • Implement a governance policy that “outlines the rules and responsibilities around data access, device use and employee behavior."
  • Communicate. The goal is to raise employee awareness about the consequences of their technology-related actions.
  • Enable IT to implement security-related technologies and policies and procedures, such as onboarding, network access control, the use of mobile device management apps, encryption, desktop virtualization, and remote wiping.

Note that while many of these BYOD best practices center on technology, much of the responsibility rests with the people who carry these devices—employees and non-employees alike. As this article points out, employee awareness is the first line of defense. Users must understand that every time they open an app or access their email, they may be giving hackers, malware, and viruses access to the most vulnerable information. At the same time, decision-makers in healthcare organizations must choose to implement policies that secure these devices.

Bottom line: Mobile devices are as secure or insecure as the security precautions that organizations choose to require for them.

About the Author

Doug Pollack

CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

Should Insurance Carriers Offer a Virtual Privacy Expert?

by Jeremy Henley

I recently wrote about the Advisen conference I attended in San Francisco and some of the presentations I found very interesting (see my previous blog on Advisen West Coast Style). During the conference, several panelists discussed the need for pre-breach services and what services would be useful prior to a breach. Susan Young, a vice president with Marsh Risk & Insurance Services, talked about how valuable loss-prevention tools would be, such as risk assessments, training and education, and incident response planning.

One issue she raised was how little use current tools get, a point Karl Pedersen, a senior vice president at Willis, agreed with. Susan did not go into detail about why current tools are rarely used, but my understanding from a variety of carriers is that they are either difficult to use, do not provide useful risk management capabilities beyond a calculator (see my previous blog on Breach Calculators), or are simply unknown to policyholders.

Mike Palotay, a senior vice president with NAS Insurance, also commented that he believes the industry is heading in a direction that supports more robust pre-loss services. To me, it makes sense that insurance carriers are looking for ways to differentiate their pre-loss strategies when Advisen reports that there are now at least 50 carriers writing cyber liability coverage.

I would also think that many of the carriers who have been offering coverage for some time are looking for ways to manage the risk, rather than just having something to help sell the policy. Where I think we are headed, as the comments above suggest, is a need for a breach-focused company to develop efficient risk management tools. These embedded tools can add value during the sale of the policy and help policyholders manage risk by limiting the size and severity of their events. I believe a tool like that would cut both ways: it would provide unique value to a policy that does not have it, and it would also reduce the risks and limit the liability for all parties involved.

The last issue to solve, however, is usage. Why provide a tool to policyholders if they never use it? “Well, it is the price of admission to be in the market,” is what one underwriter said to me not long ago. That sounds like a problem that could be solved by a company like ID Experts, focused solely on data breach prevention, preparation, and response, with years of experience doing it for companies of all industry classes and revenue sizes. So we intend to solve that problem as well, and have partnered with Enquiron, a company with a proven system that has demonstrated significantly higher usage rates over the last 15 or more years.

That is why I am so excited to announce ID Experts' newest offering, the Virtual Privacy Expert™. We have carefully studied the market and the other offerings, asked questions, and listened to what the insurance industry and many of its target clients, small and medium-size enterprises, are asking for help with relative to privacy and security issues. This has led us to create a surprisingly affordable solution that provides risk assessments, real-time information on privacy and security questions, best-practice advice on managing data breach incidents, guidelines for complying with federal and state rules and regulations, and an incident response plan that incorporates our efficient patented data breach approach, YourResponse.

As the only company focused entirely on data breach prevention, preparation, and response, we continue to push the boundaries on the most cost-effective ways to limit the size and severity of these issues. I would be happy to share plenty more detail on the product with anyone directly. Please contact me at Jeremy.henley@idexpertscorp.com or 760-304-4761.

About the Author

Jeremy Henley

Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for Healthcare Privacy and Compliance and brings 11 years of sales and leadership experience to the ID Experts team.

Allies Await – Join me in Anaheim April 10

by Rick Kam

The recent story on the healthcare breach tally reaching 30 million victims gave many of us pause. That, coupled with the recent Ponemon report revealing that data breaches in healthcare have increased 100 percent since 2010, should send shock waves through the community of those responsible for protecting PHI.

That is why I am so passionate about an industry effort started three years ago by a group of highly skilled and engaged participants that we called the PHI Project. We issued a white paper on the value of PHI inside the organization. On April 10 we are holding our Second Annual PHI Protection Network Conference in Anaheim.

Our speaker slate is top notch: a representative from the California DOJ, legal and technology experts, and industry insiders who are ready to network, share best practices, and help all who protect PHI get the tools, resources, and support they need.

In one day you'll learn from people who have answers, and you'll meet much-needed allies. So join us and use the code PHIRick to get a special discount, or drop me a line if you have questions about the event.

About the Author

Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He chairs the ANSI PHI Project, the Identity Management Standards Panel, and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and a member of the Research Planning Committee for the Center for Identity, part of the University of Texas at Austin.

Privacy Warriors Must Operationalize to Keep Compliance Up, Data Breaches Down

by Mahmood Sher-jan

Privacy warriors constantly battle to keep pace and comply with complex and ever-changing regulations designed to address rapidly evolving business practices, technologies, and privacy threats. In 2014 alone, at least 19 states have introduced bills that could amend or affect breach laws. Unfortunately, these warriors are learning that the gap between what they must do and what they are doing will keep growing unless their organizations implement the processes and tools designed to simplify the monitoring and management of these complex breach laws.

For example, the Ponemon Institute’s Fourth Annual Benchmark Study on Patient Privacy and Data Security reveals how healthcare organizations continue to struggle with incident management and compliance, despite modest progress since the HIPAA Final Rule’s enforcement date. Some of the findings: 

  • Nearly half of healthcare organizations are not yet compliant with the post-incident risk assessment requirement in the Final Rule.
  • Thirty-nine percent of respondents say their incident assessment processes are not effective.
  • Nearly half, or 46 percent of organizations, have personnel who lack understanding about federal and state data breach notification laws.
  • Nearly half, or 49 percent, are not compliant or only partially compliant regarding conducting and documenting post-data breach incident risk assessment as required in the Final Rule.

WATCH: How software solves data incident headaches.

Lack of Consistency

For me, the most telling statistic is this: Seventy-nine percent of respondents believe their incident risk assessments are not effective because of lack of consistency in the process.  If acknowledgement is a first step towards recovery, then we are moving in the right direction.

How can privacy and security officers even hope for compliance when they must reinvent the wheel every time they have a data privacy or security incident?

And these incidents happen every day, especially given the risks described in the Ponemon survey— the Affordable Care Act, criminal attacks on healthcare security up 100 percent, employee negligence, unsecured mobile devices rampant in the workplace, and lack of trust of business associates. 

Organizations are accepting that they cannot avoid these data privacy and security incidents and as a result, they are moving away from ad-hoc incident management towards an operational model. According to Gartner, organizations must “Develop an enterprise-wide regulatory compliance capability that is aligned with strategic, as well as operational imperatives. Include initiatives to capture incentives as well as comply with regulatory compliance details.”[1]

Existing Tools Fail to Bridge the Gap

However, privacy and security officers are finding that existing manual processes and even general-purpose case management and GRC platforms are inadequate for the task at hand. Privacy and security officers need tools to help them keep current with the changing regulatory environment and to streamline incident response processes, including multi-factor and multi-jurisdictional incident risk assessment and mitigation. Ideally, organizations would efficiently escalate the discovery of incidents and quickly and consistently assess and fully document any data breach-related incidents. These actions would lead to prompt and appropriate decisions about notification, proof of regulatory compliance, and effective protection and recovery for any breach victims.

The Solution to Manage Security Incidents: ID Experts RADAR

In practice, however, most of the tools on the market prove inadequate for multi-factor incident risk assessment, opening the door to subjectivity and inconsistency in incident response and potential non-compliance. The surest way to mount a consistent, effective response is by having the right people working with the right tools built specifically for this purpose. An effective incident management tool, such as ID Experts RADAR, will be (or have):

  1. A multi-factor risk assessment engine, factoring in the scope and type of data disclosed, the suspected recipient(s) and their intent, the ability of those recipients to view and re-disclose the information, and any risk mitigation measures that have been exercised (such as encryption or proper recovery of the data).
  2. Multi-jurisdictional, taking into account all currently applicable state and federal laws and regulations.
  3. Collaborative, allowing all stakeholders, including privacy, security, and counsel staff within the organization, to contribute to the incident analysis, decision making, and response.
  4. Self-documenting, so that documentation proving compliance is created as an integrated and automatic part of the response process.
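To make the multi-factor idea in item 1 concrete, here is a minimal sketch of how such an assessment engine might combine those factors into a risk tier. The factor names, weights, and thresholds are illustrative assumptions for this sketch, not RADAR's actual model.

```python
# Hypothetical multi-factor incident risk scoring sketch.
# Factor names, weights, and thresholds are illustrative assumptions,
# not the actual RADAR Breach Guidance Engine logic.

def assess_incident(data_types, recipient_intent, viewable, mitigated):
    """Return a coarse risk tier for a data disclosure incident.

    data_types: set of data elements disclosed, e.g. {"ssn", "dob"}
    recipient_intent: "malicious", "unknown", or "benign"
    viewable: True if the recipient could actually view or re-disclose the data
    mitigated: True if mitigation applied (e.g. encryption, data recovered)
    """
    SENSITIVE = {"ssn", "medical_record", "financial_account"}

    score = 0
    score += 3 if data_types & SENSITIVE else 1                     # scope/type of data
    score += {"malicious": 3, "unknown": 2, "benign": 1}[recipient_intent]
    score += 2 if viewable else 0                                   # ability to view/re-disclose
    score -= 3 if mitigated else 0                                  # mitigation measures

    if score >= 6:
        return "high"      # likely notifiable in most jurisdictions
    if score >= 3:
        return "medium"    # needs jurisdiction-by-jurisdiction review
    return "low"           # document and close

# A lost laptop with encrypted SSNs and no evidence of viewing scores low:
print(assess_incident({"ssn"}, "unknown", False, True))  # low
```

A real engine would then map the tier against each applicable state and federal rule (item 2) and record the inputs and result automatically (item 4); the point of the sketch is only that a fixed, weighted model removes the subjectivity the survey respondents complained about.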

Given the evolving nature of threats and the complexity of notification laws that seem to confuse rather than help, privacy warriors must arm themselves with the best tools for compliance. Since its launch in 2011, RADAR has been trusted by prominent healthcare, insurance, financial services, and information organizations to simplify data incident management. Read more about that here.

 


[1] Gartner, Business Drivers of Technology Decisions for Healthcare Providers, Zafar Chadry, M.D., Steven Lefebure, et al., December 26, 2013.

About the Author

Mahmood Sher-jan

Mahmood is the lead inventor of ID Experts RADAR, an award-winning, patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

Are Credit Monitoring Services Worth It?

by Doug Pollack

"In the wake of one data breach after another, millions of Americans each year are offered credit monitoring services that promise to shield them from identity thieves. Although these services can help true victims step out from beneath the shadow of ID theft, the sad truth is that most services offer little in the way of real preventative protection against the fastest-growing crime in America."

- Are Credit Monitoring Services Worth It? - Krebs on Security

The question of whether credit monitoring services are “worth it” came up recently in a thoughtful article by Brian Krebs on his KrebsonSecurity blog. The presumed catalyst for the question is the recent Target data breach, which has touched a huge proportion of the U.S. population. With that in mind, the question prompted me to think more broadly about whether the “de facto” approach to data breach response in the U.S. actually helps the affected consumers.

This article from Mr. Krebs, appropriately titled Are credit monitoring services worth it?, brings up a number of interesting questions in addition to the one the title poses. Among them: why do breached companies offer credit monitoring? Is it to look good from a PR standpoint (managing reputational risks)? Is it to mollify regulators or lawyers who may consider investigation or class action? Does credit monitoring help consumers with the harms of the breach? What’s in it for Experian?

Now let’s consider that there are multiple factors at play in crafting a data breach response strategy. There are regulatory mandates based on state data breach notification laws, and in some cases federal regulations, that require notification of the affected population and have something to say about what content must be included in the notification. Then there is, in many cases, an offer of a free identity protection product (most typically credit monitoring) for some period of time (typically no less than one year) which is done voluntarily by the breached organization.

As Avivah Litan of Gartner, Inc. notes, the offer of credit monitoring has become a “de facto public response” to a data breach, and it can be made as much or more for “PR” purposes as out of thoughtful consideration of whether it is the most efficacious offering for addressing the potential harms to the affected individuals. Which brings us back to the title question: are credit monitoring services worth it?

I generally agree with Mr. Krebs that “it probably can’t hurt.” It’s like taking your vitamins when you feel like you may be coming down with a cold. They might help. They aren’t the only way for you to get those nutrients; you could always eat healthier and drink a lot of orange juice for extra vitamin C. But it might make you feel just a little better knowing that you’re doing something to address this new risk.

So given this, I’d like to suggest two pieces of food for thought. First, should legislators do more to ensure consumer protection in cases of malicious data breaches? If offering credit monitoring has become “de facto,” and if it isn’t very efficacious, should more, or something different, be required? Some in the industry have suggested that a more effective solution would be to provide identity restoration services. Rather than using credit monitoring as a late early-warning indicator, maybe consumers would be better off with services to help them if and when they become victims of identity theft? (Full disclosure: my company, ID Experts, provides fully managed identity restoration services.)

And second, there is the nagging question in the back of my mind as to whether I’m comfortable giving Experian more information about me. By signing up for their credit monitoring, they now have a valid email address and phone number for me. And they also have my permission to send me emails encouraging me to “buy extra stuff.” I’m certain to get offers to upgrade my service and to sign up to pay for the service once the free offering period expires. Maybe of even greater concern to me, though, is that Experian is in the business of selling information about me to other companies that want to market “their stuff” to me, among other things.

So this question of what an organization should do for individuals when it has exposed their personal information in a data breach has many tentacles. Unfortunately, there are no simple answers.

About the Author

Doug Pollack

CIPP, MBA. With over 25 years' experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

Advisen West Coast Style

by Jeremy Henley

The folks at Advisen finally got the word that the West Coast is the best coast when it comes to cyber liability, or at least that we do matter out here, and held their first Cyber Insights conference in San Francisco this March. I don’t know the final numbers, but there seemed to be north of 100 in attendance for the inaugural event. All kidding aside, many of the speakers were the usual suspects who often appear at conferences, and rightly so, since most of them really do know their specialties well. What was great is that holding the event on the West Coast brought out new faces in the cyber world, and I made some great new connections.

There are always a lot of numbers thrown around at these events, whether from the panelists during a session or informally during the networking opportunities. Often those statistics are based on small sample sizes or even just the opinions of those in the space. However, I found real value in the numbers that Jim Blinn, EVP of Advisen’s Information and Analytics division, provided: estimated total premium worldwide of about $2 billion, and the number of carriers with a cyber offering now at 50 or more.

More interesting were Mr. Blinn’s numbers on who is actually buying coverage. The studies Advisen has completed indicate that 1 in 4 large companies ($5 billion in revenue and larger) are picking up cyber cover. This 25-30 percent figure is often thrown around informally, but I have more recently heard that it is grossly overstated for the broader market, and the Advisen numbers appear to confirm this. Companies with revenues less than $5 billion drop the percentage significantly; for example, companies in the $25M-$100M revenue range acquire cyber coverage only about 10 percent of the time, and the smallest businesses, $2.5M and less, are at less than 4 percent.

It also appears that one of the bigger disconnects is tied to industry class when we look at which companies are buying coverage. Those in the technology and healthcare spaces are more likely to purchase coverage than those in other industry classes, so the numbers are still relatively small and the total overall market has a lot more potential. Estimates are that the eventual market for cyber liability coverage could be as much as $5 billion in premium in the United States alone.

The most interesting thing I took away from the event, however, was more closely tied to the need for better and more robust services prior to a breach event. Mary Beth Borgwing from Advisen and Russell Cohen, a partner at Orrick, Herrington & Sutcliffe, agreed that before buying coverage, companies need to get all their departments involved and assess their risks first. During the same session, Evelyn De Souza from Cisco suggested that most companies cannot produce a data inventory or map of their information, especially in the retail and university settings. Wow! How can you protect your data if you don’t know what you have to protect? How can a risk manager confidently respond to an application for coverage without this information?

Pulling all the departments together also makes sense when a company is discussing which policy best fits its needs. The policies are very different, and the entire team needs to discuss them before binding a policy. This should not be something that only the risk manager and broker work out together. I have seen too many cases where a breach has occurred and leadership is surprised by how their hands are tied with certain policies, unable to take the action they would prefer. This can become critical when the decisions are likely to impact your business reputation, finances, and legal liabilities.

About the Author

Jeremy Henley

Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for Healthcare Privacy and Compliance and brings 11 years of sales and leadership experience to the ID Experts team.

100: The New Bad-Luck Number

by Doug Pollack

Thirteen, the traditional unlucky number, is a pot of gold compared to 100. That’s because criminal attacks on healthcare organizations have increased 100 percent since 2010, according to the Fourth Annual Benchmark Study on Patient Privacy and Data Security by Ponemon Institute.

Criminals find medical identities to be highly profitable, with a $50 street value versus roughly a dollar for a Social Security number, according to Kirk Herath, Nationwide Chief Privacy Officer. The ironic thing, Herath notes, is that people are more careless with their medical identities than they are with their Social Security numbers.

Dr. Larry Ponemon, chairman and founder of the Ponemon Institute, agrees with Herath. “The information that's contained in a medical record has real value in the hands of a cyber criminal,” he told CNBC. “And there's evidence that suggests that in the world of black market information, a medical record is considered more valuable than everything else.”

Medical information also attracts criminals because of what Rick Kam, ID Experts president and co-founder, calls its long shelf life. Credit card data expires when a new card is issued. Not so with personal health records.

The volume of newly insured also spurs criminal activity. “As millions of new patients enter the U.S. healthcare system under the Affordable Care Act, patient records have become a smorgasbord for criminals,” Dr. Ponemon said in a recent Government Health IT article. 

Turn Your Luck Around

Clearly, criminals are motivated to mine for medical data. The trick is keeping ahead of them, which is hard to do in a world of evolving cyber threats. Norse, a MIFA founding member, recently published its SANS Health Care Cyberthreat Report, which provides some guidance on this problem:

 
  1. Know what’s on your network. The very technologies designed to protect can be dangerous themselves—firewalls, VPNs, etc.—if they’re not properly secured. In addition, Norse recommends keeping an eye on “non-traditional” devices, such as printers, VoIP boxes, and personal medical devices.
  2. Think like an attacker. For instance, hackers might access patient prescriptions from a networked fax machine that sent or received that data. Physical vulnerabilities such as a surveillance camera covering the entrance to the server room might reveal the passcode the IT staff types into the keypad. The scenarios are endless.
  3. Consider your networked pathways. If your organization faces internal security issues, then monitoring outbound traffic is just as critical as what comes in. “Organizations may need egress filtering—monitoring, controlling and potentially restricting the flow of information outbound from a network—to ensure that…unauthorized or malicious traffic…never makes it to the Internet,” the report states.
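The egress-filtering idea in item 3 boils down to a simple decision: allow an outbound connection only if its destination is on an approved list, and flag everything else. The sketch below illustrates that decision logic only; the hostnames are hypothetical, and a real deployment would enforce this at the firewall or proxy layer rather than in application code.

```python
# Minimal egress-filtering sketch: permit outbound traffic only to an
# allowlist of approved (host, port) destinations. Hostnames here are
# hypothetical examples; real enforcement lives in the firewall/proxy.

ALLOWED_EGRESS = {
    ("claims.example-partner.org", 443),   # hypothetical business-associate API
    ("updates.example-vendor.com", 443),   # hypothetical patch server
}

def egress_allowed(dest_host, dest_port):
    """Return True only if the outbound destination is on the allowlist."""
    return (dest_host, dest_port) in ALLOWED_EGRESS

# A known partner endpoint passes; an unknown high-port destination
# (the kind of traffic the report calls malicious outbound) is blocked:
print(egress_allowed("claims.example-partner.org", 443))  # True
print(egress_allowed("203.0.113.10", 6667))               # False
```

The design choice worth noting is default-deny: anything not explicitly approved is blocked and logged for review, which is what keeps compromised internal devices from reaching the Internet unnoticed.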

These strategies are part of a larger picture, what Norse calls “assess and attest”: to conduct and document ongoing risk analyses of all systems, not just EHRs. The hope, of course, is to drive the percentage of criminal attacks down.

At ID Experts, we encourage healthcare and insurance organizations to go one step further and “operationalize” their incident assessment methodology for regulated data disclosures. Doing so helps your organization manage its risks and comply with the web of state breach laws and the year-old HIPAA Omnibus Rules.

About the Author

Doug Pollack

CIPP, MBA. With over 25 years' experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

‘Side Effects’ of Affordable Care Act

by Bob Gregg

The list of side effects for many medications is long and somewhat frightening—anything from dizziness to fever to death. Side effects of the Affordable Care Act are equally troubling—including increased risk to data security and patient privacy—according to findings from the Fourth Annual Benchmark Study on Patient Privacy and Data Security by Ponemon Institute.

More than two-thirds (69 percent) of organizations feel that the ACA increases or significantly increases risk to patient privacy and security. Specific concerns include:

  • Insecure exchange of patient data between healthcare providers and government (75 percent)
  • Patient data on insecure databases (65 percent)
  • Patient registration on insecure websites (63 percent)

Pam Dixon, executive director of the World Privacy Forum, told NBC News that the Affordable Care Act “was like adding jet fuel” to the problem of medical identity theft.

“It’s been open season for scams related to ACA,” she said. “I don’t know that there was an easy way around that, but I think there was some lack of preparation on this front.”

The Department of Health and Human Services says “that HealthCare.gov is safe and that there have not been any breaches detected,” according to a recent Nextgov.com article. This may be true, but the lack of confidence that healthcare organizations exhibit in the Affordable Care Act shows they are convinced that data breaches can, and will, happen.

Ponemon: Lack of Confidence in HIE Security

On a related note, the Ponemon study also found a disturbing lack of confidence in the security of health information exchanges (HIEs). Only 32 percent of organizations surveyed belong to an HIE, a slight bump from 28 percent last year. In addition, one-third of organizations have no plans to join an HIE, largely because 72 percent say they are only somewhat confident or not confident in the security and privacy of patient data shared on the exchanges.

Clearly, the ACA and health information exchanges pose a significant real—and perceived—risk to patient privacy and data security. Despite these fears, however, the ACA and HIEs are changing the flow of protected health information (PHI) in the healthcare ecosystem. Healthcare organizations need to accept that fact, and take these proactive steps to protect their patients in this risky new world:

  1. Educate your patients or your health plan members about the importance of controlling their medical identities. Consumers are the first line of defense against financial fraud. When they receive a fraud alert, they contact their financial services provider to fix the problem. Similarly, patients should review their Explanation of Benefits (EOBs) to spot questionable activity. If you’re a payer or health plan, consider ID Experts MIDAS—Medical Identity Fraud Alert System—which enlists members to detect and report fraud.
  2. Operationalize your incident risk assessment and breach response processes. This moves organizations from the typical, knee-jerk reaction of incident response to a more strategic approach. In fact, the law requires you to provide a consistent, defensible method for incident risk assessment. Given the many variables in a data breach, many organizations have found value in third-party solutions such as ID Experts RADAR.
  3. Upgrade your risk analysis—and your technology—to meet changing threats. For instance, the Target breach happened at the point-of-sale, a place not considered as vulnerable. The SANS report cited above notes that “exploited medical devices, conferencing systems, web servers, printers and edge security technologies all send… out malicious traffic from medical organizations.”

Side effects of the ACA are inevitable. But being proactive can minimize their duration and impact, and set your organization up for a secure and healthy future.

About the Author

Bob Gregg

With over 30 years of experience in high technology and software services, Bob joined ID Experts as CEO in 2009. He is particularly interested in the emerging trends involving identity theft and privacy data breaches, with an emphasis on healthcare. "Let's keep our private, confidential information just that...private and confidential."

PHI Protection Network (PPN) Forum - the event for best practices to protect sensitive patient info

by Rick Kam

If you are a compliance or privacy officer of a hospital or work in a health care entity that is responsible for managing patient data, you need to attend the PPN Forum. In just one day, you will learn the current and future threats causing PHI data loss incidents as well as the best practices to effectively remediate breaches and mitigate the risk of harm to affected patients.

We’ve assembled a top-notch list of expert speakers from organizations including the California Department of Justice, Seattle Children’s Hospital, Sharp Healthcare, Symantec, CHOC Children’s Hospital, and more. The program has been designed to create an interactive exchange of ideas that will provide immediate and actionable insights that you can bring to your organization.

3 Reasons You Should Register Now:

  1. New Ideas and Information: PPN brings you the latest PHI/PII privacy and security industry practices. It provides a forum for privacy and security professionals to gain valuable information and keep up to date with current industry trends, along with the many networking opportunities available.
  2. Up-to-Date Education and Training: A forum for current trends and topics and the latest tools and technology that you don't want to miss, and an opportunity to earn CE credits from (ISC)2® Americas, ISSA, IAPP, HCCA, and AHIMA.
  3. The Latest Products and Services: PPN will showcase technologies, products, and services to strengthen data security and compliance.

Full program and registration details are available at http://www.phiprotection.org. Just for reading my blog post, use the code PHIPPN and get 20% off your registration.

See you in Anaheim!

About the Author

Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He chairs the ANSI PHI Project, the Identity Management Standards Panel, and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and a member of the Research Planning Committee for the Center for Identity, part of the University of Texas at Austin.

Fourth Annual Benchmark Study on Patient Privacy and Data Security

by Larry Ponemon

Today we are releasing our Fourth Annual Benchmark Study on Patient Privacy and Data Security. We hope you will read the report, sponsored by ID Experts, which reveals some fascinating trends. Specifically, criminal attacks on healthcare systems have risen a startling 100 percent since we first conducted the study in 2010. This year, we found the number and size of data breaches has declined somewhat. Employee negligence is a major risk and is being fueled by BYOD. Giving healthcare organizations major headaches are risks to patient data caused by the Affordable Care Act, the exchange of patient health information with Accountable Care Organizations, and a lack of trust in business associates’ privacy and security practices. For a copy of the Fourth Annual Benchmark Study on Patient Privacy and Data Security, visit www2.idexpertscorp.com/ponemon.

About the Author

Larry Ponemon

Dr. Larry Ponemon is the Chairman and Founder of the Ponemon Institute, a research “think tank” dedicated to advancing privacy and data protection practices. Dr. Ponemon is considered a pioneer in privacy auditing and the Responsible Information Management (RIM) framework.

Latest Ponemon Study Shows “Leaky Bucket” Approach to Managing New Threats

by Rick Kam

It’s been a year since the HIPAA Omnibus Final Rule was issued. Kudos to the healthcare organizations that have made strides toward compliance. But shifting threats and risks, as revealed in the newly released Fourth Annual Benchmark Study on Patient Privacy and Data Security by Ponemon Institute, are forcing organizations to be reactive, not proactive. It’s like a bucket filled with water and riddled with holes. The water keeps spurting out. Every time you patch a hole, a new one forms. The whole process of patching old and new holes is overwhelming and never-ending.

      It’s no surprise, then, that 90 percent of healthcare organizations are still experiencing breaches, and 38 percent report that they have had more than five incidents in the last two years.

      Some of the key threats the Ponemon study found are:

      Employee negligence: 75 percent reported employee negligence as their biggest worry, and insider negligence was the root of most data breaches reported in the study.

      Unsecured mobile devices: It’s a lot more convenient to use your personal mobile device for work—a major security risk to the 88 percent of healthcare organizations that permit employees and medical staff to use their own mobile devices to connect to the organization’s networks or enterprise systems.

Security gaps with business associates: In light of the Target data breach, which may have been caused by a fourth party—essentially a subcontractor of a subcontractor—this is a real concern. Only 30 percent of organizations surveyed are confident that their business associates are appropriately safeguarding patient data as required under the HIPAA Final Rule.

      Evolving criminal threats: “The latest trend we are seeing is the uptick in criminal attacks on hospitals, which have increased a staggering 100 percent since the first study four years ago,” Dr. Larry Ponemon says. “As millions of new patients enter the U.S. healthcare system under the Affordable Care Act, patient records have become a smorgasbord for criminals.”

      New vulnerabilities under the Affordable Care Act: Survey participants had strong reservations about the security of Health Information Exchanges (HIEs): a third said they don’t plan to participate in HIEs because they are not confident enough in the security and privacy of patient data shared on the exchanges.

      PHI Protection Network Conference—A Proactive Approach to New Threats

It’s time to get a new bucket—and the best way to do that is to join us at the second annual PHI Protection Network Conference, Thursday, April 10, 2014 in Anaheim, California. Senior privacy, compliance, and security officers will share best practices and insights, giving you tangible and actionable takeaways that you can implement right away. To register for Adopting Best Practices and Protecting Patients, visit phiprotection.org or the PPN LinkedIn Group.

      Free Webinar on the Ponemon Study

      In addition, if you would like more information on the Ponemon findings, join Dr. Ponemon and me for a free webinar, ACA Impacts on Patient Data Security, on Tuesday, April 8, 2014 at 2:00 p.m. ET. To register, visit here.

      About the Author

      Rick Kam's avatar
      Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He is the chair of the ANSI PHI Project, Identity Management Standards Panel and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and is a member of the Research Planning Committee for the Center for Identity, which is part of the University of Texas at Austin.

      Edith Ramirez, chairwoman of the FTC, speaks on privacy & security considerations with big data

      by Doug Pollack

      Friday at the IAPP Global Privacy Summit, Edith Ramirez, chairwoman of the FTC, took questions related to privacy and data security with an initial focus on the challenges posed by so-called "big data".  In the session, she noted that the FTC looks at the issues with big data quite broadly.

      Connect: Data Protection Community on LinkedIn

She specifically indicated that the privacy issues they are focused on with big data stem from the "ubiquitous collection" of data about consumers, combined with the use of "powerful analytical tools" to extract information and draw inferences.

She went on to note that today's privacy issues are much the same as those of the past: how to ensure adequate transparency and give individuals control over the collection, transmission, and use of their data. But with big data, the magnitude is so significant and vast that it amplifies those privacy issues.

When asked about how this can lead to consumer harms, she talked about the use of big data to "compile profiles and draw inferences". She gave as a case example a story about a retailer that, based on a woman's purchase of unscented lotion combined with other data, was able to infer that she was pregnant. The presumption in this circumstance is that drawing that inference violated her privacy, given that she never directly disclosed the information to the retailer.

She then related some of the findings and conclusions drawn by the FTC from a recent workshop they held, the results of which have not yet been published. She indicated that in the workshop they learned a lot about the benefits of big data, but also about the privacy challenges that result. These have to do with:

1. Ubiquitous collection from a range of devices, particularly given the rise of the "internet of things" (IoT).
2. The fact that such data collection can lead to unexpected uses; for instance, the transfer and sharing of data with third parties, especially without providing consumers transparency.
3. Data security concerns; specifically, how to provide "reasonable security" as datasets grow larger and larger.

In thinking about these issues, she advocated the approach of "privacy by design": embedding privacy considerations and protections into consumer products and services at the design phase.

She noted that most consumers don't read privacy policies. In the debate over the merits of a consent-based privacy model, however, the FTC continues to believe in the approach, favoring a combination of "simplified notification" with "just-in-time notices," especially to better serve consumers on the smartphone form factor.

A final comment from Chairwoman Ramirez that I'd like to note is her advocacy for national legislation and rulemaking authority related to the FTC's Section 5 authority. She noted that the public debate sparked by the Target breach has elevated the issue and concerns, but I didn't sense that she is optimistic about seeing Congressional action on this subject in the near future.

This was a fabulous session, and I will continue to watch how the FTC explores and enforces privacy issues in the big data world.

      About the Author

      Doug Pollack's avatar
      Doug Pollack

      CIPP, MBA. With over 25 years experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      IAPP Global Privacy Summit: Preparing for a transformational year in privacy.

      by Doug Pollack

With the Target breach in the news seemingly daily since December, the concept of a data breach, and its implications, has burned itself into the psyche of the American public. 2014 marks the beginning of a new era in how organizations "manage" incidents and disclosures of personal, regulated data. See the innovative ID Experts RADAR software platform at the IAPP Global Summit and make sure your organization has begun to operationalize the multi-factor assessment and management of data security incidents.

      Web Conference - Coping with the Data Breach Regulatory Avalanche

While Target may be the poster child for how a nefarious criminal hacking scheme led to the exposure of millions of Americans' personal data, acquiring credit card information is really just the tip of the iceberg of where the real organizational risks and criminal potential lie. The greater risks, both to organizations and their customers, are in industries where acquiring sensitive personal information is crucial to business operations. Think healthcare, insurance, financial services.

In these industries, you combine an environment where the personal data acquired is highly regulated with one where that data is also of great value to criminals. The 4th annual Ponemon Study on Patient Privacy and Data Security, to be released shortly, concludes that data breaches in healthcare resulting from criminal attacks have grown by 100% over the past four years. You can get a preview copy of the Executive Summary of this report in the ID Experts booth at the IAPP Global Summit next week.

Based on this and other recent data, in 2014 there will be an imperative for organizations in these industries to become rigorous in their assessment and management of security incidents involving regulated data. Failure to do this in a disciplined and consistent manner puts you at risk of having a "Target" experience yourself. Not something any privacy officer aspires to.

      ID Experts developed the RADAR incident assessment and management software platform specifically in order to address the needs and risks of these organizations in healthcare, insurance, and financial services. Our long history in helping these organizations in responding to data breaches has given us unique and valuable insights around assessing privacy incidents in the context of regulatory mandates. And RADAR's patented Regulated Data Disclosure Engine™ embodies and builds upon this deep knowledge and experience.  

      So please make time to visit ID Experts at the IAPP Global Summit next week to see the innovative and patented RADAR incident assessment and management software platform. Don't go another year without a consistent, repeatable, documented, and defensible way to manage incidents involving your regulated data.  

      About the Author

      Doug Pollack's avatar
      Doug Pollack

      CIPP, MBA. With over 25 years experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Are Data Breach Calculators Effective?

      by Jeremy Henley

There are plenty of data breach calculators out there for anyone to play with to see how bad a breach might be in terms of costs, but are they accurate? I think the answer is no, for a variety of reasons; so then the question is: what purpose do they serve? And why are there so many? Since they do not produce the same results, how do we know which ones to trust?

      I think most of the time organizations use them to help a prospective client understand how bad a breach can be so that they will invest the funds for products and services to reduce the chance of a breach, whether it is antivirus software, cyberliability insurance, consulting services or anything that may reduce the risk of a breach. 

      LEARN MORE: RADAR Software for Privacy Incident Assessment & Management

      I understand the need to prove a return on investment as part of getting the funds to improve compliance, increase IT security, or purchase insurance to cover the expenses of a data breach.  I believe that some of the calculators are built with good intentions, however the problem is that all of them are still primarily used to scare prospects into a purchase. 

ID Experts has seen a significant number of data breaches, so if any company can produce a reliable breach-cost calculator, it would probably be us. One reason we have not built a calculator is that every breach is unique. I know you have heard that before, right? It’s true. Some incidents require forensic investigators, and this can add a little or a lot to the total cost. Additionally, there is a range of other variables that are not predictable. We don’t know where or when a breach is going to hit and what the circumstances will be.

The complexity increases when you look at how one variable can change several others in determining the cost of an event: things such as the current level of compliance, or the relationship with the customer or patient whose data was lost. These issues build on one another, ultimately making each event unique and capable of generating a wide range of expenses, low or high.

As another example, the costs of the Target breach are discussed almost daily. If I had put that breach into a calculator at the onset (and let’s say the forensic cost was public, so we all had the same number), it would still be nearly impossible for all of us to arrive at the same estimate, due to all the options Target had a few weeks into the forensic engagement: notify now and risk more bad news; notify later but with the full scope of the incident; or uncover evidence that the breach wasn’t as bad. There are also decisions to be made about how much to “fix” the issue before going public. All of this would change the calculator’s estimates.

      In this particular case it appears that Target chose to alert the public before it had all the information. This may not have increased the forensic expense but it certainly created an even bigger PR issue, and a significant lost business expense, when Target had to notify a second time that there was more bad news. I don’t think any calculators have a way to figure that in yet.

On top of this, Target offered credit monitoring but didn’t fully explain where and how customers could get protected. This may not have been the most efficient solution, as it was difficult for customers to reach the customer service call center, creating more costs and problems. Of course, it is much easier to analyze the response process in hindsight. But for purposes of estimation, any one calculator, or even different individuals using the same calculator, could be off by hundreds of thousands of dollars in many cases, and probably by tens of millions of dollars in the Target case.

To answer the original question about breach calculators measuring the actual cost of a breach: I believe they are mostly for shock value, an attempt to create some sort of ROI to justify investments in minimizing data breaches, rather than a solid tool to predict your organization's true costs. If you are interested in determining the value of your corporate data, I would recommend the method developed by the American National Standards Institute, which calculates a real number based on the real risks to an organization.
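The argument above can be made concrete with a small sketch. This is purely illustrative: the cost drivers, dollar figures, and multipliers below are hypothetical assumptions, not an actual breach-cost model. The point is that plausible ranges for just a few interacting variables already produce estimates that differ by an order of magnitude, which is why two calculators (or two users of the same calculator) rarely agree.

```python
# Illustrative only: every number below is a made-up assumption.

def breach_cost_range(records, forensics, notify_early, compliance_gaps):
    """Return a (low, high) cost estimate in dollars."""
    # Per-record notification / credit-monitoring cost: a wide plausible band.
    per_record_low, per_record_high = 2.0, 15.0

    low = records * per_record_low + forensics
    high = records * per_record_high + forensics

    # Notifying before the full scope is known risks a second notification
    # plus extra PR and lost-business expense (as in the Target example).
    if notify_early:
        high *= 1.5

    # Pre-existing compliance gaps tend to raise fines and legal exposure.
    if compliance_gaps:
        low *= 1.2
        high *= 2.0

    return low, high

low, high = breach_cost_range(records=100_000, forensics=250_000,
                              notify_early=True, compliance_gaps=True)
print(f"${low:,.0f} - ${high:,.0f}")  # → $540,000 - $5,250,000
```

Even this toy version spans nearly a 10x range for a single incident; with real-world variables (litigation, regulator behavior, customer churn) the spread only widens.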

      About the Author

      Jeremy Henley's avatar
      Jeremy Henley

Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for healthcare privacy and compliance and brings 11 years of sales and leadership experience to the ID Experts team.

      Don’t Be a Data Breach Target

      by Rick Kam

      Up to 110 million people were affected by last year’s Target breach. But it’s the retailer that will have to pay the price—possibly $3.6 billion in fines for the breach of approximately 40 million credit and debit cards.

      Most organizations will never face a Target-sized data breach. But smaller companies suffer similar consequences that are just as damaging. 

      With that in mind, organizations can learn valuable lessons from the Target data breach—lessons that safeguard their business, their customers, and their reputation.

      Lesson #1: Protect your organization’s reputation and bottom line with fast, accurate breach assessment. Doing so enables effective communication and response that limits reputational and perhaps financial damage. Target’s name was pummeled repeatedly as additional information came to light—first the breach of up to 40 million credit and debit cards, and then the theft of up to 70 million people’s personal information.

      Lesson #2: Get your business associates into alignment. Data breaches don’t stop at the door of your organization, as the Target breach illustrates. Recent news reports say that hackers may have accessed the Target system by way of a third-party vendor.

Lesson #3: Operationalize your security incident assessment and management processes. This moves your organization from a typical, knee-jerk breach decision-making process to more timely, consistent incident assessment. This strategy might have helped Target avoid at least one class-action lawsuit, in which the court must determine if “Target unreasonably delayed in notifying affected customers of the data breach.”[1]

      The difficulty with strategizing incident response, though, is that each data breach is different—the type of information exposed, the number of records, the way the breach happened. The Target data breach is a perfect example—a single incident that involved two different groups with different data types.

      ID Experts solves this problem with RADAR™, software for managing incidents involving regulated data. Using RADAR helps organizations operationalize privacy and security incidents to minimize risk and meet regulatory compliance—all while taking into account the unique details of each incident.

Some experts contend that the data breach should never have occurred. Whether or not that’s true, it did happen. And it will again. But we can all learn from Target and do a better job of safeguarding sensitive data and operationalizing incident management.


      [1] “State Attorneys General Target Target,” December 20, 2013, by Elizabeth MacDonald. From FOXbusiness.com: http://www.foxbusiness.com/industries/2013/12/20/state-attorneys-general-target-target/

      About the Author

      Rick Kam's avatar
      Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He is the chair of the ANSI PHI Project, Identity Management Standards Panel and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and is a member of the Research Planning Committee for the Center for Identity, which is part of the University of Texas at Austin.

      CISOs know the importance of operationalizing data incident response

      by Mahmood Sher-jan

I was invited to speak about data governance to several groups of security and compliance executives in Boston and Washington, DC last week. Coincidentally, the Boston session fell on January 28th, which is designated Data Privacy Day. These sessions were part of data governance roundtable discussions organized by the CISO Executive Network. The scope of discussion ranged from organizational culture to data proliferation to emerging technologies addressing data classification, behavioral threat intelligence, and incident response management.

      Learn More: Security Incident Assessment and Management

I had the unenviable joy of talking about the inevitable: breach incidents, and the use of software to help manage the data breach incident lifecycle. As much as everyone wants to focus on breach “prevention,” the reality is that investing in “incident response” is just as important, if not more so. My audience was totally in sync with me on this point. Check out this infoRisk Today article, which offers a perspective from Tom Cross of Lancope, who argues that prevention alone is simply not effective. If you are a CISO, you are guaranteed to experience a breach sooner or later, so how the event unfolds is a matter of your level of preparation and your incident management tools.

What a difference a stream of headline-grabbing breaches and tightening breach regulations can make: organizations no longer view breaches as unlikely events that only happen to others. I talked with groups of highly experienced security and compliance executives, and none of them were in denial. I didn’t have to convince anyone in this audience that their organizations are very likely to suffer a data breach; they all knew it, deep down. Most of them raised their hands when I asked if they had documented incident response plans (IRPs) in place, and a surprising number said they test their IRPs annually. That is impressive, because it shows that breach incidents are no longer an exception but an anticipated business risk.

If you handle protected health information or sensitive customer information, you are an ideal target. It really doesn’t matter who “owns” the data; what matters is that you own the “responsibility” to protect it if you use or hold it. This was a point stressed by the host law firms and the executives sitting around the massive rectangular tables. Having participated in a number of IRP testing exercises, I can attest that running a mock incident scenario is very illuminating about how well a company would handle a real incident. Testing also helps you uncover gaps in your response process, tools, and chain of command so you can fix them before they turn your data breach into a disastrous emergency.

Data governance is about creating a structure and framework; data management is about operationalizing that governance structure. The role of the CISO is growing more complex, and there was consensus among the audience, made up mostly of CISOs, that they need to be the driving force in establishing a culture of executive awareness and involvement in data governance and management. Without board and senior executive involvement in data governance, it is very difficult to establish a culture of privacy and security and to get the resources needed to create an effective program—yes, an ongoing program. On that note, these security, privacy, and compliance executives were very receptive to and interested in learning about emerging technologies: data classification from Titus, behavioral threat intelligence from startup Confer, and incident response management, which included our patented incident management software (RADAR) and our incident response services. You can check out all of these technologies to see how they can help your data governance program.

      About the Author

      Mahmood Sher-jan's avatar
      Mahmood Sher-jan

Mahmood is the lead inventor of ID Experts RADAR, an award-winning and patented incident management software platform. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      How Data Breaches Really Happen: Reasons 1 – 1,000,000

      by Heather Noonan

Have you ever read those stories about a company data breach and wondered, “How in the world did that happen?” And then thought, “Did that really happen, and could it happen again?”

All too often, the answer is yes. These strange and rather peculiar scenarios never cease to amaze me, and honestly, they never become boring. Everything from papers flying out of a moving truck to hackers blackmailing companies into finally doing something. It’s all becoming a reality, one that all of us probably wish would just disappear.

I can easily think of a handful of situations that left me baffled and shaking my head: incidents you would think could only happen in a daytime TV drama, when in reality they are happening every day, to the company right down the street.

      Here are some examples of my head scratching:

• A hospital employee stole hundreds of patient data files, then posted about it on Facebook, exclaiming how brilliant they were. Really? That thoughtful Facebook post later helped the FBI catch them.
• A city administration was recently hacked, and the hackers were nice enough to alert the city beforehand, telling them that if they secured their network properly, they wouldn’t be breached. Sadly, it sounds as if the city didn’t listen, as thousands of personnel files were taken.
• One of the more complex data breaches involved hundreds of thousands of private records and credit card numbers, only for it to happen again a year later. This is where my mind asks, “Did that really happen?” Yes, I guess it did. Again.

To lighten the mood after all these depressing scenarios, we decided to make light of what has occurred over the years and add some humor to an often painful experience.

Below are three separate incidents that we posted to our Twitter feed. You be the judge!

      How Data Breaches Really Happen #62 – Oh, so you say you lost your flash drive?


      How Data Breaches Really Happen #415- Sorry, I left my work laptop in my car overnight and it was stolen. Oops!      

      How Data Breaches Really Happen #742- How can this beautiful Unicorn wallpaper cause a server hacking?


      About the Author

      Heather Noonan's avatar
      Heather Noonan

Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices, and is the primary point of contact for client communication during data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of the company, including informational webinars and blog discussions. She has a Bachelor of Science specializing in business communication and has over 15 years of experience in client customer service, with 10 years specifically in project management.

      HIPAA Omnibus Final Rule: One Year After

      by Doug Pollack

Last year, regulators put the healthcare information management and data breach community on notice: going forward, they would expect much greater focus on, and performance in, securing patient health information (HIPAA protected health information, or PHI) and managing the privacy of patient data from healthcare organizations (HIPAA covered entities) and their business and technology partners (HIPAA business associates). In this post, I thought I’d take a look at what 2013 brought us, and what to expect as we plunge into 2014.

On January 17, 2013, the U.S. Department of Health and Human Services Office for Civil Rights (OCR) published the HIPAA Omnibus Rule, which updated the privacy, security, breach notification, enforcement, and genetic information rules. While there was a lot in this very lengthy document, there were three primary takeaways for me as to what would significantly impact healthcare organizations during 2013.

First, the Omnibus Rule for the first time directly obligates HIPAA business associates, which come in all shapes and sizes, to comply with the rules. In other words, business associates now have obligations under the privacy, security, and breach notification rules, and are subject to direct investigation and enforcement actions by OCR if they are found wanting in any of these areas. This is a big deal. During 2013, organizations needed to figure out whether they are, in fact, HIPAA business associates, and what it would take for them to comply with the rules, especially the Security Rule and their obligations under the Breach Notification Rule.

Second, the Omnibus Rule changed the definition of “data breach” under the breach notification rule. Gone is the somewhat fuzzy definition that relied on determining the “risk of harm” to affected individuals. Under the Final Rule, organizations must determine whether an unauthorized disclosure carries a low probability that the PHI was “compromised”. This jettisons the fuzzy “risk of harm” standard for an also somewhat fuzzy “probability of compromise” standard. However, the new standard is backed by a four-factor approach for carrying out the incident risk assessment in a compliant fashion, a good addition to the rule.
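To make the four-factor approach concrete, here is a minimal sketch of what an operationalized assessment might look like. The factor names follow the Omnibus Rule; the numeric scores, weights, and threshold are hypothetical illustrations only. A real, compliant assessment requires documented judgment for each factor, not a fixed formula.

```python
# Factor names follow the Omnibus Rule's four-factor risk assessment;
# the scoring scheme itself is a hypothetical illustration.
FACTORS = [
    "nature_and_extent_of_phi",         # 1. What PHI was involved; how identifiable?
    "unauthorized_recipient",           # 2. Who received or used the PHI?
    "phi_actually_acquired_or_viewed",  # 3. Was the PHI actually acquired or viewed?
    "mitigation",                       # 4. How well has the risk been mitigated?
]

def assess_incident(scores):
    """scores: factor name -> risk contribution in [0, 1].
    Returns (probability_label, notification_required)."""
    missing = [f for f in FACTORS if f not in scores]
    if missing:
        raise ValueError(f"assessment incomplete, missing: {missing}")
    avg = sum(scores[f] for f in FACTORS) / len(FACTORS)
    # Under the rule, notification is required unless the assessment
    # demonstrates a LOW probability that the PHI was compromised.
    low_probability = avg < 0.25  # hypothetical threshold
    return ("low" if low_probability else "not low"), not low_probability

label, notify = assess_incident({
    "nature_and_extent_of_phi": 0.8,        # e.g., SSNs and diagnoses exposed
    "unauthorized_recipient": 0.6,          # unknown external party
    "phi_actually_acquired_or_viewed": 0.9, # evidence of access
    "mitigation": 0.5,                      # no assurances obtained
})
print(label, notify)  # → not low True
```

Even as a toy, this captures the structural point: the assessment must document all four factors, and breach notification is the default outcome unless the documented result is a low probability of compromise.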

Third, the Omnibus Rule gives OCR and other bodies increased opportunities to penalize organizations found to have violated the rules. During 2013, Director Rodriguez of OCR publicly stated that his organization intends to be more aggressive in investigating and fining violators. OCR signed five resolution agreements during 2013 (source: OCR website, January 28, 2014) in settlement of cases it was investigating. The largest financial penalty was the WellPoint settlement in July, $1.7MM for a case in which the ePHI of 612,000 individuals was exposed in an online application database. However, such enforcement actions are not only for larger companies. In December, OCR settled with Adult & Pediatric Dermatology of Concord, MA for $150,000 over a breach resulting from a lost thumb drive containing the ePHI of just over 2,000 patients. So OCR is proving to be an equal-opportunity investigator when it comes to enforcing the Omnibus Rule.

So what does 2014 hold for healthcare organizations and their partners? It will certainly be a time of transition, since Director Rodriguez will be leaving OCR for another government post. With this transition come questions as to whether the new director will maintain the somewhat aggressive stance on enforcement we’ve seen under Director Rodriguez, as noted by Health IT Security (“Will OCR leadership changes affect healthcare organizations?”, January 15, 2014). You are probably well advised to assume that OCR will continue along the same trajectory, and to prepare your organization to stand up to the scrutiny of an audit or an investigation into your privacy, security, and data breach practices.

2014 will also be the year when business associates become targets of greater scrutiny. To a great extent, up until now, they have gotten a “virtual pass” from OCR relative to their practices for data security and compliance with the rules. Now that the rules are enforceable against business associates, it would be prudent for these organizations to ensure they have credible and defensible practices in place for protecting the PHI they are entrusted to handle by their covered entity clients. If I were to make a bet, it would be this: a sizable HIPAA business associate will be investigated, found in violation of the Omnibus rules during 2014, and fined over $1MM. We’ll see.

And lastly, during 2014 we expect to see much greater focus within healthcare organizations on their approach and processes for complying with the Accounting of Disclosures regulation. While the current rule is undergoing revision under an NPRM (Notice of Proposed Rulemaking) from OCR, my expectation is that this will morph into a final rule during 2014. Healthcare privacy officers are already expected to comply with patient requests for an accounting of disclosures, and they will need to make sure their systems and tools can meet existing requirements as well as the anticipated revisions resulting from the Tiger Team’s recommendations.

      The Privacy and Security Tiger Team presented recommendations to the Health IT Policy Committee (HITPC) last month that included recommendations for HHS/OCR as to how best to balance the needs and requirements of patients for Accounting of Disclosures with the burden placed on healthcare organizations in order to provide for them (Health IT Security, Privacy and Security Tiger Team advises HHS on HIPAA tweaks, December 9, 2013).  Their recommendations stated that:

“The Tiger Team does not believe the proposed access report meets the requirements of HITECH to take into account the interests of the patient and administrative burden on covered entities (CEs). Instead, we urge HHS to pursue a more focused approach that prioritizes quality over quantity, where the scope of disclosures and related details to be reported to patients provide information that is useful to patients, without overwhelming them or placing undue burden on CEs.”

During 2014, healthcare organizations will need to ensure that their electronic health record systems can accommodate the Accounting of Disclosures regulations, and that they have a tool to confirm and “prove” they are abiding by the rules, maintaining organized documentation of such disclosures so they can demonstrate compliance to OCR if required. Let’s remember that organizations such as health plans that do not use electronic health record systems are also obligated under the AoD regulations.

      About the Author

      Doug Pollack

CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Protecting against Healthcare Fraud: Lessons Learned from the Financial Industry

      by James Christiansen

      We all have routines: get up, brush our teeth, eat breakfast, and exercise. As consumers, we also have fairly set spending patterns. Visa was among the first credit card issuers to develop a system for tracking those patterns—and noticing deviations in them—as a way to detect potential fraud.

      With this system, Visa set a precedent for protecting consumers against financial loss due to fraud. I had the privilege of working at the company during this time, learning valuable lessons along the way—lessons that can apply to fighting healthcare fraud and medical identity theft.

      Lesson 1: Consumers Are Their Own Best Protectors

      Chances are you’ve received a call from your credit card issuer about a suspicious card purchase. These calls—alerts, really—occur because the fraud tracking system detected a change in your normal spending patterns.
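The pattern-deviation idea behind those alerts can be sketched in a few lines of Python. This is a toy illustration, not Visa’s actual system: the z-score test, the threshold, and the use of transaction amount as the only feature are all assumptions for the sake of example.

```python
from statistics import mean, stdev

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    cardholder's historical spending pattern (a toy z-score check)."""
    if len(history) < 2:
        return False  # too little history to establish a pattern
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # any change from a perfectly flat pattern
    return abs(amount - mu) / sigma > z_threshold

# A cardholder with a steady pattern of small purchases:
history = [25.0, 30.0, 22.0, 28.0, 35.0, 27.0]
print(is_suspicious(history, 29.0))    # False: a typical purchase
print(is_suspicious(history, 2500.0))  # True: a sharp deviation
```

Real fraud engines score many features at once (merchant category, geography, velocity), but the principle is the same: model the customer’s normal behavior and alert on deviations.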

      When you get such a call, what’s the first thing you do?

      Contact your financial services company, of course, so they can resolve the problem. In this way, consumers become the first line of defense against financial fraud and identity theft. Consumers, in fact, have been instrumental in driving down fraud costs, not only for themselves, but also for the industry as a whole.

      Consumer-driven defense is an excellent model for the healthcare industry to follow. Members of health plans should have the right to exercise the same level of control over their medical identity as consumers do over their financial identity. Doing so will protect them against the health risks that arise from medical identity theft and healthcare fraud.

      Lesson 2: No “Acceptable” Level of Fraud Loss in Healthcare

      For credit card issuers, there is a level of acceptable financial loss due to fraud. There has to be. At some point, it becomes more expensive to try and eradicate fraud altogether than it is to bear the cost of some fraud.

But with healthcare fraud and medical identity theft, the results can be much worse. As with credit card fraud, there is financial loss, but there are also reputational and medical repercussions. Imagine the embarrassment and legal liability if someone’s psychiatric records were exposed, or consider the misdiagnosis or mistreatment that arises from mixed-up medical records. People get sick, and they even die. It’s happened.

So the question is, how do you put a price on a person’s life or health? In other words, what is the acceptable level of loss due to healthcare fraud or medical identity theft?

      The answer, of course, is zero.

      MIDAS—Healthcare’s Solution to Fraud Loss

      If consumers are their own best protectors, and if there is no acceptable loss due to healthcare fraud, then it logically follows that patients have the right to take charge of their medical identity.

      ID Experts’ MIDAS—Medical Identity Alert System—is a good solution. Its alert-driven system lets members stop fraud in its tracks, similar to the successful method that Visa launched years ago. This proactive approach can help reduce the level of fraud and medical identity theft, reducing the risk to our financial, reputational, and medical health.

      Given the rising costs of healthcare due to fraud, everybody, including patients, needs to step in and fight. MIDAS may be the weapon we all need.

      About the Author

      James Christiansen

James Christiansen is Chief Information Security and Risk Officer of RiskyData, an information security and privacy solutions corporation focused on providing clients scalable and cost-effective tools and services to manage their information risk. Prior to joining RiskyData, James was Chief Information Risk Officer for Evantix and CSO for Experian Americas, where he had overall responsibility for information security, providing strategic direction and vision across Experian business units. James joined Experian after serving as Chief Information Security Officer for General Motors, where his responsibilities included worldwide implementation of the security plan for GMAC, the largest financial corporation, and for the largest manufacturing corporation in the world. Prior to joining GM, he was SVP and Division Head of Information Security for Visa International, responsible for their worldwide information security program. James has been featured in the New York Times as one of the leaders in information security. He has an MBA in International Management and a BS in Business Management, and is the author of the “Internet Survival Series,” a contributing author of “CISO Essentials,” and the author of numerous industry papers. James has chaired the IT Fraud Summit, co-chaired the ANSI study of the impact of security breaches on healthcare, and spoken at prestigious events such as the Business Round Table, Research Board, American Bar Association, American Banker, RSA, BankInfoSecurity, ISSA, and MIS Training Institute. He has more than 25 years of experience in information security and systems management, including network and operating systems management and application development and design, and he now faces the significant challenge of providing risk management solutions for RiskyData.

      Information Security of Health Information Exchanges – Are Humans the Weakest Link?

      by Winston Krone

      Technology glitches have been a predominant source of news headlines for health information exchanges (HIEs).  These glitches have raised a breadth of concerns ranging from usability to the protection of patient privacy.  What has received less news coverage is the risk that human vulnerabilities pose to HIEs. 

      Health care organizations that use HIEs face a broader array of “soft” risks such as human error or phishing attacks.  Unfortunately, human vulnerabilities are addressed less frequently in the context of information security and incident response programs.  The result is an open door to a potential data breach.

      CASE: Human Error leads to HIE Breach

      A September 2013 data breach of Minnesota’s online health exchange, MNsure, demonstrates the impact of human error.  The data breach occurred when an employee mistakenly emailed the personal information of 2,400 insurance agents to an insurance broker.  An investigation conducted by Minnesota’s Office of the Legislative Auditor confirmed the breach was not intentional and resulted from poor internal procedures and human error.  (LINK: http://www.startribune.com/business/231039781.html)

MNsure’s breach event is consistent with findings reported in the 2013 Cost of Data Breach Study: Global Analysis, a report from Symantec and the Ponemon Institute.  According to this study of 277 data breach incidents from companies around the globe, 35% of data breaches resulted from human error.  As HIEs mature, human error may be the most significant source of security vulnerabilities.

Problems with misaddressed emails are further exacerbated by technology supposedly designed to make our lives easier, such as Outlook’s auto-completion of email addresses.  This has led to cases where emails containing PHI were sent to the wrong “John Smith” from the sender’s address book.  It’s interesting to note that in Europe, national privacy regulators have specifically stated that certain types of communication (e.g., faxes) are inherently insecure because of the unreasonably high possibility of user error without any safeguard to catch mistakes (a one-digit error when typing in a fax number can send the data to a complete stranger).

      CASE: Names and Email Addresses Have Value

While organizations have improved information security around certain types of personal information, such as the medical information held in HIEs, other personal information, such as email addresses, has been short-changed.  An Anonymous attack on Uniontown Hospital in Pennsylvania shows how the theft of names and email addresses is discounted.

Anonymous, a hacking group, accessed computer systems at Uniontown Hospital and absconded with names, email addresses, and other personal information.  The group then posted the stolen information online, where it was identified by information security researchers who notified Uniontown Hospital.  A spokesman for the hospital indicated the posted data came from a two-year-old breach and that no medical information or sensitive data such as Social Security numbers or credit cards had been stolen.

Names and email addresses may not carry the same identity theft risks as a Social Security number, but the risks are not benign.  Cyber-criminals can use a name and home address to acquire medical information over the phone.  An email address can be used for a targeted phishing attack.  It is conceivable that a hospital’s healthcare portal could be compromised by a single click on a Word document attached to an email.

      Addressing “Soft” Risks in Information Security

As information security programs mature, further reductions in information security risk will come from addressing people and process risks rather than from deploying technology solutions such as firewalls or encryption.  Listed below are recommendations for reducing these “soft” risks.

      1. Use quality control and assurance techniques such as analyzing monthly failed logins by user or average number of data entry corrections to reduce the risk of human error.  Quality-oriented techniques and metrics may take effort to establish, but once present, provide deep control over “soft” risks.
      2. Increase the opportunities to report security incidents including potential data breaches.  This includes anonymous hotlines, forms on a public website and open-door policies for employees to interact with management.
      3. Have policies for information assets such as names and email addresses as part of information security programs.  This could include broader client notification criteria for breach events even when personal information such as a credit card number is not disclosed.
      4. Use data risk exposure analysis to uncover hidden or unknown data risks.  Organizations use encryption to protect stored data, and data loss prevention (DLP) tools to scan for exposed PII that reside in email or places where data is not protected.  The shortfall in these approaches is the data that goes undetected.  The use of forensics and other data analytics can expose and quantify these hidden data risks.
5. Increase the use of information security analytics to protect the IT environment and information assets.  A limited understanding of security data, trends, and analytics remains a barrier to visibility and control over privacy and information security programs.  Organizations such as SANS and EC-Council offer specialized training programs that cover many facets of information security analytics.  (LINKS: http://www.eccouncil.org/ and http://www.sans.org/)
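The failed-login metric in recommendation 1 can be prototyped with a short script. The event layout (user, ISO date, success flag) and the review threshold below are assumptions for illustration, not any particular product’s log format.

```python
from collections import Counter

def monthly_failed_logins(events):
    """Tally failed logins per (user, month).

    `events` is a list of (user, iso_date, success) tuples; the field
    layout is an illustrative assumption, not a real log schema.
    """
    tallies = Counter()
    for user, iso_date, success in events:
        if not success:
            tallies[(user, iso_date[:7])] += 1  # "YYYY-MM" bucket
    return tallies

def flag_outliers(tallies, threshold=10):
    """Return (user, month) pairs whose failed-login count exceeds a
    review threshold -- a simple quality-control metric."""
    return [key for key, count in tallies.items() if count > threshold]

events = [("jsmith", "2014-01-03", False), ("jsmith", "2014-01-04", True)]
events += [("mjones", "2014-01-%02d" % day, False) for day in range(1, 15)]

print(flag_outliers(monthly_failed_logins(events)))  # flags mjones for 2014-01
```

A production version would also baseline each user against their own history, but even a fixed threshold like this surfaces accounts worth a second look.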

      Authors

      Winston Krone, Esq.

      Megan Bell

      About the Author

      Winston Krone

      Winston Krone is the Managing Director of Kivu Consulting, Inc. in San Francisco, California. He manages a team of technology experts specializing in data breaches, computer forensic investigations, and security compliance, and has served as a legal and technology advisor to corporations, healthcare organizations and financial institutions in cases involving online privacy, hacking, and theft of trade secrets. Winston has testified as an expert in federal and state courts on unauthorized access to networks, deletion of digital evidence, and the veracity of email and online communications. Winston is qualified as an attorney in California and as a solicitor in the United Kingdom. Prior to Kivu, Winston was the Director of Computer Investigations for a publicly traded risk management company. As an attorney, he worked for major law firms in London, Brussels and San Francisco, and for the UN and US State Department in Rwanda, Bosnia, and Kosovo.

      Happy New Year & What’s in Store for 2014 in Healthcare Privacy, Security & Compliance

      by Doug Pollack

So 2013 was a very busy year for anyone who knows what HIPAA stands for. This should include healthcare providers, insurance companies, and (at least soon) their thousands of business associates, among others. If you hold a privacy, security, compliance, or general counsel title within any such organization, hopefully you got some rest over the holidays, because 2014 promises to be a barnburner.

      The U.S. Department of Health and Human Services (HHS) through their Office for Civil Rights (OCR) kicked off 2013 by publishing the HIPAA Omnibus Rule, which incorporated the final privacy, security and breach notification rules. They then began executing on their promise to be more active in investigating potential violations of these HIPAA rules, while also carrying out the first of what we expect will be an on-going audit program.

During 2013, there were over 100 HIPAA privacy breaches affecting 500 or more individuals reported to OCR. The largest of these was at Horizon Blue Cross Blue Shield, where two stolen laptops contained information on over 840,000 individuals. OCR also investigated dozens of organizations for potential HIPAA violations. It ended 2013 by settling with Adult & Pediatric Dermatology of Concord, Massachusetts, for a payment of $150,000 and an agreement to abide by a corrective action plan (CAP), including development of a “risk analysis and risk management plan” to “address and mitigate security risks and vulnerabilities.”

So why do I think that 2014 will be such a frenetic year for privacy, security and compliance officers in healthcare? In a not-overly-scientific survey, we asked some of them what they anticipate for this year, and what they wish for. Their answers were illuminating.

I think the most creative response we received was from a member of this community who wished for a compliance fairy. The compliance fairy would “sprinkle compliance dust and all employees would follow the rules. If they don’t, they would disappear.” This and other compliance “strategies” are outlined in “Compliance Fairy” Tops Healthcare Wish List in 2014, published in the January 2014 edition of Government Health IT News.

While attracting a compliance fairy to your organization will, I’m sure, be the most successful strategy for addressing privacy, security and breach risks in 2014, other members of the community had additional suggestions, and prognostications, for the year.

      Several individuals felt that their HIPAA compliance efforts would benefit most by increasing budget, staff, training and audit assistance. This only makes sense since privacy, especially, tends to be a highly underfunded mandate within healthcare organizations. Among this group, the realist noted that they expect 2014 to require “more work, higher expectations, and no new staff.”

Even the most optimistic person will acknowledge that 2014 will bring healthcare more audits, more privacy and security incidents, more OCR investigations, and more fines, penalties and corrective action plans. With more health information going into electronic health record (EHR) systems, more of it moving through health information exchanges (HIEs), more people signing up for coverage through the brand-new health insurance exchanges, and more patient information becoming accessible through the internet and the plethora of devices we all use, data privacy incidents are likely to rise in 2014.

      Based on all of this, and because most of you don’t have ready access to a compliance fairy, I have two suggestions for your healthcare organizations.

First, consider how you organize to support the health and efficacy of your privacy, security and compliance efforts. While it is not at all unusual for these functions to sit in separate organizational “silos,” this is counterintuitive (at least to me) given the level of co-reliance and interaction required between them to develop and execute effective programs. In Privacy, Security, and Compliance: Strange Bedfellows or a Marriage Made in Heaven, the authors note that:

“As those of us who work in security and compliance know, there is a strong delineation between the two areas. We may be completely satisfied that a third party—for example, Amazon—runs a very tight ship regarding security within its storage environment. Yet without contractual assurances, a higher education institution cannot agree to the storing of regulated data (e.g., FERPA or HIPAA data) within that environment. This leads to various cases that lie at the boundaries between privacy, security, and compliance.”

      To me this, as well as the goal for more effective governance around privacy, security and HIPAA compliance, argues for an organizational structure that unifies these functions. Your organization should consider such a move in 2014.

Second (and I acknowledge that this recommendation is self-serving to some extent), you should use real software for managing data privacy incidents and disclosures. Today, most organizations that acquire, store, exchange or otherwise have access to health information manage their unauthorized disclosures (a.k.a. data breach incidents), not to mention their authorized disclosures, by committee, using home-grown spreadsheets and soliciting just-in-time information on relevant laws and regulations from counsel. This is inefficient, confusing, and often error prone, especially in a process where errors can result in egregious consequences.

Purpose-built applications, such as ID Experts RADAR, are now available to fill this need for decision support and documentation. This product was designed specifically to address the problem, faced across privacy, security and compliance, of efficiently managing incidents involving unauthorized disclosure of regulated data, as is the case under HIPAA along with numerous state laws and other regulations.

      So just as has happened so many times in past business evolutions, where there is an intersection of regulations and risk, technology gets drawn into this vortex to provide a solution. A decade ago this occurred in banking. Now it is happening in organizations that have access to and manage regulated privacy data, especially highly sensitive PHI, and they should seriously look at a solution such as RADAR during 2014.

      So happy new year privacy, security and compliance professionals. As I said at the beginning of this post, it’s going to be a barnburner of a year.

      About the Author

      Doug Pollack

CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Why Aren’t Health Insurance Exchanges (HIX) Bound By HIPAA Rules?

      by Mahmood Sher-jan

Health Insurance Exchanges come in two flavors: Federally Funded Exchanges (FFE) and State Exchanges.  And regardless of the flavor, they need image and technical makeovers given their catastrophic public debut.  Thank goodness for the good old paper forms that have come to the rescue of consumers looking to enroll and of the officials responsible for making the enrollment systems work.  Given all the initial technical missteps in the rollout of these exchanges, I hesitate to pile on, but it’s hard to resist, especially when it comes to the privacy and security safeguards, or lack thereof.  The government’s own memo confirmed that not enough testing of privacy and security safeguards was done before HealthCare.gov was rolled out.  So for those of us who are privacy and compliance nerds, it raises the question: which privacy and security rules govern these exchanges, and who is in charge of enforcement?

Yes, it is understandable that a HIX would not meet HIPAA’s definition of a covered entity (CE), and therefore the HIPAA Privacy Rule would not generally apply.  But I wonder why these exchanges were not designated as Business Associates (BAs) under HIPAA, since they all provide a clear service (data analysis and eligibility determination) to participating health plans, and these plans are all covered entities under HIPAA.

So in the spirit of keeping it simple, I think extending the definition of a BA (45 CFR 160.103) to a third-party entity that assists a health plan with data analysis and enrollment eligibility determination would do the trick.  A HIX could then qualify as a Business Associate and be governed by the HIPAA Privacy Rule provisions 45 CFR 164.502(e), 164.504(e), 164.532(d) and (e), which in turn would also apply the Administrative Safeguards under 45 CFR 164.308.  This would make the rules and compliance obligations far more consistent for HIXs and participating health plans.  Instead, what we have is 45 CFR 155.260, Privacy and security of personally identifiable information, which basically addresses the creation, collection, use and disclosure of PII (not PHI).  I am still trying to clarify which agency is in charge of enforcing HIX privacy and security obligations.  Does the flavor of HIX matter as to who is in charge?  A BA designation for HIXs would have made this pretty clear, with HHS/OCR as the enforcement agency ensuring that a HIX meets its obligations under the HIPAA privacy and security safeguards.  And since we finally got clarification on the roles and responsibilities of subcontractors under the HIPAA Final Rule, the same could have been applied to non-Exchange entities associated with FFEs and State Exchanges.

I heard a recent interview with Jeffrey D. Zients, President Obama’s troubleshooter for correcting problems with the HealthCare.gov Web site, in which he said the lack of an incident management process was one of the key deficiencies found through a root cause analysis of the HealthCare.gov debacle.  Again, this is a clear requirement under the HIPAA Security Rule (45 CFR 164.308(a)) along with the HIPAA Breach Notification Rule.  Healthcare entities are just now figuring out these rules, so why not stick with the same standards as much as possible to reduce confusion and improve compliance?

To make matters even more complicated, HHS has proposed separate data breach reporting rules for FFEs, State Exchanges and non-Exchange entities associated with federal and state exchanges.  Apparently HHS deems the scope of data incidents in the Exchange environments to be broader than HIPAA’s, because instead of using HIPAA’s definition of incidents and the newly finalized risk assessment factors, it uses OMB Memorandum M-06-19, Memorandum M-07-16 and NIST publication 800-61 to define incidents and breaches.  No wonder the jobs of privacy, compliance and security officers are getting tougher by the day.  In §155.280(c)(3), HHS and CMS proposed that FFEs, non-Exchange entities associated with FFEs, and State Exchanges must report all privacy and security incidents and breaches to HHS within one hour of discovery.  This looks like a very lofty goal given the reality and complexity of incident assessment and decision making across the industry.  Laws are supposed to shape positive behavior, not induce despair and confusion among those obligated to comply.

I started with a simple question, whether Exchanges are bound by the HIPAA rules, and quickly found myself navigating a complex maze of old and new rules that unintentionally do more to confuse these stewards of consumers’ PII and PHI than to guide them.  The complexity of these rules also contributes to the poor state of compliance and ultimately lowers consumer protection.  The proliferation of new data privacy and security rules isn’t helpful when existing rules can achieve the same end game.

      About the Author

      Mahmood Sher-jan

Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      Medical Identity Theft Prevention – Finally! A Tool for Consumers

      by Robin Slade

      Medical identity theft and related healthcare fraud in the U.S. are undermining healthcare and increasing costs for its provision. Individuals are being physically harmed because of fraudsters and the dangerous changes they make to individuals’ medical identities through their healthcare records. And, shockingly, the FBI reports that medical professionals are more willing to risk patient harm in healthcare fraud schemes than ever before.[1]

      • Healthcare fraud is increasing and in the U.S. costs an estimated $80 billion each year.[2]
      • Up to 10 percent of all healthcare expenditures are lost each year to scammers through healthcare fraud and medical identity theft.[3]
      • Medical identity theft carries serious health and financial implications for victims.[4]
      • 54 percent of patients do not check their health records or Explanation of Benefits (EOBs) for inaccuracies.[5]
      • 52 percent of those who do find inaccuracies fail to report them.[6]

Typically, healthcare fraud is not discovered until long after it occurs, and fighting this type of fraud has been made more difficult by the lack of any product or service that could identify it early on. Building on the successful monitoring and alert strategies employed by the financial services industry, ID Experts has developed a powerful new product to reduce healthcare fraud and its impact on individuals: MIDAS™—Medical Identity Alert System.

MIDAS™ is so important because it is the first and only healthcare fraud software built around engaging patients throughout the healthcare process.  It empowers consumers, through their health plans, to report suspicious claims, which in turn protects individuals’ health and financial assets and can potentially lower healthcare costs.  MIDAS engages and informs consumers, allowing them to review every healthcare transaction submitted on their behalf and to become the first line of defense against medical identity theft.



      [2] Ibid.

      [3] National Health Care Anti-fraud Association. 2010.

      [4] PWC Health Research Institute. "Behind the Numbers: Medical cost trends for 2012." May 2011.

      [5] Ponemon Institute 2013 Survey on Medical Identity Theft, http://www.ponemon.org/blog/2013-survey-on-medical-identity-theft retrieved December 12, 2013.

      [6] Ibid.

      About the Author

      Robin Slade

      Robin M. Slade is the Development Coordinator for the Medical Identity Fraud Alliance, a public/private partnership that unites the healthcare ecosystem to develop solutions and best practices for medical identity fraud. Ms. Slade is also the President and Chief Executive Officer of the Foundation for Payments Fraud Abatement and Activism and FraudAvengers.org, a non-profit corporation and weblog focused on helping consumers lessen their exposure to fraud and scams. She is also Senior Vice President and Chief Operating Officer for The Santa Fe Group, and manages the Shared Assessments Program, a consortium created by leading banks, auditing firms, and service providers to inject efficiency and cost savings into the vendor risk assessment process.

      Fraud and Medical Identity Theft: Scary Tales from the Darker Side of Healthcare

      by Christine Arevalo

      Fraudsters. Scammers. Medical identity thieves. The news is full of bad people making patients sicker and healthcare costlier. A quick scan of the headlines pulls up some stories that you have to read to believe.

      Where’s the Doctor?

This case has a haunted-house quality about it. In New York this past April, a man named Herayer Baghoumian was sentenced to 135 months in prison for setting up dozens of so-called “phantom clinic” health care providers that “existed only on paper, had no doctors, and treated no patients.”[1] The scam, which ran from 2006 to 2010, included more than $100 million in fraudulent bills to Medicare. Baghoumian and his conspirators received more than a third of that money.

      Stolen Identities

      Typically, we think of victims of medical identity theft as patients. But in this case, the operator of a New York City medical practice used the identity of the former clinic owner, a radiologist, to fraudulently bill Medicare and Medicaid for more than $30 million. The defendant, 34-year-old Ting Huan Tai, was living the high life in Lower Manhattan with a 2008 Lamborghini and millions of dollars spread across several bank accounts, according to an NBC 4 New York article.

These cases illustrate, better than any statistics, the problem of healthcare fraud. Patients’ lives are put at risk, as is the reputation of healthcare providers. And, of course, we are all left to absorb the subsequent economic impact. The increasing risk to consumers as healthcare data is misappropriated is something we cannot ignore. We’re becoming savvier than that, and we demand simple access to the information that affects us most intimately.

      That’s why despite the headlines, I truly believe most people are honest and just wish they had the power to combat a problem that affects us all.

      The MIDAS Solution

      Fortunately, the average consumer finally has the ability to diminish healthcare costs and the risks associated with healthcare fraud and medical identity theft. The solution is ID Experts’ MIDAS—Medical Identity Alert System—the first member-focused healthcare fraud software solution for healthcare payers. It engages payers’ members (you and me) to monitor their healthcare transactions and better control their medical identities.

Only a small percentage of health plan members peruse their paper Explanation of Benefits (EOBs) for fraud. They’re confusing, and quite honestly, I see “this isn’t a bill” and move on with my day. MIDAS takes care of this with real-time text messages and emails that alert members when a healthcare transaction is submitted. They can then validate the transaction or mark it as suspicious, setting in motion the investigation and resolution of fraud much earlier in the process. Think about it: would you know whom to call if you identified a seriously suspicious claim? Most of us don’t. MIDAS gives me access to the information I need, in a simple way, with the sole intent of letting me chime in if I’m seeing something I shouldn’t.

      MIDAS mobilizes patients as the first line of defense against this most devastating of identity problems, turning the average consumer into a “fraud buster.” And that’s news we can all be glad about. You can read more about the newly available MIDAS at www2.idexpertscorp.com/MIDAS.



      [1] Participant In $100 Million Medicare Fraud Sentenced In Manhattan Federal Court To 135 Months In Prison, U.S. Attorney’s Office Southern District of New York Press Release, April 12, 2013

      About the Author

      Christine Arevalo

      Christine is a founding employee of ID Experts and leads industry initiatives around healthcare identity management. She has experience managing risk assessments, complex crisis communication strategies, and data breach response for ID Experts clients.

      ID Experts Launches MIDAS

      by Bob Gregg

      Criminals stealing children’s Medicaid numbers. Billing Medicare for “phantom clinics” that exist only on paper. A fraudster posing as a doctor. These horror stories are real and far too frequent. According to the FBI, healthcare fraud costs the United States at least $80 billion a year, and the Ponemon Institute’s 2013 Survey on Medical Identity Theft found that 1.84 million people are victims of medical identity theft.

      With millions of Americans entering the healthcare insurance market under the Affordable Care Act, fraud and abuse are expected to escalate. The time to act against medical identity theft and healthcare fraud is now.

      The MIDAS Solution

      To be sure, the government and health insurers are cracking down on fraud and medical identity theft, using complicated computer algorithms and other methods to break the “pay-then-chase” cycle. While helpful, these tactics are insufficient. We need a grassroots approach that involves individual patients, the way the financial services industry has done with success.

      Customers receive alerts when questionable transactions using their financial information occur.  Why shouldn’t patients receive similar alerts regarding their most personal data—their healthcare information?

      With MIDAS—Medical Identity Alert System—ID Experts puts the power in the hands of patients to fight healthcare fraud and medical identity theft. MIDAS is the first “alert-driven” software for healthcare payers that engages members to monitor their healthcare transactions and take control of their medical identities.

      MIDAS uses real-time text messages and emails to alert members directly when a healthcare transaction is submitted. Members can then validate the transaction or tag it as “suspicious,” enabling the MIDAS team to quickly follow up. If fraud or medical identity theft has occurred, MIDAS leverages ID Experts’ proven resolution process to diagnose the problem, resolve the issue, and mitigate any harm.
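The alert-and-validate loop just described can be sketched in a few lines of code. This is a hypothetical illustration only; the class and field names are invented for this sketch, not MIDAS’s actual design. A submitted claim triggers an alert, and the member’s response either validates the claim or routes it to a follow-up queue for early investigation.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"        # alert sent, awaiting member response
    VALIDATED = "validated"    # member recognized the transaction
    SUSPICIOUS = "suspicious"  # member flagged it; investigate early

@dataclass
class Claim:
    claim_id: str
    member_id: str
    provider: str
    amount: float
    status: Status = Status.PENDING

class AlertWorkflow:
    """Toy model of an alert-driven claim review loop."""

    def __init__(self):
        self.claims = {}
        self.followup_queue = []  # suspicious claims handed to investigators

    def submit(self, claim):
        # A real system would send a text/email alert to the member here.
        self.claims[claim.claim_id] = claim
        return (f"Alert to member {claim.member_id}: "
                f"{claim.provider} billed ${claim.amount:.2f}")

    def respond(self, claim_id, looks_familiar):
        # The member either validates the claim or tags it as suspicious.
        claim = self.claims[claim_id]
        if looks_familiar:
            claim.status = Status.VALIDATED
        else:
            claim.status = Status.SUSPICIOUS
            self.followup_queue.append(claim)  # early fraud investigation
        return claim.status
```

Flagging a claim moves it straight into the follow-up queue, which is the point: the investigation starts when the member responds, not months later when a paper EOB is finally read.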

      How Does MIDAS Help?

      One of the frustrating aspects of medical identity theft and healthcare fraud is the lack of consumer knowledge. The Ponemon survey also found that 54 percent of patients don’t check their health records and Explanation of Benefits (EOBs) for errors, either because they don’t know how or it’s too confusing.

      MIDAS solves both problems because it’s easy to use and employs plain language patients can understand. Removing these barriers enables patients to act as a first line of defense against healthcare fraud and medical identity theft.

      Reducing Healthcare Costs for Payers

      MIDAS’s early fraud detection solution empowers patients to respond quickly, reducing healthcare costs and reducing health risks for consumers. By streamlining the investigation of fraud and medical identity theft, MIDAS will help health insurance providers to better protect members, reduce fraud losses, and lower their costs.

      As competition increases among insurers, MIDAS also provides a competitive advantage. It helps payers demonstrate a proactive attitude toward fighting healthcare fraud and protecting members against medical identity theft.

      Perfect Timing

      As the fight against healthcare fraud enters a new phase, ID Experts and MIDAS will be formidable champions for payers and consumers alike. This first and only blending of patient, provider, and data breach response expertise is a powerful tool against healthcare fraud. Payers and consumers need MIDAS. Just as consumers have taken control of their bank accounts, it is time for them to take control of their medical identities.

      Learn more about MIDAS at www2.idexpertscorp.com/MIDAS

      About the Author

      Bob Gregg

      With over 30 years of experience in high technology and software services, Bob joined ID Experts as CEO in 2009. He is particularly interested in the emerging trends involving identity theft and privacy data breaches, with an emphasis on healthcare. “Let’s keep our private, confidential information just that... private and confidential.”

      Sitting down with the OCR; Advice for Business Associates & What it Means to be Investigated

      by Heather Noonan

      We hosted a pretty impressive webinar focusing on what is important to the Office for Civil Rights (OCR), what it means to be a business associate or a covered entity, and what it means to be investigated after a data breach. I can say it was impressive, because it really was.

      Q&A: HIPAA Compliance for Business Associates: Ignorance is Not Bliss

      What was most impressive was the feedback we received directly from OCR. We sat down with OCR and asked four main questions:

      • What would you like people to know when working through an OCR investigation and working with an OCR Investigator?
      • What is the best way to work with OCR?
      • What should business associates be concerned about or what are you seeing as the biggest oversight?
      • What kind of questions do you receive or hear the most as it refers to business associates?

      One of the critical responses we heard from the OCR interview is that “proprietary does not work.” Proprietary, as in the company believing its documentation is its own private, confidential information. For example, if OCR is investigating your organization, you can’t tell them that your policies and procedures are “proprietary” and belong strictly to your organization. That excuse won’t work. By law, you are required to share any and all information as it relates to your organization’s security and protected health information.

      OCR stated that their biggest concern, or you could say complaint, is when they request documentation and the organization won’t provide it. The problem is, if OCR requests it, they will get it, and the delay will not be in your favor. Putting up a fight will only make the process take longer.

      Along with OCR’s responses to our questions, our brief one-hour webinar also provided specifics and advice on risk analysis, risk assessments, incident response planning, and breach notification.

      If you have finally come to terms with the fact that you are a business associate and want to learn more about what is required of you, or if you have been in this space and want to learn what it could mean to be investigated, take a quick minute and listen in.

      Take a minute and listen in: ID Experts Webinar HIPAA Compliance for Business Associates: Ignorance is Not Bliss

      About the Author

      Heather Noonan

      Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices, and is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science specializing in Business Communication and has over 15 years of experience in client customer service, including 10 years in project management.

      Prepare Now! Six Tips for a Successful Breach Response in the HIXs

      by James Christiansen

      This article is the third in our three-part series by James Christiansen. You can read the first two articles here and here.

      If you have a yard, chances are you’ve spent hours this fall raking up thousands of fallen leaves. Have you ever wondered which leaf on the tree fell first?

      It’s impossible to say, but it gives you an idea of the challenge organizations in a health insurance exchange (HIX) face when trying to pinpoint the source of a healthcare data breach—in this analogy, a breach is like a leaf. Consider all that data flowing everywhere among all those entities. And we thought data breaches were complicated before!

      Once you know the cause of an incident, you have to work with any number of organizations to determine the scope and impact of the breach before going public. The problem is time. The notification clock starts ticking the moment you’ve discovered a data security incident. But with multiple parties involved, sorting out the details gets complicated. It becomes a test of getting the most accurate information in the shortest amount of time.
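To make the ticking clock concrete, here is a small sketch of the deadline arithmetic. It assumes the HIPAA/HITECH outer bound of notification no later than 60 days after discovery; many state rules are shorter, so the window is a parameter, and the function names are ours, not from any regulation.

```python
from datetime import date, timedelta

# HIPAA/HITECH outer bound: notification no later than 60 days after
# discovery. State breach laws often impose shorter windows.
HIPAA_WINDOW_DAYS = 60

def notification_deadline(discovered, window_days=HIPAA_WINDOW_DAYS):
    """Last permissible notification date for a breach discovered on
    `discovered`. The clock starts at discovery, not at the (often
    earlier) date the breach actually occurred."""
    return discovered + timedelta(days=window_days)

def days_remaining(today, discovered, window_days=HIPAA_WINDOW_DAYS):
    """How many days are left to complete notification."""
    return (notification_deadline(discovered, window_days) - today).days
```

With multiple exchange parties involved, every day spent reconciling facts comes straight out of that remaining window.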

      Kind of makes you want to chop down the tree.

      Keep the Tree: Six Steps for a Successful Breach Response

      Before getting out the chainsaw, consider this: although you can’t control every aspect of detecting, containing, and mitigating the impact of a breach in a health insurance exchange, you can ensure your organization is ready to respond. The trick is to be prepared. Here are a few tips:

      1. Have an incident response plan in place. Prepare, document, and test the proper steps for a breach response. Include breach scenarios in your incident response plan, with contingencies for responding to a breach in an insurance exchange. A key consideration is notifying (or being notified by) and working with the incident response teams of other organizations affected by the breach. You may also consider increasing your budget for forensics or other resources to help you better manage a breach in this environment.
      2. Update the policies and procedures in your incident response plan to enable detection and escalation of a potential breach. Be sure to maintain and continually update a list of contacts at organizations in the insurance exchange that have access to your sensitive customer data. (You can’t afford to lose time searching for this information during a breach.)
      3. Have a consistent method for investigation and incident risk assessment in place, and document this method and the outcome of your incident assessment. Besides meeting your burden of proof to federal and state regulators, this step can help forensics teams determine whether the breach started within your organization, as well as help determine the scope and impact of the incident on your patients or customers. Read my blog on challenges of risk analyses in the HIX and check out ID Experts RADAR incident risk assessment software.
      4. Ensure that your service providers and contractors have similar policies, procedures, and incident plans in place. This includes notifying you as required by law. The “Final Rule Playbook” outlines the steps business associates can take to become compliant. The same best practices can be applied to handle sensitive information traversing the health insurance exchanges.
      5. Get accurate data, now. Before notifying the media about your breach, be sure you have your facts right: who was impacted, how they were impacted, number of records compromised, the source of the breach, etc. Wrong data equals re-notification. That’s bad news for your reputation. But work fast, because time is not on your side.  
      6. Consider a reliable third party to help with incident response. A third party can objectively and efficiently handle all phases of breach response—investigation, notification, remediation, etc.—so you can be free to focus on doing your business. This step is actually a best practice, and smart organizations already have contracts in place for “first responders” who contain the breach and “incident handlers” who manage the aftermath. It’s even better if you can get a breach services provider, such as ID Experts, that does both.
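The six tips above amount to a readiness checklist, and it can help to track them as structured data rather than prose. A minimal sketch follows; the check names and descriptions are our own shorthand, not a formal standard.

```python
# Hypothetical readiness checklist mirroring the six tips above.
READINESS_CHECKS = {
    "plan_tested": "IR plan documented and tested with HIX breach scenarios",
    "escalation_current": "Detection and escalation policies up to date",
    "contacts_current": "Contact list for exchange partners maintained",
    "assessment_documented": "Consistent, documented incident risk assessment method",
    "partners_verified": "Providers and contractors have equivalent plans and duties",
    "responder_contracted": "Third-party responder and handler under contract",
}

def readiness_gaps(completed):
    """Return descriptions of the checks not yet satisfied,
    given the set of check names already completed."""
    return [desc for key, desc in READINESS_CHECKS.items()
            if key not in completed]
```

Reviewing the gap list on a regular schedule, rather than after an incident, is the whole point of preparation.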

      The more you are prepared, the faster you and the other affected organizations can gather accurate information and coordinate an effective response. I urge you to document and test a response plan, and sign up with a respected breach services provider right away. You’ll better protect your patients, your customers, and your reputation.

      James Christiansen is chief information risk officer of RiskyData, a firm that specializes in information security and privacy management solutions for companies in finance, healthcare, high-tech, government, and more.

      About the Author

      James Christiansen

      James Christiansen is Chief Information Security and Risk Officer of RiskyData, an information security and privacy solutions corporation focused on providing clients scalable and cost-effective tools and services to manage their information risk. Prior to joining RiskyData, James was Chief Information Risk Officer for Evantix and CSO for Experian Americas, where he had overall responsibility for information security, providing strategic direction and vision across Experian business units. James joined Experian after serving as Chief Information Security Officer for General Motors, where his responsibilities included worldwide implementation of the security plan for the largest financial corporation (GMAC) and the largest manufacturing corporation in the world. Prior to joining GM he was SVP and Division Head of Information Security for Visa International, responsible for their worldwide information security program. James has been featured in the New York Times as one of the leaders in information security. He has an MBA in International Management and a BS in Business Management, and is the author of the “Internet Survival Series,” a contributing author of “CISO Essentials,” and the author of numerous industry papers. James has been chair of the IT Fraud Summit and co-chair of the ANSI study of the impact of security breaches on healthcare, and is a prominent speaker at prestigious events such as the Business Round Table, Research Board, American Bar Association, American Banker, RSA, BankInfoSecurity, ISSA, and MIS Training Institute. James has more than 25 years of experience in information security and systems management, including network and operating systems management and application development and design, and now meets the significant challenge of providing risk management solutions for RiskyData.

      Security in the Health Insurance Exchange—Easier Said than Done

      by James Christiansen

      You’ve felt it for years—the weight of securing private health information as part of a typical compliance program. Now add the complexity of information flowing through a health insurance exchange, and you may feel your knees start to buckle, especially as Health Insurance Exchanges (HIXs) face new security and privacy issues—read this article for more on that. How do you ensure that your information is secure when traversing the exchanges?

      The answer is simple: Do what you’ve always done.

      The guidelines under the HIPAA Security Rule are pretty clear and can be used as a guiding light. They have evolved over the past decade or so, culminating in the HIPAA Final Omnibus Rule, passed earlier this year. So even though the privacy requirements for the health insurance exchanges themselves are still a mystery, the same HIPAA privacy and security standards can be applied as best practices for the participants in the exchange.

      If you are part of a health insurance exchange, you can use the same processes already required of a healthcare entity, a business associate, or a subcontractor. Consider these tips:

      1.  Set the standard for compliance with HIPAA’s Security Rule.

      • Conduct a HIPAA security risk analysis (read my blog on challenges of risk analyses in the HIX)
      • Document and test your incident response plan
      • Make sure your method for incident risk assessment meets the HIPAA Final Rule’s “compromise” standard (See HIPAA Security and Breach Notification Rules)

      2.  Use the tools available to help you become compliant. ID Experts recently published a “Final Rule Playbook” outlining the steps business associates can take to become compliant. The same best practices can be applied to handle sensitive information traversing the health insurance exchanges.

      3.  Review or create new contracts with third parties: Make sure your contracts with your suppliers, contractors, etc. include sensitive data protection best practices.  Provisions should include:

      • A requirement for best security practices. The HIPAA Security Rule provides great guidance on protecting sensitive information. Include requirements that the third parties will safeguard the sensitive data.
      • Breach Notification: The third party should notify you as soon as possible, and certainly within 48 hours, of a breach that has impacted your organization.
      • Incident Response:  Specify that the third party must have an incident response team and regularly test the incident response capability.  You should always maintain the contact information for their incident response leader.
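A simple way to enforce the provisions above is to represent the required clauses as data and flag any vendor contract that lacks one. This is a hypothetical sketch; the clause identifiers are invented for illustration, and a real contract review would map them to actual contract language.

```python
# Required clauses mirroring the three provisions above.
REQUIRED_CLAUSES = {
    "security_safeguards",      # best practices per the HIPAA Security Rule
    "breach_notification_48h",  # notify ASAP, and certainly within 48 hours
    "incident_response_team",   # tested IR capability with a named contact
}

def missing_clauses(contract_clauses):
    """Return the required clauses absent from a vendor contract,
    sorted for stable reporting."""
    return sorted(REQUIRED_CLAUSES - set(contract_clauses))
```

Running such a check across every supplier contract turns a vague obligation ("review your third-party agreements") into a concrete, repeatable audit step.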

      Whether you’re a covered entity in your own ecosystem or belong to the big world of exchanges, the HIPAA Security Rule can be used to complete your own due diligence or to apply the requirements to third parties providing services to your business. Complete your risk analysis, update your security practices, and update the third-party contracts, and you’ll be able to face regulators with confidence.

      James Christiansen is chief information risk officer of RiskyData, a firm that specializes in information security and privacy management solutions for companies in finance, healthcare, high-tech, government, and more.


      Sitting down with OCR; Working alongside them & what it means to be investigated

      by Heather Noonan

      The Office for Civil Rights has always been a daunting place to me. They are the U.S. Federal government. How could that not be intimidating? They have the full capability and authority to not only investigate you and your organization, but to fine your organization for multiple reasons in accordance with the law. Well, the good news is that I have worked alongside OCR for many years and they have always had good advice and recommendations along the way.

      I sat down recently with an OCR investigator and asked many point-blank questions about the new HIPAA Omnibus Rule and what it means to be investigated. From the very beginning, the OCR investigator said, “This is not a lawsuit, this is a compliance issue.” As simple as that: it’s a compliance issue. OCR is not immediately taking you to court; they want to know what is happening in your organization and how you plan to mitigate the risks and problems you are up against.

      OCR investigators may not expect you to be perfect, but they expect you to address everything and to fix what isn’t currently in place. What they don’t want is for your organization to put its head in the sand and be “disconnected” from the process or the regulations. They want you to be involved and pay attention. That doesn’t have to be terrifying.

      Many of my conversations, and a much better understanding of OCR’s focus on business associates, will be shared during our upcoming webinar on November 19, 2013. If you think you are a business associate, or might just be one, take a quick minute and listen in. We are here to help and guide you through these difficult times and new regulations.

      Recording: ID Experts Webinar November 19, 2013: HIPAA Compliance for Business Associates: Ignorance is Not Bliss


      Why the Healthcare CISO can’t be a Dr. No

      by Mahmood Sher-jan

      Well, the obvious answer is that acting as Dr. No can impede innovation and delivery of business value, not to mention its career-limiting effects. If you are associated with the healthcare industry, you are witnessing the biggest transformation any industry has gone through. Whether you are a CISO at a provider, a payer, or a healthcare business associate (BA), there’s little resembling business as usual. So much is changing so fast that the goal posts for meeting your security and privacy obligations and keeping your patients’ and members’ data secure seem farther away than ever before. You are expected to be an enabler of sharing ever-larger amounts of sensitive patient and member data with authorized entities, not a blocker.

      MORE INFO: HIPAA Breach Risk Assessment Software

      You face challenges from the massive digitization, online access, and brokering of medical records aimed at improving quality of care and reducing the costs of healthcare while protecting patient/member privacy and PHI security. This massive industry transformation is expected to take shape with the help of technological innovation, including the adoption of EHR systems, cloud and virtualization technologies, and mobile technologies, all at once and in a business climate facing increasing HIPAA-HITECH regulations and fines. Unlike the financial industry, with its decades of investment in security and fraud management, the healthcare industry has a way to go to create an effective culture of security and compliance.

      In the face of mounting business model and economic uncertainties facing health plan and hospital CEOs, data security can’t become a reason to slow down business objectives. When the CEO asks for an unrealistically fast deployment of digital tablets to her large workforce, the CISO needs to be part of the solution without being perceived as a blocker or a Dr. No! We know that security is complicated and hard work, but it is no longer safe for a CISO to take months to study the security risks of such a massive deployment. It is a complex new world, and it makes the job of a CISO that much more challenging and tenuous—damned if you do and damned if you don’t.

      CISOs know that they have a tough balancing act because connectivity is critical for delivering good and safe patient care. Security can’t get in the way of this goal.  However, we know that connectivity and data sharing also increase the risk of incidents and data breaches.  At the Gartner Group 2013 Symposium/IT Expo in Orlando, Karl West, CISO of Intermountain Healthcare (UT) talked about the importance of protecting patient and employee data while making information available to all providers that participate in the patients’ care.  I found his comment “…can’t separate innovation from the role of security” very insightful.  He certainly has embraced his role as a change agent and has earned a seat at the strategic decision-making table so to speak.  It is more effective to have security and privacy controls built into the architecture of a new system than trying to retrofit them in.  CISOs that deliver business value will have a much better chance of success in protecting their organization and customers from incidents involving sensitive and damaging data.

      About the Author

      Mahmood Sher-jan

      Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      Wrestling the Security Octopus—The Challenges of Risk Analysis in the HIXs

      by James Christiansen

      Have you ever tried to wrangle an octopus? Neither have I, but I imagine the eight-arm advantage of the octopus would quickly overpower my best efforts. Health plans and other participants in the Healthcare Insurance Exchange (HIX) face a similar challenge when it comes to security, particularly with risk analysis.

      A requirement of the HIPAA Security Rule, a risk analysis assesses the potential risks and vulnerabilities to the confidentiality, integrity, and availability of an organization’s electronic protected health information (ePHI). It is a complex calculation for determining risk based on many factors—a difficult-enough task in one’s own ecosystem. Raised to the level of a healthcare insurance exchange, risk analysis becomes exponentially more complicated.
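At its core, a risk analysis scores each threat by likelihood and impact and ranks the results. The sketch below uses the classic qualitative formula risk = likelihood x impact on a 1-to-5 scale; this is a generic illustration, not the method any particular regulation prescribes.

```python
def risk_score(likelihood, impact):
    """Classic qualitative formula: risk = likelihood x impact,
    with each factor rated 1 (low) through 5 (high)."""
    return likelihood * impact

def rank_risks(findings):
    """findings: list of (threat, likelihood, impact) tuples.
    Returns them ordered highest risk first, so remediation
    effort goes where it matters most."""
    return sorted(findings, key=lambda f: risk_score(f[1], f[2]),
                  reverse=True)
```

In an exchange, the hard part is not the arithmetic but enumerating the findings: every connected organization adds threats whose likelihood and impact you can only partially observe.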

      The biggest problem is what I call the “mosaic theory”: the risks of data in disparate places—the arms of the octopus, if you will—now being brought together in one place. Under the new healthcare system, the Department of Health and Human Services (HHS) operates a central data hub that connects participating state health insurance exchanges with federal government agencies—such as the Treasury Department and Internal Revenue Service—and with other state agencies to verify enrollees’ eligibility. While the government hub doesn’t store data on individuals, there is a risk that identity thieves could steal the identity of one participating organization to gain access through the hub to data held by another. Isolating and resolving security problems in such a complex web of systems is difficult at best.

      Three Tips for Successful Risk Analysis in the Exchange

      Despite these challenges, I believe that risk analysis will be critical to the future security and thus success of the federal and state exchanges. Here are a few ideas that may help:

      1.      Do it in good faith. It may be tempting to adopt a “what’s the use” mentality, but that’s the worst thing to do. Besides the legal mandate to conduct risk analysis, doing so demonstrates goodwill to stakeholders and what the Office for Civil Rights (OCR) calls “a culture of compliance”—always a good thing for regulators to see. Smart organizations are proactive.


      2.      Consider the professionals. Just like you’d leave octopus wrestling to the experts, consider using a reliable third party to tackle the complexities of risk analysis. According to a September Compliance Today article, reposted on the ID Experts blog here: “Risk analysis by independent experts can help an organization quickly analyze and benchmark security programs against peer organizations and industry best practices.”

      3.      Go to the source. No gold standard exists for conducting a risk analysis. However, the federal government does provide guidelines at www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/securityruleguidance.html, and the resources there are particularly helpful.

      Conclusion

      Despite our two-arm limitations, I believe we can tackle the security octopus one organization at a time. It starts with an intrinsic commitment to safeguarding data. Every member of a health insurance exchange must build in security from the beginning, not as an afterthought. It has to be part of the system’s DNA. And I’m not just talking about authentication, authorization, and accounting, but also monitoring, risk analysis, and privacy policies and processes.

      In other words, the whole creature.

      James Christiansen is chief information risk officer of RiskyData, a firm that specializes in information security and privacy management solutions for companies in finance, healthcare, high-tech, government, and more.


      PLUS International Cyber Update

      by Jeremy Henley

      This year at PLUS International there were more cyber-related sessions than ever before, as this increasingly becomes an area of focus for more underwriters. The session I was invited to speak at, “Cyber Insurance 3.0: Cutting-Edge Advancements in Coverage and Services,” covered some of the changes to coverage, including new product developments.

      These developments span both pre- and post-breach services and include enhancements to the areas covered when claims are made. We also covered risk management services that are more robust and targeted, and whether insurance carriers can do more to minimize the likelihood of a data breach.

      Our session had some interesting discussion around introducing highly protected risk models, and around the government's responsibility in reducing the threat data breaches pose to the insurance market. In light of the government's recent online insurance blunders, I am not sure we want it touching anything related to cyber security or insurance.

      If cyber insurance continues to grow at its rapid pace, we will see a day when most organizations have some form of coverage for cyber liabilities. The panelists all agreed that we are already seeing evidence of the highly protected risk models used in other areas of insurance. For example, smoke alarms, sprinkler systems, and proximity to fire stations or hydrants can reduce premiums. Likewise, an organization that demonstrates privacy and security compliance (completing consistent risk assessments, auditing that policies and procedures are in place, training employees and verifying they retain the knowledge, and reviewing its IT architecture against industry standards) faces a lower risk of a breach. Fires will still happen and so will data breaches, but risk management techniques will always minimize the losses.

      Lastly, the section I spoke to most was whether virtual help from a privacy officer or security officer would be valuable to a policyholder, small or large, and whether it could effectively be implemented as part of an insurance policy. The major issue with the highly protected risk model currently is where the money comes from. Most companies would prefer to be as compliant and secure as possible; however, funding for those programs has been thin the last several years, and even the best companies are still catching up. There are so many competitors in the cyber insurance space at the moment that premiums have become very competitive, and even companies that have had several breaches in the past and are in a high-risk industry will still have plenty of carriers bidding for their business. I believe there is a way to provide access to experts and useful tools and information that will help to manage the risk proactively.

      A Virtual Privacy Expert can be an incredible value-add for any insurance carrier providing cyber liability coverage, or even a tool to increase awareness and cross-selling opportunities for other insurance lines. The trick is making sure that policyholders are aware of the tools and get the value out of them. If you would like to learn more about our virtual privacy expert offering, please click the link here.

      About the Author

      Jeremy Henley's avatar
      Jeremy Henley

      Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for Healthcare Privacy and Compliance and brings 11 years of sales and leadership experience to the ID Experts team.

      Is the Obamacare Health Insurance Exchange Secure?

      by Doug Pollack

      I had been waiting for this question to be asked. Now it seems like almost overnight that everyone, including members of Congress and the Obama Administration, is asking about the level of security of the U.S. government’s new health insurance exchange, HealthCare.gov. The impetus for the elevated level of interest and scrutiny seems to be a September 27th memo internal to the Center for Medicare and Medicaid Services (CMS) that discussed security concerns and is extensively quoted in an ABC News article.

      LEARN MORE: 3 Steps to Tackle HIPAA's Final Rule

      The article quotes a memo written by two federal technology administrators to CMS chief Marilyn Tavenner. In it, they note that due to time constraints on the schedule for release of the system that “the security contractor has not been able to test all of the security controls in one complete version of the system.”

      They then further noted that the implication of this is that “there are inherent security risks with not having all code tested in a single environment” and then conclude that “from a security perspective, the aspects of the system were not tested due to the ongoing development, exposed a level of uncertainty that can be deemed as high risk for the FFM [Federally Facilitated Marketplace].” (italics added).

      Now while rushing a system of this level of complexity to market would seem to inherently result in risks to an appropriate level of security and privacy for its consumers, there are also longer-term questions now being asked about how extensively this system shares private consumer information with many other systems and agencies.

      For example, an InformationWeek article titled Online Health Exchanges: How Secure?, published October 2, 2013, explores the overall security architecture of HealthCare.gov. One area of particular concern stated in this article is the system's "data hub," which routes information between multiple systems during the application process.

      The author references a July blog post by Center for Democracy and Technology analyst Christopher Rasmussen that explores the level of and nature of data sharing that takes place in the data hub. For example, “an individual’s eligibility for a federal subsidy to purchase health insurance requires verification of income and family size from the Internal Revenue Service (IRS), immigration status from the Department of Homeland Security (DHS) and incarceration status from the Social Security Administration (SSA). Insurance companies will also use a single portal -- run by the Centers for Medicare & Medicaid Services (CMS) -- that gives them access to some of that information.”

      Now the data hub has me thinking about some of the privacy implications of this new system. Not so much privacy as in "don't let people who shouldn't see my information see my information," but privacy as in "do I really know who is getting access to my information, and the results of the decisions they are making about me and adding to the information about me?" And then, "is there a way for me to view all of the information that gets associated with me by this new system, in order to ensure that it is accurate and up to date?" With data on me getting routed around by the data hub, I fear that the privacy implications and risks may not be entirely "transparent." This concerns me in the longer term, while the security issues worry me in the short term.

      So I’d like to be clear that I applaud the intention of these government healthcare insurance exchanges, which is to make health insurance available at an affordable price to millions of Americans who may otherwise not be able to secure such coverage. But the scope and reach of the federal exchange elevates my inherent concerns as to the government’s effectiveness in maintaining the privacy of its citizens’ sensitive personal information. Government agencies clearly have not had a great track record in this regard. There have been well-publicized data breaches, and of course there is the underlying issue of whether Americans “trust” the government to use such personal information in an appropriate and well-intentioned manner.

      As noted in an October 30th CNN article Government memo warns of high security risk at health care website, lawmakers said “the system should have been more thoroughly vetted, since it asks purchasers of health insurance to provide personal information." Speaking to Secretary Sebelius, Rep. Mike Rogers, R-Michigan stated “you accepted a risk on behalf of every person that used this computer that put their personal and financial information at risk because you did not even have the most basic 'end-to-end' test on security of this system.”

      So politics aside, I’m sure that over time, the kinks will get worked out of the new health insurance exchanges. And that the security will continue to be tested and improved. But it is disappointing that it appears to have been more important to “meet a launch date” than to ensure a very high level of security in such a broad reaching system that would be handling such sensitive personal information of an extraordinary number of U.S. citizens.

      About the Author

      Doug Pollack's avatar
      Doug Pollack

      CIPP, MBA. With over 25 years experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Cyber Liability Forum Review

      by Jeremy Henley

      Last week saw the highest attendance ever at the Cyber Liability Forum in Marina Del Rey.  It was a who’s who of cyber insurance carriers, brokers, state regulators, and a few vendors as well.  There were many great sessions, but I wanted to highlight a few of the takeaways that I had from the event.

      The first was from the session discussing a claims report produced by NetDiligence that analyzed actual events covered by these policies.  In many areas the data mirrors what other reports, like the Ponemon study, have found: lost or stolen devices continue to be the leading cause of data breaches.  Security awareness training and encryption can go a long way toward cutting back this trend.  The types of records compromised were very close between PII and PHI, at 29% and 27% respectively.  The claims also most commonly come from the healthcare sector, at 29%, followed by financial institutions at 15%.

      Those numbers all seem to be in line with the breaches we see on a weekly basis; however, the number that shocked me was the average legal defense cost of a data breach at more than $575k, with an average settlement cost of $278k.  None of our clients have been fined or sued as a result of a data breach we helped them respond to.  Maybe I should have been a defense lawyer for everyone else!

      This session also had averages for call centers, credit monitoring, and notification, and what interests me most is that no one seems to link these services to the defense costs and settlements post-breach.  There is always talk about the importance of certain coverage, and about what to make sure is not excluded if you are the broker, but there is little to no talk about how the quality of the response affects the defense costs. This is the process of telling the victims they have been breached!  It has to be done with care or it will naturally lead to a class action.

      One of the plaintiffs’ attorneys who spoke in another session about class actions was asked how they find plaintiffs for these cases. His response: “well, they get a letter telling them they may have been harmed,” and nearly everyone acted surprised by this.  I followed up with a question about why these individuals call the law firm instead of the 800 number that goes to the company or the breach response vendor set up to answer questions for them.  I asked if it was money-motivated or something else.  His response was that these folks generally just want to know the company is fixing its problems so others are not impacted by the same issue.

      What!?  If a company responds correctly to an incident the breached population should have easy access to answers regarding these areas and it is significantly less expensive than defending and settling a claim.    

      Our President and Co-founder started ID Experts on the idea that we could take care of identity theft victims in a kinder and more caring way than any other company, with the highest-quality customer service and the best identity theft recovery specialists on the planet, in part because they often have firsthand experience of what ID theft is like and the fear a data breach can bring to the individual whose data was compromised.  Building from that concept, we developed systems to provide this care to large numbers of breached individuals, and we have consistently helped our clients avoid litigation, which has allowed them to never tap insurance or reserve resources to defend themselves post-breach.  This system works so well we have patented it and called it YourResponse™.

      Now of course I am proud of our patented system and would tell anyone it’s the best and can’t be matched, but occasionally you need more proof than just a salesperson telling you it’s great; it’s our secret sauce, though, so I can’t share any more.  Even our perfect track record of avoiding litigation is sometimes not enough to convince folks that our system is the best solution for most cases.  Fortunately for me, some proof came from a second interesting session, and it will not give away the secret formula.

      This session was made up of a forensic expert, a printing expert, a credit bureau representative, and a public relations expert, all from different vendors.  Although all of them have areas of business beyond data breach, none of them provide data breach response services other than those listed above.  At first I was disappointed not to be a part of the panel, but it worked out just fine.

      What I heard them collectively say several times was that breach response work is very complicated.  They are all very well spoken and understand the laws and regulations that have created opportunities for these organizations; however, they each handle a separate piece of the work.  Multiple individuals from the same group also mentioned that some of the work they do is dependent on other players in the data breach response life cycle.  All of the areas they were discussing related not to the laws or regulations but to the fact that a breach can be a massive project with a short time frame to complete.

      There is absolutely a legal component, BUT there is just as much of an operational project that needs to be completed.  Let’s also not forget that this project ALWAYS comes as a surprise, is something most companies do not have considerable experience with, and needs perfect coordination and project management, all overseen by a breach coach to protect privilege in the event of litigation.

      It was during this session that I felt more confident than ever that our patented approach, YourResponse™, is what makes our responses more successful than any of our competitors’.  Being a project management-driven process with more flexibility than any other approach, plus the secret sauce, allows the client and the breach coach to orchestrate their operational and legal needs to perfection while benefiting from the efficiency and experience ID Experts has as a single-source solution.

      One comment from the forensic expert was “you often need to get your response engine running while you are still sorting out the forensic findings.”  I certainly can appreciate this, and can tell you that when the forensics are completed by a firm other than ours, it can be very difficult to start any meaningful work prior to getting the final “official” report.  There are many reasons for this, which I will not go into in this post, but it slows down the process, creates opportunities for more errors, and is not the most efficient way to respond.

      About the Author

      Jeremy Henley's avatar
      Jeremy Henley

      Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for Healthcare Privacy and Compliance and brings 11 years of sales and leadership experience to the ID Experts team.

      Solving Incident Risk Assessments through Innovation

      by Mahmood Sher-jan

      I recently returned from the weeklong Gartner Group 2013 Symposium/IT Expo in Orlando, where over 12,000 CIOs and IT professionals from many industries, including healthcare, were in attendance.  Healthcare CIOs are expected to harness innovation and emerging technologies to become data brokers and data stewards in the brave new world of connected and mobile healthcare services delivery.  Karl West, CISO of Intermountain Healthcare, talked about the importance of protecting patient and employee data while making information available to all providers that participate in the patients’ care.  I found his comment “…can’t separate innovation from the role of security” very insightful.  Innovation and security should work together to enable transformation of our healthcare system.  This means that we should apply innovation and emerging technologies to solving our data incident management needs and complying with federal and state data breach regulations.

      Video: Solving Breach Risk Assessment Through Software

      One area where innovation can significantly help is PHI/PII incident-specific risk assessment for compliance with the HITECH Final Breach Rule and state breach laws.  Today, leading organizations are adopting what Gartner calls “purpose-built” software applications instead of general-purpose solutions. General-purpose case management and enterprise GRC platforms, and worse yet, Excel-based toolkits, do not address the regulatory complexity, assessment consistency, and workflow requirements that a privacy and security officer needs to perform incident risk assessment and manage this process.  This creates unnecessary risk of fines and corrective action plans for the organization.

      I think it is time to let innovation and secure cloud based software do what it does best—affordably and securely transform the way PHI and PII incidents are managed so organizations can comply with the breach laws and minimize the risk of fines and audits.  Your best defense during an OCR audit is to show that you have a consistent and fully documented process for incident risk assessment and decision making whether an incident is a breach or not.  For anyone anxiously awaiting the release of OCR’s breach toolkit, here’s an excerpt from a recent post on the SCCE’s social network (HCCA) about the anticipated OCR tool:

      “…we agreed that it [OCR Tool] certainly would not be an end all tool that will answer all of the outstanding questions people have on the process.  It may range from a "restatement" or "repackaging" of what is already in the Final Rules with a bit of window dressing here and there to allow for the capture of input as the CA tool...or it may be more technically constructed as the HIPAA COW tool or AHIMA Risk assessment Tool.

      One area that folks did bring up that the tool doesn't provide a mechanism for assessing the various weights or impact of each individual factor as well as a way to assess the factors in combination.  This then opens the door for subjectivity or inconsistency.”
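      The weighting gap raised in that excerpt is exactly what purpose-built software can address: scoring each risk factor individually, then combining them consistently. Here is a minimal sketch of such a weighted multi-factor assessment; the factor names, weights, and threshold are purely illustrative assumptions, not RADAR's actual model or OCR's guidance:

```python
# Illustrative multi-factor incident risk scoring.
# The factors, weights, and threshold below are hypothetical examples,
# loosely modeled on the HITECH four-factor assessment -- not a real product's logic.

# Each factor is scored 0.0 (fully mitigated) to 1.0 (worst case).
FACTOR_WEIGHTS = {
    "nature_of_phi": 0.30,            # sensitivity of the data involved
    "unauthorized_recipient": 0.25,   # who received or viewed the data
    "actually_acquired": 0.25,        # whether data was actually acquired/viewed
    "mitigation": 0.20,               # extent to which risk has been mitigated
}

def risk_score(factor_scores: dict) -> float:
    """Combine individual factor scores into one weighted score."""
    return sum(FACTOR_WEIGHTS[f] * s for f, s in factor_scores.items())

def assessment(factor_scores: dict, threshold: float = 0.5) -> str:
    """Map the combined score to consistent, documented guidance."""
    if risk_score(factor_scores) >= threshold:
        return "notification likely required"
    return "low probability of compromise"

# A hypothetical incident: sensitive data, partially mitigated.
incident = {
    "nature_of_phi": 0.9,           # e.g., SSNs plus diagnoses
    "unauthorized_recipient": 0.4,  # e.g., another covered entity
    "actually_acquired": 0.8,
    "mitigation": 0.3,
}
print(round(risk_score(incident), 3), "->", assessment(incident))
```

The point of the sketch is not the particular numbers but that weights, scores, and the decision threshold are explicit and repeatable, so every incident is assessed the same way and the reasoning is documented.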

      I think that with innovative software and modeling we can do much better at helping privacy, compliance, and security officers manage PHI/PII incidents.  Let’s look outside the traditional views and explore using ID Experts RADAR, purpose-built incident management and decision-support software, for this task instead of unsuccessfully trying to repurpose traditional general-purpose products.  To learn more about applying multiple factors for incident risk assessment and about RADAR, please attend our upcoming educational Webinar.

      About the Author

      Mahmood Sher-jan's avatar
      Mahmood Sher-jan

      Mahmood is the lead inventor of ID Experts RADAR, an award winning and patented incident management software. Mahmood holds a BS in Computer Science from University of Washington and an MBA from University of Redlands.

      Experian Business Practices Challenged by South Carolina

      by Doug Pollack

      Experian is in the news again. While they have most recently appeared due to their supporting role as a subcontractor in the new federal government health insurance exchange, which has been having technical and availability problems (Software, Design Defects Cripple Health-Care Website, Wall Street Journal, October 6, 2013), the subject of this write-up is their role in the recent controversy in South Carolina, in which the nature of Experian’s business practices has come into question.

      You may remember that Experian was chosen to provide a credit monitoring product to the citizens of South Carolina when a data breach occurred at the South Carolina Department of Revenue (SCDOR) in October 2012 (SC Department of Revenue Responds to Cyber Attack, Will Provide Credit Monitoring and Identity Theft Protection to Taxpayers).  In this privacy breach, Social Security numbers as well as credit and debit card numbers were exposed for approximately 3.6 million individuals in a highly effective cyber attack.

      The Experian product was offered to South Carolinians initially for a period of one year. South Carolina contracted with Experian to provide these services in a $12 million contract. As this one year period is coming to an end later this month, Bill Blume, SCDOR Director, has said that the state Budget & Control Board is “trying to award a new $10 million contract…[in order to extend protection to South Carolinians for another year and that] Experian has already decided that it will not re-bid for a second year.” (Experian offers cancellations for hacking victims who unwittingly re-enrolled, South Carolina Radio Network, September 18, 2013).

      But this is where things get interesting. Apparently, in anticipation of the expiration of this credit monitoring, Experian has, seemingly without the courtesy of mentioning this to the SCDOR authorities, emailed all of the enrolled individuals suggesting that they should renew the Experian service after the October 24th expiration date, at a special annual price.

      “However, the email did not mention that South Carolina was negotiating a new contract with other companies to continue the service for free once Experian’s current deal expires… ‘We were caught off-guard,’ SCDOR director Bill Blume told South Carolina Radio Network. ‘They’re a private company…but I was disappointed with some of the misinformation that they let out.’ ”

      Which is why The State (Scoppe: If you can’t trust Experian, who can you trust?, September 24, 2013) noted that this kind of Experian business practice “was a scam”. And this has led to some political back and forth in South Carolina. 

      “When Sen. Vincent Sheheen blamed Gov. Nikki Haley last week for allowing taxpayers to ‘continue to have their personal data abused,’ this time by Experian, the governor’s office counter-attacked the Democratic gubernatorial candidate for thinking ‘he can tell a private company that it can or cannot reach out to those it is providing services to.’ ”

      And this last comment gets to the root of the issue. Experian is in the business of providing credit monitoring services to consumers. You are probably already familiar with their ads for freecreditreport.com. That’s Experian. It is only as an adjunct to this business that Experian bids on contracts to serve clients that have experienced a data breach, such as the one that occurred at the SCDOR. By bidding for data breach business, Experian gains yet another means of selling credit monitoring to consumers upon expiration of the free breach offering, as was the case in South Carolina in this instance.

      And frankly, while I applaud Governor Haley’s support for the free enterprise spirit being exhibited by Experian, it seems to me that Experian is demonstrating an utter disregard for the government and, by extension, the taxpayers of South Carolina who were victimized in this data breach.

      With this in mind, I’d like to note that there are excellent alternatives to Experian when an organization has a data breach and would like to provide credit monitoring and identity protection products to the individuals who are the data breach victims.

      One of the cornerstones of the ID Experts approach to providing such solutions for data breach response, which we call YourResponse™, is that we are solely concerned with the best interests of our client, the organization that was breached, in helping them address the best interests of the victims of the breach. For all of our breach clients, we agree to NEVER market to the data breach victims, as a tenet of our contract with the client.

      Our flexibility in this regard is specifically designed to avoid the kinds of unintended negative outcomes that have been so visibly publicized in the SCDOR breach. The basis of the YourResponse methodology is to craft every element of a breach response to best serve the explicit needs of our client.  One such element is our standard agreement not to market or make subsequent offers directly to the breached consumers, unless our client specifically requests that such offers be made.

      About the Author

      Doug Pollack's avatar
      Doug Pollack

      CIPP, MBA. With over 25 years experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Identity Authentication Under Attack

      by Bob Gregg

      You know those questions that “highly secure” websites like your bank or brokerage ask to verify your identity? They do this because they believe they are asking questions that only you, the real you, would know.  Well, it looks like it is time to reconsider that belief. A seven-month investigation by security blogger Brian Krebs reveals that an underground cyber threat organization known as SSNDOB compromised the computers of information aggregators Dun & Bradstreet, LexisNexis, and Kroll Background America, which maintain records on millions of Americans that can be used to support knowledge-based authentication.

      SSNDOB markets itself on cybercrime forums as a service where customers can pay a few dollars and get all the knowledge based authentication information they need to convince a “secure” website that they are you.

      In his report, Krebs says SSNDOB's database was itself hacked this summer by multiple attackers, and he received a copy of the database. He says the database shows that the site's 1,300 customers have spent hundreds of thousands of dollars looking up Social Security numbers, birthdays, drivers' license records, and obtaining unauthorized credit and background reports on more than 4 million Americans.

      Quoting from the Krebs Report:                                         

       Avivah Litan, a fraud analyst with Gartner Inc., said most credit-granting organizations assess the likelihood that a given application for credit is valid or fraudulent largely based on how accurately an applicant answers a set of questions about their financial and consumer history.

      These questions, known in industry parlance as “knowledge-based authentication” or KBA for short, have become the gold standard of authentication among nearly all credit-granting institutions, from loan providers to credit card companies, Litan said. She estimates that the KBA market is worth at least $2 billion a year.

      “Let’s say you’re trying to move money via online bank transfer, or apply for a new line of credit,” Litan proposed. “There are about 100 questions and answers that companies like LexisNexis store on all of us, such as, ‘What was your previous address?’ or ‘Which company services your mortgage?’” Litan related a story she heard from a fellow fraud analyst who had an opportunity to listen in on the KBA questions that a mortgage lender was asking of a credit applicant who was later determined to have been a fraudster.

      “The woman on the phone was asking the applicant, ‘Hey, what is the amount of your last mortgage payment?’, and you could hear the guy on the other line saying hold on a minute…and you could hear him clicking through page after page for the right questions,” Litan said.

      So it now appears that we all have to step up our game on how identities are authenticated. There are biometrics, improved real-time adaptive authentication (such as tying your personal device and/or location to an identification), and many other schemes. But there is no reason to believe that knowledge-based authentication will go away any time soon. So we must push for organizations that use it to do everything possible to protect and update that information. Most importantly, we need all the key stakeholders (government, law enforcement, and private concerns) to come together and build and share best practices to fight these cyber crimes.  This will always be a cat-and-mouse game of one trying to stay ahead of the other, but we should agree to fight this problem as a team, not as a bunch of individual organizations that can get clobbered at any time.
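      The structural weakness here is that KBA answers are just another shared secret: anyone who buys the dossier passes the quiz. A toy sketch of why tying a device to the identity raises the bar (the question set and token scheme are hypothetical illustrations, not any vendor's protocol):

```python
import hmac
import hashlib

# Toy illustration: layering a device-bound factor on top of KBA.
# Stored KBA answers and the device secret below are hypothetical.

KBA_ANSWERS = {"previous_address": "12 Elm St", "mortgage_servicer": "Acme Bank"}
DEVICE_SECRET = b"registered-device-secret"  # provisioned on the user's phone

def kba_passed(answers: dict) -> bool:
    # Static answers: anyone who purchased the stolen dossier passes this.
    return all(KBA_ANSWERS.get(q) == a for q, a in answers.items())

def device_passed(challenge: bytes, response: bytes) -> bool:
    # HMAC over a fresh challenge: passing requires possession of the
    # device secret, which a purchased data dossier does not contain.
    expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def authenticate(answers: dict, challenge: bytes, response: bytes) -> bool:
    return kba_passed(answers) and device_passed(challenge, response)

# A fraudster with a complete stolen KBA dossier but no registered device fails:
challenge = b"nonce-123"
stolen_only = authenticate(KBA_ANSWERS, challenge, b"guess")
print(stolen_only)  # False
```

The design point: the KBA check fails "silently open" against stolen data, while the challenge-response check depends on something the SSNDOB database simply cannot sell.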

      About the Author

      Bob Gregg's avatar
      Bob Gregg

      With over 30 years of experience in high technology and software services, Bob joined ID Experts as CEO in 2009. He is particularly interested in the emerging trends involving identity theft and privacy data breaches, with emphasis on healthcare. "Let's keep our private, confidential information just that...private and confidential"

      Leon Rodriguez Provides Thoughts and Guidance on HIPAA Privacy and Security Enforcement Focus

      by Doug Pollack

      Leon Rodriguez, director of the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS), gave the keynote presentation today at the HIMSS Privacy & Security Forum in Boston.  He began by noting that it is auspicious that he was presenting today, since September 23rd is the day that HIPAA business associates are now required to comply with the HIPAA Omnibus Rule including the substantial new security regulations under the Security Rule.  

      Director Rodriguez's presentation went on to address the areas of focus for the enforcement actions that would be undertaken by OCR, specific prescriptive advice for organizations that are under the purview of OCR, and then a bit of insight into how OCR will proceed with its audit program.  

      Learn More: 3 Steps to Tackle HIPAA's Final Rule

      Enforcement focus

      OCR will have three areas of focus for their enforcement actions.  

      1. Major breakdowns or deficiencies in security.  In this area he noted that while a data breach might be the catalyst for an investigation, the breakdown in security identified by OCR may often have little to do with the cause of the breach that precipitated the investigation.

      2. Egregious disclosures of patient information.  He mentioned the Farrah Fawcett case at UCLA as an example. The key here appears to be situations where the exposure of PHI was totally unwarranted, and certainly not a function of "quantity" of patient records involved.  

      3. Failure to provide access.   He noted that under the new rules, patients now have access to information in their electronic medical record. He mentioned the Cignet case as an example of where access was not provided, and where the entity then didn't cooperate with OCR during the investigation.  He went on to describe this as the "sleeper" category for enforcement, which makes sense in the context of it being very important but somewhat less obvious.

      Having outlined these three categories, he also noted that OCR has a new portal for capturing complaints.  They expect around 18,000 complaints annually, and the majority of these will be potential HIPAA violations.

      One interesting area to note relative to their resolution agreements, which all covered entities and business associates should pay attention to, is that the agreements will typically span the entire organizational entity, not just the group that caused a breach or demonstrated a violation of HIPAA privacy or security.  So the lesson here gets back to a core point: compliance starts with a thorough risk analysis.  If you don't identify your weakest link, it may be the one that causes your entire organization to face the scrutiny of an ongoing resolution agreement.

      Prescriptive guidance

      I found it helpful that in some areas he was able to provide advice to covered entities and business associates relative to complying with the omnibus rule and avoiding unpleasant outcomes.  

      He must have said a half dozen times that the key to compliance starts with the security risk analysis, emphasizing the importance of knowing where your PHI is stored and what your most significant vulnerabilities are.

      He illustrated this point with the Affinity Health Plan case.  In that situation, the organization returned numerous leased copiers without first wiping them clean of patient data.  Unfortunately, the leasing company re-leased them, also without wiping them clean, to a media network that discovered they contained thousands of instances of patient information and then proceeded to do a segment on the situation.

      He also specifically talked about the value of encryption. There is, in his view, a misperception that encryption is not a cost-effective way to avoid unauthorized disclosures.  His admonition was that if you do the math, you'll find that encryption is a terrific value for any HIPAA covered entity or business associate.

      Audit program

      Director Rodriguez also delved into learnings from their recently concluded pilot audit program, and gave some perspective as to where the permanent audit program would focus.  He reiterated that one of the key learnings was the important role played by the security risk analysis. 

      He would, however, like the permanent program to address a much larger population of entities than the 115 that were audited in the pilot program. In order to accomplish this, they are adding permanent staff to complement outside auditors.  And they are funding the growth of their team through funds brought in via their enforcement actions that result in fines and penalties. He indicated that they will continue to use civil monetary penalties as a tool in their enforcement actions going forward.  

      Final thoughts

      I found Director Rodriguez's talk comforting in that it continued to express themes that OCR has consistently emphasized in its communications to all of us in the HIPAA community.

      It was useful to hear OCR reaffirm the security risk analysis as the cornerstone of a positive HIPAA compliance posture.  It was also helpful that they re-emphasized the value of encryption technology as a core tool for maintaining an appropriate security posture.

      Although it is clearly a brave new world, OCR indicated that we shouldn't be surprised to start seeing enforcement actions taken against HIPAA business associates.  And under their permanent audit program, they will be casting a much wider net, and it is highly likely that such net will identify many more entities found lacking in terms of their security controls and privacy compliance.  

      And lastly, I expect that with meaningful use driving more organizations to implement electronic health record (EHR) systems and participation in health information exchanges, we will see increasing levels of complaints arising from challenges that patients will have in gaining access to their health records.  I was pleased to see this as an area of focus within OCR given how it touches all of us in a very personal way. 

      Download: HIPAA Final Omnibus Rule Playbook

      About the Author

      Doug Pollack's avatar
      Doug Pollack

      CIPP, MBA. With over 25 years experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Buckle Up and Face Your Final Rule Enforcement Date!

      by Mahmood Sher-jan

      If you are the adventurous type, you might appreciate a blind date now and then for the mystery it can offer, but the same is never true when the blind date is with a regulator.  The anxiety of not knowing your obligations and consequences well in advance of the date can be brand- and even career-threatening.  As the 9/23/2013 enforcement date rapidly approached, I found myself handling many calls from privacy officers and CISOs who use our RADAR data incident management software.  The topic of interest was the same: they all wanted to know what to expect when 9/23 rolled around relative to any unauthorized PHI disclosures and the associated risk assessment under the Final Rule.  Some of them were updating their OIG work plans and needed to scope the impact and compliance effort.  I was more than happy to put their minds at ease.  I reassured them that by using RADAR they were already applying multiple factors to incident risk assessment, as prescribed in the Final Breach Notification Rule.

      Additionally, with the release of our RADAR 3.0 for the Final Rule on 9/23, I did not anticipate any significant change in the number of breach incidents.  A key benefit of using RADAR is that ID Experts takes care of any regulatory changes so our RADAR customers don't have to.  The story is of course different for any covered entity that turned a blind eye and is not using a consistent, multi-factor risk assessment methodology.  And the situation for business associates can be even more problematic, since they could be facing a blind date with the HIPAA security, privacy, and breach notification rules if they did not pay attention and learn about their obligations in time to be prepared.  These rules now apply to them directly.  BAs are required to have an incident management process, to notify their affected CEs, and to cooperate with the CEs to meet their burden of proof.

      Rick Kam, ID Experts founder, and I were recently interviewed by multiple publications, including the Wall Street Journal (http://stream.wsj.com/story/latest-headlines/SS-2-63399/SS-2-334968/0) and GovernmentHealthIT (http://www.govhealthit.com/news/4-steps-business-associates-comply-omnibus-hipaa), about the implications of the Final Rule and what steps these regulated organizations must take to comply.  As I was quoted in the WSJ article, the end of the grace period doesn't mean all organizations, especially the BAs, will be in compliance, but it does mean that if HHS comes calling, you have to show that you have a plan in place and are working toward compliance.  HHS needs to see a culture of compliance if you expect leniency.  Hopefully you'll find this advice helpful in planning your compliance effort.

      About the Author

      Mahmood Sher-jan's avatar
      Mahmood Sher-jan

      Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      The Medical Identity Fraud Alliance: Tackling Medical Identity Theft as a Team

      by Rick Kam

      Football season is underway. Fans paint their faces, grab artery-clogging hotdogs, and yell the opposing team into oblivion. We must apply that same fervor to fighting medical identity theft. The Ponemon Institute’s 2013 Survey on Medical Identity Theft received national coverage—NBCNews.com, The Wall Street Journal, CNNMoney.com and CNN to name a few—highlighting the societal nature of this crime.

      Given its pervasive nature, the issue of medical identity theft cannot be tackled alone. It takes a team. The Medical Identity Fraud Alliance, which sponsored the Ponemon report, has assembled an impressive roster of industry players to fight this problem. Participants include health plans, healthcare providers, industry experts, and legislators, each with their part to do.

      Tips for Healthcare Providers

      Perhaps more than any other team member, healthcare providers create, process, and secure vast amounts of sensitive patient data. They play a crucial role in fighting medical identity theft. ID Experts recently published the HIPAA Final Omnibus Rule Playbook to help covered entities and their business associates come into compliance. The plays outlined in the playbook can help healthcare providers better protect PHI/PII against a breach, thus reducing the likelihood that a patient will become a victim of medical identity theft. A few of the key plays include:

      Conduct a security risk analysis. Electronic health records have become a fact of life—putting more data and more patients at risk. A security risk analysis provides a prospective and in-depth analysis of the risks to your information assets involving electronic PHI and recommendations to meet the requirements of the HIPAA Security Rule.

      Provide employee training. The Department of Health and Human Services requires periodic privacy and security training for all employees of healthcare organizations. This is critical, given that an industry survey[1] found that the leading source (38 percent) of breach incidents is due to lost paper files and that the leading source of discovery of these incidents is from non-IT employees.

      Develop and test an incident response plan. Like it or not, data breaches are an everyday occurrence, and healthcare providers must be prepared. A ready-to-execute incident response plan can help minimize the impact of a breach incident on organizations and their patients.

      A Game or a War?

      With people’s lives at stake, medical identity theft is no game. It’s an ongoing battle against thieves, fraudsters, and even well-meaning family members to protect patients and their identities. The Medical Identity Fraud Alliance seeks to provide a solid defense against this crime, and the more who join, the better chance we have of winning. I encourage you to learn more at medidfraud.org.



      [1] “Health data breach trends from HCCA, SCCE survey,” January 25, 2013, HealthITSecurity.com.

      About the Author

      Rick Kam's avatar
      Rick Kam

      Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He is the chair of the ANSI PHI Project, Identity Management Standards Panel and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and a member of the Research Planning Committee for the Center for Identity, which is part of The University of Texas at Austin.

      The Dangerous Consequences of Medical Identity Theft

      by Bob Gregg

      The 2013 Survey on Medical Identity Theft, sponsored by the Medical Identity Fraud Alliance, with support from ID Experts, finds that an estimated 1.84 million people are victims of medical identity theft in the U.S., costing victims an estimated $12.3 billion. Medical identity theft occurs when someone uses an individual’s name and personal identity to fraudulently receive medical services, goods, and/or prescription drugs, including attempts to commit fraudulent billing.

      The Harmful Side Effects of Medical Identity Theft

      What does this mean to individuals who have had their medical identities compromised? Money, for sure. In the Ponemon report, 36 percent of respondents paid an average of $18,860 in out-of-pocket expenses.
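
      As a rough sanity check, the survey's headline figures are mutually consistent: the victim count, the share who paid out of pocket, and the average payment multiply out to roughly the reported $12.3 billion total. (Combining the figures this way is my own assumption about how the total was derived, not a calculation from the report itself.)

```python
# Back-of-the-envelope check of the Ponemon survey figures (illustrative only).
victims = 1_840_000         # estimated U.S. medical identity theft victims
share_paying = 0.36         # 36 percent of respondents paid out-of-pocket expenses
avg_out_of_pocket = 18_860  # average out-of-pocket cost in dollars

estimated_total = victims * share_paying * avg_out_of_pocket
print(f"${estimated_total / 1e9:.1f} billion")  # close to the reported $12.3 billion
```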

      Yet, far more dangerous are the medical side effects. Corrupt medical records can and do lead to mistreatment, misdiagnosis, a delay in treatment, or being prescribed the wrong pharmaceuticals. One victim I know of nearly received a possibly fatal injection of penicillin because someone used his lost medical insurance card.

      Ignorance Can Be Hazardous to Your Health

      A recent paper by the Medical Identity Fraud Alliance[1] cites the lack of awareness among professionals and consumers about the crime and its potential dangers.  “Few people think of themselves as having a medical identity and thus the idea of someone stealing their medical identity is not even on their radar screen,” the report says.

      In addition, few people understand—or even try to understand—their insurer’s Explanation of Benefits (EOBs). Fifty-six percent of respondents in the Ponemon report do not check their health records and EOBs for inaccuracies because they either don’t know how or said it’s too difficult.

      But, as we noted in the Decade of Data Breach infographic, medical identities are the new “black market”: “A stolen medical identity has exponentially higher street value than a Social Security number. Criminals are motivated to acquire and exploit medical identities.”

      The increased use of electronic health records and the Affordable Care Act will make it easier than ever for individuals, organizations, and even nations to steal and sell medical records. It is a societal issue that must be addressed at all levels, from individuals to providers to health plans. Criminals will always be around, but the efforts of the Medical Identity Fraud Alliance and the healthcare ecosystem as a whole can make it a lot harder to commit these dangerous crimes.



      [1] A Publication of The Medical Identity Fraud Alliance, “The Growing Threat of Medical Identity Fraud: A Call to Action,” July 2013

      About the Author

      Bob Gregg's avatar
      Bob Gregg

      With over 30 years of experience in high technology and software services, Bob joined ID Experts as CEO in 2009. He is particularly interested in the emerging trends involving identity theft and privacy data breaches, with emphasis on healthcare. "Let's keep our private, confidential information just that...private and confidential"

      Final Breach Notification Rule Enforcement Impact: UP or Down?

      by Mahmood Sher-jan

      Since the publication of the Final Breach Notification Rule in March 2013, there has been rampant speculation about the impact of the rule on the number of healthcare data breaches. Would the rule’s new “compromise” standard cause the trajectory of reported breaches to change significantly as of 9/23/2013, the enforcement date?

      I have been discussing this topic with many covered entities, and I would characterize the sentiment as mostly uneasy.  Most entities are concerned that their breach volume could rise, while fewer anticipate a steady state where the impact is insignificant.  We are clearly not talking about the most egregious incidents arising from intentional and malicious acts by insiders or hackers. At issue is the high volume of unintentional and inadvertent disclosures of PHI that occur daily across the industry and do not qualify under the breach rule’s allowed exceptions.  However, based on who the recipient is and what the entity does to mitigate risk to the PHI, these incidents could be deemed to have a low risk of compromise.  Examples include faxes misdirected to the wrong CEs and EOBs mailed to the wrong patients or policyholders.

      Who has it right is anyone’s guess at this point. This divergent outlook reflects the continuing ambiguity of the final rule, which does not define the term “compromise” even though it is the new standard for risk assessment of all incidents involving unauthorized disclosure of PHI.  The rule gives covered entities and business associates better guidance in the form of four factors to be used for conducting incident risk assessments.  But frankly, these factors, to varying degrees, contribute to inconsistencies in risk assessment outcomes and to the ongoing uncertainty about the ultimate impact on the volume of breaches.

      The final rule is clear that no single factor should decide whether an incident is a breach.  And depending on the circumstances, additional factors can be taken into consideration.  I find two of the factors (the recipient of the PHI and the mitigation of risk to the PHI) to be the most consequential in establishing whether a low probability exists that PHI has been compromised.  For example, I advise and remind entities to obtain confidentiality agreements when disclosed PHI is returned or destroyed by the unauthorized recipients, including individuals, CEs, and BAs.

      My perspective is shaped by years of experience in incident risk assessment and by my role as the inventor of ID Experts RADAR, the only incident management software with an incident risk assessment engine specifically designed for this purpose.  I have performed extensive analysis of the Final Rule’s factors and ensured RADAR’s support for them.  RADAR is an invaluable decision-support tool for counsel and for privacy and security officers, and it has already been adopted by many healthcare covered entities, business associates, and insurance companies.

      So when in doubt, make sure that as a covered entity or business associate you have a “consistent” and “objective” methodology for performing incident risk assessment and that your process and decisions are documented.  I predict that entities that have been using a multi-factor incident risk assessment methodology under the interim final rule will see only a marginal increase, if any, under the final rule.  Entities that should be concerned are those that have yet to implement a compliant incident management and risk assessment process.

      About the Author

      Mahmood Sher-jan's avatar
      Mahmood Sher-jan

      Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      Medical and Health Information State Notification Laws

      by Heather Noonan

      Besides HITECH and the Final Omnibus Rule coming into play, it might be helpful to know that the following states also require healthcare data breach notification.  It’s also important to note that the state list is slowly but steadily growing, and while your business might not fall under HITECH regulations, it could fall under state laws.

      Learn More: 3 Steps to Tackle HIPAA's Final Rule

      Medical and health information state notification laws:

      • Medical records: Arkansas, California, Missouri, Puerto Rico (Act 111, Reg. 7207)
      • Medical information: Arkansas, California, Missouri, North Dakota, Puerto Rico (Act 111, Reg. 7207), Virginia (Public Sector)
      • Health insurance information: Connecticut (Insurance), California, Missouri, North Dakota 
      • Information that identifies an individual and relates to: Texas
        • the physical or mental health condition of the individual;
        • the provision of health care to the individual; or
        • payment for the provision of health care to the individual.

      Stay tuned for our “How Breaches Really Happen…” series. A little humor is always healthy.

      About the Author

      Heather Noonan's avatar
      Heather Noonan

      Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices. She is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science, specializing in Business Communication, and has over 15 years of experience in client service, with 10 years specifically in project management.

      Medical Identity Theft Poisoning the Healthcare Ecosystem, New Ponemon Study Reveals

      by Larry Ponemon

      Hybrid cars. Low-VOC paint. Organic foods. By reducing toxins and pollutants, Americans are committed to a healthier environment. But one little-known contaminant endangering our health is medical identity theft, according to the 2013 Survey on Medical Identity Theft conducted by Ponemon Institute and sponsored by the Medical Identity Fraud Alliance (MIFA), with support from ID Experts.

      Medical identity theft occurs when someone uses an individual’s name and personal identity to fraudulently receive medical services, prescription drugs and/or goods, including attempts to commit fraudulent billing. With victims paying $12.3 billion in out-of-pocket costs incurred by medical identity theft, this crime has become a national healthcare issue with life-threatening and financial consequences. It is tainting the healthcare ecosystem, much like poisoning the town’s water supply. Everyone will be affected.

      However, unlike lead in the water supply or a hole in the ozone layer, the 2013 Survey on Medical Identity Theft finds that consumers are unaware of the seriousness and dangers of medical identity theft. In fact, they often share their medical identification with family members or friends, making medical identity theft a family affair. The result? Life-threatening consequences for an estimated 1.84 million victims in the U.S.

      The 2013 Survey on Medical Identity Theft reveals other disturbing trends:

      • Lack of confidence in healthcare providers. Fifty percent of victims lost trust and confidence in their healthcare provider, while 56 percent of consumers would find another provider if they knew their healthcare provider could not safeguard their medical records.
      • Medical identity theft can cause serious medical and financial consequences, yet consumers are unaware of the dangers. Half of consumers surveyed aren’t aware that medical identity theft can create permanent, life-threatening inaccuracies and permanent damage to their medical records. The medical identity victims surveyed experienced a misdiagnosis, mistreatment, delay in treatment, or were prescribed the wrong pharmaceuticals. Half of respondents are left with unresolved incidents.
      • Consumers don’t take action to protect their health information. Fifty percent of respondents don’t take steps to protect themselves from future medical identity theft. Fifty-six percent of consumers don’t check their health records and Explanation of Benefits (EOBs) from health insurers for inaccuracies because they either don’t know how or said it’s too difficult. Of those who found unfamiliar claims, 52 percent didn’t report them. Half thought the police would be of no help.
      • Consumers often share their medical identification with family members or friends, putting themselves at risk. Thirty percent of respondents knowingly permitted a family member to use their personal identification to obtain medical services including treatment, healthcare products, or pharmaceuticals. By sharing medical identification with family members or friends, consumers unintentionally leave themselves and their health records vulnerable. People do not know that they are committing fraud. More than 20 percent of people surveyed can’t remember how many times they shared their healthcare credentials. Forty-eight percent said they knew the thief and didn’t want to report him or her.

      The Solution Is Us

      Factors such as the Affordable Care Act and the increased use of electronic health records (EHRs) are fueling the size and complexity of medical identity theft. Healthcare organizations cannot solve the problem alone. It will take a united effort, such as the Medical Identity Fraud Alliance, to research the problem, develop best practices, and empower individuals to be the first line of defense in protecting their PHI.

      The complete 2013 Survey on Medical Identity Theft is available at http://medidfraud.org/2013-survey-on-medical-identity-theft.

      About the Author

      Larry Ponemon's avatar
      Larry Ponemon

      Dr. Larry Ponemon is the Chairman and Founder of the Ponemon Institute, a research “think tank” dedicated to advancing privacy and data protection practices. Dr. Ponemon is considered a pioneer in privacy auditing and the Responsible Information Management or RIM framework.

      Security Compliance by HIPAA Business Associates May be Unexpectedly Costly

      by Doug Pollack

      A recently published article by Modern Healthcare noted the extraordinary time and effort our healthcare system is estimated to need in order to comply with the new HIPAA privacy and security rules. The U.S. Department of Health and Human Services (HHS) has estimated that compliance takes, in aggregate, around 32 million person-hours each year.

      Learn More: Be A Compliance Champion

      So what does that mean for each HIPAA business associate, which, as of September 23, 2013, is obligated to meet effectively the same rules as hospitals and insurers? How big will their effort be? As it turns out, that question isn’t as easy to answer as one might presume.

      HHS estimates in a document published in the Federal Register that there are between 300,000 and 400,000 HIPAA business associates. Each of these organizations will have what I think of as “startup costs” associated with achieving and demonstrating HIPAA security rule compliance. And then they will also have “ongoing annual costs” for maintaining their security compliance posture. I focus on the security rule since the burdens of the privacy rule are “mostly” borne by HIPAA covered entities.

      So how much are these costs for the “average” business associate? It is exceedingly difficult to tell. The HHS document speaks solely to the “burden” defined in this context as:

      “… the time expended by persons to generate, maintain, retain, disclose or provide the information requested.”

      So in this case, they note that the average business associate will have a new burden under the Final Omnibus Rule of 1.17 hours, for “documentation of security rule policies and procedures and administrative safeguards.” But this is a deceptively small number compared to the overall level of effort that most business associates will require.
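
      Putting HHS's own numbers side by side shows just how small that 1.17-hour figure is relative to the system-wide estimate. (Pairing the two estimates this way is my own framing for illustration, not a comparison HHS makes.)

```python
# Compare the per-BA documentation burden HHS cites to its system-wide estimate.
total_system_hours = 32_000_000           # HHS aggregate annual estimate, all entities
business_associate_counts = (300_000, 400_000)  # HHS range for number of BAs
per_ba_new_burden = 1.17                  # hours: security rule documentation burden

for n in business_associate_counts:
    ba_total = n * per_ba_new_burden
    share = ba_total / total_system_hours
    print(f"{n:,} BAs -> {ba_total:,.0f} hours, {share:.1%} of the system-wide estimate")
```

      Even at the top of the range, the documented per-BA burden accounts for under two percent of the 32 million person-hours, which supports the article's point that the 1.17-hour figure captures only a sliver of the real compliance effort.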

      A Wall Street Journal article titled “HIPAA Compliance Burden Grows with New Rule” delves into the costs associated with HIPAA compliance under the Omnibus Rule. While the article notes that the total cost to all entities of compliance would be no more than $225.4 million, according to HHS, it doesn’t help inform the question as to what the cost would be for the newly obligated business associates.

      In this article, Brian Beard, director of compliance and ethics at McKesson Specialty Health notes that “it will be tougher for the smaller [business associate] shops to be able to afford being in the health care business” as a result of their obligations under the new regulations. And this point is reinforced by HHS.

      Leon Rodriguez, Director of the Office for Civil Rights of HHS in an interview with Risk & Compliance Journal suggested to business associates “the first step for companies should be an inventory to identify what protected health information [PHI] they have, and then a risk analysis.  ‘Both in the audit pilot we did and in our enforcement work failure to do a risk analysis is a frequent deficiency,’ he said.”

      So as we look at the “startup costs” to business associates, these would include doing an exhaustive inventory of the nature and extent of the PHI that they receive from their covered entity clients, and/or share with their subcontractors, as well as doing a security risk analysis. Both of these efforts are substantial, and the latter is required in order to be compliant with the HIPAA security rule.

      And so this starts to lay the foundation for the earlier comment as to how some smaller business associates may find it unaffordable to remain in the health care business. While professional services prices for a security risk analysis and a PHI inventory can vary substantially, it is easy to imagine that these two combined could cost a business associate between $50,000 and $100,000 or more, depending on the size and scope of the business associate’s enterprise and data management.

      Based on this, it is easy to see why business associates may be “dragging their feet” in terms of understanding and addressing their obligations under the Omnibus Rule. The costs, however you calculate them, are non-trivial. However, the exposure of not dealing with security rule compliance could be catastrophic.

      In the same Wall Street Journal article, an HHS spokesman “pointed to a resolution agreement last year in which BlueCross BlueShield of Tennessee agreed to pay the HHS $1.5 million to settle a case involving theft of hard drives left in a network data closet after the insurer had moved its staff out of an office complex.  Under the new rule, the spokesman said, the property manager, a business associate, could also be liable for similar enforcement action.”

      So business associates should be able to see the writing on the wall. Either they invest upwards of $100,000 or more to demonstrate compliance with the HIPAA security rule, or, starting September 23, 2013, they are liable for penalties that can run into the millions of dollars if HHS takes an interest in their security posture.

      So as Director Rodriguez has suggested, you would be well advised to 1) do a rigorous PHI inventory for your organization so that you know what you have and how sensitive it is, and 2) do a security risk analysis (or more often likely find an experienced HIPAA professional services firm to do this for you) and clearly document its recommendations and the actions you took to address the most severe risk factors. And if I were you, I’d get these going in the next couple of weeks.

      Read More: Risk Analysis: Fundamental to HIPAA security compliance

      About the Author

      Doug Pollack's avatar
      Doug Pollack

      CIPP, MBA. With over 25 years experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Risk Analysis: Fundamental to HIPAA security compliance

      by Doug Pollack

      This article is reprinted with permission from Compliance Today: September 2013

      • The Omnibus Final Rule does not change the HIPAA Security Rule's requirements for risk analysis and risk management.
      • The requirement for risk analysis is explicitly stated in the Security Rule.
      • Risk analysis provides the foundation upon which a risk management program is built.
      • Although there is no "template" for risk analysis, resources such as NIST publications present industry standards for best practices.
      • Risk analysis by independent experts can help an organization quickly analyze and benchmark security programs against peer organizations and industry best practices.

      Learn More: RADAR Incident Risk Assessment Software

      With the publication of the HIPAA Final Omnibus Rule, healthcare providers and other covered entities are once again reassessing their privacy and security programs with an eye toward compliance. In light of new questions and requirements, we talked to Terrill Clements, Senior Equal Opportunity Specialist at the Department of Health and Human Services (HHS) Office for Civil Rights (OCR), to find out about new developments. His answer was reassuring—the key to compliance still lies in the fundamentals of good privacy practices: ongoing risk analysis, risk management, and monitoring.

      DP: Let's start with the question on everyone's mind: Does the Omnibus Rule change any of the requirements of the Security Rule?

      TC: The Omnibus Rule doesn't do away with any of the requirements of the Security Rule. In fact, it reiterates the importance of patient privacy and data security, and it officially modifies the HIPAA Privacy, Security, and Enforcement rules to include requirements specified by the HITECH Act. It extends these rules to include business associates, and it provides for a tiered increase in penalties for compliance violations and specifies mandatory audits by HHS. Leon Rodriguez, the Director of OCR, has said that it strengthens the ability of OCR "to vigorously enforce the HIPAA privacy and security protections." However, the HIPAA/HITECH Omnibus Final Rule contains no changes to the Security Rule's standards and specifications for risk analysis and risk management. The fundamentals are the same, and a compliance program still begins with and rests on thorough and ongoing risk analysis. (The Omnibus Final Rule became effective on March 26, 2013, and the compliance enforcement date is September 23, 2013.)

      DP: Where in the Security Rule does it specifically require that covered entities complete a risk analysis?

      TC: In section 45 C.F.R. § 164.308(a)(1)(ii)(A), the Rule states that as the first step of its security management process, a covered entity must "conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by [the covered entity]."1

      DP: What is the difference between risk analysis and risk management in the Security Rule?

      TC: Risk analysis is the evaluation of the risks and vulnerabilities that could negatively impact the confidentiality, integrity, and availability of the electronic protected health information (e-PHI) held by a covered entity, and the likelihood of occurrence.2 Risk management is the actual implementation of security measures sufficient to reduce the risks and vulnerabilities to a reasonable and appropriate level.3 These are obviously closely related concepts, and both are required by the Security Rule. You might say that risk analysis is the process of identifying and prioritizing potential problems, and risk management is the process of planning and taking systematic steps to reduce their likelihood of occurrence and severity.

      DP: Is there a risk analysis template, or a resource that provides a good example of what should be included in a risk analysis document?

      TC: Because the Security Rule is meant to be scalable according to the size of an entity, it does not specify an actual "template" for risk analysis. There are numerous methods of performing a risk analysis and there is no single method or best practice that guarantees compliance with the Security Rule.

      NIST is a federal agency that sets computer security standards for the federal government and publishes reports on topics related to IT security. Note that these reports are informational resources, not guidance that sets a standard upon which compliance is measured. Some examples of steps that might be applied in a risk analysis process are outlined in NIST Special Publication 800-30: Risk Management Guide for Information Technology Systems.4 However, there are several elements a risk analysis must include, regardless of the method or format employed:

      • Scope of the analysis — A covered entity's risk analysis must take into account all of its e-PHI, regardless of the particular electronic medium in which it is created, received, maintained, or transmitted, or its source or location.
      • Data collection — Covered entities must identify where the e-PHI is stored, received, maintained, or transmitted. An organization could gather relevant data by reviewing past and/or existing projects, performing interviews, reviewing documentation, or using other data gathering techniques. The e-PHI data gathered using these methods must be documented.
      • Identification and documentation of potential threats and vulnerabilities — Covered entities must identify and document reasonably anticipated threats to e-PHI, including different threats that are unique to the circumstances of their environment. Covered entities must also identify and document vulnerabilities that, if triggered or exploited by a threat, would create a risk of inappropriate access to or disclosure of e-PHI.
      • Assessment of current security measures — Covered entities should assess and document the security measures currently being used to safeguard e-PHI, whether security measures required by the Security Rule are already in place, and if current security measures are configured and used properly.
      • Threat probability assessment — The Security Rule requires covered entities to take into account the probability of potential risks to e-PHI. The results of this assessment will determine which threats may be "reasonably anticipated"—the threats that the Security Rule requires covered entities to protect against. The output of this step will be documentation of all threat and vulnerability combinations with associated likelihood estimates that may impact the confidentiality, availability, and integrity of the organization's e-PHI.
      • Threat impact assessment — The Security Rule also requires consideration of the "criticality," or impact, of potential risks to confidentiality, integrity, and availability of e-PHI. Covered entities must assess the magnitude of the potential impact resulting from a threat triggering or exploiting each specific vulnerability. A covered entity may use either a qualitative or quantitative method, or a combination of the two methods, to measure the potential impact, and the output of this process should be documented.
      • Risk impact assessment — Covered entities should assign risk levels for all threat and vulnerability combinations identified during the risk analysis. The level of risk could be determined, for example, by analyzing the values assigned to the likelihood and resulting impact of threat occurrence, or by assigning a risk level based on the average of the assigned likelihood and impact levels. The resulting levels should be documented and correlated with a list of corrective actions to mitigate each risk level.
      • Documentation — The Security Rule requires the risk analysis to be documented but does not require a specific format.5 The risk analysis documentation is a direct input to the risk management process.
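      The likelihood and impact steps above are often combined into a simple risk matrix. The following is only a sketch: the Security Rule and the NIST guidance leave the scoring scheme to each organization, so the three-point scales, threshold values, and example threat/vulnerability pairs below are hypothetical assumptions for illustration.

```python
# Hypothetical risk-matrix sketch. The three-point scales and thresholds
# below are illustrative assumptions, not OCR-mandated values.

LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact ratings into a qualitative risk level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Each documented threat/vulnerability combination gets an assigned level,
# which is then correlated with corrective actions to mitigate the risk.
register = [
    {"threat": "stolen laptop", "vulnerability": "unencrypted disk",
     "level": risk_level("medium", "high")},
    {"threat": "power failure", "vulnerability": "no backup power",
     "level": risk_level("low", "medium")},
]
```

      Whatever scheme is used, the point is the same: every threat and vulnerability combination ends up with a documented level that drives the risk management plan.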

      OCR's Guidance on Risk Analysis Requirements Under the HIPAA Security Rule and the other materials available on OCR's web pages provide much more detail on the steps in the risk analysis process.6

      DP: How often does a covered entity have to review and update its risk analysis?

      TC: The risk analysis process should be ongoing. In order for an entity to update and document its security measures "as needed," which the rule requires, it should conduct continuous risk analysis to identify when updates are needed.7 However, the Security Rule does not mandate a specific interval for updates. This process will vary from one covered entity to another, depending on individual circumstances. (Again, the time period is determined by what is "reasonable and appropriate" for that organization.) For example, many entities have formal policy review cycles that include periodic risk analysis updates. Annual or biennial reassessments appear to be a common general practice for such routine reassessments. But where there are significant changes in electronic record systems and processes, significant security incidents, or significant unforeseen security issues identified through system access tracking, it would be prudent for a covered entity to update its risk analysis and risk management programs promptly, rather than wait for a scheduled periodic reassessment that may be many months away.

      DP: What are some examples of threats that covered entities should address when conducting their risk analysis?

      TC: Several types of threats may occur within an information system or operating environment. Threats are typically grouped into general categories such as natural, human, and environmental. According to NIST Special Publication 800-30, Risk Management Guide for Information Technology Systems, some examples of common threats in each of these categories include:

      • Natural threats such as floods, earthquakes, tornadoes, and landslides.
      • Human threats are enabled or caused by humans and may include intentional (e.g., network and computer based attacks, malicious software upload, and unauthorized access to e-PHI) or unintentional (e.g., inadvertent data entry or deletion and inaccurate data entry) actions.
      • Environmental threats such as power failures, pollution, chemicals, and liquid leakage.

      DP: Does OCR use NIST risk-analysis standards as a best practices baseline for an assessment of whether a covered entity has met Security Rule requirements for risk analysis?

      TC: The NIST papers are important informational resources for OCR as well as covered entities, but they are not legally binding guidance for covered entities. Because the requirements of the Security Rule are flexible and scalable, no guidance material or other such resources can provide rigid prescriptions for compliance in every situation. NIST guidelines represent the industry standard for good business practices with respect to standards for securing e-PHI, however, so covered entities will often find their content valuable when developing and performing compliance activities.

      DP: Where can I find the OCR, CMS, and NIST guidance materials for risk analysis online?

      TC: You can find them at www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/securityruleguidance.html. You'll find these important resources (among others) there:

      • OCR's Guidance on Risk Analysis Requirements Under the HIPAA Security Rule.
      • CMS HIPAA Security Series 1: Security 101 for Covered Entities
      • CMS HIPAA Security Series 6: Basics of Risk Analysis and Risk Management
      • CMS HIPAA Security Series 7: Security Standards: Implementation for the Small Provider

      The HIPAA Security Information Series consists of educational papers produced by the HHS Centers for Medicare and Medicaid Services (CMS) to give covered entities insight into the Security Rule and assistance with implementation of the security standards. These are the most pertinent ones for risk analysis.

      NIST special publications are provided by OCR as an informational resource but are not legally binding guidance for covered entities. NIST publications can be found at: http://1.usa.gov/1cVMuFC. These three are a great place to start:

      • NIST Special Publication 800-30: Risk Management Guide for Information Technology Systems
      • NIST Special Publication 800-66: An Introductory Resource Guide for Implementing the HIPAA Security Rule
      • NIST Special Publication 800-115: Technical Guide to Information Security Testing and Assessment

      Risk analysis and customized compliance: With flexibility comes responsibility

      Information security is never "one size fits all," and PHI security is no exception. In recognition of varying needs, HIPAA and HITECH regulations don't prescribe the details of PHI privacy programs, giving organizations the freedom to create security programs that meet their needs. But with the flexibility to define your own security program comes the responsibility to ensure, and to be able to demonstrate and document, that risks of PHI exposure are discovered and adequately addressed.

      Terrill Clements emphasizes that a security risk analysis is not optional: it is required in order to achieve compliance. These eight questions and answers highlight the importance of ongoing risk analysis, risk management, and monitoring. With the enforcement date for the Omnibus Final Rule fast approaching, organizations that have not yet done a formal HIPAA security risk analysis need to move quickly to do so, also implementing risk management programs based on their findings. The risk analysis provides you with a blueprint for focusing your risk management program.

      Download: HIPAA Risk Assessment: Where To Begin


       

      1. The full text of the Security Rule can be found on OCR's website at: http://1.usa.gov/13u8Bjg

      2. See 45 C.F.R. § 164.308(a)(1)(ii)(A).

      3. See 45 C.F.R. § 164.308(a)(1)(ii)(B)

      4. NIST Special Publication 800-30: Risk Management Guide for Information Technology Systems and NIST Special Publication 800-66: An Introductory Resource Guide for Implementing the HIPAA Security Rule are available at http://1.usa.gov/1cVMuFC

      5. See 45 C.F.R. § 164.316(b)(1)

      6. http://www.hhs.gov/ocr/office/index.html

      7. See 45 C.F.R. §§ 164.306(e) and 164.316(b)(2)(iii).

      About the Author

      Doug Pollack

      CIPP, MBA. With over 25 years' experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Medical Identity Theft White Paper Released

      by Robin Slade

      Healthcare fraud in the US topped $80 billion last year, making it as much an economic driver as most of the top Fortune 500 companies.[1],[2]  This is particularly troublesome, in part, because medical identity fraud:

      • Can destroy your credit and credibility.
      • Places you at physical risk when critical patient information is altered to match that of the thief.
      • Can keep you from getting benefits you need.
      • And, if the thief’s actions are in some way criminal, can lead to you being accused of their crime.[3]

      In addition, the legal system and the healthcare industry currently lack effective deterrents and solutions to protect victims and to allow them to repair their records. Victims of medical identity theft include not only the person whose identity has been stolen, but also healthcare providers, insurance companies, taxpayers, and other consumers who pay higher prices for their own care as a result of theft and fraud.

      The extent and insidious nature of medical identity theft and fraud have such far-reaching and dangerous effects that stopping them requires all these stakeholders to come together to achieve meaningful solutions. The Medical Identity Fraud Alliance (MIFA) was recently founded to meet this challenge. MIFA is the first cooperative public/private sector effort created specifically to unite all stakeholders in jointly developing solutions and best practices for the prevention, detection, and remediation of medical identity fraud.

      MIFA has just released The Growing Threat of Medical Identity Theft: A Call to Action, which provides a blueprint for protecting health information; increasing education and solution awareness; creating a body of research, best-in-class technologies, and practices; and advocating for laws that will solve or curtail this problem. Today, very few professionals or consumers are aware of medical identity theft and its potential for harm. Policy decision-makers, organizations that hold Protected Health Information (PHI), law enforcement, regulatory agencies, and consumer-facing groups now have an opportunity and an obligation to bring this serious societal problem to the forefront and work together to protect the public.

      The organization welcomes all healthcare ecosystem participants, including healthcare organizations, self-insuring companies, third-party administrators, law enforcement, government agencies, technology service providers, academia, professional services, and research firms, to join it in meeting this challenge head on.



      [3] Medical identity theft and medical identity fraud both refer to crimes that involve the theft of Personally Identifiable Information (PII) from another individual. In the case of medical identity theft, this can also include theft of Protected Health Information (PHI).

      About the Author

      Robin Slade

      Robin M. Slade is the Development Coordinator for the Medical Identity Fraud Alliance, a public/private partnership that unites the healthcare ecosystem to develop solutions and best practices for medical identity fraud. Ms. Slade is also the President and Chief Executive Officer of the Foundation for Payments Fraud Abatement and Activism and FraudAvengers.org, a non-profit corporation and weblog focused on helping consumers lessen their exposure to fraud and scams. She is also Senior Vice President and Chief Operating Officer for The Santa Fe Group, and manages the Shared Assessments Program, a consortium created by leading banks, auditing firms, and service providers to inject efficiency and cost savings into the vendor risk assessment process.

      The Cost of a HIPAA Violation After the Omnibus Final Rule

      by Heather Noonan

      Why are we all a little paranoid about September 23, 2013? Why does it feel that we have an important meeting or a late bill lurking in the back of our mind? Or did we forget that doctor’s appointment last month that we tried to remind ourselves about? Maybe it’s both.

      I will be honest. The deadline is a bit nerve-racking even for me, and we live and work in the HIPAA world every day.

      First, take a look at the penalties for violations that occurred prior to 2/18/2009. (Significantly lower.)

       

                              For Violations Occurring    For Violations Occurring
                              Prior To 2/18/2009          On Or After 2/18/2009

      Penalty Amount          Up to $100 per violation    $100 to $50,000 or more per violation

      Calendar Year Cap       $25,000                     $1,500,000

       

      Now take a look at today's new violation tiers. If you are in the HIPAA or HITECH world, I believe this is one deadline you don't want to ignore. (That is millions, not thousands, in the right-hand column.)

      Violation Type                      Each Violation       Repeat Violations per Year

      Did Not Know                        $100 – $50,000       $1,500,000
      Reasonable Cause                    $1,000 – $50,000     $1,500,000
      Willful Neglect – Corrected         $10,000 – $50,000    $1,500,000
      Willful Neglect – Not Corrected     $50,000              $1,500,000
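      To see how the annual cap interacts with per-violation amounts, here is a minimal sketch in Python. The per-violation figure is whatever amount HHS assesses within the applicable tier; the example counts are hypothetical.

```python
# Sketch of the post-2/18/2009 penalty math: repeat violations of a single
# provision accrue per-violation amounts up to an annual cap of $1.5 million.
# The per-violation amount within a tier is set by HHS case by case.

ANNUAL_CAP = 1_500_000  # per identical provision, per calendar year

def annual_penalty(per_violation: int, violation_count: int) -> int:
    """Total penalty for repeat violations of one provision in one year."""
    return min(per_violation * violation_count, ANNUAL_CAP)

# Forty uncorrected willful-neglect violations at $50,000 each would total
# $2,000,000 before the cap, so the assessed amount is the $1,500,000 cap.
capped = annual_penalty(50_000, 40)
# 500 "did not know" violations at the $100 tier minimum stay well under it.
small = annual_penalty(100, 500)
```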

       

      Consider circling it on your calendar – September 23, 2013.

      More Info: HIPAA Final Omnibus Rule Playbook: Your Ticket to Winning the Compliance Game

      About the Author

      Heather Noonan

      Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices. She is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science, specializing in Business Communication, and has over 15 years of experience in client customer service, with 10 years specific to project management.

      How to quickly prepare Attorney General Letters after a data breach

      by Heather Noonan

      I have to honestly say that preparing Attorney General notification letters is the hardest part of my job. Not because it is difficult, but because it is incredibly cumbersome. You have 50 different states with 50 different requirements, plus the U.S. territories, and if the data breach involves healthcare, education, or credit cards, federal regulations come into play as well. It's exhausting just writing about it.

      This, like anything, gets easier the more times you do it. Yes, you can go to varying legal websites and look up the information and yes, you can keep your own reference sheet, but organization is key for this type of thing.

      MORE INFO: Don’t forget to include tips: State Requirements in the Data Breach Notification Letter

      There really isn't a quick way to prepare Attorney General and regulatory notifications. Regulatory notification can be anything from notifying the Attorney General, the credit reporting agencies, or the Department of Education. It also extends to state or federal entities and their enforcement agencies, such as the Department of Consumer Affairs or the Department of Law and Public Safety. Unless you have been doing this for years, you will want to set aside a good amount of time to prepare these documents.

      Typically a company spends all their time reviewing and stressing about the individual notification letter and they forget about the time it will take to prepare the regulatory notification. This small group of letters can take a lot of time. Let me say that again, this small group of letters can take a lot of time.  Regulatory notification often, if not always, needs to be sent before the individual notification, so pay very close attention to the due date on these.

      My main recommendation is to know the requirements and timeframes for the states where your affected individuals reside. Some attorneys general require notification within 24 hours, while some only require notification if more than 500 individuals must be notified. Others don't require notification unless 1,000 individuals are affected. You can see that there is a lot of variation here.

      Does the Attorney General require notice? If so, what do they require in the notification and when do they require it by? Do they want to know the date of the incident or how many individuals were affected? Keep an organized reference sheet for who requires what and when...and you have got it made!
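      That "organized reference sheet" can live as a simple lookup structure. This is only a sketch: the state names, thresholds, deadlines, and required fields below are hypothetical placeholders, and real values must be verified against each state's current statute before use.

```python
# Hypothetical reference sheet mapping each state to its AG notification
# requirements. All entries are placeholders for illustration only.

AG_REQUIREMENTS = {
    "State A": {"ag_notice_required": True, "threshold": 500,
                "deadline_days": 30,
                "must_include": ["incident_date", "affected_count"]},
    "State B": {"ag_notice_required": True, "threshold": 1000,
                "deadline_days": None,  # no statutory deadline in this sketch
                "must_include": ["affected_count"]},
}

def states_requiring_notice(affected_by_state: dict) -> list:
    """Return states whose affected count meets the AG notification threshold."""
    return [state for state, count in affected_by_state.items()
            if AG_REQUIREMENTS.get(state, {}).get("ag_notice_required")
            and count >= AG_REQUIREMENTS[state]["threshold"]]
```

      With the counts per state from an incident in hand, a quick pass over this sheet tells you which regulators to notify first and what each letter must contain.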

      Along with Attorney General requirements, please see our other “How to” series for more information: Don’t forget to include tips: State Requirements in the Data Breach Notification Letter

      About the Author

      Heather Noonan

      Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices. She is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science, specializing in Business Communication, and has over 15 years of experience in client customer service, with 10 years specific to project management.

      Responding to a Data Breach in the Cloud

      by Doug Pollack

      There are new and additional challenges that come into play when you have a data breach of sensitive personal information belonging to your customers, or others, when that data resides in the cloud. The Cloud Security Alliance's February 2013 report, The Notorious Nine: Cloud Computing Top Threats in 2013, ranks data breaches as the #1 cloud security threat. So given this level of risk, how do you prepare for, and what new twists must you anticipate in, a data breach at a cloud vendor?

      Learn More: How Forensics Can Help You Comply with the HIPAA Final Omnibus Rule

      First and foremost, when you have personal data on your customers, be it personally identifiable information (PII) or protected health information (PHI), stored on or transmitted through a cloud vendor, you relinquish the level of physical and logical security and control that you have with your own servers. The implication is that it becomes more difficult for you to determine exactly what data was "breached."

      As noted in a recent discussion with Seth Berman of Stroz Friedberg (Cloud service providers often not set up for incident response, ComputerWeekly.com, August 2, 2013):

      “Companies are forced to fight attackers on multiple geographic fronts, but the complexities of the internet cloud and patchwork quilt of data privacy laws means a prompt response is often difficult.”

      And while as Mr. Berman notes, part of the complexity is the interesting web of laws that dictate the requirement for data breach notification, especially given that in the U.S. federal and state laws can overlap in confusing ways, much of the challenge can be in just determining the relevant facts: what data on what individuals was exposed? He further notes that:

      “We regularly deal with incidents where data is scattered across servers in multiple physical locations or even on servers that may house other companies' data. This makes forensic response complicated, slow or, in some cases, impossible.”

      Which is understandable. Part of the benefit of using cloud services is that they handle the virtualization process and scalability issues for you. So sensitive personal data entrusted to you can be spread across servers and, in some cases, sit on shared systems alongside other companies' similarly sensitive information. So imagine the challenges of carrying out a forensics investigation to determine the nature and level of data exposure when you don't "control" the computing environment.

      A starting point to address this thorny issue comes in a recent report from Gartner Group on cloud contracts.  Gartner has recommendations and guidance for companies to improve the provisions in their cloud contracts to address data breach risks and the processes for mitigating compromises and supporting the required data breach notification process. And then of course, there is the question of who bears the costs, and which costs.

      It makes a great deal of sense to address the questions of how a data breach is handled with your cloud vendor as part of the agreement. Given the challenges in carrying out forensics in a timely and accurate fashion, you really want to be assured that your cloud vendor will have your back.

      While it might be obvious to you already, it is your company’s reputation that is at risk with such a breach, and your obligation, morally and otherwise, to address the issues that surface with the affected individuals. So if your cloud vendor isn’t prepared to help you in promptly assessing the nature, scope and extent of a breach, then your life is likely to be much less pleasant for a long while.

      DOWNLOAD: HIPAA Final Omnibus Rule Playbook: Business Associate Edition

      About the Author

      Doug Pollack

      CIPP, MBA. With over 25 years' experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      How to develop a data breach informational website that helps people become more self-sufficient

      by Heather Noonan

      This “Data Breach Response – How To” article is part of our larger series by Heather Noonan.

      Informational website? What is that? Well, if you don’t mind, I’m going to recommend it.  It’s a definite must have for any breach response. You will find that most people today turn to the internet to find answers. They will still call a toll-free number, but if you give them the option, they will go straight to Google. I still pick up the phone, but if it is something I can easily go to a website for, I’m looking it up right now. People simply don’t have the time or want to make the effort to call in and talk to someone. They want immediate answers and on their time. The term instant gratification comes to mind.

      An informational website will tell the affected individuals what happened and what is being done to repair the situation. The website will provide recommendations and tools if someone does fall victim to identity theft. It will also provide information on how to place a fraud alert and a security freeze, and how to contact the credit reporting agencies. Whether the incident is credit card theft or healthcare fraud, an informational website can answer a wide range of questions.

      Still not convinced? I will provide a list of the benefits for your company and the affected individuals. Don’t get me wrong. You still need and want a toll-free number, but a website will make your life and everyone else’s a whole lot easier.

      Top 10 reasons an informational website is important to a company and the affected individuals:

      1. Everyone has the same question – Who, what, where, when and why. You will find that the same questions are always asked.
      2. Empower – Give people the tools to help themselves.
      3. Decrease costs – Having a website will cut down on phone maintenance and costs.
      4. Decrease call volume – Most people today will go to a website for information. Less and less are picking up the phone.
      5. Follow federal and state requirements – HITECH already requires a toll-free number, email address, postal address, or website for people to ask questions and learn additional information. U.S. states and territories have similar requirements and many are headed in that direction.
      6. Continued productivity – Less time on the phone means more employees and affected individuals continuing with their job and daily activities.
      7. Privacy – A lot of people will be upset by the incident and will want privacy. They don’t want documentation that they called in or needed assistance.
      8. Direct information – A website is a lot easier to pull information from. “You said the FTC address is 600 Pennsylvania or did you say 6000? Pennsylvania? Is that spelled with one n?”
      9. Choices – Give people a choice. If the website doesn’t provide all the answers, give them the alternative and option to call a toll-free number.
      10. Simplicity – Websites are easy to control and easy to access, making content management simple.

      About the Author

      Heather Noonan

      Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices. She is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science, specializing in Business Communication, and has over 15 years of experience in client customer service, with 10 years specific to project management.

      Is Dealing in Personal Data a Game?

      by Rick Kam

      In real life, no.  But wait...

      Check out this website, www.DataDealer.com, and find a new online game that turns brokering personal information into play!


      I tried to play it to see if I could be a successful data broker or data thief, depending on how far I was willing to cross the line to obtain personal information. The game gives you several options to buy personal data, including names, addresses, email addresses, medical records, sexual preference, and more, starting with an initial bankroll of $5,000.

      I started playing and bought medical records, email addresses, and profiles from Uncle Enzo, spending a few hundred bucks to amass 1,184,403 records. The game also provides corporate buyers and government agencies willing to pay for the personal data I collected. Wow!

      The game provides a fun way to learn about the impact of your personal data being bought, sold, stolen, breached, etc.  Give it a try if you have a few minutes and want to see how your personal data may be valuable to many legal and illegal operators.

      About the Author

      Rick Kam

      Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He is the chair of the ANSI PHI Project, Identity Management Standards Panel, and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and is a member of the Research Planning Committee for the Center for Identity, which is part of the University of Texas at Austin.

      Enterprise Risk Management is a team sport

      by Jeremy Henley

      We often get the question of “who is in charge when a data breach occurs?” or a slightly different one of “who is in charge of preventing a data breach for an organization?”

      This seems like a simple question that should have a simple answer, like “information security” or “the person in charge of privacy or compliance.”  However, preventing and responding to data breaches is more like a team sport.  In team sports you have key players in the specific skill positions at which they excel, and key people in the organization, like management and ownership, who select those players and coaches.

      I like to think of it as a football team, the American version.  Even if you do not follow the sport, you probably know there are offensive and defensive teams and “skilled” positions like quarterback, running back, and wide receiver.  When you dig deeper into a sports team, you find specialty coaches, head coaches, a general manager, marketing, and ownership.

      I am sure no one wants to think about resolving a data breach in the same light as winning the Super Bowl.  However, I do think you could make the case that having an entire organization in sync is how you achieve great accomplishments for your organization, like data security and dealing with the challenges of a data breach.

      To reach ultimate success, it is important for ownership, general management, coaches, and players to all get on the same page.  In this webinar we will attempt to facilitate the discussions necessary to cross over to the different skilled players, or “silos,” in your organization and create a winning data security team that can play both offense and defense at a high level and has all the tools necessary to win.

      Please join us August 13 at 11:00 a.m. PDT.

      Click here to register.

      About the Author

      Jeremy Henley

      Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Health Care Compliance Association in healthcare privacy and compliance and brings 11 years of sales and leadership experience to the ID Experts team.

      A Decade of Breach: The More Things Change, the More They Stay the Same

      by Doug Pollack

      We have spent this summer highlighting some of the key drivers, trends, and characteristics surrounding the data breach phenomenon of the last decade. As I’ve reflected on some of the really interesting data points in the Decade of Data Breach infographic, I find that a lot has changed over the last 10 years in the nature, root causes, and types of data involved in data breaches. But while technology, the social use of personal data, and the nature of threats have changed dramatically, the regulatory framework and enforcement around data breaches have changed much less (except in the healthcare industry and under some state laws, such as California’s, Connecticut’s, and Texas’s).

      Thinking back almost a decade to the ChoicePoint data breach in September 2004, very few members of the public at that time were familiar with the term “data breach,” nor did they really understand its meaning. Even fewer grasped the kinds of information collected by data aggregators such as ChoicePoint, or their business models for monetizing this information about all of us.

      The ChoicePoint breach was a seminal moment in the history of data breach. It was the means by which data breach came into the public consciousness. It affected around 163,000 individuals, somewhat small by modern data breach standards. But it was also the first major breach subject to the notification provisions of the new California law, SB 1386.

      Prior to this incident, breaches did occur, but they were often kept very quiet. SB 1386 was the first legislation that required breached organizations to at least notify the affected individuals. While this wasn’t “public” notification, it began to get information about breaches into the public domain. The ChoicePoint breach tested, in many ways, what behavior we would require of organizations that breach personal information on U.S. citizens.

      ChoicePoint initially notified only the 35,000 California residents, a small subset of the 163,000 individuals nationwide whose personal information was exposed. After a public outcry and investigations by members of Congress, the Federal Trade Commission, and the U.S. Securities and Exchange Commission, ChoicePoint relented and notified all of the affected individuals. But this didn’t save it from further liabilities. The company was ultimately fined $15MM by the Federal Trade Commission in January 2006: $10MM in civil penalties and $5MM to compensate victims. Then, in January 2008, it settled its final class action lawsuit for $10MM.

      Now, a lot has changed since the ChoicePoint breach. During the subsequent years, from around 2005 to 2009, most U.S. states passed data breach notification laws modeled after the California bill. I think of this period as the Card Breach Era in data breach history, because these laws had the opportunity to affect a series of breaches in which credit and debit card information was compromised.

      The first of these notable breaches was at CardSystems, a processor of MasterCard transactions. This breach affected around 40MM cardholders and was an indicator of the “value” of credit card information on the black market. The CardSystems breach was followed by the breach at TJX (parent company of retailer T.J. Maxx) of 94MM cards, and then by what at the time was the Mother of All Breaches, the Heartland Payment Systems breach of 130MM cards.

      During this era, it became common for breached organizations to provide a free year of credit monitoring to the affected individuals in order to address the potential “harms” from the breach. This made sense at the time, because breaches of card information led naturally to risks of card fraud, for which credit monitoring could provide an early-detection indicator of sorts for the affected people.

      Just as the Card Breach Era was running out of steam, we could see that cybercriminals and health fraudsters were setting their sights on more sensitive, more valuable health information as a new, prime target. The resulting wave of breaches, think of it as the HIPAA Breach Era, is notable for hackers attracted by the substantial monetization potential of personal health information on the black market. The “value” of valid health insurance data is approaching $50 per record by some calculations, 100 times the value of a valid credit card record.

      Just as this wave was approaching, the U.S. Congress coincidentally passed the very first national data breach notification law, incorporated in the HITECH Act. This law required notification not just of the affected individuals, but also of the public at large and regulators, when a breach of PHI (protected health information) occurred. It also required that the U.S. Department of Health and Human Services (HHS) maintain a public database of reported breaches. This new regulatory framework, and the associated organizational focus on protecting health information, has come into play just as electronic health information is exploding in volume, with medical and health insurance records going into electronic health record systems and moving across health information exchanges.

      So in many ways the data breach world today is vastly different from that of the ChoicePoint breach a decade ago. The data is more sensitive and more valuable. The black market operates on a global scale via the Internet. But interestingly, the laws haven’t changed much, although enforcement of privacy and security regulations has risen substantially.

      The laws are based around and focused on notification. I suspect the rationale for this approach is that it would “embarrass” organizations that have breaches for their failure to protect their customers’ private information. This assumption is validated somewhat by the informal name for the HHS breach database, known affectionately in the industry as the “Wall of Shame.” But I’m really not sure that this is working, or that it is sufficient.

      Breaches have become so commonplace today that management of the breach response process has become primarily a legal exercise, ensuring proper notification of individuals and authorities as dictated by law and rule, rather than one focused on addressing the perceived and actual risk of harm to the breached population.

      The laws are not at all prescriptive in how breached organizations should address the “harms” that befall individuals victimized by breaches. During the Card Breach Era, organizations voluntarily provided credit monitoring by convention, as a means of addressing the associated harms that could occur with such breaches. But this approach is neither mandated, nor would it be effective, in addressing the potential consumer harms of a health information data breach.

      This leads me to share a few thoughts and perceptions on data breaches moving forward. Now that we’re well into the HIPAA Breach Era, I perceive that organizations entrusted with our health information are finding it difficult, if not impossible, to protect this data from both accidental and malicious exposure. The environment is too fluid, too much information is now digital, use of portable devices is exploding, and the data is too profitable for the bad guys. Healthcare fraud is a remarkably huge financial problem in the U.S.

      So what can we all do differently to address the harms that consumers are exposed to by data breaches?

      1. Help consumers detect & prevent health fraud.

      Develop technologies and products that can assist in identifying and preventing health fraud, and medical identity theft. During the Card Breach Era, there was credit monitoring. We need a solution for the HIPAA Breach Era that can provide a similar level of efficacy helping consumers in the early detection of fraudulent use of their health identities. Current offerings that provide scanning of the cyber black market are a great start. But better tools are needed to engage and enroll the consumer in monitoring their health identity for compromise and fraud.

      2. Update laws to address consumer harms.

      Acknowledge that malicious breaches of personal health information are different from incidental breaches of this data. In the case of malicious breaches, laws should require the breached entity to take tangible actions to address the risks of harm to the breached individuals. If the HITECH Act was HIPAA 2.0, then we need a HIPAA 2.1 to set expectations for how consumer harms are to be addressed in health information breaches.

      3. Make managed identity restoration the standard.

      Cleaning up after fraud and identity theft intrudes on your life and can be a very complex, time-consuming, and arduous process. It can take months, if not years, in some cases to get yourself back to a “pre-theft” position. Managed Identity Restoration (MIR) services, which operate under an LPOA (limited power of attorney), have become the “gold standard” in addressing consumer harms related to data breaches. But breached entities offer MIR in only a very small number of breaches. In this new HIPAA Breach Era, an offer of MIR should become the standard, just as credit monitoring was during the Card Breach Era.

      About the Author

      Doug Pollack

      Doug Pollack, CIPP, MBA, has over 25 years of experience in technology industry products and services and is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      Q&A: How Forensics Can Help You Comply with the HIPAA Final Omnibus Rule

      by Winston Krone

      Recently Mahmood Sher-Jan, Vice President of Product Management at ID Experts, and I participated in a webinar titled Healthcare Data Vulnerabilities: How Forensics Can Help You Comply with the HIPAA Final Omnibus Rule.  During the webinar we analyzed top trends in healthcare data digitization and the associated data breach and security incidents, as well as the business considerations and implications of using an independent forensics specialist to investigate a security incident.  We were asked several questions throughout the webinar, and Mahmood and I have since answered as many as possible; you can read the whole document here.  Four questions in particular deserve highlighting, as they are issues I have run into at different healthcare organizations recently.

      Learn More: How Forensics Can Help You Comply with the HIPAA Final Omnibus Rule

      Q: How long do logs need to be kept under HIPAA?

      A: Technically speaking, an organization must be logging sufficient information that a forensic investigation can determine whether an intrusion took place and whether data was exfiltrated.  Ideally, an organization must be able to determine a baseline, i.e., “normal” network activity.  This can mean having access to network traffic logs going back 6 to 12 months.  At a minimum, though, assume that any incident will take at least a week to be identified.  Thus any logs that are overwritten or kept for less than 7 days are unlikely to assist in a forensic investigation.  Logs also need to be stored securely; they’re a key target for hackers attempting to cover their tracks.
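      As a rough illustration of that retention guidance, here is a minimal sketch (the directory layout and thresholds are illustrative assumptions, not anything prescribed by HIPAA or discussed in the webinar) that flags a log directory whose oldest file is newer than a minimum retention window:

```python
import os
import time

def retention_days(log_dir):
    """Days of history the directory currently covers, judged by the
    oldest file's modification time. Returns 0.0 for an empty directory."""
    mtimes = [
        os.path.getmtime(os.path.join(log_dir, name))
        for name in os.listdir(log_dir)
        if os.path.isfile(os.path.join(log_dir, name))
    ]
    if not mtimes:
        return 0.0
    return (time.time() - min(mtimes)) / 86400.0

def meets_minimum(log_dir, min_days=7):
    """True if the oldest log is at least min_days old. The 7-day floor
    mirrors the 'assume a week to identify an incident' rule of thumb;
    for baselining you would want this closer to 180-365 days."""
    return retention_days(log_dir) >= min_days
```

A script like this only checks that old files still exist; it says nothing about whether the logs capture the right events or are stored tamper-proof, which matter just as much.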

      Q: When does a malware infection warrant a reportable incident or a Risk Assessment?

      A: If an organization suffers a malware infection on a key computer, the organization must be able to prove that the malware was unable to access or extract HIPAA data, or that the malware did not send out HIPAA- or state-governed data to the criminals controlling it.  If the organization does not have the evidence (e.g., logs) to determine how the malware acted or whether it actually compromised data, then the organization may be forced to assume the worst and conduct a compliant risk assessment that takes into account all the required factors to determine whether the incident is reportable.  More broadly, a malware infection is a “red flag” that a technical safeguard or organizational process has failed, and it should trigger a new risk assessment or analysis of threat vectors and vulnerabilities.
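      The “required factors” here are the four factors of the Final Omnibus Rule’s risk assessment. The sketch below (an illustrative simplification, not legal advice; the factor names paraphrase the rule and the scoring scheme is my own) shows the key consequence of the answer above: a factor with no supporting evidence cannot be shown low-risk, so the incident stays presumptively reportable.

```python
# The four factors a compliant risk assessment must weigh under the
# HIPAA Final Omnibus Rule (paraphrased; see 45 CFR 164.402).
FACTORS = (
    "nature_and_extent_of_phi",   # identifiers involved, re-identification risk
    "unauthorized_person",        # who accessed or received the data
    "phi_actually_acquired",      # whether PHI was actually acquired or viewed
    "mitigation",                 # extent to which the risk has been mitigated
)

def is_reportable(assessment):
    """An impermissible use or disclosure is presumed a reportable breach
    unless the entity demonstrates a LOW probability of compromise on all
    four factors. A factor missing from `assessment` (e.g., no logs to
    support it) cannot be demonstrated low-risk."""
    return not all(assessment.get(f) == "low" for f in FACTORS)
```

For example, a malware infection with no logs might only support a "low" rating on mitigation, so `is_reportable({"mitigation": "low"})` is true and the incident must be treated as reportable.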

      Q: In your opinion, do email strings that employees pass around to their friends and coworkers pose a risk for viruses and malware?

      A: They can pose a risk.  Of more concern, however, is that they indicate an organization that is not taking IT security seriously.  An organization should be controlling and guiding Internet and email usage by its employees.  There should be no co-mingling of personal and work email accounts and, ideally, employees should not be able to access their personal email (or indeed any personal Internet activities) from computers within an organization’s network.  Allowing endless email strings is indicative of sloppy security management.

      Q: Can you speak to the risk of wireless transmission and all the apps that clients, doctors, and nurses now use, whether over the hospital's 802.11 network or on cell phones over the cellular network?

      A: Where HIPAA data may be stored on or accessed from wireless devices, those devices must be “clean”: the organization should be controlling those devices and must be able to vouch for their security.  In terms of best practices, this means that wireless devices should not be dual-use (both work and personal); any installed applications should be vetted and approved; and there is no role for BYOD (Bring Your Own Device).  Even apps designed for healthcare organizations have been found to have poorly designed features that cause HIPAA data to be inadvertently stored on devices or unnecessarily shared between devices.

      About the Author

      Winston Krone

      Winston Krone is the Managing Director of Kivu Consulting, Inc. in San Francisco, California. He manages a team of technology experts specializing in data breaches, computer forensic investigations, and security compliance, and has served as a legal and technology advisor to corporations, healthcare organizations and financial institutions in cases involving online privacy, hacking, and theft of trade secrets. Winston has testified as an expert in federal and state courts on unauthorized access to networks, deletion of digital evidence, and the veracity of email and online communications. Winston is qualified as an attorney in California and as a solicitor in the United Kingdom. Prior to Kivu, Winston was the Director of Computer Investigations for a publicly traded risk management company. As an attorney, he worked for major law firms in London, Brussels and San Francisco, and for the UN and US State Department in Rwanda, Bosnia, and Kosovo.

      10 Years of Data Breaches: Nothing to Celebrate for Victims of Medical Identity Theft

      by Bob Gregg

      Data breaches first entered public view just 10 years ago, when certain states enacted legislation requiring the public disclosure of data breaches impacting consumers. Now they’re part of the consumer vocabulary. That’s no surprise, given that breaches affect the financial and physical health of consumers. Identity theft is the fastest-growing crime in the U.S., according to the FBI; an identity is stolen every 3 seconds, a recent Javelin study found. This infographic, A Decade of Data Breach, provides a snapshot of identity theft and data breach over the last decade and is available here: http://www2.idexpertscorp.com/a-decade-of-data-breach/

      Financial identity theft has become well understood in the U.S. over those 10 years, and protecting consumers from harm has become an industry. But what is not well understood is that one of the fastest-growing trends in identity theft is medical identity theft, which affected over 2 million people in the U.S. in 2011.

      With its serious health risks, medical identity theft is far more dangerous than consumer or financial fraud. For instance, when a victim’s medical records are merged with those of a thief using the same identity, that record becomes “polluted.” The victim may be denied treatment or be misdiagnosed based on this inaccurate information. In addition, patients may be denied life insurance or billed for services not rendered. A few real-world examples illustrate the dangers:

      • In Oregon, a pregnant woman used another woman’s Social Security number to deliver a baby addicted to crack, then abandoned the baby. Police arrested the victim and put her children into protective custody.
      • A hospital’s billing department notified a pregnant woman in Washington that someone had used her Social Security number to be treated for a crack overdose at the ER of the same facility where she was about to deliver her baby.
      • A patient in Texas used a California man’s medical identity to obtain radiation treatment. When the victim’s records merge with the thief’s, healthcare providers will think the patient has a condition he doesn’t have.
      • One woman used her sister’s medical ID to receive treatment for a serious sports injury. When chronic problems arose after she had her own insurance, she was denied coverage for treatment because there was no record of her initial injury.
      • Another woman couldn’t get physical therapy following neck surgery because a Miami clinic that she had never visited claimed her insurance benefits had been maxed out.

      Healthcare organizations have a moral obligation to step up their privacy and security efforts to safeguard personally identifiable information (PII) and protected health information (PHI). And legislators need to update their thinking about data breaches and identity theft. It is not just consumers’ wallets and reputations we need to protect, it is the personal wellbeing of that consumer as well. 

      About the Author

      Bob Gregg

      With over 30 years of experience in high technology and software services, Bob joined ID Experts as CEO in 2009. He is particularly interested in the emerging trends involving identity theft and privacy data breaches, with emphasis on healthcare. "Let's keep our private, confidential information just that...private and confidential"

      The Three Certainties of Life: Death, Taxes, and Getting Hacked

      by James Christiansen

      “Nothing is certain except death and taxes.” Or so Ben Franklin is credited with saying. I would add hacking to that list. Today’s cybercriminals possess the skills, sophistication, and technology to hack their way into nearly any system in a way that is virtually undetectable.

      It wasn’t always so. Historically, human error caused data breaches—the loss of unencrypted backup tapes or laptops, for example. Organizations didn’t implement the controls they do now. Now, encryption, firewalls, and other security measures are standard procedure. These are, and will always be, necessary.

      What I see as a whole new threat, however, is the increase in unsecured mobile devices: smartphones, tablets, and the like. Data, although encrypted at the enterprise level, flows out to vulnerable access and distribution points that are hard to control. As the infographic A Decade of Data Breach points out, 88.6 percent of healthcare professionals access patient information with unsecured smartphones. Compounding the problem is the volume of raw data distributed to these devices. The world’s computer servers process 9.6 billion petabytes of information a year.

      Advanced Persistent Threats Are Like Termites

      Hackers can remain in a system indefinitely, a danger known as an “advanced persistent threat” (APT). Cybercriminals slip in below the radar and spread laterally, impacting as many systems as they can. With APTs, it has become increasingly difficult to detect an intrusion, and the average time from initial breach to detection has grown exponentially.

      I liken APTs to termites in your home: You no longer know where they first broke in. You have to remove one board—or one server—at a time to discover the source of the problem. And like termites, the problem is very, very costly. Reputational damage, customer churn, lawsuits, fines, and breach response costs can cost a company millions of dollars.

      The Decade Ahead: Faced with the “Where” Question

      We used to talk about if we get hacked. Then the conversation moved to when we get hacked. Now we should also be asking, where have I been hacked? The stealth of APTs continues, and every organization is going to be faced with the “where” question.

      The security and privacy risks will only increase with the proliferation of devices that can be hacked. In fact, the FDA recently warned the healthcare community of the vulnerability of medical devices to cyberattacks. Endpoint security—granting network access only to devices that meet specific standards—will be the real problem for IT professionals going forward. I shared additional thoughts on the landscape today and the outlook for the next decade, in the article A Decade of Data Breach: Tracking an Evolving Threat.

      About the Author

      James Christiansen

      James Christiansen is Chief Information Security and Risk Officer of RiskyData, an information security and privacy solutions corporation focused on providing clients scalable and cost-effective tools and services to manage their information risk. Prior to joining RiskyData, James was Chief Information Risk Officer for Evantix and CSO for Experian Americas, where he had overall responsibility for information security, providing strategic direction and vision across Experian business units. James joined Experian after serving as Chief Information Security Officer for General Motors, where his responsibilities included worldwide implementation of the security plan for the largest financial corporation (GMAC) and the largest manufacturing corporation in the world. Prior to joining GM, he was SVP and Division Head of Information Security for Visa International, responsible for its worldwide information security program. James has been featured in the New York Times as one of the leaders in information security. He has an MBA in International Management and a BS in Business Management, and is the author of the “Internet Survival Series,” a contributing author of “CISO Essentials,” and the author of numerous industry papers. James has been chair of the IT Fraud Summit, co-chair of the ANSI study of the impact of security breaches on healthcare, and a prominent speaker at prestigious events such as the Business Round Table, Research Board, American Bar Association, American Banker, RSA, BankInfoSecurity, ISSA, and MIS Training Institute. James has more than 25 years of experience in information security and systems management, including network and operating systems management and application development and design, and now meets the significant challenge of providing risk management solutions for RiskyData.

      Data Breaches: Looking Back, Moving Forward

      by Larry Ponemon

      Since the first data breach that generated big media awareness back in 2003, companies have become savvier about the dangers and costs of data breaches. Recent Ponemon Institute research, the 2013 Cost of Data Breach Study, shows companies are doing a better job in responding to the breach incident and in determining the root causes of information losses. C-level executives and boards now realize the costly consequences of material data loss and, hence, appear to be more willing to approve investments in data protection technologies and expert personnel.

      And these investments are paying off: The study found that in 2012, data breaches cost American companies an average of $188 per lost record and $5.4 million per incident, down from $194 per lost record and $5.5 million per incident in 2011.[1]
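      As a back-of-the-envelope check (my own derived illustration, not a figure reported by the study), dividing the per-incident average by the per-record average implies an average breach size of roughly 28,700 records in both years:

```python
def implied_records(cost_per_incident, cost_per_record):
    """Average records per breach implied by the two Ponemon averages."""
    return cost_per_incident / cost_per_record

# Figures from the 2013 Cost of Data Breach Study cited above.
print(round(implied_records(5_400_000, 188)))  # 2012: 28723
print(round(implied_records(5_500_000, 194)))  # 2011: 28351
```

The near-identical implied breach size suggests the year-over-year drop reflects lower cost per record, not smaller breaches.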

      Number of Data Breaches: Trending Up
      While the cost of data breaches has declined, the number of breach incidents has soared. Another Ponemon research study, the Third Annual Benchmark Study on Patient Privacy and Data Security, reveals that 94 percent of healthcare organizations surveyed suffered at least one data breach during the past two years. Several factors contribute to the increase:

      1.  Our research shows that the emergence of insecure mobile devices (including BYOD), cloud computing, virtualization, and other disruptive information technologies substantially increases the risk of material data breaches. Driving this trend are unrealistic consumer expectations of instant access to everything, all of the time. It’s what I called, in a Baseline article, the “consumerization of IT.”

      2.  Another trend is the increase in stealth and sophistication of malicious or criminal attackers both inside and external to the organization. In short, these “modern-day” attackers have an ability to steal the most sensitive and confidential information without detection. As I recently wrote in the Harvard Business Review, a lively international market for logins, passwords, and medical records has sprung into being. Each pilfered name or number might not be worth much on its own, but a theft of millions of records can earn a hacker an enormous profit.

      3.  A final big trend is the emergence of cyberattacks against nations’ critical IT infrastructure (a.k.a. cyber warfare). A flurry of nation-sponsored attacks has already been revealed. Many cyber attackers have banded into government-sponsored syndicates that develop malware that doesn’t even resemble the attack software of five years ago. It’s now much more sophisticated, stealthier, and harder to identify.

      Things Will Get Worse Before They Get Better

      It appears that malicious or criminal attackers, including hacktivists and nation states, have an advantage over today’s defenders of corporate data and IT infrastructure. These bad guys only have to be successful once to cause havoc for governments, companies, and people. Further, many organizations do not have the capability to withstand security exploits and information system compromises. For the longer term, however, I predict that the information security community will rise to the occasion and overcome this imbalance of power through innovations that strengthen our counterintelligence and offensive capabilities. More thoughts on this topic are in the article A Decade of Data Breach: Tracking an Evolving Threat.

       

      [1] “Data breaches cost average U.S. firm $5.4M per incident last year, says Ponemon,” FierceEnterpriseCommunications, June 5, 2013

      About the Author

      Larry Ponemon

      Dr. Larry Ponemon is the Chairman and Founder of the Ponemon Institute, a research “think tank” dedicated to advancing privacy and data protection practices. Dr. Ponemon is considered a pioneer in privacy auditing and the Responsible Information Management or RIM framework.

      Data Breaches: 10 Years in Review

      by Rick Kam

      Over the past 10 years I have seen many organizations experience a breach of PII and PHI. Many companies now realize that breaches are something that can happen to them, not just “the other guy.” This awareness has increased at all levels, from consumers to the executive suite, due in part to legislation like HIPAA, HITECH, Red Flag, and state data breach notification laws that require disclosure and corrective actions.

      We created an infographic to illustrate A Decade of Data Breach: http://www2.idexpertscorp.com/a-decade-of-data-breach/

      The type of data breached has also evolved from PII to now include PHI, specifically health insurance numbers used to commit medical identity theft and healthcare fraud. Every study we see on this topic indicates that the significant value of healthcare data to bad actors—$50 a record on the black market—along with the complexity of securing the healthcare ecosystem, makes it vulnerable to these kinds of crimes.

      “Identity theft will not go away until the issue of identity is solved,” says Robert Siciliano, CEO of IDTheftSecurity and a personal security and identity theft expert. “‘Identity-proofing’ consumers involves verifying and authenticating with numerous technologies, and the flexibility of consumers to recognize a slight trade-off of privacy for security.” 

      According to Siciliano and other leading industry experts, the frequency, severity, and impact of data breaches are expected to escalate; they forecast the top trends in data breach, privacy, and security here.

      On the Horizon: The Next Big Data Breach

      Moving forward, I believe the “next big data breach” will come from healthcare, thanks to the consolidation of millions of EHRs in Health Information Exchanges. The stimulus money that funded the deployment of HIEs in every state is drying up, forcing these exchanges to fund operations in ways that will increase business risk and the potential for large data breaches.

      Jim Pyles, an attorney friend with more than 40 years of experience in health law and policy, put it best: “The electronic health information privacy breach epidemic is an unanticipated ‘game changer’ in that health information can be stolen from anywhere in the world, distributed to an infinite number of locations for an infinite period of time and can cause limitless damage.”

      The Third Annual Benchmark Study on Patient Privacy and Data Security indicates that organizations are not protecting sensitive information as well as they could. Healthcare entities need to operationalize incident response to better respond to data breaches and protect patient privacy.

On a regulatory level, Medicare numbers must be changed from a person’s Social Security number to something unique. Other parts of the healthcare ecosystem have done this to protect patient privacy, and industries such as education have also removed SSNs as identifiers.

Data breaches are a fact of life for organizations. Measures must be taken at all levels, from proactive efforts by consumers to holistic prevention and response strategies by executives and regulators. Together, we can overcome the causes and consequences of the everyday disaster we call data breaches. For more information on the landscape today and the outlook for the next decade, read A Decade of Data Breach: Tracking an Evolving Threat, a Q&A with me; James Christiansen, chief information risk officer at RiskyData; and Dr. Larry Ponemon, chairman and founder of the Ponemon Institute.

      About the Author

      Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He is the chair of the ANSI PHI Project, the Identity Management Standards Panel, and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and a member of the Research Planning Committee for the Center for Identity, part of the University of Texas at Austin.

Don’t Forget to Include: State Requirements in the Data Breach Notification Letter

      by Heather Noonan

      Maryland? California? New York or was it Nebraska? Maybe it was Florida? Keeping data breach laws and regulations organized by state can be a very daunting and confusing task.

      MORE INFO: Data Breach Response "How To" Series

For example, in Massachusetts, the notification letter cannot include the nature of the breach, but Hawaii, Iowa, Michigan, and a multitude of other states require that a description be included. North Carolina requires information directing a person to remain vigilant by reviewing account statements and monitoring credit reports, while Oregon requires information on how to report suspected incidents of identity theft to local law enforcement or the attorney general.

      While reading through each state's mandated law, you will probably find your head spinning. I know I have on more than one occasion.

How do you comply with 50 varying state laws? How do you keep them separated or grouped together? What about the U.S. territories? Puerto Rico has some very stringent regulations that you don't want to leave out. Do you send 25 to 50 different letter versions?

Luckily, the answer is "no". Some would suggest that you could learn from other companies' mistakes and research the lawsuits and the specific state legislation you don't want to get wrong. You could also go to every state's legislative and attorney general pages and keep a very long, extensive list of the requirements.

Unfortunately, both of those options could hinder you and wreak havoc with your timeframe. They also create a lot of room for error, and during a data breach, time is of the essence.

What has served us well here at ID Experts, I can say from experience, is to take the highest common denominator and verify that all state requirements are included. If a specific state is an outlier and requires something outside the norm, you may want to consider sending a separate letter to that state, but you don't need 50 different letter versions.

For a data breach notification letter, you want to understand where your affected population resides and which states were affected. From there, you can identify the most aggressive requirements, the highest common denominator, and incorporate them as a whole.
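As a rough illustration of this "highest common denominator" approach, the sketch below unions hypothetical per-state letter requirements and flags outlier states, such as Massachusetts, whose prohibitions conflict with the combined letter. The requirement names are placeholders, not actual statutory text; always verify against current state law.

```python
# A minimal sketch of the "highest common denominator" approach.
# Per-state requirements and prohibitions are illustrative placeholders.

STATE_REQUIREMENTS = {
    "HI": {"description_of_breach", "toll_free_number"},
    "IA": {"description_of_breach", "credit_agency_contacts"},
    "NC": {"description_of_breach", "vigilance_advice", "credit_agency_contacts"},
    "OR": {"description_of_breach", "report_to_law_enforcement"},
    "MA": {"toll_free_number"},
}

# Elements a state forbids (e.g., Massachusetts bars describing the breach)
STATE_PROHIBITIONS = {
    "MA": {"description_of_breach"},
}

def plan_letters(affected_states):
    """Union the requirements for all affected states, and flag states
    whose prohibitions conflict with the combined letter and therefore
    need their own separate version."""
    combined = set()
    for state in affected_states:
        combined |= STATE_REQUIREMENTS.get(state, set())
    outliers = sorted(s for s in affected_states
                      if STATE_PROHIBITIONS.get(s, set()) & combined)
    return combined, outliers

combined, separate = plan_letters(["HI", "NC", "OR", "MA"])
print(sorted(combined))  # everything any affected state requires
print(separate)          # states needing a dedicated letter version
```

Here the combined letter carries every element any affected state requires, while Massachusetts would receive its own version because its prohibition conflicts with the combined content.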

I've been writing data breach letters for many years now, and I'm still shocked and surprised at how some states can be on the forefront of privacy and security while others have barely made a decision.

With all this said, what matters most is people's personal information. These laws, while often different, are making a strong impact and starting a positive change that is definitely overdue.

      MORE INFO: Many factors complicate data breach assessment and reporting

      About the Author

      Heather Noonan

Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise in state and federal regulatory requirements and best practices. She is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of the company, including informational webinars and blog discussions. Heather has a Bachelor of Science specializing in Business Communication and has over 15 years of experience in client customer service, with 10 years specifically in project management.

      The Next Record Breaking Breach

      by Jeremy Henley

There has not been a record-breaking breach for a while now, unless you count the little issue the NSA has in dealing with its leak(s). I am referring to the rest of the corporate world that uses PII and PHI daily for business purposes; it has not had a record-setting loss for quite a while. Does this mean all of the privacy and security risks and issues have been solved? I don’t think so.

      READ MORE: Many factors complicate data breach assessment and reporting

With the final HIPAA rule now in place and the compliance deadline of September 23, 2013, many organizations are scrambling to be compliant. However, I find it hard to believe that so many healthcare entities and ALL of their business associates will become and stay compliant. Remember, compliance is a moving target. I think the lack of compliance for some may be the source of the news-grabbing records related to data breaches.

Looking at the healthcare space specifically, we know that enforcement is increasing, and we have heard from regulators at several conferences over the last few months that fines will be increasing. They have said they will continue to look at events and the number of violations per event as a factor in how the value of a fine is calculated. Some of the sources I have been speaking with recently mention fines north of $10M and maybe even $15M. I believe that would be a new record for fines in the healthcare data breach space and is sure to get folks’ attention.

Being and staying compliant is difficult; even the federal regulators acknowledge it is a big chore. Having best practices around privacy and security, whether they are good or not, means nothing if they are not documented. We all know that avoiding a data breach is next to impossible, and having a breach can expose your current level of compliance without giving you time to prepare.

So let’s review some of the basic things that need to happen to have the best chance of avoiding or minimizing the impact of these fines. Regardless of how your organization stacks up, you need to create a baseline by completing an assessment of your current level of compliance. This is your starting point, which will be a good thing regardless of the findings, not something that only documents how far out of compliance your organization actually is. Documenting that you have identified the issues and are systematically working to improve them is what a compliance program is all about. If you know in advance that the assessment report may expose past history that you would not like outsiders knowing about, then have legal counsel involved in the process.

When a regulator comes knocking on your door post-breach, having this assessment complete and the documentation in your pocket will likely change what happens next, regardless of your actual level of compliance. You will want to make sure you can justify why you are not compliant, and a well-run assessment will document these issues. If it would be hard for regulators to justify a fine for a violation, they will more than likely take the limited staff they have and knock on another door. If your organization is the one that cannot prove what it is doing from a privacy and security perspective, then watch out, because the target for the new record might be on your back.

      MORE INFO: Risk Analysis and Customized Compliance

      About the Author

      Jeremy Henley

Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for healthcare privacy and compliance and brings 11 years of sales and leadership experience to the ID Experts team.

      The Most Common Ailments of People over 55?

      by Rick Kam

Some would suggest arthritis or perhaps forgetfulness. Unfortunately, one of the more common issues “seniors” face is medical identity theft.

      Medical ID Theft

I was on an FTC panel last month in DC, where we discussed the potential risk of medical identity theft to seniors. There are a lot of reasons for this increased risk, which we shared on the panel.

      LEARN MORE: 8 ways to fight medical ID theft

One of the more interesting aspects of the problem we discussed was the need for consumers, in particular seniors, to help identify medical identity theft and fraud and report it to their providers and healthcare payers.

I have started using the phrase “deputizing the consumer” to describe helping to fight medical identity theft and medical fraud. I saw a similar phrase used recently in a recap of the recent Gartner Summit, where the author, Tom Field, summarizes one of the key themes of the summit as “deputizing the user.”

There is one other reference I have seen in the past few days: a bill introduced in the House on June 10, 2013, by Congressman Peter Roskam of Illinois to combat billions of dollars of fraud and abuse in Medicare and Medicaid. Section 103 of the bill specifically addresses improving the Senior Medicare Patrol and fraud-reporting rewards.

What this all means to me is that there is something important behind the idea of mobilizing the consumer to fight fraud. Learn more about this in an alliance we are launching next month, called the Medical Identity Fraud Alliance.

      About the Author

      Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He is the chair of the ANSI PHI Project, the Identity Management Standards Panel, and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and a member of the Research Planning Committee for the Center for Identity, part of the University of Texas at Austin.

      Mobile Devices – What To Do If Your Vendors Implement BYOD

      by Brad Keller

This article is reprinted with permission from Shared Assessments.

Driven by employee demand and the perception of better efficiency, the use of mobile devices in the workplace continues to grow. So not only must today's IT security managers determine how to manage these devices in their own environment, they must also determine whether their third-party service providers are allowing employees to access their data and/or systems through mobile devices. This is particularly important if your vendors follow the Bring Your Own Device ("BYOD") approach to mobile device implementation.

Unfortunately, only the most recently executed vendor contracts tend to address the issue of mobile devices. Even if your vendor agreements do cover the use of mobile devices to access your systems and data, you must be able to determine whether your vendor can meet your contract's requirements for a secure mobile device environment.

The foundation for effectively controlling mobile devices, like almost all other IT services, is the development and implementation of a thorough and easily understandable set of policies and guidelines. Keep in mind that what you are looking for is how your vendor allows their employees to use mobile devices to access your data and/or systems. How they choose to allow employees to perform other tasks unrelated to the execution of their contractual obligations (like accessing company email accounts) may reveal their understanding of mobile device risk, but it is not directly relevant to how they discharge their obligations to protect your data and systems. When assessing your vendor, you should determine whether their mobile device policy contains at least the following provisions:

      • Security awareness training/education
      • Acceptable use
      • Operating system security
      • User responsibilities
      • Access control
      • Data handling
      • Individual responsibility if co-mingling personal and organization data on the mobile device
      • Constituent accountability
      • Secure disposal of device at end of life
      • Vulnerability management
      • Responsibility for ensuring mobile device operating system is updated
      • Responsibility for ensuring mobile device applications are updated
      • Reporting information security incidents in the event of loss or theft
      • Prohibit sharing a mobile device with other users, including family and friends
      • Ownership of data on the device
      • Legal ownership and rights of the mobile device
      • Specific actions that organization may take in the event of a lost/stolen or compromised mobile device (e.g., remote disable, remote wipe, confiscation)
      • Data sanitization of (organization) data, settings and accounts on the mobile device at end of life
• Creation and use of mobile hotspots on an organization's premises (BYON - Bring Your Own Network)
      • Consequences for non-compliance with mobile device policy
      • User authentication on the device
      • Device encryption

While a vendor may be unwilling to provide you with the full content of their mobile device policy, they should be willing to provide the policy's table of contents or other documentation confirming the areas the policy addresses. Ultimately, the adequacy of your vendors' mobile device policies, and the provisions they should include, will be determined by what your vendors allow their employees to do with mobile devices, your company's risk tolerance, and, to a large extent, the regulatory environment in which you operate.
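As a rough sketch of how such an assessment might be tracked, the snippet below compares a hypothetical vendor table of contents against a shortened, illustrative subset of the provisions listed above; the provision names are placeholders, not a normative checklist.

```python
# A minimal sketch, assuming you have extracted the section headings of a
# vendor's mobile device policy (e.g., from the table of contents they
# provide). The required provisions are an abbreviated, illustrative subset.

REQUIRED_PROVISIONS = {
    "security awareness training",
    "acceptable use",
    "access control",
    "data handling",
    "device encryption",
    "user authentication",
    "remote wipe",
    "secure disposal",
}

def assess_policy(policy_sections):
    """Return the required provisions missing from a vendor's policy."""
    covered = {section.strip().lower() for section in policy_sections}
    return sorted(REQUIRED_PROVISIONS - covered)

# Hypothetical vendor table of contents
toc = ["Acceptable Use", "Access Control", "Device Encryption",
       "Data Handling", "Remote Wipe"]
missing = assess_policy(toc)
print(missing)  # gaps to raise with the vendor
```

In practice the real test is the substance behind each heading, but even a coverage check like this quickly surfaces which areas to question the vendor about.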

      About the Author

      Brad Keller

      Santa Fe Group Consultant and Shared Assessments Program Director, Brad Keller, has more than 25 years of experience developing and leading risk management and third-party risk assessment programs. Brad is responsible for the development of the Shared Assessments Program’s Tools and key partnerships.

      Things you should know before going live with a Complete Data Breach Response Strategy

      by Heather Noonan

      So you find yourself in the debacle of a data breach? Where in the world do you begin? Your management team is sending you emails left and right, meetings have started to run amok, and you haven’t had lunch in the last two days.

      MORE INFO: Data Breach Response "How To" Series

      Data breaches can be full of politics, high energy, and a lot of miscommunication. If you break it down to the basics, communicate, and make some smart decisions from the beginning, you are guaranteed to see some light at the end of the tunnel.

Data breaches are also highly regulated under state and federal guidelines, and the requirements can be rather confusing. Similar to a crisis communication strategy, there are some main things you need to consider before you pull the trigger.

1. Affected Population - Were forensics completed? Do you know the true population of how many people were affected? It’s highly recommended that you complete digital forensics and have a final population before you begin mailing letters. All too often, another 1,000 people will be found, or in some cases, what you thought was your affected population wasn’t affected at all.
      2. Resources - Who will be the decision makers and who will be the administrators? Who will handle the mailing, the multitude of phone calls, the concerned and angry callers, and a possible investigation?
      3. State and Federal Requirements - Whether you fall under state guidelines or HITECH, you will run into many regulations with specific guidelines and timeframes. Pay close attention to these. They aren’t there just for a warning.
4. Forms of Notification - Most state and federal laws require notification in writing and by first-class mail. You also need to ask who will handle the mailing. Will you hire a third-party vendor to manage it? Do you have the necessary resources available?
      5. Contents of Notification - What happened and when? What personal information was lost? What are you doing to protect personal information from further unauthorized access? Do you need to include information for the consumer credit reporting agencies? Instructions on how to place a fraud alert or a security freeze? Consider everything that needs to be in the notification letter and take into account state and federal requirements.
      6. Contact Information - A telephone number for callers if they need further information and assistance.
7. Notification to Regulators - State attorneys general, enforcement agencies, and the consumer credit reporting agencies all have specific deadlines and requirements for when they require notice. Remember, you not only have to notify the affected population, but other state and federal regulating bodies too.
8. Notification to Media - Will you be issuing a press release? Do you need a public relations or marketing firm to assist? What are you legally required to say, and to whom will you submit the release?
9. Notification to Website - Do you fall under the requirement to post notification on your website? If you do, how much information is necessary?
      10. Document - Document everything. You never know when you will need to refer to certain specifics and the decisions that were made.

      Okay, now that you have those 10 steps under control, move forward and good luck!
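The ten readiness steps above can be sketched as a simple go/no-go checklist before mailing begins; the item names below are abbreviations of the list above, not official requirements.

```python
# A minimal sketch of the ten readiness items as a go/no-go checklist.
# Item names are abbreviations of the list above, not official requirements.

CHECKLIST = [
    "affected population finalized (forensics complete)",
    "resources and decision makers assigned",
    "state and federal requirements reviewed",
    "forms of notification chosen",
    "contents of notification drafted",
    "contact number staffed",
    "regulators notified on schedule",
    "media plan decided",
    "website notice posted if required",
    "everything documented",
]

def ready_to_notify(completed):
    """Notification should not go out until every item is checked off."""
    missing = [item for item in CHECKLIST if item not in completed]
    return len(missing) == 0, missing

# Example: nine of ten items done; documentation is still open
ok, todo = ready_to_notify(set(CHECKLIST[:9]))
print(ok)
print(todo)
```

The point of the structure is simply that the go/no-go decision is a function of the full list, so no single item, least of all documentation, can be quietly skipped.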

      MORE INFO: Experts highlight top data breach vulnerabilities

       


      About the Author

      Heather Noonan

Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise in state and federal regulatory requirements and best practices. She is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of the company, including informational webinars and blog discussions. Heather has a Bachelor of Science specializing in Business Communication and has over 15 years of experience in client customer service, with 10 years specifically in project management.

      What I learned at the ID 360 event in Austin

      by Christine Arevalo

      I spent a week at The Center for Identity’s ID360 event in Austin, TX last month. What an amazing event this proves to be year after year! 

ID360 is a community of participants interested in fostering research, development, and implementation of innovative solutions to meet current and future challenges in identity theft, fraud, misuse, and detection.

ID Experts has been a founding partner of the Center for Identity for five years now, and this is one of the events I really look forward to. From the great weather to the caliber of speakers on a wide array of subjects, all focused on the topic of identity, what’s not to like? I’m fond of saying, “It’s where all the identity geeks get together annually.”

      Some of the fascinating tidbits I picked up and would like to share:

      Social Implications of Predictive Modeling

Fascinating implications arise from Google’s use of search terms to “predict” the spread of the flu. Can search query trends provide the basis for an accurate, reliable model of real-world phenomena?

      “Each week, millions of users around the world search for health information online. As you might expect, there are more flu-related searches during flu season, more allergy-related searches during allergy season, and more sunburn-related searches during the summer.”

Also relating to predictive modeling, here is an interesting article from Forbes I somehow missed: How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did.

This certainly makes you pause to consider privacy implications. Suzanne Barbur, Executive Director at the Center, made a great observation about the ongoing conflict we have pertaining to privacy, in particular defining and agreeing on what it means for our customers. She noted that what consumers really desire is CONTROL. I would add that the degree of control consumers desire varies based on the type of information being shared, and with whom.

Other interesting statistics, presented by Bryan Hjelm, VP at CSID:

      • Small businesses are still a major target for attacks: more than 50% of targets in 2012 were companies with 2,500 or fewer employees.
      • Companies still think they are safe: 77% think they are safe from attack, but only 83% of those polled have a formal cybersecurity plan.
      • Healthcare is the sector most significantly affected by data breach: of reported breaches in 2012, 36% were in healthcare, more than twice that of any other sector.

      Final note:

One speaker made an appeal to those of us in product development within the industry to strive to deliver “real” products that add real value to consumers. As the market continues to shift and consumer needs change, it’s important for the integrity of the identity marketplace that we heed this warning. He also asked that we do the industry a favor and be more responsible with marketing and billing practices!

      About the Author

      Christine Arevalo

Christine is a founding employee of ID Experts and leads industry initiatives around healthcare identity management. She has experience managing risk assessments, complex crisis communication strategies, and data breach response for ID Experts clients.

      Analyzing the US HIPAA Legacy and Future Changes on the Horizon

      by Christine Arevalo

The US Department of Health and Human Services issued the long-awaited final omnibus rule under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) on January 17, 2013. This ruling set a federal-level baseline for US healthcare privacy.

      In a recent Data Protection Law & Policy article (Vol. 10, Issue 2) analyzing HIPAA’s legacy in light of future changes, Kirk Nahra, partner at Wiley Rein, LLP, reviewed HIPAA’s beginnings, subsequent rulings to fill in the gaps, and concerns going forward. 

He noted that while HIPAA regulations have been the primary driver of privacy protection for a decade and provide the foundational principles in most situations, even these rules reflect inconsistent internal approaches and often provide little assistance or overall confidence in more difficult situations.

He reflects that the current rules do not control a wide variety of situations involving healthcare privacy, which other laws, particularly state laws, control, or which, in fact, no law controls. He succinctly states that with each new regulation and law we see “a movement towards more confusion and controversy, rather than less.”

Nahra provides a helpful historical background for HIPAA by illustrating that for many decades healthcare privacy protection in the U.S. was driven exclusively by professional ethics and a myriad of state laws, with no consistent federal baseline. This left gaps in application and much confusion. When the HIPAA era began with the passage of the act in 1996, it focused on ‘portability’ - the idea that individuals could take their health insurance coverage from one employer to the next without pre-existing health conditions acting as an impediment to job transitions.

When Congress passed HIPAA, it also included other healthcare topics, including substantial funding for an extended fight against industry fraud and the move to electronic health records (EHRs). Nahra posits that privacy concerns around EHR implementation prompted HIPAA’s subsequent Privacy and Security Rules, and notes that these rules limited applicability to “covered entities” - such as doctors, hospitals, and health insurers who might be participating in these standardized transactions. Hence, a large number of entities that obtain or use healthcare information are not within the scope of these rules, such as consumer-facing entities, many healthcare websites, life and disability insurers, employers in their employment role, etc.

He explains that while covered entities are core participants in the industry, they rely on vendors to provide services, many of which involve patient information. The limitation referenced above led HHS to develop the concept of “business associates” - entities that provide services to the healthcare industry where the performance of those services involves the use or disclosure of patient information.

Nahra further explains the confusion with the business associate rule by noting that because HHS had no direct jurisdiction over these “business associates,” it imposed an obligation on covered entities to implement specific contracts with their vendors that would create contractual privacy and security obligations. Failure to do so would mean a violation of HIPAA rules and a breach of contract, but would not subject the business associate to government enforcement, because the associate was not regulated under the HIPAA rules. This confusion has existed since the inception of the HIPAA Privacy Rule in 2003.

Nahra brings us to the present with round two of the HIPAA regulations, driven largely by Congress, which are only beginning to be reviewed, analyzed, and implemented. After almost four years, the Department of Health and Human Services has finally released its omnibus HIPAA/HITECH regulation, implementing changes to the HIPAA Privacy, Security, and Enforcement Rules, as well as the interim final regulation on breach notification and certain changes to the Privacy Rule required by the Genetic Information Nondiscrimination Act (GINA). The regulation was published in the Federal Register on January 25, 2013.

The recent changes result from the 2009 passage of the HITECH Act. According to Nahra, the “schizophrenic nature” of the act has been well documented: Congress desired to incentivize - meaning pay - healthcare providers to implement EHR systems, decided it would impose new privacy compliance obligations on those who chose to use EHRs, and then created a new set of privacy obligations for everyone else, unconnected in any way to the use of these EHRs.

      Nahra concludes that this statute “fixed” one of the key gaps of the original legislation and rules by applying the enforcement reach of HIPAA to not only covered entities but their 'business associates' as well.  It increased the available penalties for HIPAA violations, cut down on permitted marketing, and modified and expanded certain individual rights.

Nahra encapsulates with a few final points. First, even with its recent expansion, HIPAA is still not a general medical privacy law, and while its scope has broadened, the protections still depend on where healthcare information starts - with a healthcare provider or health plan. He argues this leaves enormous gaps in protection, particularly given recent developments that encourage consumer-centric involvement in healthcare and provide the technology to make this goal a reality. Secondly, although the legislation does not turn business associates into covered entities, it does impose - for the first time - direct accountability on business associates, with potential civil and criminal liability for failure to meet these requirements. And finally, aside from some modest clarifications, the HITECH law did not fundamentally broaden the overall HIPAA scheme, nor did it address in any way the tensions between HIPAA and the thousands of applicable state laws.

      Highlighting concerns for the future, Nahra claims the structure leads to a variety of ongoing tensions that affect the efficiency of the healthcare system, the effectiveness of individual privacy and the operations of the overall healthcare system, including the systemic benefits of large scale data analysis.

The concerns are mainly:

      1. Single rule vs. Multiple Rules - federal floor versus individual, more stringent, state laws

      2. Research - HIPAA rules create significant limitations on how research can be conducted and have been heavily criticized by many in the research community

      3. Technology vs. Security – Balancing technological advances with security in relation to breaches, etc.

      4. Health Information Exchanges – Exchanges being driven by state law privacy concerns that dictate what information can and cannot be included

Nahra concludes by stating that the healthcare privacy model in the U.S. is a work in progress, and the progress is slow while the movement of technology is fast. However, he offers that HIPAA works most of the time in most situations, that more stringent state laws fill the gaps when applicable, and that one solution would be to allow states to pass more stringent future laws tailored to the HIPAA model. “A better healthcare privacy system would in fact benefit individuals, healthcare business and the system on the whole, but we are a long way away from solving this wide variety of issues.”

      About the Author

      Christine Arevalo

Christine is a founding employee of ID Experts and leads industry initiatives around healthcare identity management. She has experience managing risk assessments, complex crisis communication strategies, and data breach response for ID Experts clients.

      Are You a HIPAA Business Associate? It isn’t as Simple a Question as it Sounds.

      by Doug Pollack

As we enter summer this year, it is just a few short months until September 23, 2013. What is special about that date? That is when HIPAA business associates - those organizations that work with healthcare providers, health plans, and others and are exposed to sensitive patient data (protected health information, or PHI) - are required to comply with new privacy, security, and breach notification rules from the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR), known as the HIPAA Omnibus Final Rule.

      MORE INFO: HIPAA Omnibus Final Rules Playbook: Business Associate Edition

So, with this date fast approaching, do you know whether your organization is a HIPAA business associate? And do you know all of the organizations you work with that are also HIPAA business associates? It may not be as simple as you think (or hope). But first, do you really need to care?

      The answer to this question is a definitive “yes”. If you are considered a business associate under HIPAA and the HITECH Act, you have substantial obligations beginning in September to ensure the privacy and security of patient health information, and you also have notification obligations if you have a “breach” of such information. If you were investigated by OCR and found to be “neglectful” in complying with these provisions under the HIPAA Omnibus Rule, you may find your organization subject to fines, penalties, and corrective action plans, which can be financially substantial and operationally onerous.

      So let’s look at what defines a business associate. On the HHS website, they define a business associate as “a person or entity that performs certain functions or activities that involve the use or disclosure of protected health information on behalf of, or provides services to, a covered entity.” Under the Final Omnibus Rule, the definition is further explained and clarified.

      Thanks to the Godfrey & Kahn Law Firm for their description of clarifications made in the Final Rule:

      “Under the Final Rule, a “business associate” is generally a person or entity that creates, receives, maintains, or transmits protected health information (PHI) in fulfilling certain functions or activities for a HIPAA-covered entity. Health information that is created or received by a covered entity, identifies an individual, and relates to that individual's physical or mental health condition, treatment, or payment for health care is considered PHI when it is transmitted by or maintained in any form of medium, including electronic media. Notably, the new definition clarifies that "business associates" include entities that "maintain" PHI for a covered entity, such as a data storage company.

      The Final Rule also clarifies the definition of a "business associate" by expressly including health information organizations, e-prescribing gateways, and other persons that provide data transmission services with respect to PHI and require "routine access" to PHI. Additionally, as further explained below, the new definition of "business associate" provides that certain subcontractors of business associates are also "business associates." Due to the significance of the new rules and the imposition of direct liability on business associates under HIPAA, entities which are unsure of whether they qualify as a business associate should clarify with legal counsel.”

      So the healthcare world that we are about to move into isn’t as simple as the one in which we are today. How so? Well first, HIPAA covered entities, those organizations such as healthcare providers and health plans, must revisit their inventory of business associates, and based on the Final Rules, see if they have other organizations that would be considered business associates based on the clarified definitions. If so, they are obligated to have business associate agreements with those organizations.

      Then second, if your organization currently works with HIPAA covered entities and has a business associate agreement with them, you would be well served to investigate and understand the new obligations that you now carry under the Final Rules. It is fairly likely that your organization is either unaware of or unprepared to comply with the provisions of the Privacy Rule, the Security Rule and the Breach Notification Rule. There are specific actions that you must take to consider yourself in compliance. Take a look at our Final Omnibus Rule Playbook for an outline of the steps you should consider.

      Third, if your organization is currently a HIPAA business associate, you now may have subcontractors that you work with that are also considered business associates under the Final Rules. You have obligations to execute a business associate agreement with them. And they have obligations to comply with the new Rules. And in some cases, these subcontractors may not even be aware that they are now considered business associates. Whether they know it or not, they do have new obligations. So hopefully they are paying attention.

      And that brings us to our fourth item. If your organization works in any way with healthcare organizations or healthcare patient data, you should get a legal opinion as to whether you could be considered a business associate under the new Rules. Waiting for your covered entity or upstream business associate to notify you of your obligations and provide you a business associate agreement to sign may not be the best path. They may not recognize in a timely manner that your organization is, in fact, a business associate. You would be well served to be proactive in this regard and find out for yourself if you are considered a business associate under the new Rules, and if so, learn more about your obligations.

      So hopefully in reading this, you realize that there is a lot to do and consider this summer, before we reach September 23, 2013. If you require any further motivation, note that OCR has recently completed an audit program in which it audited a collection of HIPAA covered entities as to their level of compliance with HIPAA standards. The results were really not encouraging. You can check out the presentation by Linda Sanchez, OCR Senior Advisor, Health Information Privacy and Lead, HIPAA Compliance Audits here. In this presentation, she notes that in the next phase of audits, HIPAA business associates will also be included.

      So think about it. If you received a letter from OCR notifying you that your organization is a HIPAA business associate and that you were selected for a HIPAA privacy and security audit, do you think you’d be ready?

      About the Author

      Doug Pollack's avatar
      Doug Pollack

      CIPP, MBA. With over 25 years of experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      How to manage employee-based data breaches?

      by Heather Noonan

      I will keep this blog relatively short since it follows the same guidelines and recommendations as “How to inform internal teams of a data breach?”.

      MORE INFO: Data Breach Response "How To" Series

      Your employees are vital to the growth of your company. Their loyalty, happiness, and well-being are crucial. Look at companies like Google. They cater to every need of their employees. Why? Well, there is a very good reason for it. I recommend you Google it.

      Employee perceptions and satisfaction will be swayed during a breach, whether it is a security break-in or a system hacking. Employees fear not only for their personal information and job security, but their trust in the company is also shaken.

      My main recommendation is to be very transparent with your employees, reach out to them, and take care of them. I know this may sound like a no-brainer, but you would be VERY surprised how many companies handle a breach as just another day. If employees’ personal information was affected, offer them some type of credit monitoring or recovery service. You want to show them that you regret that this incident happened and that the company will go above and beyond to mitigate it and make sure it never happens again. You want to regain their trust not only in you, but in the company.

      1. Be transparent with employees. Provide what you know and the details you are still working through.
      2. Provide full disclosure. Were police or federal entities involved? Did you catch the perpetrator?
      3. Provide services for affected employees. Provide a credit monitoring or recovery service in case employees are affected by true identity theft. Nothing makes an individual more upset than to have their information stolen and while they sit there helpless, you continue on with your day. Help them.
      4. Provide additional information and updates when available.
      5. Open door policy. You want to avoid rumors and employee gossip.
      6. Be patient and understanding with the disgruntled group. You will always have angry employee(s). Work with them one on one and truly listen to them. Sometimes all they need to do is vent and it may have nothing at all to do with the breach.
      7. Be leery of email notification. Information such as this is best handled in person at a company forum or in a comfortable environment where employees can ask specific questions. Email notification carries many risks, including emails being forwarded to non-employees. Remember, we live in the world of social media. In an instance like this, Facebook and Twitter are not your friends.
      8. Have managers available for employees with additional questions.
      9. Educate employees on state and federal guidelines. Depending on the type of breach (a healthcare breach, for example) and the states involved, you may need to explain that the incident could be investigated.
      10. Remind employees that this is common and this too shall pass.

      Additional helpful tips can be found at “How to inform internal teams of a data breach?”.

      MORE INFO: How to inform your internal teams that your company has had a data breach?

      About the Author

      Heather Noonan's avatar
      Heather Noonan

      Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices. She is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science, specializing in Business Communication, and has over 15 years of experience in client customer service, including 10 years in project management.

      Harm Standard: Gone But Not Forgotten? New Factors Mimic Current Breach Regs

      by ID Experts

      This article was reprinted with permission from the Report on Patient Privacy

      Although covered entities (CEs) have been required since 2009 to notify affected individuals and the government, when appropriate, of breaches of unsecured protected health information (PHI), the so-called “harm” standard that triggers notice no longer exists under the new final regulations. Or does it?

      Are CEs really starting over when it comes to assessing whether an incident is a reportable breach under the final regulations issued on Jan. 25, which have a compliance deadline of Sept. 23?

      Just how hard CEs will have to work in the next few months to implement the new regulations on breaches may well depend on how thoroughly they absorbed the 2009 interim final regulation — including its chatty preamble. Another factor is whether they have a detailed process in place already that they use to assess whether incidents have to be announced, or if they’ve been just kind of winging that part of it.

      “If the CE had decided to look at the breach notification rule as a serious matter, and has attempted to comply,…used the interim final rule and followed the spirit of the rule, you are in pretty good shape,” Mahmood Sher-Jan, vice president of product management for ID Experts, a breach prevention, assessment and mitigation firm based in Portland, Ore., tells RPP.

      To be sure, there’s one big difference between the old and new breach regulations: The new regulation requires a presumption that an incident is a reportable breach, unless the CE’s analysis proves the data probably haven’t been, and won’t be, misused (RPP 2/13, p. 1). And while the “harm” standard has been replaced with another that relies on a “low probability of compromise,” there’s much that’s the same, such as three exceptions in the old rule that are also found in the new rule, with the one dealing with limited data sets now omitted.

      Harm: New Regs Pose Few New Problems

      Sher-Jan and other privacy experts point out that the preamble to the 2009 regulation used some of the exact language to describe the analysis based on risk of harm that now appears in the new regulation in the form of four factors under the “low probability” standard that CEs, and now business associates, must consider to determine if a breach meets the legal definition of an incident requiring notice.

      As in the old regulation, the new regulation states, “Breach means the acquisition, access, use, or disclosure of protected health information in a manner not permitted under subpart E of this part which compromises the security or privacy of the protected health information.” The old regulation also said the following, which is now gone from the new regs: “For purposes of this definition, compromises the security or privacy of the protected health information means poses a significant risk of financial, reputational, or other harm to the individual.”

      In its place is the following, which describes the four new factors to be used instead of the harm standard:

      “(2) Except as provided in paragraph (1) of this definition, an acquisition, access, use, or disclosure of protected health information in a manner not permitted under subpart E is presumed to be a breach unless the covered entity or business associate, as applicable, demonstrates that there is a low probability that the protected health information has been compromised based on a risk assessment of at least the following factors:

      “(i) The nature and extent of the protected health information involved, including the types of identifiers and the likelihood of re-identification;

      “(ii) The unauthorized person who used the protected health information or to whom the disclosure was made;

      “(iii) Whether the protected health information was actually acquired or viewed; and

      “(iv) The extent to which the risk to the protected health information has been mitigated.”

      Lisa Sotto, who heads the privacy and information management practice for the New York-based law firm of Hunton & Williams, LLP, says the health care community can make a “seamless shift” to the new standard and the assessment process. “I don’t think it matters” that the standard was changed, she says. “When you are faced with the breach, you conduct an analysis based on the relevant requirements.”

      What CEs are doing now, she says, “is pulling out their incident response procedure and revising it to remove the ‘risk of harm’ and inserting the new standard.”

      Sotto termed it “good and bad” news that the language from the preamble of the 2009 regulation has been reframed into the four factors now present in the new regulation. “Very good in that we have a clearer description” of what goes into a risk analysis, she says, “but the negative, I venture to guess, [is] that those will be the only ones to be considered.”

      Many CEs already have experience complying with state data breach laws, many of which include similar standards, and allow for, or even require, mitigation, she says.

      Sher-Jan cautions that “no single factor should determine” whether a reportable breach has occurred or not, and he warns CEs against a “tendency to drop to factor three, if it was viewed or acquired — ‘Yes’ — then it’s a breach.”

      “Mitigation will be the biggest question in my mind,” he says. “The final rule says ‘if you take the proper steps’…what are the proper steps? I think that will be an area” of need for greater clarification by OCR.

      It will be important for CEs and BAs to develop mitigation strategies, since the opportunity to engage in such actions is now spelled out in the regulation, he adds. OCR, in the final rules, “recognized that there can, and should, be mitigation. Even though the word ‘harm’ has been removed, there is an obligation to minimize the adverse effect,” Sher-Jan adds. “Ensuring that the PHI is secured or is no longer misused or abused is part of protecting the patient,” he says.

      Regardless of where CEs are in their efforts to comply with the new four-factor standard, Sher-Jan says they need to be certain that whatever they do is part of an overall breach management program, with consistent policies and procedures, “metrics” and a process for detecting potential breaches. “How many are you [seeing]? How are you classifying them — breach or not? Are those going up or down?” he asks.

      Admitting he has a “bias toward automation,” Sher-Jan stresses that while his company has a product that will provide assistance with compliance and documentation of an analysis, the ultimate decisions are up to the CE. If investigated, “You can’t say, ‘a tool told me what to do.’”

      ID Experts’ flagship product, RADAR, is a software decision-support program that “plots an incident’s risk level on a heat-map using a proprietary incident risk index.” The program “takes into consideration the severity of the incident, as well as the financial, reputational, and medical risk levels associated with the exposed [information],” and compares the resulting score against federal and state breach notification laws, he says.

      Sher-Jan says the weight assigned to the various factors may be “adjusted” if necessary based on the forthcoming guidance, which he hopes will “give us some scenarios” for when breach notification is required.

      So far, there is no consensus on whether the new regulation will result in more or fewer breach notifications. Some organizations have made public notification of incidents, along with how they disciplined employees, in cases that some saw as marginal.

      Of course, some notifications might have had less to do with a strict interpretation of the harm standard and more to do with a CE’s desire to set an example for its workforce, or fear that the Office for Civil Rights could conclude the CE erred in not treating an incident as a breach, perhaps subjecting the organization to more calamitous actions than the actual breach would.

      “In many cases CEs just notify whether there is knowledge [of a breach] in or outside. There is often a weighing in favor of notification because there can be less risk associated with it,” Sotto says, as opposed to later being second-guessed or investigated and then penalized if it is determined notice should have been made.  

      “We have handled over 900 data breaches and everyone is unique. Everyone has to be separately [assessed],” Sotto says. Of these, entities have ended up notifying affected individuals more than 90% of the time, a percentage she does not expect will change.

      When the incident is “murky,” entities tend to notify, Sotto says. Circumstances in which they might not include when the PHI was sent “to a single trusted partner, maybe another CE,” when it involved “innocuous data — name, address” that is sent, and when a valid affidavit is obtained attesting to the return or secure destruction of the data, she says.

      The new standard “will be a big deal,” says Jeff Drummond, a partner in the Dallas office of Jackson Walker LLP, who adds that, of the new changes in the rule, this will have “definitely the biggest impact.”

      He disputes the final rule’s assertions that no breaches will be reportable under the new regulation that aren’t currently reportable, and its premise that the new standard is more exacting than the 2009 harm standard.

      “The new rule is no more ‘objective’ — or less ‘subjective’ — than the old rule. It’s still a judgment call,” says Drummond, who predicts an uptick in reported breaches.

      “For anyone with a possible breach incident that is using the new standards, unless you meet one of the three statutory exceptions, it will be very, very difficult to come to the conclusion that there is no reporting requirement,” he says. “This is very troubling, potentially, since something as little as a breach of the minimum necessary standard could (should, will) require notification to affected individuals.”

      “We may see a spike,” Sher-Jan agrees, “but I don’t think it will be [among those] who were already compliant,” but among “people who were really on the fence [in the past], who didn’t follow the rules before.”

      “Maybe they had a process that was far more subjective” than that spelled out in the new final rule, he adds. Breach notification hasn’t fully matured, Sher-Jan says, adding that more time is needed now that the four factors are in place for that process to continue, and for some, to perhaps get underway for the first time. “Breach is still in its teenage years,” he says. “It may get a little more rambunctious before it settles down.”

      Contact Sher-Jan at mahmood.sher-jan@idexpertscorp.com, Sotto at lsotto@hunton.com and Drummond at jdrummond@jw.com.

      About the Author

      ID Experts's avatar
      ID Experts

      Cyber Risk & Privacy Liability Forum 2013

      by Jeremy Henley

      Well, it’s that time of year again when all the key players of the cyber liability insurance world arrive in Philadelphia. Net Diligence works closely with HB Litigation Conferences to bring more than 300 attendees to the Cyber Risk & Privacy Liability Forum on June 6-7, which is sure to be another great event.

      MORE INFO: Market for healthcare data breach insurance growing rapidly

      We can always count on an update on the litigation surrounding data breaches. This session usually has a great back and forth between panelists, who are generally attorneys that would be opposing counsel in these kinds of legal matters. It is also a great opportunity to learn how the plaintiffs’ attorneys are making their cases. This is great research for keeping our strong track record of not having clients sued by breach victims after a data breach.

      There are some interesting topics that are new to the conference this year as well, like reinsurance and subrogation. These topics largely deal with how insurance companies protect themselves against major losses. There appear to be several reinsurance carriers attending this year, which is new as well. I am looking forward to meeting them and understanding their business.

      Again there will be a panel specific to healthcare and some of the challenges of underwriting these organizations for cyber liability. They will discuss the recently finalized HIPAA privacy rules and their effects relative to insurance. The only disappointing thing about this panel is that one of the questions they suggest will be answered is: what services are healthcare entities looking for that are not currently offered? Yet there is no representation from a breach response vendor that can speak to all the challenges and differences of responding to a healthcare data breach. Maybe next year?

      Cyber terrorism will have a bigger spotlight this year for sure, with all the press around the recent events and threats from around the globe. The Stuxnet virus, discovered three years ago, still has plenty of buzz. The idea that a foreign enemy could disrupt our utilities, flight controls, or our banking systems is very interesting, or maybe scary is a better term. I can honestly say that when I joined ID Experts to prevent ID theft and respond to data breaches, I didn’t realize I would be so close to terrorism and national security issues!

      Another conference session that is always interesting is the one on claims data.  This session provides highlights of an annual report that Net Diligence puts together offering summaries and averages of what claims are actually coming in via this form of insurance.  Every year I am even more surprised at how different the numbers collected are from what ID Experts clients actually experience. 

      Last year’s report stated that the average breach claim was $3.7M, with $2.1M of this cost tied to legal fees and settlement costs of class action lawsuits. For starters, none of our clients have had the expense of a class action, so their costs come in roughly 57% below what insurance carriers paid on those claims. The report also identifies credit monitoring as the leading cost of an incident response, which also does not match our experience at ID Experts. There is a better and more efficient way to manage breaches, and I believe this has everything to do with our patented data breach response process, YourResponse™. To see my blog on the Net Diligence report, click here.
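      As a quick sanity check on that 57% figure, using only the numbers from the Net Diligence report: if the $2.1M class-action component never materializes, then roughly $2.1M / $3.7M of the average claim cost is avoided.

      ```python
      # Quick check of the percentage cited above, using the NetDiligence figures.
      average_claim = 3.7e6         # average breach claim reported
      legal_and_settlement = 2.1e6  # class-action legal fees and settlement costs

      savings_pct = legal_and_settlement / average_claim * 100
      print(f"{savings_pct:.0f}%")  # -> 57%
      ```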

      The event also provides several opportunities for reconnecting with friends and customers, learning what everyone is up to, and hearing what the next big thing for privacy and cyber liability insurance might be. If you will be attending, shoot me an email at Jeremy.henley@idexpertscorp.com and we can meet up.

      DOWNLOAD: 10 Things to Consider Before Purchasing Cyber Insurance

      About the Author

      Jeremy Henley's avatar
      Jeremy Henley

      Jeremy Henley is an Insurance Solutions Executive for ID Experts. He has been certified by the Healthcare Compliance Association for Healthcare Privacy and Compliance and brings 11 years of sales and leadership experience to the ID Experts team.

      HHS’ Sensible Compromise on the Controversial Harm Threshold (Part 2)

      by Mahmood Sher-jan

      In part 1 of my analysis of the HIPAA final breach notification rule, I focused on the implications for covered entities and business associates of the change to the definition of “breach.” The revised definition removed the controversial “risk of harm” language and instituted an incident-specific risk assessment requirement. According to HHS, the harm threshold was giving covered entities too much flexibility to apply their own perception of whether the incident could harm the affected patients. The focus of this Part 2 analysis is on the practical choices facing covered entities to comply with the newly minted “compromise” standard and the associated four risk factors.

      MORE INFO: HIPAA Final Omnibus Rule Playbook

      The “compromise” standard

      Covered entities and business associates must now assess the probability that the protected health information (PHI) has been compromised based on a risk assessment that at a minimum considers the following factors outlined in the final rule:

      (1) The nature and extent of the PHI involved, including the types of identifiers and the likelihood of re-identification;

      (2) The unauthorized person who used the protected health information or to whom the disclosure was made;

      (3) Whether the protected health information was actually acquired or viewed;

      (4) The extent to which the risk to the protected health information has been mitigated.
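      These four factors lend themselves to a structured, repeatable scoring model. As a minimal illustrative sketch (the 0-3 scale, the equal weighting, and the "low probability" threshold below are invented for this example; they are not HHS guidance and not RADAR's proprietary model), a covered entity might score each factor and combine them, so that no single factor decides the outcome on its own:

      ```python
      # Hypothetical four-factor risk assessment sketch. The factor names follow
      # the final rule; the 0-3 scoring scale and the threshold are invented for
      # illustration only and are NOT regulatory guidance.

      FACTORS = [
          "nature_and_extent_of_phi",  # (1) identifiers, re-identification risk
          "unauthorized_recipient",    # (2) who used or received the PHI
          "acquired_or_viewed",        # (3) was the PHI actually acquired/viewed
          "mitigation",                # (4) extent the risk has been mitigated
      ]

      def assess_incident(scores: dict) -> str:
          """Combine all four factor scores (0 = lowest risk, 3 = highest).

          Every factor contributes to the result, so no single factor
          (e.g. factor 3 alone) determines the designation.
          """
          missing = [f for f in FACTORS if f not in scores]
          if missing:
              raise ValueError(f"risk assessment must consider all factors: {missing}")
          total = sum(scores[f] for f in FACTORS)  # maximum possible: 12
          # Notification is presumed unless the entity demonstrates a low
          # probability of compromise; "low" here is an illustrative threshold.
          return "document_no_notice" if total <= 3 else "presume_breach_notify"

      # Example: limited identifiers disclosed to a trusted party, fully mitigated
      print(assess_incident({
          "nature_and_extent_of_phi": 1,
          "unauthorized_recipient": 0,
          "acquired_or_viewed": 1,
          "mitigation": 0,
      }))  # -> document_no_notice
      ```

      Either way the assessment comes out, the scores and the resulting designation should be documented, since the burden of proof rests with the covered entity or business associate.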

      What Must You Do and What are Your Options?

      The final rule is effective on March 26, 2013 and covered entities and business associates are expected to comply with the applicable requirements of the final rule by September 23, 2013.  The enforcement of the final breach notification rule by the Office for Civil Rights (OCR) will be carried out pursuant to the Enforcement Rule.  It is time to get busy and prepare for compliance.

      The single most important thing that any covered entity or business associate must do is to create a CONSISTENT methodology for conducting incident risk assessments to meet your burden of proof. This requires that your suspected (paper or electronic) incidents are:

      • Submitted to the organization’s incident response team and recorded.
      • Evaluated based on a consistent risk assessment model using the four factors outlined in the final rule.  This is where most organizations fall short and put themselves at risk of non-compliance. 
      • Designated as breaches or non-breaches according to a consistent decision support process using the outcome of the risk assessment.
      • Tracked and stored in a common repository with all supporting documentation, including investigation, corrective action-plan, sanction(s), attestation(s), and notification(s) for internal as well as OCR investigation or audit.
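      The four steps above amount to a simple submit-assess-designate-track workflow over a common repository. As a minimal sketch (the class and field names are invented for illustration and do not describe any particular product):

      ```python
      # Minimal sketch of the incident workflow described above: submit, assess,
      # designate, and track each incident in one common repository. All names
      # here are illustrative only.

      from dataclasses import dataclass, field
      from datetime import datetime, timezone

      @dataclass
      class Incident:
          description: str
          submitted_at: datetime = field(
              default_factory=lambda: datetime.now(timezone.utc))
          risk_scores: dict = field(default_factory=dict)    # four-factor scores
          designation: str = "under_review"                  # "breach" / "non_breach"
          documentation: list = field(default_factory=list)  # notes, attestations

      class IncidentRepository:
          """Common repository so every incident follows the same process."""

          def __init__(self):
              self.incidents = []

          def submit(self, description: str) -> Incident:
              incident = Incident(description)
              self.incidents.append(incident)  # step 1: recorded on intake
              return incident

          def designate(self, incident: Incident, risk_scores: dict,
                        is_breach: bool, note: str) -> None:
              incident.risk_scores = risk_scores  # step 2: consistent assessment
              incident.designation = "breach" if is_breach else "non_breach"  # step 3
              incident.documentation.append(note)  # step 4: audit trail

      repo = IncidentRepository()
      inc = repo.submit("Misdirected fax containing PHI")
      repo.designate(inc,
                     {"factor_1": 1, "factor_2": 0, "factor_3": 1, "factor_4": 0},
                     is_breach=False,
                     note="Attestation of secure destruction obtained")
      print(inc.designation)  # -> non_breach
      ```

      The point of the sketch is the shape of the process, not the code: every incident passes through the same intake, the same assessment model, and the same documented decision, leaving an audit trail for an OCR investigation or audit.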

      I must admit that I have a bias for an approach that is software-based and uses analytical modeling for quantifying the outcome of any risk assessment. There are many benefits to this approach, including consistency, efficiency, collaboration, one tool for handling HIPAA and state laws, tracking performance metrics, and management reporting, to name a few. None of these benefits can be effectively achieved using paper- or Excel-worksheet-based approaches. I was recently speaking with a health plan that had developed a solution using a combination of Excel and an Access database. They pointed out that this solution did not offer collaboration, nor did it support state laws or any audit trail to establish the consistency and efficacy of the decision process when investigated. I have had numerous complaints from folks that are using paper or Excel tools about the inherent subjectivity and lack of consistency of the risk scoring and the “human error” factor that can never be eliminated. So why would any covered entity use these inadequate approaches? I am told that the paper and Excel approach does not require any scarce IT resources, and that they are not aware of better, affordable options that are really easy to use.

      If you are a covered entity or business associate who is compliant with the IFR’s breach notification rule, you must now review your risk assessment process, make sure that you are using all four required factors, and embed the ongoing guidance from HHS to remain compliant with the final rule. So depending on the risk factors you used before, this could be a small or big effort. The good news is that the final rule retained all the exceptions allowed under the IFR except the narrow limited data set exception. The rules around incident discovery and notification timelines remain virtually unchanged. So depending on your satisfaction with your existing solution, the internal effort needed to comply with the final rule, and the available vendor-based solutions, you can chart your compliance path as September 2013 approaches.

      If you are an entity that has yet to fully comply with the IFR, the clock is ticking and you need to decide whether you have the resources to build and maintain your own HIPAA and state risk assessment solution or to buy a proven vendor solution. Our ID Experts RADAR™ online incident decisioning and management software is a proven and easy-to-use solution used by many hospitals and health plans to perform incident risk assessment as prescribed in the HHS final rule and state laws. RADAR could help you achieve compliance in a timely and cost-effective manner.

      DOWNLOAD: HIPAA Final Rule: Top Three Actions You Must Take Now

      About the Author

      Mahmood Sher-jan's avatar
      Mahmood Sher-jan

      Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      Big Data Increases Breach Risk

      by Deanna Jones (DJ)

      Our existence as a consumer society has led us to our current big data reality. Everything about us is compiled, categorized, sorted, and analyzed, often with our permission, though knowledge of what we’re agreeing to has largely been hidden in the fine print, if shared at all. In exchange for dense digital dossiers of our lives, we often get a coupon or an offer, matched just for us, from the logs maintained about our lives. And this is just the mild form of big data’s usage.

      The debate over big data stretches onward: its use and misuse, its privacy pitfalls, and the unregulated frontier in which it thrives. But a recent Wall Street Journal blog brings to light how the data that is maintained can be used against us, not in the way you’d think, but in the case that the data is breached.

      More data means more data breach risk.  Imagine the amount of data held by an electronic health record processor for a health plan or by a retail chain’s loyalty card program. If the company faces a data breach, imagine the scope of the data involved: how many millions of people would have to be notified, across how many states, and in compliance with how many varying state laws? Wall Street Journal blogger Nicholas Elliott informs us that, according to several experts, “ignorance about stored data can magnify the costs of notifying customers and the risk of regulatory or legal repercussions.”

      With all the data that exists, what is needed and what is redundant or simply of no use?  Companies have been so quick to collect information, but they’ve barely stopped to think of the repercussions of what has been collected.  Bruce Radke, chair of the data privacy group at law firm Vedder Price, advised “the first step in any company’s assessment of its data should be really looking at the information you need and getting rid of everything else.”

      Radke foresees a time when breached companies will be sued for keeping too much data, with the allegation that poor data management will lead to more data being lost or compromised, unless companies adhere to stricter policies. He echoed the recommendation that ID Experts maintains, that companies should have a breach response plan in place, outlining what steps to take should a breach occur, rather than figuring it out on the fly, post-breach.

      Sadly, the best-prepared companies are those that have already been hacked. “Once someone has been through a breach, they have a very different focus,” said Bantick. Radke agrees: “The folks that have experienced pain tend to be prepared for the next instance.”

      About the Author

      Deanna Jones (DJ) 's avatar
      Deanna Jones (DJ)

      Deanna Jones (DJ) is an Investigator within ID Experts’ Special Investigations Unit. She came to ID Experts from the Portland Police Bureau and has an extensive background in legal and insurance investigations for plaintiff case preparation, background checks, and workers’ compensation fraud. She also worked with the former Bureau of Alcohol, Tobacco and Firearms, now under the US Treasury, where she assisted with regulatory investigations and compliance. DJ has obtained government security clearance through her duties at ID Experts and is a Certified Fraud Examiner (CFE). She holds a BA in English and Journalism and an MS in Criminology.

      Senior Identity Theft: A Problem In This Day and Age

      by Robin Slade

      Panelists at the May 7th Federal Trade Commission (FTC) workshop, Senior Identity Theft: A Problem in This Day and Age, related that seniors are often preferred targets, especially for medical identity theft. Fraudsters, who may include unscrupulous relatives and caregivers, view these individuals as more trusting, less financially sophisticated, and less likely to report the crime for fear that family members may conclude they cannot maintain their independence.[1] Panel members discussed the many still-unanswered questions regarding how to prevent tax and government benefits fraud, medical identity theft, and identity theft in long-term care, and how to reach older consumers.

      Because individuals are the only ones with detailed knowledge of their medical care and financial expenditures, it is essential for consumers to learn the true impact of these crimes, protect themselves as best they can, and raise the alarm when they do fall victim.

      In addition, system practices continue to make personally identifiable information (PII) available in ways advantageous to criminals. For example:

      • Social Security Numbers are used on Medicare insurance cards.
      • HIPAA rules intended to protect patient privacy also prevent victims from gaining access to their records to correct them.
      • Healthcare providers have been implicated in the vast majority of crimes and may choose not to help the victims at all.[2]
      • 94% of US healthcare organizations studied have had critical PII data breaches and 45% of these organizations showed five or more breaches during the study period.[3]
      • Three out of five providers studied, including major hospitals and healthcare providers, do not have the policies and procedures in place to safeguard health records.[4]
      • More than six in ten healthcare organizations studied say they do not have enough resources to ensure data security.[5]

      We know that medical identity theft and fraud cost the healthcare industry $41 billion in 2012, cost taxpayers and consumers in higher premiums and healthcare costs, and have life-altering consequences for patients and their families. The attitude that resources cannot be dedicated to data security therefore indicates a lack of understanding of the true value of the PII.

      In financial identity theft, the financial institution may make the individual financially whole again; not so for medical identity theft victims.  We are seeing some success: insurance companies are among the most proactive players, as are federal government investigation units and law enforcement, and Medicare has assisted patients by simplifying EOBs and providing some consumer education. The public/private consortium, the Medical Identity Fraud Alliance, is leading the current effort to include all ecosystem stakeholders in developing the cost-effective technologies, policies, and best practices needed to lessen patient exposure to fraud and theft.



      [1] National Crime Prevention Council, 2012.

      [2] The majority of medical identity theft occurs with provider and sometimes patient complicity, though in some cases provider licenses are stolen or data breaches provide the information needed to commit these crimes. World Privacy Forum, 2013.

      [3] Ponemon, Third Annual Benchmark Study on Patient Privacy & Data Security, 2012.

      [4] Ibid.

      [5] Ibid.

       

       

       


      About the Author

      Robin Slade's avatar
      Robin Slade

      Robin M. Slade is the Development Coordinator for the Medical Identity Fraud Alliance, a public/private partnership that unites the healthcare ecosystem to develop solutions and best practices for medical identity fraud. Ms. Slade is also the President and Chief Executive Officer of the Foundation for Payments Fraud Abatement and Activism and FraudAvengers.org, a non-profit corporation and weblog focused on helping consumers lessen their exposure to fraud and scams. She is also Senior Vice President and Chief Operating Officer for The Santa Fe Group, and manages the Shared Assessments Program, a consortium created by leading banks, auditing firms, and service providers to inject efficiency and cost savings into the vendor risk assessment process.

      Breach Notification Laws: An Evolving Mine Field

      by Mahmood Sher-jan

      In 2012 a number of states made changes to their breach laws, including Connecticut, Texas, and Vermont. The most noteworthy was Texas' House Bill 300, which amended the state's existing data breach law effective September 1, 2012, requiring covered entities in Texas to notify affected individuals regardless of their state of residency. This is groundbreaking because it is the first time a state has expanded the reach of its obligations beyond its own borders: the obligations of a breached entity doing business in the state no longer stop at the state line but follow the affected patients wherever they may reside.

      MORE INFO: Update from Texas: Understanding the New Privacy Law

      Less groundbreaking in scope were Connecticut, which passed House Bill 6001, effective October 1, 2012, repealing and replacing the state's existing data breach law, and Vermont, which amended its law in May 2012 to require notification of affected individuals and the State Attorney General within 45 days of discovery of a breach incident.
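      Vermont's 45-day clock illustrates how concrete these state obligations are. As a minimal sketch (ignoring the statute's finer points, weekends, and any federal overlays; the function name is mine), the deadline is simple date arithmetic:

```python
from datetime import date, timedelta

# Illustrative only: Vermont's amended law requires notifying affected
# individuals and the State Attorney General within 45 days of discovery.
VT_NOTIFICATION_WINDOW_DAYS = 45

def vermont_notification_deadline(discovery_date: date) -> date:
    """Latest date by which Vermont notification must be made."""
    return discovery_date + timedelta(days=VT_NOTIFICATION_WINDOW_DAYS)

# A breach discovered on October 1, 2012 must be notified by November 15, 2012.
deadline = vermont_notification_deadline(date(2012, 10, 1))
print(deadline)  # 2012-11-15
```

      Tracking one such clock is trivial; tracking forty-six state clocks, each with its own trigger and window, is the real operational burden.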

      For users of our RADAR incident risk assessment/decisioning and management software, these regulatory changes were simple to comply with because they were embedded in the software well in advance of the corresponding enforcement dates.

      Fast forward to 2013: the HIPAA final omnibus rule is now published, with an enforcement date of September 23, 2013. After studying the final rule in great detail and having a few discussions with covered entities and the legal community, I find there is little consensus on the rule's final impact. Some folks think that many more breaches will be reported to HHS, while others think the difference between the number of incidents reported under the final rule and under the interim final rule will be insignificant. So who is right? The final rule leaves room for interpretation, especially when it comes to the 4th factor: the extent to which the risk to the PHI has been mitigated. The rule states that CEs and BAs should attempt to mitigate the risk to the PHI following any impermissible use or disclosure through a confidentiality agreement or similar means. Our RADAR software uses the factors required by the final rule to ensure compliance.

      My observation is that many "experts" seem to forget that the final rule requires the incident risk assessment to use a minimum of FOUR factors and that no one factor should determine the outcome of the assessment. In other words, when a CE mails information containing PHI to the wrong patient or policy holder and the recipient actually views the information and informs the entity of the error, this incident should not automatically constitute a breach if appropriate mitigation is performed according to the 4th factor of the final rule. The mitigation should consider the recipient and whether an attestation or confidentiality agreement is obtained from the recipient as an example of proper PHI risk mitigation. The mitigation bar may be even lower, according to the final rule, when the recipient is another entity obligated to abide by the HIPAA Privacy and Security rules. The rule suggests that in such cases, there may be a lower probability that the PHI has been compromised given these entities' existing obligations to protect PHI.
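      The point that no single factor should decide the outcome can be made concrete in code. This is an illustrative sketch only: the four factor names track the final rule, but the scoring scale, threshold, and function names are my assumptions, not RADAR's patented methodology and not legal advice.

```python
# The four factors named in the HIPAA final breach notification rule.
FACTORS = (
    "nature_and_extent_of_phi",      # factor 1: types and identifiability of the PHI
    "unauthorized_recipient",        # factor 2: who used or received the PHI
    "phi_actually_acquired_viewed",  # factor 3: was the PHI actually viewed or acquired?
    "extent_of_mitigation",          # factor 4: e.g., attestation or confidentiality agreement
)

def assess_incident(scores):
    """Each factor scored 0 (no risk) to 3 (high risk). All four factors
    must be present, and the outcome rests on the combined assessment,
    never on any single factor alone."""
    missing = [f for f in FACTORS if f not in scores]
    if missing:
        raise ValueError(f"all four factors are required; missing: {missing}")
    total = sum(scores[f] for f in FACTORS)
    # Illustrative threshold for a "low probability of compromise" conclusion.
    return "low probability of compromise" if total <= 3 else "notifiable breach"

# Misdirected mail, viewed by the recipient, but an attestation was
# obtained from the recipient (factor 4 scores as no remaining risk).
example = {
    "nature_and_extent_of_phi": 1,
    "unauthorized_recipient": 1,
    "phi_actually_acquired_viewed": 1,
    "extent_of_mitigation": 0,
}
print(assess_incident(example))  # prints "low probability of compromise"
```

      Note how the misdirected-mail example from the paragraph above can still come out as a low probability of compromise: factor 3 scores against the entity, but proper mitigation under factor 4 pulls the combined assessment back down.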

      In the final analysis, the final rule does not completely put to bed the concerns over the subjectivity of the required risk assessment, nor does it create an unambiguous framework. However, the rule establishes that risk assessment is a required element of incident management, and we now have a minimum of four factors to consider in the assessment. At the HCCA 2013 Compliance Institute conference in DC, I was able to confirm this point, and HHS further told me that within a few weeks the department will issue additional guidance on the breach incident risk assessment. Apparently a paper-based tool will also be provided, but my assumption is that it will be fraught with the issues common to manual tools.

      This is progress, then, because there is no room for staying on the fence about adopting the policies, procedures, and tools that will help your organization comply with the final breach notification rule and its looming September 2013 enforcement date. And let's not forget the continued evolution of state breach laws: your incident management must account for any changes in these laws as well.

      DOWNLOAD: HIPAA Final Omnibus Rule Playbook

      About the Author

      Mahmood Sher-jan's avatar
      Mahmood Sher-jan

      Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      HCCA 2013 - The Year of the Security Risk Analysis

      by Heather Pixton

      It is with great pleasure that I write this recap of the HCCA Annual Conference and Tradeshow. The event was held outside of Washington DC on April 21-24, and ID Experts was one of many organizations represented in the tradeshow. I can honestly say that this was one of the most fun, productive, and energized events we have ever done. The caliber of people visiting our booth was extremely impressive! We had Compliance Officers from complex health systems talking about their HIPAA/HITECH obligations, HIM Directors talking about data flow and security, Nursing Practice Managers talking about workflow risks, and administrators of medical practices who wanted to talk about it all! The conversations I personally had with visitors, as well as those I overheard from my colleagues, were compelling and very advanced. We were not talking about basic HIPAA and data breach topics; we were having very in-depth conversations about compliance and risk mitigation. I absolutely loved hearing our team serve as a trusted advisor in healthcare!

      ID Experts also hosted a cocktail event… well, really more beer and burgers. It was during this event that we were able to sit down and have more in-depth conversations with the people who had visited our booth. I personally had a couple of great conversations about HIPAA risk assessments and how organizations are performing those today. We talked about Meaningful Use as well as the Final Omnibus Rules. It is apparent that these two motivating factors are playing a big part in the 2013 plan. Even organizations who operate as a business associate had a heightened awareness of their HIPAA obligations under the Final Rule. It seems that 2013 may be the year of the Security Risk Analysis.

      In addition to the amazing conversations our visitors brought to us, we also had an unbelievable response to our Wounded Warrior fundraiser. Instead of giving away iPads or Kindles, we decided about a year ago to do something very meaningful. We have been collecting tributes to the Wounded Warrior Project for a year and have raised a total of $6,250, with $2,150 raised during the HCCA event alone! What an amazing accomplishment! Over 400 people visited our booth and added their names to the “wall” of Wounded Warrior tributes. For every name we collected, we donated $5 to the Wounded Warrior Project. I am so proud of the attendees at this conference who added their names to this important cause.

      Overall, as I reflect on this event, I am so proud to be part of ID Experts and thrilled that our experience in healthcare is being so well received. This event reinforced for me why ID Experts is a perfect fit in healthcare… we understand the environment, we speak the language, and we genuinely care about the privacy and protection of health information. No other industry can match the care and compassion that is found in healthcare, and I am so happy to be a part of that greater community.

      About the Author

      Heather Pixton's avatar
      Heather Pixton

      Heather Seward came to ID Experts with 12 years of experience in sales and marketing, and is using her experience to grow new territories for the company. Heather will encourage this growth through securing strategic partnerships and developing strong relationships in the industry. Before joining ID Experts, Heather was President of a successful small business, managing a variety of tasks including sales, marketing, and operations. Heather has a BA from Southern Oregon State College.

      Patient Identity Infection—A Multi-Faceted Risk Facing Patients

      by Mahmood Sher-jan

      At ID Experts we have been helping identity theft victims and patients protect and restore their identities for over a decade.  It is our mission, after all.  It shapes our company culture and values.  We know very well that identity theft and medical identity theft are growing problems.  So what are the risks to patients’ identities in the healthcare setting, and how do we protect against these risks?  We know intuitively that prevention is the best medicine, but how can we truly prevent a problem with so many root causes, some intentional but most unintentional?

      MORE INFO: Balancing Privacy and Access: Preparing for the Risks of Health Information Exchanges

      I attended a session by Mark Ruppert, Director of Internal Audit at Cedars-Sinai Medical Center, at the HCCA 2013 Compliance Institute conference.  The session highlighted how, in a hospital and research setting, the vulnerabilities to a patient’s identity are endless.  And I am not just talking about technology-based vulnerabilities; the human kind is pretty prevalent.  Here’s a sample list, from the session, of ways that a patient’s identity could get infected, and ultimately her medical record falsified or tampered with, in a healthcare setting:

      Errors during the initial patient identity capture process due to:

      • Inadvertent human error
      • Untrained front end personnel
      • Incomplete front-end capture processes
      • Faulty system edits and/or interfaces
      • System errors that could misclassify a patient. In one case it took a year to restore a patient's living status after the system incorrectly flagged the patient as deceased.
      • Patient claiming to be victim of ID Theft
      • Misdirected information (mail, fax, email)
      • Spouse causing compromise of PHI due to family issues such as divorce
      • Appointment replacements
      • Sharing Insurance cards between patient and family members
      • Stolen and lost equipment with unprotected PHI (laptops, media devices, medical equipment, fax machines)
      • Moving multi-function printers around a facility – settings are not changed so it would capture and send PHI.
      • Inappropriate disposal of PHI
      • Community outreach programs- lack of sufficient training for volunteers and attending physicians
      • Patients presenting with false ID (shared ID/Stolen IDs/No ID)
      • Research subjects presenting false ID and representing false information.
      • Not knowing where your PHI resides, not knowing who should have access
      • Medical record access is not controlled and systems containing PHI are not protected
      • Individuals authorized to access PHI are not trained on proper handling
      • Inappropriate system access (creating false patient information)
      • Authorized individuals access PHI and misuse their authority
      • Systems/files containing PHI are not known and can’t be identified for proper protection.
      • Vendor supplied equipment and medical devices with PHI can leave facility without proper disposal.
      • Unnecessary collection of PHI using old forms that are no longer necessary.
      • Sending information requests to unauthorized individuals
      • Curious employees, clinicians, consultants, etc.
      • Corrupt employees, clinicians and consultants
      • Weak physical security (open facility access)
      • Inappropriate use of email/texting/tweeting (& other social media)
      • Insufficient policies & procedures and workforce training.

      Looking at this list, which is not exhaustive by any means, it is easy to see that there are endless opportunities for infecting a patient's identity, and that patients should fear an identity infection as much as any other type of infection when visiting a medical facility.  This also highlights the immense patient protection challenges that privacy, compliance, and security folks face within healthcare institutions. A good place to start is a periodic risk assessment whose scope includes these threats and vulnerabilities.  This will hopefully elevate management's awareness and concern and create the need for better controls and monitoring for patient protection.

      On another note, ID Experts would like to thank all of the compliance professionals at HCCA who helped raise $2,150 for the Wounded Warrior Project. To date through the various trade show booths at conferences, we have raised $8,400 for the non-profit organization.

      ID Experts donated $5.00 for each person who came by the booth to show support for the Wounded Warrior Project, a non-profit organization whose mission is to honor and empower wounded warriors.  Its vision is to foster the most successful, well-adjusted generation of wounded service members in our nation's history.

      SEE ALSO: Preparing for the Security and Privacy Risks that are Engendered by Health Information Exchanges and Electronic Health Records

      About the Author

      Mahmood Sher-jan's avatar
      Mahmood Sher-jan

      Mahmood is the lead inventor of ID Experts RADAR, award-winning and patented incident management software. Mahmood holds a BS in Computer Science from the University of Washington and an MBA from the University of Redlands.

      Big Data Will Turn Privacy Upside Down

      by Jon Neiditz

      This post by Jon Neiditz is part of our ongoing series of contributed content.  Reprinted with permission - you can read the full article here: Big Data Will Turn Privacy Upside Down in a Way that Will Put New Burdens on Cloud Providers, and Individuals May Turn Big Data Upside Down

      Privacy scholars and practitioners the world over have noted that the current regulation of privacy simply does not work well in a big data world.[1]  Thus, to the extent that they are openly welcoming or at least acknowledging the inevitability of such a world, many of them (us) are beginning to seek new approaches.  Among the major concerns regarding the application of current privacy law to big data are the following:

      1. Current privacy law focuses on data minimization in collection, while big data extracts unpredictable value from combinations of data the collection of which might not have appeared necessary.
      2. Current privacy law often requires that data be destroyed when no longer needed for the purpose for which it was collected, while big data looks for derivative uses and opportunities. 
      3. Current privacy law often relies solely on notice and consent at the time the data is collected (although this is changing with privacy by design, which often emphasizes “just in time” notices for particular uses and disclosures of data), while big data uses are generally not known at the time of collection.
      4. Current privacy law allows for free alienation of all personal rights at the time of consent or authorization, which makes no sense when the uses of the data are not known. 
      5. Current privacy law exempts information that has been anonymized or de-identified, but big data facilitates reidentification of anonymized data.[2]
      6. The White House Privacy Bill of Rights, drawing on the work of scholar Helen Nissenbaum, made “respect for context” one of its core principles, while big data companies like Google take big data in precisely the opposite (context-disruptive) direction.

      If big data leads us to (a) give up on data minimization and destruction as soon as the primary use has been completed, (b) limit reliance on complete alienation of rights based on an initial notice and consent, and (c) undermine to some extent reliance on the effectiveness of de-identification, then the protection of privacy must have what are called in information security “compensating controls.”  Those controls are particularly important to emphasize here, because in my view they go to the very heart of using cloud computing in big data:

      1. EXTREMELY good information security.  Insofar as cloud computing may raise both regulatory- and risk-based information security issues that, say, a DoD-certified facility does not, I would suggest that cloud-based big data providers hold themselves to a strong service organization-oriented assurance standard such as a SOC 2 Report on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality or Privacy. Big data cloud repositories are already, and will increasingly be, targets for hackers, particularly as the value of the data increases.
      2. A focus on big data company accountability for appropriate use of the data (to complement, in my view, some continued reliance on informed end-user notice and consent).  This point is made well in two ways by Mayer-Schonberger and Cukier.[3]  First, they stress the need for a formal big data use assessment and plan based on regulatory ground rules.  The plan would incorporate “differential privacy” (now being explored by Microsoft and others) that deliberately obscures or masks the data, and maximum retention periods prior to secure data destruction.  The problem posed by this idea is big data’s black box problem stressed in Section 2 above; the complexity of big data analysis and proprietary innovation make public accountability difficult.  I believe their call for quasi-auditors, both independent/external and internal employees, given the infelicitous name of “Algorithmist,” will be a necessary and likely development, particularly in the absence of changed individual rights like those discussed below.

      These new professionals would be experts in the areas of computer science, mathematics, and statistics; they would act as reviewers of big-data analyses and predictions. Algorithmists would take a vow of impartiality and confidentiality, much as accountants and certain other professionals do now. They would evaluate the selection of data sources, the choice of analytical and predictive tools, including algorithms and models, and the interpretation of results.[4]

      These two “compensating controls” — as substitutes for the more traditional privacy regulatory requirements at the beginning of this section — would put a great deal of regulatory and auditing pressure on both big data firms and cloud providers to become both less messy and more transparent. 
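      For readers unfamiliar with the “differential privacy” technique mentioned above, here is a minimal sketch of its best-known form, the Laplace mechanism applied to a counting query. The epsilon value, function names, and sample data are illustrative assumptions of mine; the production work at Microsoft and elsewhere is considerably more involved.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two iid exponentials with mean `scale`
    # is distributed Laplace(0, scale).
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon: float = 0.5) -> float:
    """Release a count with Laplace noise. A counting query has
    sensitivity 1 (one person changes the count by at most 1),
    so the required noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1 / epsilon)

# Hypothetical patient records; the true count of patients aged 65+ is 3,
# but the released value is 3 plus noise, masking any one individual.
patients = [{"age": a} for a in (34, 71, 65, 42, 80, 55)]
noisy = private_count(patients, lambda p: p["age"] >= 65)
print(round(noisy, 1))
```

      Smaller epsilon means more noise and stronger privacy; larger epsilon means a more accurate answer. An "Algorithmist" auditing such a system would review exactly these choices: the epsilon budget, the sensitivity analysis, and the query logic.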

      A third force that would have a similar impact might come from Europe in the next year.  Among the sets of ideas under consideration in the transformation of European data protection regulation that is now underway are many that would put more power in the hands of individuals.  In the US, Tene and Polonetsky made the case for such a shift in control in a popular paper,[5] and Rubinstein extended their thinking, incorporating Doc Searls’ work on Vendor Relations Management (VRM).[6]  If the European Union proceeds in this direction, it will render the creation of aggregators that lower transactional costs likely, opening the door to individuals to play an active role in the big data economy.  And where would individuals store their big data, but in their “personal clouds” that many of them already have and others will soon given the consumerism of IT? 

      One way or another, value will be delivered to the individual as an individual, thanks to big data.  The question as between an American approach and a European approach may be how much the individual will be consciously involved in the creation of that value.  


      [1] For two good looks at where big data may lead privacy regulation, see Christopher Kuner, Fred H. Cate, Christopher Millard and Dan Jerker B. Svantesson, “The challenge of ‘big data’ for data protection,” Oxford Journal of International Data Privacy Law, Volume 2, Issue 2, pp. 47-49, and Ira Rubinstein, “Big Data: The End of Privacy or a New Beginning,” International Data Privacy Law (2013).

      [2] But see, Ann Cavoukian & Khaled El Emam, Info. & Privacy Comm’r of Ont., Dispelling the Myths Surrounding De-identification: Anonymization Remains a Strong Tool for Protecting Privacy 7 (2011), available at http://www.ipc.on.ca/images/Resources/anonymization.pdf.

      [3] Mayer-Schonberger and Cukier, op cit., pp. 172-184.

      [4] Ibid., pp. 179-182.

      [5] Omer Tene and Jules Polonetsky, ‘Big Data for All: Privacy and User Control in the Age of Analytics,’ (forthcoming) Northwestern Journal of Technology and Intellectual Property.

      [6] Rubinstein, op. cit., at p.8. 

      About the Author

      Jon Neiditz's avatar
      Jon Neiditz

      In April 2013, I brought my big data, privacy and information security practice to Kilpatrick Townsend, a firm that has become one of the leading information law firms in the world. I had previously led information management practices at other law and consulting firms, worked in-house and in government. My work includes:

      • Regular management of responses to data security breaches, including the largest governmental data breach on record
      • Global privacy and information security counsel to private and public sector organizations
      • Assistance in formulating “big data” plans, including contracts that value and protect newly-defined data assets in new ways, and information management programs that combine defensible disposal with protected “lakes” of big data
      • Assistance in “privacy by design” initiatives

      My J.D. is from Yale and my B.A. from Dartmouth.

      How to inform internal teams of a data breach

      by Heather Noonan

      What is the best way to tell your internal teams that your company has had a data breach? A data breach is much like any other public relations debacle: like any crisis, it needs a well-thought-out public relations strategy and game plan, executed with finesse. Unfortunately, through it all, your company faces reputational harm, deadlines, and client, consumer, and media backlash.

      MORE INFO: Data Breach Response "How To" Series

      For your internal teams, gather your decision makers and be transparent about what you do and don't know about the breach. Discuss what is being done and the plans in place. Bring in legal and human resources to provide input on the decisions being made. Assuming your information technology (IT) team is already involved and working to fix what may have been broken, whether it was a break-in or a hack, make sure you keep everyone on the same page. I have found that communication is KEY in instances like this. If you aren't communicating well right from the beginning, you will have half the company moving in one direction, poor decisions being executed, and your right hand won't know what your left hand is doing. Also remind your teams to keep information confidential as you work through forensics and put the pieces together.

      I have seen too many companies want to send a company-wide email to explain the data breach. This can be a very bad decision. Unless your employees were all affected, I would highly recommend against it. Rumors begin this way. People begin to talk and ask immediate questions, which starts the telephone game and the "what if" game. Your best-intentioned email will often be forwarded to an employee's friend or family member, who then forwards it, and so on. (Not pretty.)

      Yes, definitely tell your company what happened, but tell them during a company forum. Tell them face to face where they are able to ask questions. Let them voice their concerns and let you explain how the company is working through this incident, how people are being cared for, and the changes that are being made.

      Here are some pieces of advice from someone who has seen both the good and the bad decisions made while a company works through a data breach:

      1. Don’t rush and don’t panic. When we rush we can often make quick, irrational decisions.
      2. Don’t make emotional decisions.  (same as above)
      3. Keep to the facts.
      4. Don’t play the hypothetical game.
      5. Be transparent and avoid rumors.
      6. Be very leery of email notification.
      7. Keep the initial information on a need to know basis as you gather all the evidence.
      8. Dedicate your main decision makers. Keep the key people involved and make decisions as a group. Even the smallest decision can affect the final outcome.
      9. Avoid too many cooks in the kitchen. Too many people making decisions can become very problematic and tiresome.
      10. Remember you are a team and you are protecting the company. Too often employees become worried about themselves and the politics involved.
      11. Avoid politics during decision making. Same as above, when politics are involved, bad decisions can be made.
      12. Remember State and Federal guidelines when making decisions. If under HITECH, you will most likely be investigated.
      13. Document everything. Every decision should be documented, no matter how small. This will be vital years from now.
14. Keep your door open. People will continue to have questions and concerns. Be ready for them. Don’t think that because the incident was five months ago, questions won’t come up and you won’t have to deal with them…again.
15. Take the high road. Don’t lash back at people who attack you or the company. Always take the high road and save face.
      16. Smile through it all and remain the leader. This too shall pass.
      17. Again, don’t rush and don’t panic. Take the time to make sure everyone is in the car before you drive off and make sure you have a map.

SEE ALSO: FTC Announces Agenda, Panelists for Upcoming Senior ID Theft Workshop

      About the Author

      Heather Noonan

Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices, and is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science, specializing in Business Communication, and has over 15 years of experience in client customer service, with 10 years specifically in project management.

      FTC Workshop to Highlight Senior Identity Theft

      by Rick Kam

      Medical identity theft is the latest threat to affect patients—especially senior citizens. Although medical identity theft can make a victim of anyone who seeks healthcare, there are factors that can increase a person’s likelihood of becoming a victim of medical identity theft. One is age. Simply being a senior citizen can elevate a person’s chances of falling prey to this crime.

Learn how and why, and more about what can be done, as we explore the best consumer education and outreach to seniors at the Federal Trade Commission’s workshop, “Senior Identity Theft: A Problem in This Day and Age.” The FTC workshop is open to the public and will take place next week, on May 7, 2013, in Washington, D.C. Consumer advocates, government officials, and representatives of private industry—that’s where I fit in—will discuss the challenges facing victims of senior identity theft. FTC Chairwoman Edith Ramirez will provide opening remarks.

      For more information, please visit http://www.ftc.gov/opa/2013/04/senioridtheft.shtm

      About the Author

      Rick Kam

Rick Kam, CIPP, is founder and president of ID Experts. He is an expert in privacy and information security. His experience includes leading organizations in policy and solutions to address protecting PHI/PII and resolving privacy incidents and identity theft. He is the chair of the ANSI PHI Project, Identity Management Standards Panel and the Santa Fe Group Vendor Council ID Management working group. He is also an active member of the International Association of Privacy Professionals and is a member of the Research Planning Committee for the Center for Identity, which is part of The University of Texas at Austin.

      Do you really need security to attest to meaningful use?

      by Doug Pollack

      CMS (the Centers for Medicare & Medicaid Services) has begun auditing participants in the federally funded electronic health record (EHR) incentive payment program that makes funding available to hospitals and other healthcare organizations who can demonstrate meaningful use of certified EHR systems.  And while one of the meaningful use criteria is that the organization carry out a HIPAA security risk analysis, the initial audits have found that one of the two most common adverse findings is non-compliance with the requirement to conduct a security risk analysis.

      MORE INFO: Meaningful Use Stage 2: ToolKit

      As noted by iHealthBeat in their article One in 20 Meaningful Use Attesters to Face Audits, April 23, 2013, Robert Anthony, deputy director of the CMS’ Health IT Initiatives Group, noted that “a few health care providers with adverse audit notices are starting the appeals process, and that some providers are facing investigation for possible fraud”.

      But Anthony acknowledges that the security risk analysis is one of the greatest areas of confusion for providers, even though the EHR incentive program requires little beyond what they should be doing under HIPAA. 

      “We're certainly seeing some instances where people haven't done [a HIPAA security risk analysis] or people just aren't sure what they're supposed to be doing,” Anthony said. “There are not additional requirements here beyond HIPAA….they need to have something in their risk analysis that is specific to their EHR. It doesn't have to be all about the EHR. It needs to be stated and indicated that it's about your practice.”

The inspector general at the US Department of Health and Human Services (HHS) issued a report last December that was critical of the oversight provided by CMS over the meaningful use incentive payment program. As of February 2013, 234,000 organizations had received EHR incentive payments totaling $12.7 billion. The OIG report in particular highlighted that CMS was not conducting audits before making payments.

CMS now plans to conduct both pre-payment and post-payment audits, with a goal of auditing 5% of all participants in the program. Anthony has now further clarified what should have been totally clear: meaningful use attestation must include confirmation that your organization carried out a security risk analysis, per the requirements of the HIPAA Security Rule. So the answer to this post’s title question is “yes”—if you attest to meaningful use, you are well advised to have performed and documented a security risk analysis of your EHR and any related systems.

It is terrific to see CMS and HHS shine a bright light on the need for a rigorous analysis of your system’s security. Millions of Americans will find their ePHI (electronic protected health information) entrusted to these EHRs, and we all will sleep a little better knowing that our healthcare providers take the privacy and security of this information seriously.

      SEE ALSO: HIPAA Security Risk Analysis

      About the Author

      Doug Pollack

      CIPP, MBA. With over 25 years experience in technology industry products and services, Doug is an expert in personal information privacy and security. He is currently a senior executive at ID Experts.

      ID Experts doing its part for the Wounded Warrior

      by Bob Gregg

Part of ID Experts' core mission is to give back to its community, not just in the Portland, Oregon area (our hometown), but on the national stage. About a year and a half ago, all of us made a conscious effort to find the right organization to get involved with. Unanimously, we chose the Wounded Warrior Project.

The mission of the Wounded Warrior Project is to honor, empower, and enable wounded warriors from all branches of military service. The organization raises public awareness of the needs of injured service members and provides them with unique, direct programs.

      Typically in a trade show booth, attendees have the opportunity to win some good prizes like iPads, Kindles, or even TVs. It's all part of the excitement of walking the show room floor – that one chance of dropping a business card into a fish bowl and walking away with an expensive new toy.

We're a little different. Like the March of Dimes fundraising drives at your local McDonald's, all we ask is that attendees sign our Wounded Warrior Respect Board at each show, and for that, ID Experts donates $5.00 to the Wounded Warrior Project. These small individual donations add up. We raised around $6,250 last year through the various trade shows and conferences we attended.

We'll be doing this again this week at the Health Care Compliance Association's Compliance Institute in National Harbor, MD, April 21 – 24, in Booth #417-419. If you happen to be attending this event, come see us, and help us help the Wounded Warrior.

      About the Author

      Bob Gregg

      With over 30 years of experience in high technology and software services, Bob joined ID Experts as CEO in 2009. He is particularly interested in the emerging trends involving identity theft and privacy data breaches, with emphasis on healthcare. "Let's keep our private, confidential information just that...private and confidential"

      Why does a victim of a data breach benefit from having a Recovery Solution?

      by Heather Noonan

Someone once explained recovery solutions with the analogy of repairing your car. Yes, you could probably put in your own engine or reattach your own bumper, but would you want to? With the time it takes to get the parts, educate yourself, and acquire the tools, it could take days, weeks, or even months before you start, let alone finish, the task at hand. Recovery solutions are the same idea. Yes, you could contact the Social Security Administration, the IRS, the credit bureaus, the creditors, etc., but the amount of time and energy it takes is daunting.

MORE INFO: Data Breach Response "How To" Series

      Let me explain.

With recovery services, you are assigned a specific, personal recovery advocate for the entire process. These trained specialists know who to contact, how to move up the chain of command, the specific phone numbers to call, and how to reach a live human being without waiting on hold for hours at a time. The recovery advocate will do the work for you, so you can go on living your life, working, and being with your family.

Why should a company purchase recovery solutions? Simple answer: the company was responsible for your information and the problem, so it should make things right. Recovery services will repair someone’s credit if and when there is identity theft. They will work on the individual’s behalf, as their power of attorney, to bring their credit back to pre-theft status. They will be the person who tells your story over and over, and who pushes when they need to push. They will speak with the agencies that don’t care about your story and see you as just another victim. Recovery advocates will carry that stress and burden for you. They will make the phone calls, fax in the numerous documents, and follow up relentlessly to make sure things are getting done and progress is being made, so you can carry on without pulling your hair out and losing sleep.

      Remember the last time you lost your debit card or your cell phone and the panic you felt? When someone has stolen your credit card or your identity, speaking to multiple agencies and multiple people can make you feel the same way. You automatically go into panic mode and want the damage mended now, and now isn’t soon enough. You simply want someone to just “fix it”. This is where a recovery advocate comes in. And yes, it can take months to have information removed from your credit, but it’s worth it to have someone assisting and doing it for you.  

After a breach, the last thing you want as a company is one of your patients or clients threatening to sue you. They will sue if they feel ignored or had to spend weeks on the phone trying to fix something that was turned upside down for them. Take it from me: I have seen it. There is always a handful of individuals who, if you push them too far, will go directly to their lawyer, and you will not only be paying for their broken credit, but for their legal fees and mental anguish, too.

Recovery solutions are also open-ended. They typically don’t end after one year the way a credit monitoring service will. If you are enrolled in a recovery solution, recovery assistance for the original issue will be there for as long as you need it.

If you still aren’t sold, I have one last example that hit the nail on the head for me (beyond the fact that I don’t want to replace my own engine and don’t have the time or skills for it). I learned that medical identity theft is one of the most complicated thefts there is. Not only do you need to contact the doctor’s office, the billing office, the specific lab where the service took place, and the insurance carrier, but you also need to contact the collection agency that will be collecting on the late payment. You also need to contact medical records, since the fraudulent activity is now on your record. The list goes on and on. I didn’t realize how complicated this could be, and just thinking about it wears me out. Including recovery services is a logical step for the company and the individuals that have been breached. Take my advice on this one: following a company breach, the last thing you want is to be sued and to suffer more reputational harm.

      SEE ALSO: HIPAA data breach prevention tips for health care IT leaders

      About the Author

      Heather Noonan

Heather is the Senior Project Manager for the ID Experts Data Breach Response Team. She provides subject matter expertise on state and federal regulatory requirements and best practices, and is the primary point of contact for client communication and data breach engagements. Heather has been with ID Experts since 2008 and works in all areas of ID Experts, including informational webinars and blog discussions. Heather has a Bachelor of Science, specializing in Business Communication, and has over 15 years of experience in client customer service, with 10 years specifically in project management.

      Analytics May Reduce PHI Exposure Risk in a Healthcare Data Breach

      by Megan Bell

      This post by Kivu Consulting's Megan Bell is part of our ongoing series of contributed content.

      Ponemon’s Third Annual Benchmark Study on Patient Privacy and Data Security reported that most healthcare organizations have experienced a breach–94% of healthcare organizations in the study have had a data breach in the past two years, and 45% have experienced more than 5 data breaches.  In many cases, digital forensics is used to identify the reasons for a breach.  Lesser known is the importance of forensics in determining the extent to which PII / PHI is exposed and the number of affected individuals. 

      DOWNLOAD: Ponemon’s Third Annual Benchmark Study on Patient Privacy and Data Security

Quantifying exposed PII / PHI takes place once a breach is established.  This step is often limited in scope due to two common misconceptions:

1. Data intimidation.  A 10-terabyte data set may appear daunting simply because of its size.  However, the total size of a data set does not necessarily correlate with the number of records that are relevant to an analysis of exposed PII / PHI.
2. Time constraints.  The time allotted to assessing exposed PII / PHI is often a small fraction of a forensics investigation.  Cutting the analysis short without evidence to justify the narrower scope may increase risks, such as scrutiny from regulators.

      Best Practice Approach to Quantifying PHI:

      Using a consistent approach to evaluate and quantify exposed PII / PHI and identify individuals will reduce costs and risks associated with a healthcare data breach.

1.  Build a profile of the source data.  Analysis of exposed PII / PHI begins with knowledge of the source data, such as a single database or several thousand emails.  The more that is known about the source, the more efficiently exposed PII / PHI can be identified.

• Characterize user habits.  In many cases, user-based patterns are present.  This includes the entry of notes into a database or the storage of files on a computer.  Understanding user habits helps locate pockets of exposed PII / PHI to review and may eliminate other sources.
      • Look for similar populations of data.  Consider the case of a laptop that has Excel files, emails and x-ray images with embedded patient data.  Creating three separate populations for review improves the speed and accuracy of targeting and extracting PII / PHI.

      2.  Develop metrics and reporting templates before initiating analysis.

• Identify notification-specific PII / PHI data elements.  Review of potentially exposed PII / PHI begins with understanding notification requirements for a specific investigation.  Criteria for notification are often provided by counsel and, in the U.S., are derived from a broad set of federal and state laws.  International laws can also affect the scope of PII / PHI information (e.g., EU data protection laws).
      • Establish pre-approved reporting templates.  Reporting templates are used to collect and organize exposed PII / PHI.  Fields may include names, address, email, phone numbers, birth dates and diagnosis codes.
      • Use caution in reporting certain data such as credit card numbers.  If data such as credit card numbers or financial data are identified, then counsel should be consulted for appropriate data-handling and reporting.
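As a rough illustration of the reporting-template point above, here is a minimal Python sketch of a pre-approved template using the standard library's csv module. The field names are hypothetical examples for illustration only; in practice the fields are dictated by counsel and the notification laws in scope for the investigation.

```python
import csv

# Hypothetical notification-reporting fields -- illustrative only.
# Actual fields come from counsel and the applicable breach laws.
FIELDS = ["name", "address", "email", "phone", "birth_date", "diagnosis_code"]

def write_report(rows, path):
    """Write extracted PII/PHI records into the pre-approved CSV template.

    Extra keys in a row are silently dropped (extrasaction="ignore"),
    and missing keys produce empty cells, so every report has the same shape.
    """
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
```

Fixing the template before analysis begins, as the bullet recommends, means every reviewer collects the same fields in the same order, which keeps the final affected-individuals list consistent.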

3.  Apply a combination of tools for effective analysis.  Using appropriate digital forensics and search-oriented tools expedites identification of potentially relevant PII / PHI.

      • PII / PHI such as medical record numbers (MRNs) should be located with tools that search for data patterns.
      • Email and text files should be evaluated using a search tool and responsive search terms (e.g., “Last Name”).
      • PII/ PHI is frequently stored in compressed files (e.g., zip files).  Any potentially relevant data set should be analyzed for such compressed files and a determination made whether the scanning tools being used are correctly searching such compressed files.
      • PHI is often contained in digital files that are not normally searchable by automated review tools (e.g., patient information found within image files such as x-rays or CAT scans).  The presence of such files should be investigated and other non-automated reviews such as manual sampling should be carried out.
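To illustrate the pattern-search and compressed-file points above, here is a minimal Python sketch using only the standard library. The regular expressions are hypothetical examples (real MRN formats vary by institution, and real investigations rely on tuned patterns and dedicated forensics tools); note it also makes the naive assumption that files decode as text, which the x-ray/image bullet above shows is not always true.

```python
import re
import zipfile
from pathlib import Path

# Hypothetical patterns for illustration only -- an actual investigation
# tunes these to the identifier formats in the source data.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def scan_text(text):
    """Return {pattern_name: match_count} for one document's text."""
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

def scan_path(path):
    """Scan a file for PII-like patterns, descending into zip archives.

    Returns a list of (source, counts) pairs for sources with any hits,
    so compressed members are reported individually, not skipped.
    """
    hits = []
    path = Path(path)
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            for member in zf.namelist():
                text = zf.read(member).decode("utf-8", errors="ignore")
                counts = scan_text(text)
                if any(counts.values()):
                    hits.append((f"{path}:{member}", counts))
    else:
        counts = scan_text(path.read_text(errors="ignore"))
        if any(counts.values()):
            hits.append((str(path), counts))
    return hits
```

The point of the sketch is the structure, not the patterns: pattern matching for identifiers like MRNs, explicit handling of zip archives so compressed content is actually searched, and per-source hit counts that can feed the reporting template and audit trail.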

      4.  Create an audit trail.  Audit trails assist in documenting all phases of analysis—from preliminary profiling to identification of exposed PII / PHI for affected individuals.  They are also crucial in proving to regulators, plaintiffs’ attorneys, and a skeptical public that a proper examination was indeed carried out.  

      SEE ALSO: Digital Forensic: The First Step in Data Breach Response

      About the Author

      Megan Bell

      Megan Bell directs data analysis projects and manages business development initiatives for Kivu Consulting, a strategic ID Experts partner. She has 15 years’ experience designing and implementing reporting and analysis solutions for software, insurance, and consumer product companies. Kivu Consulting combines technical, legal and business experience to offer investigative, discovery and analysis services to clients worldwide. Kivu’s professionals work with organizations to effectively investigate, mitigate and prevent data breaches.
