Healthcare Data Security: How Bad is it?

June 25, 2011

It is really bad, according to a recent survey by the Ponemon Institute (available here with registration). The white paper, entitled Health Data at Risk in Development: A Call for Data Masking, presents the results of a survey of 492 health care IT professionals on their companies’ practices regarding use of live personal health care data in application testing.

It makes for scary reading. Here are the lowlights:

  • 57 percent of respondents say “their organizations use patient billing and insurance information in development and test of IT applications.”
  • 57 percent responded that their company “does not protect real data used in software development and testing.”
  • Many respondents “admit real data used in the testing and development environment has been lost or stolen”: “Thirty-eight percent say they have had a breach involving real data and 12 percent are uncertain.”

The white paper lists a litany of health care data transgressions like those above, then reviews the stiff legal penalties associated with health care data security breaches, which can be as high as $250,000 per violation.

The paper ends with these recommendations:

  • Assign a Chief Information Security Officer (CISO) “for the safeguarding of real data used in application testing and development.”
  • “Create policies and procedures for the protection of real data used in application testing and development.”
  • “Educate employees about the importance of protecting sensitive data in application testing and development.”
  • “Use encryption, data leak prevention, access management, and other information security technologies.”
  • “Use de-identified, masked, or dummy data rather than live data in the test and development process.”

Certainly all of these measures can be valuable, and to this list I would add a sixth recommendation from a recent article: “background checks and non-disclosure agreements for developers and testers as with health care staff and claims administrators.”

I believe that most organizations by now consistently apply education, encryption/physical security, and background checks. The current strategy of choice seems to be having trustworthy individuals work in a secure, encrypted environment.

When organizations move beyond this prevailing strategy, they must do so in a way that promotes rather than inhibits IT productivity.  According to Data Architect Cameron Snapp, “not only do businesses have to establish these policies (and get the developers to follow them), but they also should provide effective infrastructure, data accessibility, processes, and tools that enable application staff to follow them. For example, if an organization masks production personal health data for use in test, then it must accurately mimic production.  Otherwise test cases might fail even though the application works as designed!” Cameron advises that “security is two-leveled: organizations must establish policies and regulate adherence, but also enable productivity with processes, tools, and actionable data that doesn’t inhibit progress.”
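To make the point concrete, here is a minimal sketch of the kind of masking Cameron describes: deterministic and format-preserving, so a masked record keeps production-like shape (the same field lengths, separators, and digit patterns) and test cases that validate formats still pass. The field names, salt, and masking scheme are illustrative assumptions, not taken from the white paper.

```python
import hashlib
import re

def mask_digits(value: str, salt: str = "test-env-salt") -> str:
    """Replace each digit with a pseudorandom digit derived from a salted
    hash of the whole value, keeping separators (dashes, spaces) intact so
    formats like SSNs and phone numbers survive masking.

    Deterministic: the same input always masks to the same output, so
    joins across tables on the masked value still line up in test data.
    """
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    digit_stream = (int(c, 16) % 10 for c in digest)
    return re.sub(r"\d", lambda m: str(next(digit_stream)), value)

# Hypothetical patient record for illustration only
record = {"name": "Jane Doe", "ssn": "123-45-6789", "phone": "555-867-5309"}

masked = {
    # Replace the name with an opaque, stable pseudonym
    "name": "PATIENT-" + hashlib.sha256(record["name"].encode()).hexdigest()[:8],
    "ssn": mask_digits(record["ssn"]),
    "phone": mask_digits(record["phone"]),
}
```

Because the masked SSN still matches the `NNN-NN-NNNN` pattern, an application that validates field formats behaves the same in test as in production, which is exactly the "mimic production" property the quote calls for.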

Hopefully, recent highly publicized breaches in the financial world will elevate information security to the C level of the organization and make effective masking tools mandatory in application development and test.