Device Security Checklist

I am not a security expert. I am not a cryptographer. I do care, perhaps slightly more than average, about how my device data can be used maliciously. Perhaps it is because I am a developer and can think of all the slimy ways I could misuse someone else’s data were I to put down the white hat and put on a black one.

We invited Jen Costillo on Embedded.fm as she was preparing her talk on ethics and wearables at the Silicon Valley Embedded Systems Conference (ESC). While we strayed considerably from her talk, we did dive into privacy and security specifics. It made me want to create a checklist or scorecard for device developers, similar to what the EFF did for instant messaging clients.

Many people I talk to seem unclear on why privacy and security are important. In the show, we used the example of a political protest where a corrupt government captures all BLE addresses, even yours as you walk by to get coffee. By sniffing from multiple places (or at multiple times), it is possible to get a high-probability match between a BLE address and a person. That issue is about privacy: can someone find you from your device’s information?
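
To make that correlation concrete, here is a minimal sketch in C (the addresses are made up) of what a sniffer operator could do: intersect the addresses captured at two locations, and any device that does not randomize its address falls out as a repeat visitor.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical capture logs: BLE addresses seen at two sniffing points. */
static const char *seen_at_cafe[] = {
    "AA:BB:CC:11:22:33", "DE:AD:BE:EF:00:01", "12:34:56:78:9A:BC"
};
static const char *seen_at_protest[] = {
    "77:88:99:AA:BB:CC", "DE:AD:BE:EF:00:01"
};

int main(void)
{
    /* A device broadcasting a fixed address shows up in both logs. */
    for (size_t i = 0; i < sizeof(seen_at_cafe) / sizeof(seen_at_cafe[0]); i++)
        for (size_t j = 0; j < sizeof(seen_at_protest) / sizeof(seen_at_protest[0]); j++)
            if (strcmp(seen_at_cafe[i], seen_at_protest[j]) == 0)
                printf("%s was at both locations\n", seen_at_cafe[i]);
    return 0;
}
```

Randomized (resolvable private) addresses are the usual mitigation; with them, the two logs no longer intersect.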

Once they know who you are, can they find out where you live? Well, if your wearable squawks out how many steps you’ve taken, with poor or nonexistent security, they might find out you’ve only taken 100 steps to get your coffee. That limits the search radius considerably. Security is about data being opaque to other people.

Mike Ryan destroyed the illusion of Bluetooth Low Energy security in his USENIX 2013 talk. One of his best points was that if you are not a cryptographer, you should use an existing cryptographic algorithm, not invent your own. That would be very high on my checklist, as it is one of the more difficult things to do on an embedded system: resources can be tight, and proper crypto is likely expensive in terms of both power and money.
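
As a sketch of what “use an existing algorithm” looks like, here is authenticated encryption of a payload with AES-128-GCM via Mbed TLS, a widely deployed embedded crypto library. The key and nonce handling below is illustrative only: in a real design the key comes from secure storage, and a nonce must never repeat under the same key.

```c
#include <stddef.h>
#include "mbedtls/gcm.h"

/* Encrypt `len` bytes of plaintext with AES-128-GCM, producing ciphertext
 * plus a 16-byte authentication tag. Returns 0 on success. */
int encrypt_payload(const unsigned char key[16],
                    const unsigned char nonce[12],
                    const unsigned char *plaintext, size_t len,
                    unsigned char *ciphertext, unsigned char tag[16])
{
    mbedtls_gcm_context ctx;
    mbedtls_gcm_init(&ctx);

    int ret = mbedtls_gcm_setkey(&ctx, MBEDTLS_CIPHER_ID_AES, key, 128);
    if (ret == 0) {
        ret = mbedtls_gcm_crypt_and_tag(&ctx, MBEDTLS_GCM_ENCRYPT, len,
                                        nonce, 12,
                                        NULL, 0,   /* no additional data */
                                        plaintext, ciphertext,
                                        16, tag);
    }
    mbedtls_gcm_free(&ctx);
    return ret;
}
```

A handful of library calls replaces any temptation to hand-roll a cipher, and GCM provides integrity checking (the tag) along with confidentiality.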

In Jen’s talk, she discussed health wearables, including FDA and HIPAA issues. That got me thinking about how someone (a committee of someones) has already thought a lot about privacy and security. So I don’t really have to. I just need to understand HIPAA well enough to see how it applies to my devices. Note: I am not a lawyer or a HIPAA expert; expect some fumbling here, and be sure to make your own analysis.

“HIPAA was intended to make the health care system in the United States more efficient by standardizing health care transactions.” – Wikipedia article on HIPAA. (Yes, it is all this boring. More so if you go to the source material.)

The section of interest in HIPAA is Title II, which has two parts. First, the privacy rule describes who can see your information. Second, the security rule describes the safeguards needed to maintain the confidentiality and integrity of that information. This sounds familiar: there are whole books about electronic protected health information (ePHI), but I’m more interested in the general concepts of privacy and security, trying to figure out whether my device has satisfied those needs in general.

Not everyone can see why BLE MAC address tracking has downsides; in fact, having my coffee vendor know my BLE MAC and favorite drink could reduce my wait time. My goal is not to design for my personal comfort level but to recognize there is a spectrum of needs (and costs associated with achieving them). Not every privacy point is relevant to all users. Ideally, our devices would cater to users’ different tolerances, allowing them to opt in or set their thresholds accordingly.

To that end, I’ve wanted to find a checklist to identify what issues should be of concern as I design and develop products. I haven’t found one that works, so I’ll take a stab at making one, assuming an internet of things (probably wearable) device, a phone application, and a cloud server of some sort.

Scoring is a little tricky. On a scale of 1 (this bullet point is not addressed in this device) to 10 (this bullet point is bulletproof in our product), how would you rank the device you are working on? If you don’t know, who does? If you can’t find someone who does, the score for that answer is an automatic 1: transparency is part of security. There is a temptation to answer “not applicable” to many questions because that is simpler than thinking about them. However, are you really, really sure that the question can’t apply to your device?
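
The “unknown scores as a 1” rule is easy to make structural in a scorecard. A minimal sketch in C (the item text and field names are my own invention):

```c
#include <stdio.h>

/* One checklist entry: 1..10 as described above, 0 meaning nobody knows. */
struct checklist_item {
    const char *question;
    int score;  /* 0 = unknown */
};

/* An unknown answer scores an automatic 1: transparency is part of security. */
static int effective_score(const struct checklist_item *item)
{
    return (item->score < 1 || item->score > 10) ? 1 : item->score;
}

int main(void)
{
    struct checklist_item items[] = {
        { "Firmware updates are signed and verified", 7 },
        { "Users can opt out of cloud storage",       0 },  /* nobody knew */
    };

    for (size_t i = 0; i < sizeof(items) / sizeof(items[0]); i++)
        printf("%2d  %s\n", effective_score(&items[i]), items[i].question);
    return 0;
}
```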

Privacy

  • Does all of the data go to the server, or can a user opt to have all personal and usage data stay in their own hands?
  • Is the data format / API documented and publicly available, allowing a user to use their data without the app / cloud service?
  • Do you have a policy for deleting the data you store? Can users request early deletion? Can users request their data for local storage (on their home computer)?
  • Does the user know who has access to their personal and device data (third-party companies, data research companies, marketing, IT administrators, technical support, developers, users, the user’s friends/family, etc.)?
  • Are there separate roles with differing privileges for different consumers of the data?
  • Do developers (or others) have eye-in-the-sky access for debugging? If so, can developers (or others) also see the user’s personal information (e.g., email address) as well as the device information? Is the device data stored under an anonymized user ID, with the personal information stored separately? (A structural sketch of this separation follows this list.)
  • If a wearable, does the device send data that makes it (and the user) identifiable to sniffers (e.g., a MAC address)?
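
On the anonymized-ID point above, one common pattern is to key device data by an opaque identifier and keep the identifier-to-person mapping in a separate, more tightly controlled store. A structural sketch (the field names are hypothetical):

```c
#include <stdint.h>

/* Telemetry is keyed only by a random pseudonym; debugging tools and
 * analytics see these records and nothing else. */
struct device_record {
    uint8_t  pseudonym[16];   /* random ID, contains no personal data */
    uint32_t step_count;
    uint32_t timestamp;
};

/* The pseudonym-to-person mapping lives in a separate store with much
 * stricter access control (and can be deleted on user request without
 * touching the telemetry). */
struct identity_record {
    uint8_t pseudonym[16];
    char    email[64];
    char    name[64];
};
```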

Security

  • Does your device allow/require a passcode to get information from the device? From the app? From the server? How often are these passcodes required? Can the user set the timeouts?
  • Are authorized users logged off automatically after a timeout? How long is that timeout?
  • Is the data backed up by you? Is that protected to the same level as the live data? Does the backup ever expire or is it retained indefinitely?
  • Can you check to see if the device, app, and/or server have been hacked? What tools do you have in place to detect intrusion? Can these tools be used as part of a security audit?
  • Do you have a way of authenticating the device as yours? Can you verify the user using the device? Do you add device authentication and encryption keys to a database before the device leaves the factory to prevent replication?
  • Are your firmware updates secure, signed, and verified? (Atmel has a nice white paper discussing all the ways firmware updates can go wrong; a verification sketch follows this list.) Can your devices be replicated?
  • Does your company have procedures for securing its servers from intrusion?  Does it have backups in case of destruction? Are the backups audited regularly?
  • Do you encrypt data as it travels across the network (from device to app, from app to server, from server to backup, from server to user interface)? Are there other steps you take to reduce the risk that the user data can be intercepted or modified while on a network? Do you have different data paths for different pieces of information? (For example, in TCP/IP you can send your authentication via one port and user data via another, so port sniffers have to work a little harder to capture the streams.)
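
For the signed-update question above, here is a minimal verify-before-flash sketch using libsodium’s Ed25519 detached signatures. It assumes the public key was baked into the bootloader at manufacture; everything except the libsodium call is hypothetical scaffolding.

```c
#include <sodium.h>

/* Public key burned into the bootloader at the factory; the update
 * package carries the firmware image plus a detached signature. */
extern const unsigned char update_public_key[crypto_sign_PUBLICKEYBYTES];

/* Returns 0 only if the image was signed by the holder of the matching
 * private key; any other result means the update must be refused.
 * (sodium_init() should have been called once at startup.) */
int firmware_image_is_valid(const unsigned char *image,
                            unsigned long long image_len,
                            const unsigned char sig[crypto_sign_BYTES])
{
    return crypto_sign_verify_detached(sig, image, image_len,
                                       update_public_key);
}
```

The corresponding private key never leaves the signing infrastructure, which is what makes rogue updates and device cloning harder.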

Transparency

  • Do you have a stated policy to limit access to data? Does different information (personal vs. device data) have different security handling?
  • Are your security and privacy policies explained clearly? Are your terms of service easily summarized and presented to the user? (See Terms of Service; Didn’t Read for some ways to make it easier on your users.)
  • Does the user know what type of data you collect and where it gets stored (e.g., server vs. local vs. app)? Do you allow opt-in/opt-out for sharing data (past, present, and future)?
  • Does the user know what steps you take to protect their data? What encryption you use?

Ethics

  • Do you collect more data than the minimum necessary to fulfill the product’s needs? Do you store it?
  • Do you perform a risk analysis for your product? Does it include the risk of violating the user’s privacy? Do you categorize your data (app, device, and server) as high, medium, or low risk based on that analysis? Do you use the evaluation from risk management to select the appropriate authentication mechanisms?
  • Do you have a team or person whose official role is to advocate for ethics, privacy and security for the customer?
  • Do you have security audits? Privacy audits? Code audits with security and privacy goals? Do you use a risk analysis to determine the frequency and scope of audits?
  • Do you encourage your own developers to do penetration testing? Do you help/encourage/support them as they consider how to hack your products?

After recording the Embedded.fm show, I got a preview of Jen’s ESC talk. One thing I liked is that Jen recommends considering the questions from a journalistic perspective, using who, what, when, where, why, and how:

  • What data is collected?
  • Who has access to the data?
  • When does the data expire?
  • Where is the data stored?
  • Why is the data collected?
  • How is the data encrypted?

Obviously, my checklist has a few more questions than that, but as long as we start talking about transparency, privacy, security, and ethics in embedded systems, I don’t care whose questions you use.

Thinking about device security requires looking at the forest and the trees.


(This was originally published at element14 on July 14, 2015. It is reposted with minor modifications.)