Sunday, December 21, 2014

Some thoughts on trust, and the Abbott Libre "spyware"

Trust 

Just like health care, the Internet and its latest fad, cloud computing, are based on trust.


In health care, the patient trusts doctors to have his best interest in mind. It's fuzzy and human, philosophies may differ and, at times, it fails. It is usually only a "best effort". Doctors aren't supposed to inject methotrexate while telling you it is just penicillin. Big pharma isn't supposed to sell you penicillin labeled as ciclosporin, despite the fact that it would be a very profitable business. There are a ton of safety mechanisms to prevent that from happening. Sometimes, doctors misbehave (as they did in the infamous Tuskegee syphilis experiment). Sometimes "big pharma" hides significant side effects (see the Vioxx case, for example). In the vast majority of cases, however, you can trust your doctor to focus on your health and pharmaceutical companies to have the mandatory quality controls in place.

In principle, our relationship with technology and software is also based on trust. You don't expect your TV to spy on your potential porn-watching habits. But then, it happens (LG TVs). You don't expect to spend your IT security money on software that is actually intended to make you less secure. Yet, it happens (RSA Security's BSAFE). I could literally fill dozens of pages with similar horror stories. And that is not counting the obviously unintentional bugs that have huge consequences, such as the "goto fail" SSL bug. Effective regulation is extremely hard, in part because the field moves at such a quick pace that no one has time to verify, in part because most programs in use today are formally unprovable and, of course, also because governments will actively undermine any truly secure implementation of security that might come along.

A certain amount of trust is, however, necessary and expected. When Microsoft sells you a word processor, they don't covertly export your invoices to your competitor. If they decide to store them in or upload them to the cloud, they usually tell you so. Of course, it is not always black or white (see this for example), and the average user's informed consent isn't always that informed, because most of today's IT technology is indistinguishable from magic for many people, but at least this is the idea. A respectable program does, or tries to do, what it claims to do and that's it. Programs that don't behave in such a way fall into a few categories, usually called spyware or malware.

Breach of trust, can of worms


In this case, Abbott, knowingly or unknowingly, opened a whole can of worms.

1 - they shipped a program that executes significant undocumented actions: it could even be argued that the program exports more data than it provides to its user, and that Abbott keeps more data about the user than the user himself is allowed to keep. A legitimate question would be: what is the main purpose of that program? To provide data to diabetic patients, or to collect a huge amount of data for Abbott?

2 - they apparently went to some length to hide what they were doing from the normal user: a hidden directory, remote connections encrypted while local ones weren't, a strange "meter must always be connected" requirement, absolutely no visual feedback of what was going on, etc. (a curious user can check at least part of this himself; see the sketch after this list)

3 - they explicitly stated they weren't doing what they were doing. That part could actually be a first - I don't remember RSA Security stating something like "our random generator is not intentionally weakened by the NSA" in their license agreement. It sounds like pure "ass covering" in case something goes wrong, but it is bizarre. "I do not have a gun and do not plan to rob the bank at the corner of the street, officer." Then a robbery happens. It could be accidental, or it could be intentional.

I can imagine the following internal discussion. "We should insert that disclaimer," says the legal department. "Well, we're doing just that..." says the developer. "That's a very important part of our plan - will users realize it?" asks the manager. "Ugh... ugh... it's encrypted and hidden, but..." says the programmer. "Users have other worries, ship it," says the manager.

4 - they appear to be intentionally violating EU data confidentiality and retention laws (unless they have an explicit agreement to do what they do), not only within the EU itself, but by exporting sensitive health data out of it as well. This is so blatant that I am really puzzled by it. They must have applied for some kind of approval, at least?
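About the hiding mentioned in point 2: it is not especially deep. Here is a minimal sketch, assuming a hypothetical starting directory (the real location would have to be hunted down), of how a curious user could list hidden files and folders on Windows with a few lines of Python:

```python
# A minimal sketch (Python 3.5+ on Windows); the starting directory is a
# hypothetical guess, not the actual location used by Abbott's software.
import os
import stat

ROOT = r"C:\ProgramData"  # hypothetical starting point, adjust as needed

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in dirnames + filenames:
        path = os.path.join(dirpath, name)
        try:
            attrs = os.stat(path).st_file_attributes  # Windows attribute bits
        except OSError:
            continue  # skip entries we can't read
        if attrs & stat.FILE_ATTRIBUTE_HIDDEN:
            print(path)
```

Nothing exotic: walk the tree, check the "hidden" attribute bit, print the matches. Obfuscation of this kind only works against users who never look.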

What could go wrong?

If all of the above behaviors become the norm, I can guarantee rough sailing ahead. If illegal, obfuscated and undocumented behaviors are OK, the door is open for virtually anything.

What about covertly collecting information about competitor software? While we are at it, why shouldn't they also export my Dexcom data, for example? If it is hidden well enough, who will know? In the same vein, why wouldn't the next version of my Dexcom software install a proxy on my computer, grab the data Abbott is exporting, and also benefit from the data trove?

Note: the good side of the above point is that if you plan to run a crowd-sourced research project, are running your own trials or are developing a competitor, you could simply ask the user "can we change your Libre upload URL to our server?" (a sketch of that consent-based redirect follows below). Think about it for a minute... What would make uploading my Dexcom data as well more unethical than what is done already?

Then, of course, Dexcom could discover the export, be annoyed by it and fight back. I could, right now, devise a small piece of software that covertly corrupts the data I send to Abbott in a believable way. If a grass-roots community is p***** at Abbott, it could probably derail its big data analysis plans by corrupting data en masse. Where does this stop?
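To make the note above concrete: the consent-based redirect is trivial. A minimal sketch, in which the settings file, its "upload_url" key and the community server are all hypothetical, not Abbott's actual layout:

```python
# A minimal sketch; the settings file, the "upload_url" key and the
# community server URL are all hypothetical, purely for illustration.
import json

SETTINGS_PATH = "libre_settings.json"                  # hypothetical settings file
COMMUNITY_URL = "https://research.example.org/upload"  # hypothetical community server

with open(SETTINGS_PATH) as f:
    settings = json.load(f)

# The whole point: ask first, and change nothing without an explicit "yes".
answer = input(f"Redirect uploads from {settings['upload_url']} "
               f"to {COMMUNITY_URL}? [y/N] ")
if answer.strip().lower() == "y":
    settings["upload_url"] = COMMUNITY_URL
    with open(SETTINGS_PATH, "w") as f:
        json.dump(settings, f, indent=2)
    print("Upload URL changed, with your consent.")
else:
    print("Nothing changed.")
```

The only difference between this and spyware is the question.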

Then, what about covertly interfering with the competition's hardware or software? It could be done very, very discreetly, as the recent advanced persistent threats have shown. Let me put my black hat on for a minute and assume I am working for a hypothetical Abbroche company intent on killing its awful Dextronic competitor.

First, I'd implement a modular architecture that can be remotely updated. This could be quite open. Then, at some point, I could covertly, or even openly, collect information about which competitor product is present on the user's machine. Doing it openly is justifiable (for compatibility purposes, for example) but could bite back later in terms of traceability. A second step, having identified the exact version/hardware of what I am dealing with, and assuming I need significant resources to carry out the attack, could be to send small chunks of encrypted data that would progressively build a local payload. At some point, when I am ready to execute my attack, I would make an ephemeral decryption key available, decrypt the payload in RAM, and tamper with, for example, the competitor's firmware or the data it works on. Dextronic devices could begin failing, sensor data could be misrepresented, or worse. If the above attack comes over a few months, through a chain of compromised servers in China or elsewhere, "attribution", as the FBI likes to say, will be extremely hard.

In fact, it is almost impossible to defend against such a nightmarish scenario, regardless of the constraints the regulators put on the systems. Even if the regulators go for a complete check and make sure everything works within the framework of "best practices", it remains likely to fail. Even huge organizations and companies whose core business is IT security or defense keep failing... To be clear, Abbott has, at this point, not made such an attack more likely from a technical point of view.

What Abbott has done is breach the only tenuous lifeline we could hang on to: trust.

Trust that a piece of software or hardware produced by a respectable company tries to do what it claims to do and sticks to that. Trust that when potentially controversial or intrusive actions are taken, an effort is made to inform the user and to obtain his consent.

"do you agree to upload your data securely to Abbott for research and product improvement purposes? Y/N"

How hard is it to implement that simple dialog or tick box?
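Not hard at all. A minimal sketch using Python's stock Tkinter yes/no dialog - the prompt wording comes from above, everything else is illustrative:

```python
# A minimal sketch of the consent dialog asked for above, using Tkinter's
# built-in yes/no message box; the window title is purely illustrative.
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.withdraw()  # no main window needed, just the dialog

consent = messagebox.askyesno(
    "FreeStyle Libre software",
    "Do you agree to upload your data securely to Abbott\n"
    "for research and product improvement purposes?",
)
print("Uploads enabled." if consent else "Uploads disabled.")
```

A dozen lines, give or take. The hard part was never technical.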


What really amazes me here is that, if Abbott had asked if I wanted to upload some of my data to their servers, I would have said yes. If Abbott had asked if I was running another CGM and wanted to upload that as well, I would have said yes. If Abbott had asked for correlated BG meter results, I would have said yes. But the fact that I would have - in all my wisdom ;-) - said yes doesn't mean that I believe all diabetics or diabetic parents should do the same.

Abbott seems to want our data: fine.

Ask for it. Don't damage the image of what seems to be a great product with spyware.
