Thursday, January 22, 2015

Some Libre raw data experiments

There has been a lot of interest in the diabetes on-line technical communities in addressing some perceived Libre shortcomings. The first improvement the community is after is "Wouldn't it be great not to have to carry an additional device and be able to scan the data with a phone?" The second one is "Can't we turn this thing into a real CGM?" While replacing the Libre reader with a phone doesn't really interest me (for example, Max would not be allowed into school exams with a phone but can keep his Dexcom), the second improvement does, as it could allow us to drop the Dexcom, which we mostly use for night monitoring.

Obtaining the data from the NFC tag is easy. Understanding its organization was as well. Understanding it completely to reproduce exactly the results provided by the Abbott meter is going, I think, to be hard if one limits oneself to a black box analysis approach.

Even in the simple case of the historical data, as it is reported by the Libre and as it is stored on the chip, which is easy to study at leisure, there are many catches...

All roads lead to Rome but... all cities are Rome to somebody.


It is easy to find a fitting function that links the raw data to the reported data for some period of time. Why is it "easy"? Unfortunately, not because we are "genius hackers". The real reason is less glorious: there is an infinity of possible fitting functions. We'd have to be extremely dumb not to find one! The problem with those almost random functions is that they are good until they break.

Another issue is data quality: Abbott will reject values when the temperature is out of range; there are a few flags that are raised on some occasions, a quality assessment, etc... But let's get back to the "simple" historical data.

We know a few things and have noticed some peculiarities:
  • that the data is displayed with a certain delay. It is not simply the immediate average of the last 15 minutes. The Abbott algorithm takes its time before producing the value. 
  • that the same historical raw data isn't always displayed as the same official value.
Based on this, here's the result of a few interpretative tests.

The wide red line is Abbott's view of the data. The dotted blue line is the result of a simple fitting function that has been made public. Decent, but not perfect. The dotted purple line is the result of a similar fitting function I have been using for a while. While it looks better on this data set, there is no guarantee it will always look better. Technically, it is just another fitting function in the sea of infinite fitting functions. (For the technically minded reader, the "purple" function takes a more functional approach: it basically subtracts a bias from the raw value, works with powers of 2 compatible with the likely device architecture and then scales the result - it is, in a way, inspired by the TI documentation.) If such a function were adequate (I believe it is not), that way of generating it would be more natural for the system.
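To make the shape of that idea concrete, here is a minimal Python sketch. The constants BIAS, SHIFT and SCALE and the function name are placeholders I made up for illustration; only the structure (subtract a bias, divide by a power of two, scale) reflects the description above - these are not the values I use and certainly not Abbott's algorithm.

# Sketch only: BIAS, SHIFT and SCALE are made-up placeholder constants,
# not the values I actually use, and not Abbott's algorithm.
BIAS = 1000    # hypothetical offset subtracted from the raw value
SHIFT = 2      # hypothetical power-of-two division (value >> 2)
SCALE = 1.0    # hypothetical final scaling towards mg/dL

def raw_to_glucose(raw):
    # subtract a bias, divide by a power of two, scale the result
    return max(0, (raw - BIAS) >> SHIFT) * SCALE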

The wide green line is essentially the same function, plus a small algorithmic correction (also just a guess) that I implemented after noticing that the Abbott reader was late in generating its data, that the fitting function always had problems with sharp increases and decreases and, finally, that two equivalent raw values did not necessarily lead to the same historical values. (For techies: detect a condition that is causing issues for your guessed fitting function and smooth your way out of it.) Just like the Abbott official data, this method would, of course, trail by one value. It does however give consistently nice results on my data sets. But its interest is limited: if you want the historical data, look at it on the meter or download it. I saw it as a stepping stone to a better interpretation of real time data.
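In sketch form, the guessed correction looks something like this. The jump threshold and the neighbour averaging are my own illustrative choices for this post, not anything recovered from the reader.

def smooth_history(values, threshold=20):
    # values: converted historical points, oldest first
    # when a point sits next to a sharp jump, replace it by the average
    # of its neighbours - a guessed fix, nothing more
    out = list(values)
    for i in range(1, len(values) - 1):
        jump_before = abs(values[i] - values[i - 1])
        jump_after = abs(values[i + 1] - values[i])
        if jump_before > threshold or jump_after > threshold:
            out[i] = (values[i - 1] + values[i + 1]) / 2.0
    return out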

Pseudo-CGM

What happens when that guessed function plus another algorithm is applied to the real time spot value? Testing that part is not easy, as one needs to collect both a raw reading and an official reading every minute. The chart below shows the result of such an attempt. The correlation is quite good and might actually not be too far from the real interpretation of the data... most of the time and in that range. But it also breaks, possibly because something triggers an Abbott algorithm under some circumstances. On a very small scale, we find the same kind of previously observed behavior - see earlier posts - some kind of "trigger happiness" of the official Abbott data compared to the state of the raw data. A small trigger happy increase and a small trigger happy decrease towards the end of the run. And, unfortunately, what I would describe as an "incident": while the raw data shows an increase, Abbott decides to report that period as flat... (For the technically minded, the algorithmic part added on top of the conversion part takes into account what the immediate data is projected to do based on the previous trend. The historical interpretation looks at what happened before and after for smoothing. The immediate interpretation just projects what the next point should be and averages that with the observed data.)
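Here is a rough Python sketch of that immediate interpretation. The one-step slope and the fifty-fifty blend are illustrative assumptions for this post, not the actual weights I use.

def estimate_spot(previous, observed):
    # previous: the last few converted values, oldest first (at least two)
    # observed: the current converted spot value
    trend = previous[-1] - previous[-2]      # simple recent slope, a guess
    projected = previous[-1] + trend         # naive linear projection
    return (projected + observed) / 2.0      # blend projection and observation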


Please note that I have absolutely no idea how the interpretation behaves in the very high ranges. We don't get too many of those, at least when I am around, and I don't see myself sending the kid high on purpose for an experiment. And of course, the limited sensor supply doesn't help much...

OK, where's the source code?

Sorry, I am not releasing anything at this point. Let me reassure you: should I hit the magic formula, I do not plan to exploit it commercially in any way. If that happened, since I love my quiet life, I would probably lose it somewhere to be found ;-). My reasons are as follows:


  • I am sure my formulas will break at some point. Possibly catastrophically. It would be irresponsible to release them in the wild.
  • they are easy to replicate: pick your bias, your power of 2, your scaling factor, a smoothing method for the historical data and an extrapolation method for the immediate data, and off you go. If you try, maybe you'll hit the gold that has eluded me so far. And I'll be happy!
  • there's the thorny issue of calibration. My function guesses are possibly the approximation of a calibration function (I know there is calibration data, important enough that it requires special care): if it approximates a sigmoid calibration curve in its near linear part, big problems are guaranteed on both ends.
  • there's the thorny issue of errors: we know that Abbott refuses to give a value in some situations (some unknown, some known, such as a temperature outside a well defined range - see the TI docs for related examples). Even a perfect fitting function will fail if it does not recognize all flagged conditions and ignore the right data.

I'll probably try to run a couple of experiments when our last sensor has expired. But don't hold your breath for a breakthrough.


