
Data analysis - CGM + Exercise + Food

PerSpinasAdAstra (Type 2, He/Him)
The original conversation in this thread began here and has been split so each topic has its own focus. https://forum.diabetes.org.uk/board...port-lack-thereof-for-ble-cgm-profile.115492/
Edit made by @Anna DUK.

Off topic, but as Garmin devices are mentioned I have a query:

I'm currently learning Python with the intention of writing some software to help analyse CGM, exercise and meal data. Still very early days. At present the simplest approach I've come up with is to take CGM data from Tidepool (using that system to collect CGM and glucometer data and using their CSV/JSON export function so that I don't have to parse data from many different devices separately), exercise data from Fitbit/Garmin, and meal data from a food tracking app with a function to export to CSV. On the exercise side I was thinking of taking advantage of this project which downloads from Garmin Connect and dumps the data into a SQLite database:
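
To make the Tidepool route concrete, here's a rough sketch of loading the CSV export into pandas. The column names ('time', 'value') and the 'cbg' record type are my assumptions about the export format - check your own file's headers and adjust:

# Sketch only: load a Tidepool CSV export into pandas for analysis.
# Column names ('time', 'value') and the record type 'cbg' are assumptions
# about the export format -- inspect your own file and adjust.
import pandas as pd

def load_tidepool_cgm(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    df["time"] = pd.to_datetime(df["time"], utc=True)  # parse timestamps
    # Tidepool exports mix record types (CGM, meter, insulin) in one file;
    # keep only the continuous glucose readings here
    return df[df["type"] == "cbg"].sort_values("time").set_index("time")

cgm = load_tidepool_cgm("tidepool_export.csv")
print(cgm["value"].describe())  # quick sanity check of the glucose values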

Basically the idea is to collect as much relevant data as possible and then try to generate fancy graphs and display relevant information and statistics relating to the factors that might have influenced the shape of those graphs. I want to try to observe the degree to which exercise, sleep quality and duration, stress, steps, medications etc. affect the CGM graphs associated with many near-identical meals, and to better observe changes in the nature of my diabetes over time. I believe that in the long term, developing ways to gather and analyse this kind of data might help to answer some questions about T2 progression and remission - how much exercise is enough, are some diet approaches better than others, how much weight loss after diagnosis is enough, how important is it to build and maintain muscle mass, is it actually a good idea to go off medications completely after big weight loss if HbA1c suggests medications are unnecessary - that kind of thing.

My query is: would such software be of any interest to T1s using Garmin devices while cycling or running or whatever? Is retrospective data analysis which includes a comprehensive exercise component of any use to you? I have almost zero knowledge of insulin therapy and so I have no idea if including the capability to display insulin data would be of any use to anyone. The Tidepool data export does include comprehensive insulin data, where available, but has no facility to track exercise and so I wonder, does that mean nobody cares? Should I just focus on creating the software that I wish I had for my own purposes or should I try to incorporate all the insulin data that Tidepool can give me because Garmin data plus insulin data might be useful to someone?
 
I'm doing something very similar: the same Python package to download the Garmin data, plus some MATLAB code to export and analyse data from XDrip+ SQLite backups. The data in Tidepool unfortunately does not differentiate between basal and bolus for MDI - not the end of the world. The XDrip+ data has everything in it already. Supporting multiple data sources would be the way forward.

I first wrote some very rudimentary MATLAB code a couple of years ago, before I'd used Python very much. I've since done lots more with Python (bike computer software, which I must test on the bike computer), so I was about to start the process of moving it all over to Python and trying to make it less labour-intensive to select certain types of data (e.g. DP metrics following exercise of a certain duration), either via a GUI or via a set of domain-specific functions.

Perhaps we should break these posts out into a data analysis/software thread of its own?

I've got other projects in the offing too - I'd like to get a Python-based Android app up and running to accept CGM data from XDrip+ and provide a way for people to experiment with BG prediction algorithms (via scripting so they are easy to modify and add). Write your own, run it in parallel with other ones, see which works best, etc. This also needs food and exercise data ideally, so working out how an ecosystem would work (probably via Intents in an ideal world, but via XDrip+'s localhost webserver in the meantime) is part of the fun. I don't really want to have to write a food/meal logging app, but I think I might need to write a simple version simply to see whether the whole thing can be made to work.
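
To show what I mean by scriptable prediction algorithms - a minimal sketch, with every name invented (this is not XDrip+'s actual interface): each predictor is just a registered function, so running several in parallel over the same history is trivial:

from typing import Callable, Dict, List

# A predictor takes recent CGM samples (mmol/L, 5 minutes apart) and
# returns a predicted value 30 minutes ahead. Users drop new ones in here.
Predictor = Callable[[List[float]], float]
REGISTRY: Dict[str, Predictor] = {}

def register(name: str):
    def wrap(fn: Predictor) -> Predictor:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("flat")
def flat(history: List[float]) -> float:
    return history[-1]  # naive baseline: assume no change in 30 minutes

@register("linear")
def linear(history: List[float]) -> float:
    # extrapolate the last 5-minute trend forward by 30 minutes (6 steps)
    return history[-1] + (history[-1] - history[-2]) * 6

history = [6.1, 6.4, 6.8]  # recent CGM readings in mmol/L
for name, fn in REGISTRY.items():
    print(name, round(fn(history), 1))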

Lots to do, not enough time as usual! 🙂
 
Re Tidepool and why I'm leaning that way at present - if I'm going to spend a lot of time writing something I'd like to support as many data sources, and as many levels of user technical knowledge, as possible without creating a huge ongoing software maintenance problem. I am actually using xDrip but getting that installed and working is not an easy task for many people. Tidepool is user-friendly, has its own tech support staff and can import data from the official systems - LibreView and Clarity. It also distinguishes between different finger stick meters and CGM devices in the data export which (as far as I can tell) xDrip doesn't. Tidepool distinguishes not just between different meter models but between different meter serial numbers, basically.

I'm experimenting with CGMs and finger stick meters at present - Libre 2, Dexcom One+ and (so far) ten different finger stick meters - testing each drop of blood with several meters at a time in order to compare them. I'm trying to determine which meters are the most consistent in their readings and so most suitable for calibrating CGM readings, and whether any of the meters with cheaper test strips are up to the task. Tidepool seems to be the simplest way to get all the readings into a single data export for parsing.

I'd like the software to be useful not just for analysing BG, exercise, food and so on, but also for analysing the accuracy and consistency of different CGM and finger stick models - without having to update the software whenever new makes and models of meters/CGMs appear in future. xDrip doesn't appear to be ideal for that at present, but maybe I'm wrong? Perhaps software that can import from both xDrip and Tidepool covers all the bases?
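
The maths for the meter-comparison part could start very simple: measure each meter's deviation from the per-drop consensus. A sketch with made-up values (GlucoNavii is one of the meters I'm testing; the other column names are placeholders):

import pandas as pd

# One row per blood drop, one column per meter (values in mmol/L)
readings = pd.DataFrame({
    "MeterA":     [5.8, 7.2, 9.1],
    "MeterB":     [5.6, 7.0, 9.4],
    "GlucoNavii": [5.1, 6.3, 8.0],
})
consensus = readings.mean(axis=1)       # per-drop mean across all meters
bias = readings.sub(consensus, axis=0)  # each meter vs the consensus
print(bias.mean())  # systematic offset per meter
print(bias.std())   # spread: lower = more consistent, better for calibration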

I too would love to see a smartphone app but what I want is vastly too ambitious for me. I've been setting my sights much smaller over time and I think the data analysis and graphing software with a desktop GUI (or local, web-based interface implemented with Flask + Dash + Plotly) is doable. At present I'm thinking a database for BG data (similar to the Tidepool data model, unless the existing xDrip DB schema enables devices to be distinguished from one another?) in SQLite, the Garmin DB mentioned above, and a food DB into which I can dump the export from Cronometer. The paid version of that app supports timestamps on logged food items, so the export, dumped into a DB, should do the trick. For most purposes a note describing the meal logged in xDrip or the Libre app is enough, but I'd like to be able to analyse variables such as fat, protein and fibre in a meal at some point in future. I'd also like to be able to analyse the effects of saturated fat on my cholesterol levels. If I want to be able to do that in future I have to start gathering the data now, so I've been logging many of my meals with that app over the past few months while I've been wearing CGMs, so that I can put it all together some day.
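
For the Cronometer side, something like this sketch is what I have in mind - the exact export headers ('Day', 'Time', 'Carbs (g)') are assumptions and would need checking against a real export file:

import sqlite3
import pandas as pd

food = pd.read_csv("cronometer_export.csv")
# Combine the date and time-of-day columns into a single timestamp
food["timestamp"] = pd.to_datetime(food["Day"] + " " + food["Time"])

with sqlite3.connect("food.db") as conn:
    food.to_sql("food_log", conn, if_exists="replace", index=False)
    # Example query: total carbs per day
    daily = pd.read_sql(
        "SELECT date(timestamp) AS day, SUM([Carbs (g)]) AS carbs_g "
        "FROM food_log GROUP BY day", conn)
print(daily.head())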

Breaking all this stuff out into a data analysis thread sounds like a great idea. I've actually encountered someone else on another forum who has written software in Python for cleaning up xDrip CGM readings for analysis purposes, and is planning his own analysis system in future. A T1, computer science student I think, trying to calculate (things I don't understand) - basal rate, insulin sensitivity factor and insulin to carb ratio? I think? (from memory). And perhaps make it possible to use machine learning to analyse xDrip data in future by filtering out bad readings, noise in the data, I think. Perhaps I could invite him/her here and see if it's possible to plan something that would suit all of our purposes? I'm years away from learning everything I need to learn to make the data analysis system I want, but perhaps there are others out there who are each working away on their own version of similar systems in Python. Maybe we could all do each other a favour by working on something together?
 
It also distinguishes between different finger stick meters and CGM devices in the data export which (as far as I can tell) xDrip doesn't.
I've not looked at that data in the export; I will do so and report back.

I too would love to see a smartphone app but what I want is vastly too ambitious for me. I've been setting my sights much smaller over time and I think the data analysis and graphing software with a desktop GUI (or local, web-based interface implemented with Flask + Dash + Plotly) is doable.
Likewise, I need to set my sights somewhat lower and actually be able to feel some progress. For me the on-device logging would make life easier though, and I could also experiment with prediction algorithms, which is my main goal (though historic data analysis is also of interest, e.g. sensor performance, and a necessity for some/most aspects of prediction).

At present I'm thinking a database for BG data (similar to the Tidepool data model, unless the existing xDrip DB schema enables devices to be distinguished from one another?) in SQLite, the Garmin DB mentioned above, and a food DB into which I can dump the export from Cronometer.
I was thinking along similar lines, though I currently read directly from the XDrip+ export rather than moving the table data to a source-agnostic database, which might make sense.

The paid version of that app supports timestamps on logged food items, so the export, dumped into a DB, should do the trick. For most purposes a note describing the meal logged in xDrip or the Libre app is enough, but I'd like to be able to analyse variables such as fat, protein and fibre in a meal at some point in future.
This is also what I want to do, I've been logging data for years just using a text-based description, which I had planned to just use short term and then use a description -> EAN -> nutrient lookup service to turn into a useful database.

I have written part of this, but I now have so much data that making the description -> EAN part work for all the data is quite a mammoth undertaking on its own. I do have it partly working, but it's not ideal and I still use the same text format for want of a better option.

This is where it's interesting to hear about that app; I'll have a look at it. I think a DIY job might be the required end goal though, as getting a workflow between meal planning and BG trajectory to determine when and how much insulin to take (and meal adjustment suggestions) would probably work better in a single app.

Breaking all this stuff out into a data analysis thread sounds like a great idea. [...] Maybe we could all do each other a favour by working on something together?

Sounds good, I'll flag the first post to the mods and ask them to move it all into a new thread and we can continue there (or here until they move it) 🙂
 
I too have been considering how to link quick meal descriptions to more detailed food data. Logging everything in Cronometer, weighing every piece of fruit and so on, is a pain - far too much hassle to do all the time. I was thinking along the lines of a food DB which enabled meal data to be tagged with descriptions that could be used consistently elsewhere. This would enable all instances of a regularly-eaten meal to be linked together. If the quick description of a meal logged in, say, xDrip matches a detailed meal log with timestamps that are very close, then a good estimate of the composition of that specific meal exists. If the description is there but there is no matching food logging data based on the timestamp, then averaged data for that meal description could be used (via the tags). So 'eggs on toast with small orange' could be linked with a specific instance of that meal, if logged, or with the average properties of all previously logged meals with that exact tag. If the orange was weighed, great; if not, use the carb value of your average breakfast orange or whatever.

It didn't occur to me to use EANs (which are barcodes basically? A unique identifier for a food product item?) but ideally I do need a way to match all my old meal descriptions logged in the Libre and xDrip apps to my Cronometer data, whether I actually weighed the bread and the orange that day or not. I also need a way to log meal data very quickly in future when I'm eating the same meal for the 100th time.
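
A sketch of the matching logic I have in mind - the function and field names are invented for illustration: use a detailed entry if one with the same tag sits close enough in time, otherwise fall back to the tag's historical average:

from datetime import datetime, timedelta

def estimate_carbs(note_time: datetime, tag: str, detailed_log: list,
                   tolerance: timedelta = timedelta(minutes=30)) -> float:
    # 1) exact instance: a detailed entry with the same tag, close in time
    matches = [e for e in detailed_log
               if e["tag"] == tag and abs(e["time"] - note_time) <= tolerance]
    if matches:
        return min(matches, key=lambda e: abs(e["time"] - note_time))["carbs"]
    # 2) otherwise fall back to the historical average for this meal tag
    history = [e["carbs"] for e in detailed_log if e["tag"] == tag]
    if history:
        return sum(history) / len(history)
    raise LookupError(f"no data for tag {tag!r}")

log = [{"tag": "eggs on toast with small orange",
        "time": datetime(2024, 12, 1, 8, 5), "carbs": 38.0}]
print(estimate_carbs(datetime(2024, 12, 12, 8, 0),
                     "eggs on toast with small orange", log))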

A quick food planning/logging app specifically for this purpose would be fantastic. In an ideal world one app could do very many useful things. The Tidepool app does have some food description logging functionality with 'favourite meals', I believe, but I've never looked at it. Perhaps one day, if an effort at a sophisticated set of data analysis tools can be organised, the Tidepool team might be willing to make a few small changes to their app to give it more of the functionality you'd need?

BTW thanks @Anna DUK for moving the posts to a new thread. That was quick 🙂
 
Apologies in advance, this is probably rather disjointed as I started before cooking supper for the kids, then came back to it! My thanks also to @Anna DUK for your quick forum curation! 🙂

I completely agree, there is no perfect example of what I want either. I was working from EANs (yes, barcodes, or rather what the barcodes now contain) mainly as everything I buy comes from a supermarket (or can be approximated by something I could find there) and they have plentiful data available. What I really want is a meal composer, which allows me to input the ingredients and then select what quantity (% rather than weight at that point) comprises a meal, though I guess I could weigh the finished product and work out the water loss.

There are lots of ways to do it and I'm not quite sure what the perfect app looks like (nor do I really want to have to experiment - lack of time - but nothing new that ticked all the boxes has cropped up in the past few years). I did start writing an all-singing, all-dancing app but it became very complicated quite quickly and I ran out of time to fiddle with the internals (and by the time I came back to it, the Android model had changed and there was a different way to do things, which isn't great for the motivation).

I must admit I'm not overly excited by the prospect of writing a food logging app, I'm much more interested in the BG modelling and analysis parts, but one needs all the bits.

My ideal ecosystem has apps talking to one another in real time via e.g. Android broadcast Intents - XDrip+ or similar (or ideally even the manufacturers' apps) to gather CGM and finger prick data and then share it, a dedicated food logging app to log and share, e.g. Garmin for exercise and sleep, and then my BG prediction app (or someone else's) to pull the bits together.

Going back to food logging, it would be useful to handle unknown foods without the need to look up an equivalent, and instead use general descriptions. I don't know how accurate this would be, but probably not too bad, and tolerable when you can't measure the weight of what you're eating (buffet, etc.). This is where some interesting modelling can come in, with either uncertainties or (probably more understandably) ranges associated with different "meals", which are then tuned by looking at how BG responds. This is the stuff that interests me, along with looking at how macros combine in foodstuffs, what that does to absorption, and how it could feed into modelling.

In terms of data analysis there's lots of (at least vaguely) interesting stuff to do with my existing data, looking at CGM drop-out rates/durations and calibration offsets (via automatic fingerprick-reading calibration, which would be useful in general as a step between getting raw data from e.g. XDrip+ and plotting it/using it in a prediction algorithm).

Re calibration, one issue I find with XDrip+ is that it only allows linear calibrations, which is OK most of the time, but sometimes isn't - and when it isn't, the allowable calibration (such that sensor off-scale low is still above hypo, to allow alarms) means that a sensor is often unusable. It would be nice, for those like me who suffer from repeated calibration issues, to come up with a way to make these sensors usable and avoid sore arms.
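
For reference, the linear case is just a least-squares line through paired (sensor, fingerprick) readings - a sketch with made-up numbers, and a higher-order polynomial could be swapped in where linear isn't enough:

import numpy as np

raw_cgm     = np.array([4.2, 5.8, 7.5, 9.9, 12.1])  # sensor values, mmol/L
fingerprick = np.array([4.9, 6.1, 7.4, 9.1, 10.8])  # reference values

slope, intercept = np.polyfit(raw_cgm, fingerprick, 1)  # least-squares line
calibrated = slope * raw_cgm + intercept
print(f"y = {slope:.3f}x + {intercept:.3f}")
print(np.round(calibrated, 1))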

I wonder if a wiki page is the way to pull together descriptions of the functionalities, which can then be addressed for each constituent app.
(Actually Google docs/etc. is perhaps easier as no wiki host is necessary)
 
Re the 'meal composer' functionality - Cronometer has facilities to create 'custom meals' and 'custom recipes'. Custom meals are just sets of food items with quantities. Recipes are similar but enable the raw weight of the ingredients to be distinguished from the cooked weight, and there is a serving size function. So let's say a stir fry: if the quantity of each ingredient is consistent each time you make it, and you eat it out of, let's say, a bowl, you can very quickly add a bowl of stir fry to your food log in the app, or you can weigh the cooked stir fry and log the portion in grams (cooked weight) rather than in bowls. There is similar functionality in other food tracking apps, though each does things a little differently.

For the unknown meal problem, like the buffet, there are other apps which use image recognition to estimate the carbs/calories in the meal, though they're not reputed to be accurate (an example is 'Calorie Mama'). Point your phone camera at a plate of food and it guesses the carbs and calories and so on. Interestingly, the company that makes that specific app also has a diabetes app ('Glucose Buddy'). It's clear that someone in that company is thinking ahead. Not useful for my purposes, but perhaps the solution to the food tracking problem is to create a source-agnostic food DB and data import functions for a set of the best of the existing apps which, collectively, do all the things you might need to do?

Maybe we should make a big long rambling thread, a wish list of software functionality that others can see and contribute to, and then turn it into some kind of Google Docs or Wiki thing at some point in future? Once it goes into a Wiki or whatever the discussion becomes less visible and I know there are others on the forum who have either written diabetes software of their own, do their own BG data analysis, or currently work in software development who might take an interest.
 
While randomly musing I've been wondering about the easiest/best language/toolbox for data analysis. All the data are time series, and I assume there must be some analysis packages out there that have an approach to handling things like week-by-week views, or getting x hours of data following some sort of condition (e.g. I want to look at DP following nights with average BG <5 mmol/L).

I can do all this stuff longhand in MATLAB (which I have used for so many years, it's my default) or Python, but I do wonder whether there's already a toolbox which provides either an approach or a set of functions to simplify this sort of data analysis:

In MATLAB I would probably have to do something like this:
- for each Garmin sleep event, get start/end datetime
-- extract BG data across sleep event, extract BG data for 2h after waking (FotF), extract data for time period 2am-waking time (for DP)
-- calculate metrics and save to array
- plot arrays, do something with the data

Some of the above could be done with SQL - I think there are SQL extensions for time-series data - but from what I've looked at, handling indices in code seems simpler than combining SQL recordsets, even if that means I need multiple lines to extract data from multiple tables. That may just be down to my relative lack of experience with SQL.

The approach I will probably take is to write a whole load of wrapper functions to do common operations:

e.g.

event_ids = GetSleepEvents(starts_on_date, optional_duration_filter)
event_id = GetNextSleepEvent(previous_event_id, optional_duration_filter)

event_ids = GetExerciseEvents(starts_on_date, optional_type_filter, optional_duration_filter)
event_id = GetNextExerciseEvent(previous_event_id, optional_type_filter, optional_duration_filter)

data = GetDataForTimeRange('metric name', start_date, end_date)
data = GetDataForEvent('metric name', event_id)


etc., etc.

But I wonder if something similar already exists for the analysis of time-series data. Even if it can't be used directly, I'm more than willing to learn from what others have done and implement the same set of component functions, which have presumably been found to be useful (if they exist).
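
In Python/pandas, for example, much of this comes almost for free once the readings carry a DatetimeIndex - a sketch of what two of the wrappers above might look like, with invented names and synthetic data:

import pandas as pd

def data_for_event(bg: pd.Series, start, end) -> pd.Series:
    return bg.loc[start:end]  # BG across the event (e.g. a sleep record)

def data_after_event(bg: pd.Series, end, hours: float) -> pd.Series:
    return bg.loc[end:end + pd.Timedelta(hours=hours)]  # e.g. FotF window

# Example: two days of synthetic 5-minute CGM data
idx = pd.date_range("2025-01-01", periods=576, freq="5min")
bg = pd.Series(6.0, index=idx)
wake = pd.Timestamp("2025-01-01 07:00")
fotf = data_after_event(bg, wake, 2)                          # 2h after waking
dp = data_for_event(bg, pd.Timestamp("2025-01-01 02:00"), wake)  # 2am-waking
print(len(fotf), len(dp))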
 
Python is definitely the language for this - I had a quick look early on and there are loads of packages that might be useful for time series analysis. So many in fact that I just assumed 'problem solved' and moved on to looking at other aspects. At that time I was just trying to confirm that Python was the language I should try to learn for this task. It will be months before I've learned enough about the language and available packages to have any clue what the best approach on the (mathematical) data analysis side might be and what the most useful package(s) for that are.

Re any time-series-specific features that might exist in SQL: I may be wrong, but I believe it's a good idea to do as little of the data processing on the database engine side as possible - to use it just for organised storage and retrieval and to keep it as simple as possible. I looked up available database abstraction layers for Python and SQLAlchemy appears to be popular; it supports both 'core' relational and object-relational approaches at the same time. It fully supports SQLite and MySQL and has a (lesser) degree of support for a mess of different DB systems. In order to be able to change the database system in future without too much hassle, it's probably a good idea to use no features that are lacking in a typical SQL implementation, if possible - i.e. use only SQLAlchemy features that work with all, or at least most, of the supported back-end DB systems.
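
A minimal sketch of that 'portable core' idea in SQLAlchemy, with a made-up bg_readings table - in principle the same code runs against SQLite now and MySQL later by changing only the engine URL:

from datetime import datetime
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, Float, String, DateTime, insert, select)

meta = MetaData()
bg_readings = Table(
    "bg_readings", meta,
    Column("id", Integer, primary_key=True),
    Column("time", DateTime, nullable=False),
    Column("value", Float, nullable=False),  # mmol/L
    Column("device", String(64)),            # meter/CGM identifier
)

engine = create_engine("sqlite:///bg.db")  # swap the URL for MySQL later
meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(insert(bg_readings),
                 [{"time": datetime(2025, 1, 1, 8, 0), "value": 6.2,
                   "device": "Libre 2 #ABC123"}])
    rows = conn.execute(select(bg_readings).where(bg_readings.c.value > 5))
    print(rows.all())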

At this point I don't even know if an object-relational approach might be useful for some aspects of an analysis system. For example a 'meal event instance' - is it worthwhile to pull time series and other data from the DBs, create a 'meal object', calculate metrics for it and so on, store that data as attributes of the object, and then store it in the DB using an object-relational approach? Would that simplify the writing of code for comparing one meal instance with another, for analysing the average properties of a specific category of meal? Is an object-oriented approach the best way to tie the data from all the various sources together, and is an object-relational approach the best way to store it in a DB? I have no idea what the best approach for this might be, but it seems that learning SQLAlchemy and writing any DB-related code to work with that package probably enables the greatest level of code reusability and flexibility while enabling the back-end database systems to be changed in future if appropriate. I haven't even started learning that package yet but at this time that seems to be on the cards.

I've been most focussed recently on visualisations and the best way to deploy a system in the most user-friendly way possible. I found repositories which demonstrate a way to write a desktop application in Python as if it were a web application. It enables everything required - a web viewer, web app server, nice visualisation functions/UI tools and Python itself - to be packaged as executables for Windows, Mac and Linux. The big advantage I see in this approach is that the code could quite easily be repurposed and deployed as a standard internet-accessible web application, enabling a stand-alone desktop application to be a useful stepping stone to a more typical server implementation if anyone wants to do that. These are the boilerplate and demonstration projects I found - the second link is an ECG data file viewer which can be downloaded from the 'Releases' link on the right as a zipped executable. It demonstrates how Flask + Dash + Plotly can be used to make a very nice web UI for time series visualisations and how little technical skill is required to download the finished software and run it on a desktop machine. If you'd like to try it out you can find sample data to display via the third link (syn.dat and syn.hea):
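
The core of such a viewer is tiny. A minimal sketch (placeholder data, Dash's built-in development server) that serves a CGM trace to a browser at http://127.0.0.1:8050:

import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

idx = pd.date_range("2025-01-01 06:00", periods=144, freq="5min")
bg = pd.Series(6.0, index=idx).rename("mmol/L")  # placeholder CGM data

fig = px.line(bg, labels={"index": "time", "value": "BG (mmol/L)"})
app = Dash(__name__)  # Dash runs on Flask under the hood
app.layout = html.Div([html.H3("CGM viewer sketch"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)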

For the time being I'm going to keep learning Python and the basics of these packages. I'm currently going through Angela Yu's '100 Days of Code Python Bootcamp' course on the Udemy site - video-based, recently updated, well-rated, suitable for people who already know the basics of programming as she doesn't spend a lot of time explaining core concepts (and you can play the videos at twice-normal speed to skip through stuff you already know), and extremely cheap (they have a 'flash sale' pretty much every other week - the entire course is €13 today, cheaper than a book):

That course covers the language itself along with the basics of almost all the packages listed above. I was then going to try a rough proof of concept as an exercise - try to get my Tidepool export into a SQLite DB, then into Pandas data frames, then create some plots from those using Plotly, and get that working as an EXE for Windows. After that I intended to start learning software engineering concepts, design patterns and so on, how to actually design useful and powerful software and then implement it. Realistically it will take me years to learn how to do all that reasonably well. I learned how to code in several languages in college, a long time ago, but never used those skills and so lost much of that knowledge. If nothing else I see this project as a fairly comprehensive learning exercise that might, maybe, help me and perhaps others one day to manage their diabetes more easily without surrendering any data to a private company. That's the hope at least.
 
Since nobody wants to reinvent the wheel, there are two resources you may want to investigate:

The first will let you retrieve nutrition information for a barcode.


The second will let you parse a natural-language meal description and return nutrition information (in the form of a JSON array).


I have recently used both of these to good effect in my diabetes software (a web application written in .NET and C#).
 
Python is definitely the language for this - I had a quick look early on and there are loads of packages that might be useful for time series analysis. So many in fact that I just assumed 'problem solved' and moved on to looking at other aspects.
I agree Python is the right language - sorry, poor choice of words on my part above.

At this point I don't even know if an object-relational approach might be useful for some aspects of an analysis system. For example a 'meal event instance' - is it worthwhile to pull time series and other data from the DBs, create a 'meal object', calculate metrics for it and so on, store that data as attributes of the object, and then store it in the DB using an object-relational approach? Would that simplify the writing of code for comparing one meal instance with another, for analysing the average properties of a specific
My feeling is that an OO approach makes sense from the point of view of ease of use. I was just hoping someone might have already made something similar from which one might learn what worked (what they ended up with), though it was probably always a vain hope: quite often, as soon as a toolkit works adequately, no more development happens, so there's not all that much to learn from it aside from what the authors' starting point was! 🙂

For the time being I'm going to keep learning Python and the basics of these packages. I'm currently going through Angela Yu's '100 Days of Code Python Bootcamp' course on the Udemy site - video-based, recently updated, well-rated, suitable for people who already know the basics of programming as she doesn't spend a lot of time explaining core concepts (and you can play the videos at twice-normal speed to skip through stuff you already know), and extremely cheap (they have a 'flash sale' pretty much every other week - the entire course is €13 today, cheaper than a book):
Sounds good. I use Python for work (automation, some data analysis, though I've used MATLAB for so many years it's still my default tool) and I've also spent a lot of time over the past couple of years developing a piece of DIY bike computer software in Python (PyQt, to be exact). I must get it up and running on the hardware ASAP before I get distracted with going riding rather than writing code! 🙂
 
I would encourage anyone attempting to do this type of analysis to learn SQL, as having data in a relational database opens up a wealth of possibilities.

Shortly after I was first diagnosed with Type 1, I started developing software to make my life a little easier. I started with a Windows Forms application that stored all data in a SQLite3 database. After a few years, I migrated my data to a MySQL database and created a web application so that I could access my data from any mobile device. It is now the only software I use to manage my diabetes.

One element of my system which may be of particular interest here is the ability to look back at previous times I ate a particular type of meal so that I can understand what effect, if any, that meal had on my blood sugar. I ran an example analysis just now and captured the output in the attached graphic. This is purely intended as an illustration of what type of analysis is possible when the underlying data is easily accessible in an SQL database.

In this example, I picked a 'chia toast' lunch recipe, because I remembered reaching the conclusion at the end of January that chia toast does not agree with me for whatever reason.
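
To give a flavour of the kind of query involved - this is an illustrative schema, not my actual tables, and in Python/sqlite3 for consistency with the rest of the thread (my own system is .NET/C#):

import sqlite3

conn = sqlite3.connect("diabetes.db")
rows = conn.execute("""
    SELECT m.eaten_at,
           (SELECT value FROM bg_readings b
            WHERE b.time BETWEEN datetime(m.eaten_at, '-15 minutes')
                             AND m.eaten_at
            ORDER BY b.time DESC LIMIT 1)              AS bg_before,
           (SELECT MAX(value) FROM bg_readings b
            WHERE b.time BETWEEN m.eaten_at
                             AND datetime(m.eaten_at, '+3 hours')) AS bg_peak
    FROM meals m
    WHERE m.recipe = ?
    ORDER BY m.eaten_at
""", ("chia toast",)).fetchall()
for eaten_at, before, peak in rows:
    print(eaten_at, before, peak)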

Anyway, I hope that all makes sense and is of some use.


Attachment: recipe-use history example.png
 
Thanks @littlevoice359

Out of interest how do you do data entry into your system, is that via a web page on e.g. a mobile phone too?

I can't quite make it out in the attached images, but are you also tracking food constituents? (e.g. looked up via the links you posted above)
 
I would agree with you @littlevoice359 that a database of some sort is the best data analysis tool, and a good place to start is to understand how relational databases work so that you can construct the tables and their relationships at the outset. Get that right and extracting subsets of the data using SQL - either native or using tools within the database application - allows the exploration of ideas.

I have all my data in an Access database, which I found quite easy to construct once I got the hang of the basics of database operations. I am no fan of Microsoft but have to say that the Access database is very easy to use once you get into it. Being part of the Office suite it shares a lot of routines, so things like graph plotting are a doddle if you know how to use Excel.

Just checked... my 90-day waking average blood glucose is currently 6.9 with a predicted HbA1c of 50.7. I can plot various graphs by clicking on buttons. Everything updates whenever I add data. I can update the HbA1c prediction whenever I have an HbA1c result. I don't see why I could not add food intake. A bit of work constructing the database, but a lot of work putting in data.

I keep trying to reconstruct it in Open Office Base, but to date that has defeated me.
 
Thanks @littlevoice359

Out of interest how do you do data entry into your system, is that via a web page on e.g. a mobile phone too?

I can't quite make it out in the attached images, but are you also tracking food constituents? (e.g. looked up via the links you posted above)
Data entry is via mobile device (iPad usually if I am home and iPhone when I’m out). For my G7 sensor I pull data from the Dexcom Share server.

For meals prepared at home I use nutritional information from my own database. I recently added ability to pull nutritional information from external sources when eating out.

Perhaps these larger, clearer screenshots may be easier to see.
 

Attachments: IMG_2661.jpeg, IMG_2660.png, IMG_2659.png
I would agree with you @littlevoice359 that a database of some sort is the best data analysis tool, and a good place to start is to understand how relational databases work so that you can construct the tables and their relationships at the outset. Get that right and extracting subsets of the data using SQL - either native or using tools within the database application - allows the exploration of ideas.

I have all my data in an Access database, which I found quite easy to construct once I got the hang of the basics of database operations. I am no fan of Microsoft but have to say that the Access database is very easy to use once you get into it. Being part of the Office suite it shares a lot of routines, so things like graph plotting are a doddle if you know how to use Excel.

Just checked... my 90-day waking average blood glucose is currently 6.9 with a predicted HbA1c of 50.7. I can plot various graphs by clicking on buttons. Everything updates whenever I add data. I can update the HbA1c prediction whenever I have an HbA1c result. I don't see why I could not add food intake. A bit of work constructing the database, but a lot of work putting in data.

I keep trying to reconstruct it in Open Office Base, but to date that has defeated me.
In my system, the status page includes a GMI number, which is an HbA1c predictor. That is updated automatically each time the page is accessed. It currently shows a figure of 48, which is in line with recent past lab results.
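
For anyone wanting to reproduce that number: GMI is a published linear mapping from mean CGM glucose to an estimated HbA1c (Bergenstal et al., 2018). A quick check in Python:

def gmi_mmol_mol(mean_glucose_mmol_l: float) -> float:
    return 12.71 + 4.70587 * mean_glucose_mmol_l

def gmi_percent(mean_glucose_mg_dl: float) -> float:
    return 3.31 + 0.02392 * mean_glucose_mg_dl

# A mean glucose of ~7.5 mmol/L gives a GMI of about 48 mmol/mol
print(round(gmi_mmol_mol(7.5), 1))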

The status page also lets me run a number of reports by selecting from a dropdown list.

Hopefully these attachments will help better explain, though I fear the resolution will be somewhat reduced by the upload process in posting this reply.
 

Attachments: IMG_2665.jpeg, IMG_2664.jpeg, IMG_2663.jpeg, IMG_2662.jpeg
This is exactly the kind of system that I was thinking about @littlevoice359 - everything in one place. Creating it as a web application that you can access from a tablet or mobile phone is the way it should be done. I couldn't understand why all the commercial applications out there were so unsophisticated in some ways until I learned of the GDPR and HIPAA problems.

To turn your app into a product for others, with everyone's data in one place and including metrics like HbA1c, means you'd be storing health data, a special category under GDPR. If it produces any output for the eyes of a doctor, like a report, it becomes subject to the American equivalent, the HIPAA regulations. Managing the bureaucracy involved - designing the system to the same standards as a system for managing medical records, employing data controllers and IT security staff, achieving the security standards certification which is necessary, practically speaking, to get cyber insurance - is very expensive. Add any feature that looks like blood glucose prediction or calibration and it might become a 'medical device', subject to approval by the likes of the FDA in the US and equivalent authorities in the EU. Getting a medical device approved is very, very expensive. It all adds up to ridiculous money very fast, many millions just to launch the thing.

Cronometer, the food tracking app I mentioned, used to allow you to track cholesterol and HbA1c in the free version. Those features have since been removed from the app, at least for me. I think they figured out that by storing those metrics they were falling foul of GDPR. They're a Canadian company, so perhaps the American version of the app still allows you to track those metrics. You can still do it in Europe by adding 'custom biometrics' in the paid version of the app, but then the company can claim that this is the user's choice and that the system is not designed to store 'health data'.

This is what led me to think in terms of a desktop application which is built as a web application. Those with the technical skill could deploy their own personal web application as you have done and access it from a phone or tablet from anywhere. Those without those technical skills could run it as a desktop app. No storing other people's health data, no GDPR problems. No providing software to others that might be used to make 'treatment decisions' (with anything too fancy done via custom-made plug-ins) - no 'medical device' regulation problems.

Something like your app, which also helps to analyse the effects of exercise and to analyse the properties of the devices used to gather blood glucose data, is what I've been thinking about. To illustrate what I mean: this is a plot for one meal where I was wearing an auto-calibrated Libre 2 using the official app and did finger-stick readings every 5 to 10 minutes with four meters, testing the same drop of blood. The red circles are Libre 2 'scans'. It's obvious to the eye that the Libre 2 read high that day and that the GlucoNavii meter reads low - lower as BG concentration rises. It's also obvious to the eye that a 33-minute walk pushed my BG down, but that both the timing and duration of the walk were insufficient to control my BG levels after that meal. Further, you can see where the Libre 2 did the 'sensor error, please wait 15 minutes' thing and recalibrated itself (during my walk, which caused a rapid change in BG levels, which can trigger a Libre 2 'reset' and recalibration). The Libre 2 readings are much closer to the meter readings after the walk than before the meal.

What I hope for is a system that facilitates eyeball analysis but also statistical analysis. Example - do I have enough readings to prove to a statistically-significant standard that the GlucoNavii is a terrible meter to use to calibrate CGM data? No idea. Without knowing the properties of the measuring devices it becomes difficult to impossible to compare CGM data from one sensor model to the next.

Edit - typos and clarification.
 

Attachment: UnwiseFeast_Dec12th_2024_Libre 2_4_meter plot.png
<truncated>

What I hope for is a system that facilitates eyeball analysis but also statistical analysis. Example - do I have enough readings to prove to a statistically-significant standard that the GlucoNavii is a terrible meter to use to calibrate CGM data? No idea. Without knowing the properties of the measuring devices it becomes difficult to impossible to compare CGM data from one sensor model to the next.
A difficulty you are going to run into is that it is just not possible to include all the information you need in a single graph or report, if for no other reason than you need to take into account notes you may have made that might give necessary context to the data on display.

For my part, I included in my software a 'drill down' capability that lets me see graphical data and logbook entries for the period of the graph, to help me make sense of what I am looking at.

Hope that makes sense?

Anyway best of luck with your statistical analysis!
 
A difficulty you are going to run into is that it is just not possible to include all the information you need in a single graph or report, if for no other reason than you need to take into account notes you may have made that might give necessary context to the data on display.
Indeed - the more I think about it and try to break down the problems I want to solve, the longer my wish list of features/graphs/reports gets.

For example you posted a screenshot of your AGP plot with TIR and GMI and so on. That type of plot is one I'll probably want to use to display the average properties for each meal, showing the average as a solid line with the variations as shaded curves. The purpose of that would be to filter out the noise of minor variables and clearly show how high my BG typically goes after a regularly-eaten meal and for how long. Calculating the area under the curve for that solid line above my fasting level would give me a 'meal score', mmol/L-minutes, a nice simple metric that can be used in various ways. The problem with that approach is major variables such as exercise. I would need to be able to label each meal graph as 'clean', 'exercise' or 'noisy', for want of better phrasing.
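
The 'meal score' itself would just be a trapezoidal-rule integral of the curve above fasting level - a sketch with a synthetic curve standing in for real data:

import numpy as np

minutes = np.arange(0, 181, 5)          # 3 hours post-meal, 5-minute samples
bg = 5.5 + 4.0 * np.exp(-((minutes - 60) / 45.0) ** 2)  # synthetic BG curve
fasting = 5.5

excess = np.clip(bg - fasting, 0.0, None)  # only count area above fasting
# trapezoidal rule by hand: sum of (average height x interval width)
meal_score = float(np.sum((excess[1:] + excess[:-1]) / 2 * np.diff(minutes)))
print(f"meal score: {meal_score:.0f} mmol/L-min")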

The average plot for the meal would be drawn from the 'clean' graphs only - the 'baseline', the AEOTP (Ambulatory Eggs On Toast Profile 😉). Examples of 'exercise', where the curve is heavily influenced by a bout of exercise after eating, might be displayed in such a way as to easily compare one with another and with the 'baseline' plot. That would enable me to observe the effects on the BG curve of various types and durations of exercise after eating, compared to what that meal typically looks like. 'Noisy' plots aren't useful except to observe inconvenient variables. One example of noise appears to be caused by the temperature of the CGM sensor. If you take a hot shower wearing a CGM you'll often see a spike, which I believe is caused by inaccurate readings from the CGM when the temperature of the sensor changes quickly. With the Libre 2 at least (not sure about the G7/One+), if you go out in very cold weather you'll see a similar effect. Eating a meal, going out in the cold for ten minutes, coming back in and taking a hot shower would probably produce a graph that is useless for the purpose of analysing the meal response. That appears to be a sensor problem, not actual changes in BG. An inconvenient variable, difficult to identify and measure.

Just that one problem - trying to capture and analyse data relating to the effects of eating a given repeatable meal - gets more complicated the more I think about it and the more I learn about how the sensors work. The potential might be huge though. That 'clean' plot of the average response after a repeatable meal might enable me to observe the effects of changes in medications, and changes over time in insulin resistance and insulin response. The average plot for near-identical meals, if the readings are reasonably accurate and the data reasonably clean - or rather the difference between such plots generated from data many months apart - would be more useful to me than HbA1c results. It would be like a digital snapshot of my insulin resistance and pancreas function which I could compare from year to year. The software I want will likely never be finished - I'll be thinking about how this thing might work and how it might be improved for the rest of my life 😉
 
Unless you habitually shower immediately after eating, I think any one-offs will simply appear as noise when you plot your graphs - and there will be lots of noise. Exercise has a massive effect for me, so I need a multidimensional model whatever happens.

Really interesting to hear what people are doing and plan to do, please keep up the discussions 🙂
 