Researchers involved in a recent study trained an artificial intelligence (AI) model to diagnose type 2 diabetes in patients after six to 10 seconds of listening to their voice. Canadian medical researchers trained the machine-learning AI to recognise 14 vocal differences in the voice of someone with type 2 diabetes compared to someone without diabetes. […]
Bit of clickbait/hype, as usual. Their specificity for voice features alone (without age, BMI, and so on) is 0.58 with a standard deviation of 0.14, which basically means the classifier would falsely flag more than 40% of people who don't have T2DM. Then there's the questionable sample size of the study, which might be normal for health-related papers but is too small for a data-driven model.
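To put that number in perspective, here's a back-of-envelope sketch. The 0.58 specificity is the figure quoted above; the sensitivity and prevalence are made-up assumptions purely for illustration:

```python
# Back-of-envelope: what a 0.58-specificity screener means in practice.
# Specificity 0.58 is the reported voice-only figure; sensitivity and
# prevalence below are assumed for illustration only.
specificity = 0.58   # reported: P(test negative | no T2DM)
sensitivity = 0.75   # assumption, generous
prevalence = 0.10    # assumption: ~10% of screened adults have T2DM

population = 10_000
diabetic = population * prevalence
healthy = population - diabetic

true_positives = sensitivity * diabetic          # correctly flagged
false_positives = (1 - specificity) * healthy    # healthy people flagged

# Positive predictive value: chance a flagged person actually has T2DM.
ppv = true_positives / (true_positives + false_positives)
print(f"flagged: {true_positives + false_positives:.0f} of {population}")
print(f"PPV: {ppv:.1%}")  # under these assumptions, most flags are false alarms
```

Even with a generously assumed sensitivity, at typical prevalence a 0.58 specificity means the large majority of positives are false positives.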
This seems like a solution in search of a problem. A finger-prick A1C test is about $30 (probably cheaper in reality, but that's what they try to bill to insurance, at least), and is an extremely accurate way to diagnose both diabetes and pre-diabetes.
I would think Parkinson's disease, or other conditions that have big impacts on speech but lack simple tests and instead require a skilled exam from a neurologist or the like, would be a better match for this kind of tech.
Even at 10 cents per polio vaccine, there are still places where administering them is a real logistical problem, because it's not the cost of the dose itself that's the issue. Granted, this model probably isn't good enough to make a difference, but if you could get tested with just a cell phone with Internet access, without needing to physically ship anything or have a medical professional on site, it could make a big difference in some areas. Admittedly, those are also places where getting medical treatment afterwards is probably hard, but at least managing it through dietary choices, when that luxury is an option, may help.
On the one hand, using voice as a pre-screening test in places where the normal screening test is too expensive to administer routinely seems like a great thing. E.g.: read this paragraph to the machine, and we'll figure out whether it's worth actually testing you for T2DM, Parkinson's, stomach cancer, lung cancer, etc. If that substantially reduces the number of tests administered without producing too many false negatives, then you could really improve health in some very poor areas.
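To gauge whether a pre-screen like that pays off, here's a quick sketch of the trade-off. Every number here (prevalence, sensitivity, specificity) is an assumption for illustration, not from the study:

```python
# Sketch: trade-off of a cheap pre-screen before an expensive lab test.
# All numbers are assumptions for illustration, not from the study.
population = 100_000
prevalence = 0.08     # assumed disease prevalence in the screened group
sensitivity = 0.90    # assumed pre-screen sensitivity
specificity = 0.80    # assumed pre-screen specificity

sick = population * prevalence
healthy = population - sick

# Only people the pre-screen flags go on to the expensive lab test.
flagged = sensitivity * sick + (1 - specificity) * healthy
tests_saved = population - flagged

# The cost: sick people the pre-screen misses never get the lab test.
missed_cases = (1 - sensitivity) * sick

print(f"lab tests avoided: {tests_saved:.0f} of {population}")
print(f"cases missed by the pre-screen: {missed_cases:.0f}")
```

If the assumed numbers held, you'd skip roughly three quarters of the lab tests at the cost of missing 10% of true cases; whether that trade is acceptable depends on the disease and the follow-up options.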
This data set is definitely not going to give that. It's not even particularly compelling evidence that it's possible. It is, IMO, compelling enough to study further: bigger sample sizes, rather than up to 84 recordings per participant over 2 weeks. As it stands it looks a bit like p-value chasing, and running a bigger study would answer that.
But that's the thing: with the reported numbers I wouldn't even say they can pre-screen anyone based on voice alone. And I don't think they reported metrics for an "everything but voice" experiment either, which could have answered whether voice is actually bringing anything substantial to the table.
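The "everything but voice" ablation asked for above would look roughly like this: train the same classifier with and without the voice features and compare. This is purely a toy sketch, with synthetic data, made-up feature roles, and a nearest-centroid stand-in for the study's actual model:

```python
import random

random.seed(0)

def make_person(has_t2dm):
    # Three standardized toy features: two "demographic" (age/BMI-like)
    # and one "voice" feature, each carrying independent signal in this
    # synthetic world. The shift sizes are arbitrary.
    shift = 0.5 if has_t2dm else -0.5
    return [random.gauss(shift, 1.0) for _ in range(3)], has_t2dm

data = [make_person(i % 2 == 0) for i in range(2000)]
train, test = data[:1000], data[1000:]

def nearest_centroid_accuracy(feature_idx):
    # Classify each test point by which class centroid (computed on the
    # training split, restricted to the chosen features) is closer.
    def centroid(label):
        rows = [[x[i] for i in feature_idx] for x, y in train if y == label]
        return [sum(col) / len(rows) for col in zip(*rows)]
    pos, neg = centroid(True), centroid(False)
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = sum(
        (dist2([x[i] for i in feature_idx], pos)
         < dist2([x[i] for i in feature_idx], neg)) == y
        for x, y in test
    )
    return hits / len(test)

acc_without_voice = nearest_centroid_accuracy([0, 1])   # "everything but voice"
acc_with_voice = nearest_centroid_accuracy([0, 1, 2])   # add the voice feature
print(f"everything-but-voice accuracy: {acc_without_voice:.2f}")
print(f"with voice accuracy:           {acc_with_voice:.2f}")
```

If the gap between the two numbers is negligible, voice isn't adding anything; that's exactly the comparison the paper reportedly doesn't include.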
Which is the problem for 99% of all studies I see on the news.