AI and healthcare are a tempting combination for any forward-looking company. Google, one of the biggest corporations on the planet, wants a front-row seat to innovation at the intersection of these fields.
Google and its sister companies under the holding company Alphabet are investing heavily in AI-powered healthcare solutions. This has potentially huge implications for Google's more than one billion users.
Google first attempted to enter this field a decade ago, but that venture, Google Health, did not work out as planned. The company has since refocused its efforts on healthcare.
Hundreds of employees are working on these health projects, often partnering with other companies and academics.
The company knows the value of being in the healthcare sphere. “It’s pretty hard to ignore a market that represents about 20 percent of [U.S.] GDP,” says John Moore, an industry analyst at Chilmark Research. “So whether it’s Google or it’s Microsoft or it’s IBM or it’s Apple, everyone is taking a look at what they can do in the healthcare space.”
Google doesn’t disclose the size of its investment, but Moore says it’s likely in the billions of dollars.
The push into AI and health is a natural evolution for a company that has developed algorithms that reach deep into our lives through the Web.
A new study has shown how artificial intelligence (AI) can be used in healthcare, with a particular focus on high-pressure environments such as the intensive care unit (ICU).
Google is not the only big player to take an interest in healthcare. IBM Watson Health announced on February 13 that it plans to make a 10-year, $50 million investment in research collaborations with two separate academic centers, Brigham and Women’s Hospital and Vanderbilt University Medical Center, to advance the science of artificial intelligence (AI) and its applications to major public health issues.
Both companies understand that AI and machine learning can be put to work in healthcare just as well as in any other field.
“The fundamental underlying technologies of machine learning and artificial intelligence are applicable to all manner of tasks,” said Greg Corrado, a neuroscientist at Google. This is true, he says, “whether those are tasks in your daily life, like getting directions or sorting through email, or the kinds of tasks that doctors, nurses, clinicians and patients face every day.”
Things are moving fast. Google’s sister company Verily got a billion-dollar boost this year for its already considerable efforts. Among other projects, software that can diagnose diabetic retinopathy is already in use in India.
The new research is published in the April edition of Ophthalmology, the Journal of the American Academy of Ophthalmology.
This new study, building on previous work from Google AI, shows that its algorithm performs roughly as well as human experts in screening patients for diabetic retinopathy. More than 29 million Americans have diabetes and are at risk for diabetic retinopathy, a disease that causes blindness. In the disease’s early stages, people typically don’t notice changes in their vision, since the eyes and brain adapt to gradual vision loss. This is why diabetic retinopathy can go undetected and cause irreversible vision loss. People with diabetes must undergo yearly screenings, but even these can be inaccurate: one study found a 49 percent error rate among internists, diabetologists, and medical residents.
Recent AI advances could improve access to more accurate diabetic retinopathy screening.
A test was designed to assess AI’s utility in this case. Ten ophthalmologists (four general ophthalmologists, one trained outside the US, four retina specialists, and one retina specialist in training) were asked to read each image once under one of three conditions: unassisted, with the model’s grades only, or with grades plus a heatmap.
Both of the latter types of assistance improved physicians’ diagnostic accuracy, with the amount of improvement depending on the physician’s level of expertise.
When receiving no assistance, general ophthalmologists were significantly less accurate than the algorithm, and retina specialists were not significantly more accurate than the algorithm. When assisted by the algorithm, general ophthalmologists were as accurate as the AI, but retina specialists exceeded the model’s performance.
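As a rough illustration (not the study's actual analysis, and using made-up per-image scores), the accuracy comparison across the three assistance conditions could be tabulated like this:

```python
# Hypothetical sketch: comparing reader accuracy under the three
# assistance conditions described above. The scores are invented toy
# data (1 = correct grade, 0 = incorrect), not results from the study.
from statistics import mean

reads = {
    "unassisted":     [1, 0, 1, 1, 0, 1, 0, 1],
    "grades_only":    [1, 1, 1, 1, 0, 1, 1, 1],
    "grades_heatmap": [1, 1, 1, 1, 1, 1, 1, 1],
}

# Accuracy per condition is simply the fraction of correct reads.
accuracy = {condition: mean(scores) for condition, scores in reads.items()}

for condition, acc in sorted(accuracy.items(), key=lambda kv: kv[1]):
    print(f"{condition}: {acc:.2f}")
```

In this toy data, accuracy rises with each added form of assistance, mirroring the pattern the study reports for general ophthalmologists.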
“What we found is that AI can do more than simply automate eye screening; it can assist physicians in more accurately diagnosing diabetic retinopathy,” said lead researcher Rory Sayres, PhD. “AI and physicians working together can be more accurate than either alone.”
In another part of the project, Verily is working on tools to monitor blood sugar of diabetic patients. The company is also working to perfect surgical robots that learn from each surgery.
AI research also depends on medical data that are not usually collected. To accumulate more useful data, Verily has partnered with Duke University and Stanford University on Project Baseline, which aims to enroll 10,000 volunteers willing to share the necessary data with the company.
But even simple search engine queries can provide useful data about users. Rediet Abebe has explored how search engine queries and social media data can provide information useful to AI-powered solutions in healthcare.
Some of the healthcare-specific problems researchers like Abebe are trying to solve through AI relate to U.S. public health emergencies, such as the nation’s disproportionately high maternal mortality rate. Abebe currently serves on a 12-member body advising the National Institutes of Health on how AI can better serve biomedical and clinical research. Among the members are Google AI senior research scientist Greg Corrado, Intel principal engineer Michael McManus, Verily engineering director David Glazer, and AI Now Institute cofounder Kate Crawford, as well as professors from Stanford University, MIT, and other universities.
The group is expected to share preliminary findings in June, and its final recommendations will be delivered to NIH director Francis Collins in December.
“They want us to envision what kind of stuff we’d do to create real bridges between AI and biomedical and public health research,” Abebe said. “I’m really excited about the broad set of techniques we have and the unique style of doing research that the AI community has and using that to help address problems that impact underserved and marginalized communities.”