
Google"s AI learned to diagnose from photos more accurately than doctors

Google has introduced AMIE, an AI system for medical diagnostics that has shown strong results in disease recognition. The system is built on the multimodal Gemini 2.0 Flash language model, which can analyze textual information and visual data at the same time.

AMIE (Articulate Medical Intelligence Explorer) was trained to identify a range of skin conditions, interpret electrocardiograms, and analyze laboratory results. In testing, the system handled ordinary photographs, PDF documents, and other kinds of medical data, a flexibility that makes it a promising tool for remote diagnostics as telemedicine services continue to grow.
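
For readers curious what such a multimodal request looks like in practice, below is a minimal sketch using the public google-generativeai Python SDK with a Gemini 2.0 Flash model. This is not AMIE itself, and the file name and prompt are illustrative assumptions; it only shows how an image and a text question can be submitted together to the kind of multimodal model AMIE builds on.

```python
# Minimal sketch: sending an image plus a text question to a Gemini 2.0 Flash
# model via the public google-generativeai SDK. This is NOT the AMIE system;
# the file name, prompt, and API key are illustrative placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-2.0-flash")

# A photograph of the kind a patient might send during a remote consultation
# (hypothetical file name).
skin_photo = Image.open("rash_photo.jpg")

prompt = (
    "Describe the visible skin findings in this photo and list the most "
    "likely differential diagnoses, noting any red flags that would need "
    "in-person follow-up."
)

# The SDK accepts a mixed list of text and images in a single request.
response = model.generate_content([prompt, skin_photo])
print(response.text)
```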

To test the system, Google researchers ran 105 simulated medical consultations involving 25 actors portraying patients with various symptoms. In each clinical case, both AMIE and practicing physicians produced a diagnosis from the provided photographs and text descriptions, and an independent panel of experts rated the accuracy of the conclusions. On average, the AI outperformed the doctors in diagnostic accuracy, particularly when working with complex visual material.
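
The article does not publish the per-case scores, but the comparison itself reduces to simple arithmetic: for each simulated consultation the expert panel marks whether the top diagnosis from AMIE and from the physician matched the scripted condition, and the two accuracy rates are averaged across cases. The sketch below illustrates that calculation on invented placeholder data, not the study's real numbers.

```python
# Sketch of the accuracy comparison described above. The per-case judgments
# here are invented placeholders, not the study's actual data.
from dataclasses import dataclass

@dataclass
class CaseResult:
    case_id: int
    amie_correct: bool       # panel judged AMIE's top diagnosis correct
    physician_correct: bool  # panel judged the physician's top diagnosis correct

# Hypothetical panel judgments for a handful of simulated consultations.
results = [
    CaseResult(1, True, True),
    CaseResult(2, True, False),
    CaseResult(3, False, False),
    CaseResult(4, True, True),
    CaseResult(5, True, False),
]

amie_accuracy = sum(r.amie_correct for r in results) / len(results)
physician_accuracy = sum(r.physician_correct for r in results) / len(results)

print(f"AMIE accuracy:      {amie_accuracy:.0%}")
print(f"Physician accuracy: {physician_accuracy:.0%}")
```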

Despite the encouraging results, Google's researchers remain cautious, stressing that the tests cannot fully reproduce the complexity of real clinical practice. Before AMIE can be used in everyday medical care, larger studies will be needed under conditions as close as possible to real clinical settings.