In a groundbreaking development, researchers from MIT, Massachusetts General Hospital, and the Broad Institute have introduced Tyche, a machine learning framework designed to capture uncertainty in medical image segmentation. Medical image segmentation is crucial for extracting regions of interest, such as tissues or lesions, from images to aid in clinical quantification, diagnosis, and surgical planning. However, traditional AI models typically produce a single segmentation, which fails to reflect the range of interpretations that expert annotators might offer for the same image.
Tyche addresses this limitation by generating multiple segmentation outputs without the need for retraining, enabling clinicians to understand the uncertainty inherent in medical imaging. Lead study author Marianne Rakic explains, “Having options can help in decision-making. Even just seeing that there is uncertainty in a medical image can influence someone’s decisions, so it is important to take this uncertainty into account.”
The Tyche framework utilizes a modified neural network architecture that conditions its predictions on a ‘context set’ of example images. By leveraging just a handful of reference images—sometimes as few as 16—Tyche can produce diverse segmentation predictions for a new task without retraining. The architecture also lets candidate segmentations “communicate” with one another inside the network, yielding high-quality predictions that reflect the variability found in human expert annotations.
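The idea can be illustrated with a toy sketch: given a small context set of (image, mask) pairs and a new target image, produce several distinct candidate segmentations plus a consensus map. This is not Tyche’s actual architecture—the real system is a neural network, and the function name, thresholding logic, and noise model below are illustrative assumptions standing in for the learned mapping.

```python
import numpy as np

def tyche_style_predict(context_images, context_masks, target, k=4, seed=0):
    """Toy sketch of in-context, multi-candidate segmentation.

    Hypothetical stand-in for a learned model: a foreground
    threshold estimated from the context set plays the role of
    the network's in-context adaptation.
    """
    rng = np.random.default_rng(seed)
    # "In-context learning": estimate a foreground intensity threshold
    # from the pixels the context masks label as foreground.
    fg = np.concatenate(
        [img[mask > 0] for img, mask in zip(context_images, context_masks)]
    )
    base_threshold = fg.mean()
    # Draw k noise samples -> k distinct thresholds -> k plausible
    # segmentations of the same target image, mimicking the spread
    # of expert annotations.
    candidates = []
    for _ in range(k):
        t = base_threshold + rng.normal(scale=0.05)
        candidates.append((target > t).astype(np.uint8))
    # A simple consensus map across candidates (a crude stand-in for
    # the in-network interaction among candidates described above).
    consensus = np.mean(candidates, axis=0)
    return candidates, consensus
```

In a real model, the candidates would interact inside the network during prediction rather than being combined after the fact; the sketch only conveys the input/output shape of the idea: a small context set in, multiple plausible segmentations out.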
By improving the capture of uncertainty in medical image segmentation, Tyche could revolutionize clinical workflows, providing healthcare professionals with the tools to make more informed decisions. The research team aims to expand Tyche’s capabilities further by incorporating a more flexible context set that includes various image types and textual data, ensuring it becomes a versatile asset in the medical imaging landscape.