Figure 1. Lesion matching in different views of the same breast. Matching requires lesions' positions to be known.
Worldwide, breast cancer is the most frequently diagnosed and most lethal cancer in women. One way to address the disease is early diagnosis, which frequently translates into a better prognosis. Many countries therefore run programs that screen asymptomatic women over a certain age. These programs are often based on screening mammography, an exam in which two images (views) are obtained from each breast (see Figure 1). Current state-of-the-art algorithms use these images to detect lesions indicative of breast cancer, but the way they fuse information between the two views is naive: while radiologists often analyze the same lesion in both views, algorithms fuse information at a much higher level, often by simply averaging the two views' predictions. Addressing this gap can reduce the errors made by computer-aided diagnosis tools, ultimately helping patients receive a more accurate diagnosis.
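The naive fusion described above can be sketched as follows. This is a minimal, hypothetical illustration, not any specific system's implementation: the function and score names are assumptions, and the scores stand in for each view's model output.

```python
# Hypothetical sketch of naive "late fusion" in two-view mammography:
# each view's model outputs one malignancy score, and the per-breast
# score is simply their average. Names are illustrative assumptions.

def late_fusion(score_cc: float, score_mlo: float) -> float:
    """Average the two views' predictions (the naive strategy above)."""
    return (score_cc + score_mlo) / 2

# A lesion clearly visible in only one view gets diluted by averaging,
# whereas a radiologist would try to match it across both views.
breast_score = late_fusion(0.9, 0.2)
print(round(breast_score, 2))
```

Matching the same lesion across views, by contrast, would require knowing each lesion's position in both images, which is exactly the correspondence problem shown in Figure 1.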