Instance-level Medical Image Classification for Text-based Retrieval in a Medical Data Integration Center
A medical data integration center receives a large volume of medical images from various clinical departments, including X-rays, CT scans, and MRI scans. Ideally, the indexing fields of every image should be filled in with standard clinical terms. In practice, however, some images arrive with incorrect annotations or none at all, which hampers search functionality and data integration tasks in a medical data center. To address this issue, accurate and meaningful descriptors are needed for these indexing fields so that users can efficiently search for the images they need and map them to international standards. This paper aims to fill missing or incorrect indexing fields with concise, essential instance-level information, namely the radiology modality (e.g., X-ray), anatomical region (e.g., chest), and body orientation (e.g., lateral), using a deep learning classification method. To demonstrate the capability of our deep learning algorithm to generate annotations for indexing fields, we conducted three experiments. These experiments used two open-source datasets, the ROCO dataset and the IRMA dataset, along with a custom SNOMED CT dataset. While the outcomes of the three experiments are satisfactory in the context of less critical tasks and serve as a valuable testing ground for image retrieval, they also highlight the need to examine remaining challenges. This paper further elaborates on the identified issues and presents well-founded recommendations for refining and advancing the proposed approach.
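To illustrate how instance-level predictions could back-fill missing or invalid indexing fields, the following minimal Python sketch merges classifier outputs for modality, anatomical region, and body orientation into an image's metadata record, overwriting only fields that are absent or outside a controlled vocabulary. All names, field labels, and term sets here are hypothetical placeholders, not taken from the paper.

```python
# Hypothetical sketch: fill missing or invalid indexing fields from
# instance-level classifier predictions. The field names and the
# controlled vocabularies below are illustrative assumptions.

VALID_TERMS = {
    "modality": {"X-ray", "CT", "MRI"},
    "anatomical_region": {"chest", "abdomen", "head"},
    "body_orientation": {"frontal", "lateral", "axial"},
}

def fill_indexing_fields(record, predictions):
    """Return a copy of `record` in which each indexing field that is
    missing or not in the controlled vocabulary is replaced by the
    corresponding classifier prediction (if one is available)."""
    filled = dict(record)
    for field, vocabulary in VALID_TERMS.items():
        if filled.get(field) not in vocabulary and field in predictions:
            filled[field] = predictions[field]
    return filled

# Example: one valid field is kept, one invalid field is corrected,
# and one missing field is added from the predictions.
record = {"modality": "CT", "anatomical_region": "thorax??"}
predictions = {"modality": "MRI", "anatomical_region": "chest",
               "body_orientation": "axial"}
print(fill_indexing_fields(record, predictions))
```

Note that already-valid fields (here, `modality`) are deliberately left untouched, so manually curated annotations always take precedence over model output.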
Rights
Use and reproduction:
Public Domain