(JW Insights) Jun 14 -- A research team led by Macao University of Science and Technology (MUST) in south China has developed a new AI-based clinical diagnostic aid that processes multimodal input in a unified manner, Xinhua reported on June 13.
The model, named IRENE and reportedly the first such approach, was designed to support medical decision-making by jointly learning holistic representations of medical images, unstructured chief complaints, and structured clinical information, according to the team, which also included researchers from the West China Hospital of Sichuan University and the University of Hong Kong.
Zhang Kang, the team leader and a professor at MUST, said IRENE can jointly interpret multimodal clinical information, and that its intra- and inter-modal attention operations are consistent with daily clinical practice.
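For readers curious what processing multimodal input "in a unified manner" can look like, the sketch below shows one common way to build such a model: image patches, chief-complaint text tokens, and structured clinical values are projected into a single token sequence, so one self-attention stack handles both intra- and inter-modal interactions. This is an illustrative assumption, not the published IRENE architecture; all module names, dimensions, and the fusion scheme are hypothetical.

```python
# Minimal sketch of a unified multimodal transformer encoder (illustrative only;
# not the authors' code). A single self-attention stack mixes tokens within and
# across modalities.
import torch
import torch.nn as nn


class UnifiedMultimodalEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4,
                 vocab_size=30522, patch_dim=16 * 16 * 3, n_labs=20):
        super().__init__()
        # Modality-specific projections into a shared embedding space.
        self.patch_proj = nn.Linear(patch_dim, d_model)      # image patches
        self.text_embed = nn.Embedding(vocab_size, d_model)  # chief-complaint tokens
        self.lab_proj = nn.Linear(n_labs, d_model)           # structured lab values
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, 2)  # e.g. disease vs. no disease

    def forward(self, patches, text_ids, labs):
        # patches: (B, P, patch_dim), text_ids: (B, T), labs: (B, n_labs)
        img_tokens = self.patch_proj(patches)             # (B, P, d)
        txt_tokens = self.text_embed(text_ids)            # (B, T, d)
        lab_token = self.lab_proj(labs).unsqueeze(1)      # (B, 1, d)
        cls = self.cls_token.expand(patches.size(0), -1, -1)
        # One token sequence -> attention operates within and across modalities.
        tokens = torch.cat([cls, img_tokens, txt_tokens, lab_token], dim=1)
        encoded = self.encoder(tokens)
        return self.classifier(encoded[:, 0])             # predict from [CLS] token


if __name__ == "__main__":
    model = UnifiedMultimodalEncoder()
    patches = torch.randn(2, 196, 16 * 16 * 3)     # 14x14 patches of a 224x224 image
    text_ids = torch.randint(0, 30522, (2, 32))    # tokenized chief complaint
    labs = torch.randn(2, 20)                      # normalized lab results
    print(model(patches, text_ids, labs).shape)    # torch.Size([2, 2])
```

Concatenating all modalities into one sequence is one design choice; late-fusion alternatives encode each modality separately and merge only the final features, which limits fine-grained cross-modal interaction.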
The team applied IRENE to pulmonary disease identification and the prediction of adverse clinical outcomes in patients with COVID-19, with promising results, Zhang said.
Given limited medical resources in some regions, machine-learning techniques have become the de facto choice for automated, intelligent medical diagnosis, helping meet the increasing demand for precision medicine.
Among these techniques, deep learning in particular has given machine-learning models the ability to detect diseases from medical images at or near the level of human experts.
Although AI-based medical image diagnosis has made tremendous progress in recent years, jointly interpreting medical images and their associated clinical context remains a challenge, according to the IRENE research team.
The study has been published in the latest edition of Nature Biomedical Engineering, a monthly peer-reviewed scientific journal, said the Xinhua report.
(Chen HX/Li PP)