Scientists have made a series of advances in the field of digital pathology image analysis

Recently, Dr. Qin Wenjian's team at the Research Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, has made a series of research advances in digital pathology image analysis. The team has carried out related research on the technical challenges of computing over the enormous size of a single digital pathology image, exploiting multi-magnification information, and fusing cross-scale information, realizing a research path "from algorithmic model innovation to actual clinical validation". The related work was published in the IEEE Journal of Biomedical and Health Informatics, The American Journal of Pathology, and Genes.

Pathological diagnosis is the gold standard for the clinical diagnosis of cancer: morphological changes in tissues and cells are observed under a microscope to diagnose and stage cancer, providing tumor patients with a reference for preoperative diagnosis, treatment planning, and postoperative prognosis. As the number of cancer patients grows, the demand for pathological diagnosis is also increasing, yet the number of pathologists in China is seriously insufficient. Digital pathology imaging, which digitizes complete pathology slides through optical stitching and scanning to produce panoramic (whole-slide) images, offers a feasible technical route out of this severe shortage of pathologists and uneven allocation of resources. However, for a single gigapixel panoramic pathology image, manual analysis of pathological cells is enormously laborious and error-prone.

In recent years, with faster digital pathology imaging and the maturation of deep learning algorithms, the features of digital pathology images can be extracted automatically and evaluated quantitatively. The resulting analysis is repeatable, stable, and robust: it yields objective diagnostic results, and because the computation is automatic, it greatly improves efficiency and reduces doctors' workload. Despite these advances in deep-learning-based pathology computing, in real-world clinical practice pathologists typically combine information at different magnifications, from subnuclear structures (O(0.1 μm)) to cells (O(10 μm)), intercellular structures (O(100 μm)), and larger tissue scales (O(1 mm)). Technical challenges therefore remain in effectively using the multi-magnification information stored in "pyramid" form and in computing rapidly and accurately over a single enormous image.
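To make the "pyramid" storage concrete: a whole-slide image is saved as a stack of progressively downsampled levels, and analysis code constantly maps tile coordinates between levels. The sketch below, a simplified illustration assuming a dyadic pyramid (each level halves the resolution; real slide formats record per-level downsample factors in their metadata), shows why a small tile at low magnification corresponds to a huge region at full magnification.

```python
# Sketch: mapping a tile between pyramid levels of a whole-slide image (WSI).
# Assumption: a dyadic pyramid, where level k is downsampled by 2**k relative
# to level 0 (the highest magnification).

def tile_region_at_level0(x, y, size, level):
    """Map a square tile at (x, y) with side `size` on `level`
    to its covering region in level-0 (full-magnification) pixels."""
    scale = 2 ** level            # downsample factor of this level
    return (x * scale, y * scale, size * scale)

# A 256-px tile at level 3 (8x downsample) covers a 2048-px region at full
# magnification -- one reason gigapixel slides are processed tile by tile.
region = tile_region_at_level0(100, 200, 256, 3)
print(region)  # (800, 1600, 2048)
```

This coordinate bookkeeping is what lets multi-magnification models pair a low-magnification context tile with the high-magnification tiles it contains.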

To address the shortcomings of fusing histopathological image information across magnifications, Qin Wenjian's team proposed an innovative deep multi-magnification similarity learning method. The method aids the interpretability of multi-magnification learning models, facilitates the visualization of feature representations from low-level (e.g., cell-level) to high-level (e.g., tissue-level), and eases the difficulty of understanding how information propagates across magnifications. A similarity cross-entropy loss function is designed so that the similarity of information across magnifications can be learned more effectively. Experiments with different backbone feature extractors and different magnification combinations verified the effectiveness of the method, visualizations further demonstrated its interpretability, and evaluations on public and clinical histopathology datasets showed excellent performance in area under the curve (AUC) and accuracy compared with existing methods. The related research work was published in the IEEE Journal of Biomedical and Health Informatics (DOI: 10.1109/JBHI.2023.3237137), with PhD student Diao Songhui as first author.
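The general idea behind a similarity cross-entropy objective can be sketched in a few lines. Note this is not the paper's exact formulation, only the generic contrastive form it belongs to: features of matched low-/high-magnification pairs from the same region should score higher under softmax than mismatched pairs.

```python
import math

# Hedged sketch of a similarity cross-entropy objective between feature
# vectors extracted at two magnifications. The published loss may differ
# in detail; this shows only the generic form: matched pairs (i, i)
# should be more similar than mismatched pairs.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_cross_entropy(low_feats, high_feats):
    """Mean negative log softmax probability of each matched pair (i, i)."""
    loss = 0.0
    for i, u in enumerate(low_feats):
        sims = [cosine(u, v) for v in high_feats]
        exps = [math.exp(s) for s in sims]
        loss += -math.log(exps[i] / sum(exps))
    return loss / len(low_feats)

low = [[1.0, 0.0], [0.0, 1.0]]                     # toy 2-D features
aligned = similarity_cross_entropy(low, low)        # matched pairs identical
shuffled = similarity_cross_entropy(low, low[::-1]) # pairs deliberately swapped
print(aligned < shuffled)  # True: aligned features give a lower loss
```

Minimizing such a loss pulls the representations of the same tissue region at different magnifications together, which is what enables the cross-magnification visualizations described above.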

Figure 1: Framework and results of the multi-magnification pathology image computation method proposed by the research team

The team's previous research found that experienced pathologists usually perform a visual examination of cancerous or suspected regions in the panoramic pathology image to reach a final diagnosis. Because a single panoramic pathology image is enormous, however, visual diagnosis is a labor-intensive and time-consuming task. To align with actual clinical diagnostic workflows and validate the algorithm in the clinic, the team collaborated with clinical hospitals on automatic diagnosis research and proposed a weakly supervised framework based on a multi-magnification attention convolutional neural network. The method requires only image-level labels (no pixel-level annotation), detects regions of interest (cancer), and directly suggests suspicious lesion areas to clinical pathologists, improving the efficiency of computer-assisted diagnosis. The proposed method was demonstrated on the TCGA liver cancer dataset. Experimental results show that the framework significantly outperforms single-scale detection in area under the curve, accuracy, sensitivity, and specificity, while providing very fast detection times. Compared with the diagnoses of three pathologists, the proposed method performed better than the junior and intermediate pathologists and slightly below the senior pathologist. The related research work was published in The American Journal of Pathology under the title "Weakly Supervised Framework for Cancer Region Detection of Hepatocellular Carcinoma in Whole-Slide Pathologic Images Based on Multiscale Attention Convolutional Neural Network" (DOI: 10.1016/j.ajpath.2021.11.009), with doctoral student Diao Songhui as first author.
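The attention mechanism is what lets a weakly supervised model localize suspicious regions from image-level labels alone: patch features are pooled with learned attention weights, and high-attention patches point at candidate lesions. The sketch below illustrates this pooling step with hand-picked toy features and weights; it is not the published network, whose parameters are learned from data.

```python
import math

# Hedged sketch of attention pooling for weakly supervised slide
# classification. Patch features are combined into one slide embedding
# via softmax attention, so only a slide-level label is needed for
# training, and high-attention patches indicate suspicious regions.
# The attention weights here are illustrative constants.

def attention_pool(patch_feats, w_attn):
    """Softmax-attention weighted average of patch feature vectors."""
    scores = [sum(a * f for a, f in zip(w_attn, feat)) for feat in patch_feats]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(patch_feats[0])
    pooled = [sum(w * feat[d] for w, feat in zip(weights, patch_feats))
              for d in range(dim)]
    return pooled, weights

patches = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]   # toy 2-D patch features
pooled, weights = attention_pool(patches, [1.0, 1.0])
# The second patch scores highest, so it dominates the slide embedding
# and would be highlighted as the suspicious region.
print(max(range(3), key=lambda i: weights[i]))  # 1
```

In a multi-magnification variant, such pooling can be applied per magnification before the branches are combined, which is the kind of design the framework above builds on.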

Figure 2: Results of the multi-magnification weakly supervised framework proposed by the team

To integrate the morphological information of pathology images with molecular gene-function information for accurate prediction of patient survival, the team, building on its accumulated work in pathology image computation, designed a multimodal survival prognosis model that fuses pathology images and genes. The results reveal the great potential of multimodal information for cancer prognosis, and such multimodal modeling is expected to provide effective tools for clinical diagnosis and decision-making. The related research work was published in Genes (DOI: 10.3390/genes13101770) under the title "Integrative Histology-Genomic Analysis Predicts Hepatocellular Carcinoma Prognosis Using Deep Learning", with Jiaxin Hou as first author.
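A common way to combine two modalities, sketched below, is late fusion: an embedding from the pathology images and one from the gene data are concatenated and scored by a risk head. This is a generic illustration with toy dimensions and hand-set weights, not the published model, which learns both embeddings and the fusion from data.

```python
# Hedged sketch of late fusion for survival prognosis: a pathology-image
# embedding and a genomic embedding are concatenated, then a linear risk
# head scores the patient. All numbers here are illustrative.

def fuse_and_score(path_embed, gene_embed, weights, bias=0.0):
    """Concatenate the two modality embeddings and apply a linear head."""
    fused = path_embed + gene_embed          # simple concatenation fusion
    return sum(w * x for w, x in zip(weights, fused)) + bias

path_embed = [0.4, -0.2]      # toy pathology-image features
gene_embed = [0.1, 0.3]       # toy genomic features
risk = fuse_and_score(path_embed, gene_embed, [0.5, 0.5, 0.5, 0.5])
print(round(risk, 2))  # 0.3
```

In practice such a risk score feeds a survival objective (e.g., a Cox-style loss), so that higher scores correspond to poorer predicted prognosis.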

Figure 3: The prognostic prediction model fusing pathology and genes proposed by the team, and its experimental results

The above research was supported by the National Natural Science Foundation of China (Youth Program and General Program), the Shenzhen Basic Research Key Project, and the Youth Innovation Promotion Association of the Chinese Academy of Sciences, among others. (Source: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
