
Sagittal alignment in an MR-TRUS fusion biopsy using only the prostate contour in the axial image

Abstract

Purpose

This paper examines the feasibility of automated alignment in the sagittal direction in MR-TRUS fusion biopsy of the prostate by comparing the prostate contours in axial images between the two modalities. In the treatment of prostate cancer, an important factor affecting patient prognosis is focal therapy of the cancer within the prostate. MR-TRUS fusion biopsy of the prostate is therefore attracting attention as one of the most effective localization techniques. Because the accuracy of this biopsy is highly dependent on the doctor performing it, automation should reduce variability in diagnostic performance.

Method

The MR image is rescaled to match the scale of the TRUS image, and the contours of the prostate on the MR and TRUS images are compared in polar coordinates. In addition, comparing only specific angle ranges makes the comparison robust against deformation, and accumulating contour data improves the accuracy of the error calculation.

Result

The axial image selected by the proposed method, using the prostate contour obtained from the doctor-labeled segmentation images, differs from the axial image selected by the doctor by an average error of about 4 mm in the sagittal direction. Furthermore, using the less accurate prostate contours obtained by segmentation with U-Net only slightly reduced the accuracy. In addition, it was found that the alignment accuracy is improved by using angle weights.

Conclusion

It has been shown that sagittal alignment can be performed with a reasonable degree of accuracy using only axial images. The angle weight values obtained also indicate that, when comparing axial images, the parts that deform due to probe pressure may be an important factor in determining the corresponding axial cross section.

Introduction

Prostate cancer is one of the most common cancers in men [1]. One characteristic of prostate cancer is the presence of multiple foci of cancer within the prostate. However, the majority of these cancers do not affect patient prognosis, and it is important to locally treat cancers of 0.5 cc or larger, called clinically significant cancers [2]. This requires determining the exact location of the cancer, which must be done by biopsy. Among the various prostate cancer biopsies, MR-TRUS fusion biopsy of the prostate [3] is well known as an effective method for accurately locating prostate cancer.

The procedure for this biopsy is as follows. First, the doctor carries out segmentation of the prostate and marking of the cancer in the magnetic resonance (MR) image beforehand. From the prostate contour obtained at this time, a 3D model of the prostate is inferred, and sagittal alignment is performed by matching the contour of the 3D model to the contour of the prostate on the sagittal transrectal ultrasound (TRUS) image. Then, during the operation, the axial TRUS image and the MR image are registered to estimate the position of the cancer on the TRUS image. Image registration is the process of associating the information contained in two or more images or volumes and aligning them in the same coordinate space. Although this biopsy technique has improved the cancer detection rate, the accuracy of the registration operation is highly dependent on the operator, and it is considered necessary to automate the registration between MR and TRUS images.

Various techniques for automating image registration have been proposed to date, but these techniques have changed significantly in recent years with the advent of deep learning. Wu et al. [4, 5] first applied deep learning to image registration. Their method uses a convolutional auto-encoder (CAE) and has proven superior to existing methods in registration tasks for 3D brain volumes. In addition, methods such as that of Cao et al. [6] use a convolutional neural network (CNN) to derive the transformation parameters directly, without an iterative process.

Registration between MR and TRUS is a very difficult task due to the large differences in the appearance of the images and in the direction in which the prostate is imaged. Nevertheless, Haskins et al. [7] registered MR and TRUS images by using a CNN to calculate the similarity and achieved high registration accuracy with a target registration error (TRE) of only 3.86 mm.

However, while there has been much research on techniques that perform registration in the sagittal and axial directions in an end-to-end manner and on methods that register images directly, methods that perform only sagittal registration using a two-dimensional axial image have not been widely studied. At present, the physician performs sagittal alignment manually while viewing the sagittal image, but because of residual registration error, the detailed alignment is finally performed while viewing the axial image. Therefore, it is meaningful to perform sagittal alignment using an axial image.

However, real-time performance is lost when image similarity is calculated with iterative optimization of metrics such as normalized cross correlation (NCC). Also, MR and TRUS images look very different and in many cases cannot be registered correctly by intensity alone. In addition, when the ultrasound probe is inserted into the body, the prostate is physically compressed and parts of the contour in the ultrasound image are deformed. Based on the above, our method calculates similarity by comparing only parts of the prostate contours. A CNN such as U-Net [8] is used to extract the contours. CNN-based methods for segmenting the prostate in MR images have advanced greatly, and many methods capable of highly accurate segmentation with a Dice coefficient exceeding 0.9 have been proposed in competitions such as PROMISE 12 (Fig. 1).

Fig. 1

The procedure for sagittal alignment in an MR-TRUS fusion biopsy. After aligning on the sagittal image, fine adjustment is performed on the axial image

This paper proposes a method to find the corresponding MR image for an axial TRUS image using such an automatic segmentation method. To make the system robust against segmentation errors in a single MR image, we examine a method that compares the prostate contours in axial images angle by angle. This method computes the difference in contour shape between images of different modalities by comparing, at each angle in polar coordinates, the prostate contours obtained by U-Net. The method is robust against segmentation errors because the error between contours is calculated over multiple images, and robust against pressure deformation because each angle is evaluated separately. This study was conducted with the approval of the Tokai University Hachioji Hospital Medical Ethics Review Board (approval number: 13R222).

Method

Prostate segmentation for MR and TRUS images by U-Net

U-Net is used for segmentation of the prostate in MR and TRUS images. U-Net is a type of CNN that has driven significant progress in medical image segmentation. Separate U-Net segmentation networks are trained for MR images and for TRUS images. From the binary image produced by U-Net, the pixels indicating the outline of the prostate gland are extracted.
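The paper does not describe how the outline pixels are extracted from the binary mask, so the following is only a minimal sketch: it assumes a foreground pixel counts as an outline pixel when at least one of its 4-neighbours is background.

```python
import numpy as np

def contour_pixels(mask: np.ndarray) -> np.ndarray:
    """Return (row, col) coordinates of the boundary pixels of a binary mask.

    Assumption: a foreground pixel belongs to the contour if at least one
    of its 4-neighbours is background. `mask` is a 2-D array of 0/1 values,
    e.g. the thresholded output of U-Net.
    """
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(m & ~interior)   # shape (N, 2)
```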

Matching axial-sectional images of MR and TRUS

Step 1: Preprocessing


First, the contour lines of the MR and TRUS images are converted to polar coordinates. Since each pixel in the image data can be regarded as a point in Cartesian coordinates, the centroid of the prostate is calculated from the pixels of the prostate outline obtained from the binarized image. Polar coordinate transformation is then performed for each contour pixel with this center of gravity as the origin.

Next, because the deflection angles of the contour pixels are not sampled at regular intervals, a set of regularly spaced deflection angles is defined so that radial distances at the same angles can be compared. The number of samples is 360, and for each sampled angle the radial distance of the contour pixel whose deflection angle is closest to it is used as an approximate value. Polar coordinates are adopted so that the match between the MR and TRUS contours can be evaluated for each angle.
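As a sketch of Step 1, the conversion and resampling might look as follows; the contour pixels are assumed to be given as an (N, 2) coordinate array (for example from the extraction sketch above), and the nearest-angle lookup mirrors the approximation described in the text.

```python
import numpy as np

def contour_to_polar(contour_xy: np.ndarray, n_samples: int = 360):
    """Convert contour pixel coordinates to a radial profile r(theta).

    The centroid of the contour pixels is used as the origin, and for each
    of `n_samples` evenly spaced deflection angles the radial distance of
    the contour pixel with the closest angle is taken as the approximate
    value.
    """
    centroid = contour_xy.mean(axis=0)
    dx, dy = (contour_xy - centroid).T
    theta = np.arctan2(dy, dx)            # deflection angle of each pixel
    r = np.hypot(dx, dy)                  # radial distance of each pixel

    grid = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    diff = np.abs(theta[None, :] - grid[:, None])
    diff = np.minimum(diff, 2 * np.pi - diff)        # wrap-around distance
    return grid, r[diff.argmin(axis=1)]
```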


Step 2: Scale adjustment


Because the scales of the MR and TRUS images are different, the pixel spacing stored in the DICOM data was used to unify them. The spacing is 0.4688 mm per pixel in the MR image and 0.2047 mm per pixel in the TRUS image, so the MR image was magnified 2.29 times to match the TRUS scale.
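How the magnification was implemented is not stated; the sketch below uses nearest-neighbour zooming from SciPy as one possible option. Since only radial distances are compared afterwards, multiplying the MR radial profile by the same factor would be an equivalent alternative.

```python
import numpy as np
from scipy.ndimage import zoom

MR_MM_PER_PIXEL = 0.4688     # pixel spacings reported in the paper (from DICOM)
TRUS_MM_PER_PIXEL = 0.2047

def rescale_mr_mask(mr_mask: np.ndarray) -> np.ndarray:
    """Magnify the MR mask (~2.29x) so its pixel spacing matches the TRUS
    image; nearest-neighbour interpolation (order=0) keeps the mask binary."""
    factor = MR_MM_PER_PIXEL / TRUS_MM_PER_PIXEL
    return zoom(mr_mask.astype(np.uint8), factor, order=0).astype(bool)
```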


Step 3: Calculation of error


Then, using the resampled radial distances, the degree of error between the two contours is calculated with the following equation (1). In this formula, \(m\) is the number of samples, and \(r_{MR}(\theta _{i})\) and \(r_{TRUS}(\theta _{i})\) are the radial distances at the \(i\)-th deflection angle of the contours obtained from the MR and TRUS images, respectively.

$$\begin{aligned} Error=\frac{1}{m} \sum ^{m}_{i=1} \left| r_{MR}\left( \theta _{i}\right) -r_{TRUS}\left( \theta _{i}\right) \right| \end{aligned}$$
(1)
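Equation (1) translates directly into a mean of absolute differences; a minimal sketch, assuming both radial profiles come from the resampling in Step 1:

```python
import numpy as np

def contour_error(r_mr: np.ndarray, r_trus: np.ndarray) -> float:
    """Eq. (1): mean absolute difference of two radial profiles that are
    sampled at the same m deflection angles."""
    return float(np.mean(np.abs(r_mr - r_trus)))
```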

Step 4: Compare the error of each image


This formula gives the error between the contours from an MR image and a TRUS image. The error is computed for each of the patient's MR images, and the MR image with the smallest error is determined to show the axial cross section corresponding to the TRUS image (Fig. 2).
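Step 4 then reduces to taking the argmin of the errors over the candidate MR slices; a sketch that reuses `contour_error()` from the previous block:

```python
import numpy as np

def best_mr_slice(r_trus, mr_profiles):
    """Return the index of the MR slice whose radial profile is closest to
    the TRUS profile (smallest Eq. (1) error), plus the per-slice errors."""
    errors = [contour_error(r_mr, r_trus) for r_mr in mr_profiles]
    return int(np.argmin(errors)), errors
```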

Fig. 2

a The MR and TRUS images are brought to the same scale and the prostate contours are transformed to polar coordinates to calculate the error in radial distance between the two contours. b The MR image with the smallest error corresponds to the TRUS image

Angle weights

In this method, because the degree of error between two contours is computed from the difference at each angle, a weight can be set for each angle. This makes it possible to exclude from the evaluation the parts that are likely to deform due to pressure from the TRUS probe. Modifying equation (1) to take the angle weights (\(w_{i}\)) into account gives equation (2).

$$\begin{aligned} Error=\frac{1}{m} \sum ^{m}_{i=1} w_{i}\left| r_{MR}\left( \theta _{i}\right) -r_{TRUS}\left( \theta _{i}\right) \right| \end{aligned}$$
(2)
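Equation (2) differs from Eq. (1) only by the per-angle weights; a sketch:

```python
import numpy as np

def weighted_contour_error(r_mr, r_trus, weights) -> float:
    """Eq. (2): angle-weighted mean absolute difference. A weight of 0
    excludes an angle (e.g. a region deformed by the probe); values in
    (0, 1] keep or down-weight it."""
    r_mr, r_trus, w = map(np.asarray, (r_mr, r_trus, weights))
    return float(np.mean(w * np.abs(r_mr - r_trus)))
```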

Datasets

This experiment verifies the sagittal alignment accuracy on two datasets. One dataset uses images segmented by experts, and the other uses images segmented by U-Net. Each image was cropped for the segmentation task: the central \(256 \times 256\) region was cut out of each \(512 \times 512\) image (Fig. 3).
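The cropping amounts to a plain center crop; a minimal sketch, assuming images and masks are cropped in the same way:

```python
import numpy as np

def center_crop(image: np.ndarray, size: int = 256) -> np.ndarray:
    """Cut the central size x size patch out of a larger image
    (here 256 x 256 out of 512 x 512)."""
    h, w = image.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return image[top:top + size, left:left + size]
```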

Fig. 3

Overview of the data set

Dataset 1: Segmentation mask labeled by an expert

Data from 45 patients were used. For each patient, 5–11 MR images of the prostate and 1 TRUS image were saved. All of the images were labeled by experts to produce a segmentation mask. One MR image corresponding to one TRUS image was also specified by the expert.

Dataset 2: Segmentation mask generated by U-Net

U-Net training and segmentation were performed using the same MR and TRUS images as in Dataset 1. The corresponding combinations of MR and TRUS images are the same as in Dataset 1. The U-Net training procedure is described below.

Training U-Net

A typical U-Net was used, assuming the case where segmentation is not ideal. We trained U-Net using the Adam optimizer [9] with a learning rate of 0.001 and the Dice coefficient [10] as the loss function. During training, the data augmentation described in the next section was applied. Intersection over union (IoU), a representative metric for segmentation tasks, was used for evaluation: the IoU on the test data was calculated for the model with the highest IoU on the validation data. U-Net, evaluated with sixfold cross-validation of the dataset, achieved an IoU of 0.89.
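The paper does not name the framework or the exact U-Net configuration; the following Keras-style sketch only shows how the stated choices (Adam with a learning rate of 0.001, Dice loss, IoU as the evaluation metric) might be wired together. `build_unet` is a hypothetical placeholder for the network itself.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    """Soft Dice loss: 1 minus the Dice coefficient of prediction and label."""
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return 1.0 - (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

def iou_metric(y_true, y_pred, smooth=1.0):
    """IoU of the thresholded prediction, used for model selection."""
    y_pred_bin = tf.cast(y_pred > 0.5, y_true.dtype)
    intersection = tf.reduce_sum(y_true * y_pred_bin)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred_bin) - intersection
    return (intersection + smooth) / (union + smooth)

model = build_unet(input_shape=(256, 256, 1))   # hypothetical U-Net builder
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=dice_loss,
              metrics=[iou_metric])
```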

Data augmentation

During U-Net training, data augmentation was applied to improve the segmentation accuracy. Several of the following operations are applied at random to every image: rotation by an arbitrary angle within ±15 degrees, lateral and vertical shifts of up to 10%, zoom of up to 20%, and flipping in the vertical direction. Each pixel value is divided by 255 and normalized to the range 0–1. Every operation except normalization is applied at random just before the image is fed into the CNN.
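The library used for augmentation is not stated; one possible mapping of the described operations onto Keras' ImageDataGenerator is sketched below.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,        # rotation within +/-15 degrees
    width_shift_range=0.10,   # lateral shift within 10%
    height_shift_range=0.10,  # vertical shift within 10%
    zoom_range=0.20,          # zoom of up to 20%
    vertical_flip=True,       # inversion in the vertical direction
    rescale=1.0 / 255.0,      # divide pixel values by 255 -> range 0-1
)
```

In practice, the same random transform would also have to be applied to the corresponding mask, for example by pairing two generators created with the same random seed.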

Experiment

In this experiment, the accuracy of the proposed method is confirmed by performing sagittal alignment with the proposed method on the two datasets.

We also verify that applying angle weights improves accuracy. The MR-TRUS pairs were divided into six folds; 5/6 of the data was used to determine the angle weights, which were then used to improve the sagittal alignment accuracy on the remaining data. The graph shown in Fig. 4 is obtained by comparing the error between contours of correct MR-TRUS pairs with the error between contours of incorrect pairs. Using these values, the angle weights are determined in the following two patterns.

  • A threshold is determined, and angles at which the difference falls below the threshold are excluded from the evaluation.

  • The pre-acquired per-angle difference between the contour error of correct MR-TRUS pairs and that of incorrect pairs is min-max normalized and used as a weight between 0 and 1 (a sketch of both weighting schemes follows this list).
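A sketch of both weighting schemes, assuming the per-angle difference from Fig. 4 is already available as an array `diff` computed on the 5/6 training split; the sign convention and the threshold value are assumptions.

```python
import numpy as np

def angle_weights(diff, mode="minmax", threshold=0.0):
    """Turn the per-angle difference between the contour errors of
    incorrect and correct MR-TRUS pairs into per-angle weights.

    mode="threshold": weight 1 where the difference is at or above the
        threshold, weight 0 (angle excluded) where it falls below it.
    mode="minmax":    min-max normalize the differences to the range 0-1.
    """
    d = np.asarray(diff, dtype=float)
    if mode == "threshold":
        return (d >= threshold).astype(float)
    return (d - d.min()) / (d.max() - d.min())
```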

Fig. 4

A graph showing the per-angle difference between the contour error of correct MR-TRUS pairs and that of incorrect pairs (units are pixels). It can be seen that, on the side of the prostate affected by probe pressure, the degree of error differs between correct and incorrect MR-TRUS pairs

We also confirm the accuracy of the proposed methods by comparing them with sagittal alignment using Hu moments. Hu moment-based matching was selected as the baseline because it is simple and can evaluate the similarity of shapes, whereas an iterative intensity-based method takes about 10 s per pair and is therefore not practical for our application. Hu moment matching also suits our task of evaluating the shape of a contour obtained from a binary image regardless of position or rotation. The assessment is based on how far the image selected by each alignment method is from the image selected by the doctor for each patient, where the interval between adjacent MR images is 3 mm; the metric is the average error per patient.
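The paper does not state how the Hu moment comparison was implemented; one common route is OpenCV's matchShapes, which compares log-scaled Hu moment invariants of two binary shapes, sketched below under that assumption.

```python
import cv2
import numpy as np

def hu_moment_distance(mask_mr: np.ndarray, mask_trus: np.ndarray) -> float:
    """Shape dissimilarity of two binary masks based on Hu moment invariants
    (insensitive to translation, scale and rotation); the MR slice with the
    smallest distance to the TRUS mask is selected as the match."""
    return cv2.matchShapes(mask_mr.astype(np.uint8),
                           mask_trus.astype(np.uint8),
                           cv2.CONTOURS_MATCH_I1, 0.0)
```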

Result

The results are shown in Table 1. All of the proposed methods were more accurate than alignment by shape comparison using Hu moments.

Table 1 Error when aligning with each method (mm)

The method applying angle weights obtained with min-max normalization was the most accurate, with an error of about 4 mm from the cross section specified by the physician.

Discussion

As Table 1 shows, the proposed method achieved sagittal alignment with higher precision than the method that determines image similarity using Hu moments. The higher accuracy of the proposed method, which does not consider rotation, compared with the Hu moment method, which does, suggests that the prostate is oriented almost identically in the MR and TRUS images. It also shows that the proposed method can detect the vicinity of the corresponding axial section with high accuracy even when the prostate shape is deformed by pressure and the cross sections are not exactly the same. From the above, it has been shown that alignment with an error of about 4 mm is possible using the proposed method when the segmentation is ideal.

In this experiment, accuracy was improved in some methods by considering angle weights. Selectively excluding deformed parts from the comparison improved accuracy slightly, and a similar improvement was seen when min-max normalization was used to fine-tune the weights between 0 and 1. Of the two, min-max normalization appears to be more effective.

In both cases, the upper and lower parts of the prostate were excluded from the evaluation. One reason the lower part of the prostate is excluded is that the prostate contour on the TRUS image is deformed into a shape that follows the probe and therefore looks similar in every image. The upper part of the prostate, on the other hand, is unlikely to be affected by deformation, so the differences there are not distinctive and it is likewise not evaluated.

In this way, it was confirmed that the sagittal alignment accuracy is improved by using angle weights to evaluate the portion that is likely to be deformed by probe pressure. These results indicate that the parts of the axial image that are not affected by probe pressure look similar in every image. In other words, the correct combination of MR and TRUS images can be found accurately by comparing in detail the portion of each axial image that is deformed by the probe.

In this experiment, we verified the accuracy of sagittal alignment using only axial images. However, since the TRUS cross section is likely to lie between two MR axial sections, the labeling of the dataset needs to be reviewed in order to verify the accuracy of the proposed method in more detail in the future. In addition, all images in our dataset clearly showed the prostate; this was done to reduce segmentation errors by excluding images containing little or no prostate. For full automation, however, every step must be automated, including the selection of which images are used to calculate the error. Therefore, an experiment is needed with a system that comprehensively judges whether the prostate appears in each captured MR image.

As for computation time, excluding the CNN inference, each error calculation takes about 0.1 ms, which shows excellent real-time performance.

Conclusion

In this paper, we explored a method of sagittal alignment for MR-TRUS image registration that compares prostate shapes in axial images from the two modalities. To this end, we proposed a method that compares the contours of the segmented prostate in polar coordinates. An advantage of this approach is that even if the prostate contour is altered by pressure, any angle can be excluded from the evaluation to reduce the effect of the deformation.

Using the prostate contours obtained from the doctor-labeled segmentation images, the proposed method achieved sagittal alignment with an error of about 4 mm. Even when the less accurate prostate contours produced by U-Net were used, alignment was possible with an error of approximately 4.13 mm. In both cases, the accuracy was higher than that of image matching using Hu moments.

Accuracy was further improved by using angle weights: evaluating the areas that are susceptible to deformation under probe pressure slightly improves the sagittal alignment accuracy. In the future, further data analysis should allow even more accurate alignment.

Future work

Currently, this method calculates the error between prostate contours on a single TRUS image and on multiple MR images. However, in the actual procedure, multiple TRUS images can be acquired, and the sagittal resolution of the TRUS images is higher than that of the MR images. Therefore, it is necessary to obtain multiple TRUS images by analyzing the recorded video data and to perform sagittal alignment experiments using those data.

In our method, the origin of the polar coordinates is defined as the center of gravity of the prostate contour, but when the prostate is deformed by pressure, the center of gravity may shift. The effect of this shift on accuracy, and possible countermeasures, have not yet been examined and need to be studied in the future.

Availability of data and materials

Not applicable.

References

  1. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A (2018) Global cancer statistics 2018: globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 68(6):394–424


  2. Stamey TA, Freiha FS, McNeal JE, Redwine EA, Whittemore AS, Schmid H-P (1993) Localized prostate cancer. Relationship of tumor volume to clinical significance for treatment of prostate cancer. Cancer 71(S3):933–938


  3. Singh AK, Kruecker J, Xu S, Glossop N, Guion P, Ullman K, Choyke PL, Wood BJ (2008) Initial clinical experience with real-time transrectal ultrasonography-magnetic resonance imaging fusion-guided prostate biopsy. BJU Int 101(7):841–845


  4. Wu G, Kim M, Wang Q, Gao Y, Liao S, Shen D (2013) Unsupervised deep feature learning for deformable registration of MR brain images. In: International conference on medical image computing and computer-assisted intervention. Berlin: Springer; p. 649–656.


  5. Wu G, Kim M, Wang Q, Munsell BC, Shen D (2015) Scalable high-performance image registration framework by unsupervised deep feature representations learning. IEEE Trans Biomed Eng 63(7):1505–1516


  6. Cao X, Yang J, Zhang J, Nie D, Kim M, Wang Q, Shen D (2017) Deformable image registration based on similarity-steered cnn regression. In: International conference on medical image computing and computer-assisted intervention. Berlin: Springer; p. 300–308.


  7. Haskins G, Kruecker J, Kruger U, Xu S, Pinto PA, Wood BJ, Yan P (2019) Learning deep similarity metric for 3D MR-TRUS image registration. Int J Comput Assist Radiol Surg 14(3):417–425


  8. Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Berlin: Springer; p. 234–241.


  9. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.

  10. Sudre CH, Li W, Vercauteren T, Ourselin S, Cardoso MJ (2017) Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In: Deep learning in medical image analysis and multimodal learning for clinical decision support. Berlin: Springer; p. 240–248.



Acknowledgements

The authors would like to gratefully acknowledge the financial support of JSPS KAKENHI Grant Number JP17H03200.

Funding

This work was supported by JSPS KAKENHI Grant Number JP17H03200.

Author information


Contributions

RI, NK, SS designed the study, performed the analyses, and helped in the manuscript writing. YN, KT, YS contributed to the discussion of the results. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Norihiro Koizumi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Igarasihi, R., Koizumi, N., Nishiyama, Y. et al. Sagittal alignment in an MR-TRUS fusion biopsy using only the prostate contour in the axial image. Robomech J 7, 4 (2020). https://doi.org/10.1186/s40648-020-0155-9
