Capability Enhancement of The X-Ray Micro-Tomography System Via ML-assisted Approaches
Abstract
Ring artifacts in X-ray micro-CT images are one of the primary causes
of concern in their accurate visual interpretation and quantitative analy-
sis. The geometry of X-ray micro-CT scanners is similar to that of medical
CT machines, except that the sample is rotated while the source and detector
remain stationary. Ring artifacts are caused by defects or non-linear responses
in detector pixels during the MicroCT data acquisition. Artifacts in Mi-
croCT images can often be so severe that the images are no longer useful
for further analysis. Therefore, it is essential to comprehend the causes of
artifacts and potential solutions to maximize image quality. This article
presents a convolutional neural network (CNN)-based Deep Learning (DL)
model inspired by UNet with a series of encoder and decoder units with
skip connections for removal of ring artifacts. The proposed architecture
has been evaluated using the Structural Similarity Index Measure (SSIM)
and Mean Squared Error (MSE). Additionally, the results are compared
with conventional filter-based non-ML techniques and are found to
outperform them.
1 INTRODUCTION
Computed tomography (CT) or computed axial tomography (CAT) is one of the
key technologies that has increased the capability of investigating the interior
of any given solid sample and is extensively employed in the field of medical
imaging [1]. X-rays are incident on the given sample from all angles over 360°, and the
transmitted X-rays are collected on the other side of the sample by an X-ray detector [2].
The collected X-rays have a lower intensity than the incident beam because of the
attenuation introduced by the sample; these measurements are called projections. The
projections, recorded as a function of the rotation angle, are subjected to tomographic
reconstruction, either classical analytical or iterative, which recovers the internal
structure of the sample [3, 4]. Generally, CT/CAT gives high-resolution 3D images of the
complete sample (a large-volume sample, like the human body), whereas MicroCT gives very
high-resolution images of a particular smaller location within the sample [5, 6].
CT/CAT/MicroCT systems employ conventional X-ray sources as the main radiation source;
however, nowadays synchrotron-based X-ray beam sources are also employed for MicroCT
applications. The
synchrotron X-ray MicroCT (SXMCT) has a wide scope of applicability ranging
from material sciences to Palaeontology [7, 8, 9, 10]. However, the capabilities
are restricted due to the incomplete projection information. The complete pro-
jection information cannot be achieved due to the radiation dose limitations,
space and time constraints, object shape, and malfunctioning in the imaging
systems. This incomplete character of the measured data gives rise to artifacts
in the final reconstructed images [11, 12, 13, 14]. The study of these artifacts
and their possible removal has become an active research topic. Numerous forms of
artifacts have been reported for SXMCT images, namely,
beam-hardening artifacts, ring artifacts, motion artifacts, scattering, and many
more. These are attributed to different reasons which lead to incomplete mea-
sured datasets. Every artifact impacts the reconstruction in a very unique way,
and the impact is visible in the final reconstructed image. These artifacts limit
the applicability of the MicroCT/SXMCT [14].
Ring artifacts (RAF) are the most commonly observed and appear as ring-like
structures of either maximum or minimum intensity [15, 16]. These rings may have
different radii, intensities, and thicknesses; refer to Figure 1. RAF result from
detector malfunction, specifically malfunctioning camera pixels [17, 18]. When a
detector pixel that is supposed to measure the X-rays coming out of the sample is
not working properly, it registers a fixed value, either the maximum value (hot
pixel) or zero (dead pixel). Such pixels only report the most extreme brightness
on the scale. The data from these pixels appear as straight lines in the sinogram,
and when such sinograms are reconstructed, RAF become visible as rings in the
reconstructed image. When these rings fall within the area of interest, they
restrict image analysis and limit further image processing. RAF are purely
instrumental in origin and constitute high-frequency artifacts in the image, so
their removal is a paramount requirement. RAF correction approaches span a wide
range of strategies [19].
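To make this mechanism concrete, the following minimal sketch (an illustration using scikit-image, not code from this work) simulates a dead detector pixel as a constant line across the sinogram and reconstructs it, which produces a ring in the image:

```python
# Minimal sketch (not from this work): how a dead detector pixel turns into a
# ring after reconstruction. Assumes scikit-image is installed.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)          # small test image
theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles

sinogram = radon(phantom, theta=theta)   # rows: detector pixels, cols: angles

# A dead pixel registers the same value at every projection angle,
# which appears as a straight line across the sinogram.
dead_pixel = sinogram.shape[0] // 3
sinogram[dead_pixel, :] = 0.0

# Reconstructing the corrupted sinogram turns that line into a ring artifact.
reconstruction = iradon(sinogram, theta=theta)
```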
Traditionally, RAF are removed via the flat-field correction methodology [20].
This method involves acquiring an image of the beam without the sample in place.
The effects of all sorts of non-uniformities, such as the non-uniform response of
the scintillator and CCD detector, are recorded and treated as the flat field.
The acquired projections are then normalized against this flat field, which
removes much of the RAF. Flat-field correction alone is not very effective for
RAF removal because of the wide range of detector response functions.
Hardware-based techniques [21, 22] involve a non-stationary detector system that
compensates for the malfunctioning of individual detector pixels. This averages
out the detector response, and a considerable reduction in RAF is reported.
However, specialized hardware is needed to carry out this kind of RAF correction.
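As a concrete illustration of the flat-field normalization mentioned above, a minimal sketch is given below; the dark-field frame is an assumption (it is not discussed in the text), and this is not the authors' code:

```python
# Minimal sketch of flat-field normalization (illustrative, not the authors' code).
# Assumes a dark-field frame (beam off) is also available.
import numpy as np

def flat_field_correct(proj, flat, dark, eps=1e-6):
    """Normalize a raw projection by the detector's flat-field response.

    proj: projection with the sample in the beam
    flat: flat-field image (beam on, no sample)
    dark: dark-field image (beam off)
    """
    denom = np.clip(flat - dark, eps, None)   # avoid division by zero
    return (proj - dark) / denom
```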
The third category includes image-based processing techniques. These are fur-
ther classified into tomogram-based (post-processing of CT data) techniques
[16, 23, 24, 25, 26, 27] and sinogram-based (pre-processing) techniques [15, 28].
Sinogram-based methods work directly with the sinogram data and aim to filter
out the artifacts using low pass filters [29, 30, 31, 32]. The ring artifacts appear
as straight lines in a vertical direction on the sinogram, making their recognition
and interpretation easier when using sinogram-based algorithms. Furthermore,
iterative systems based on relative total variations (RTV) have been presented
in [15], where the intensity deviations smoothing approach and image inpaint-
ing are used in the correction process. Most of these, however, fail to remove
the strong artifacts related to dead detector elements or damaged areas on
the scintillator, and the majority of these techniques are only effective against
specific kinds of stripes. The limitations of the discussed methods give strong
motivation for the development of a deep-learning (DL) based ring artifact
removal procedure. Unlike filtering and hardware-based techniques, Machine
Learning techniques have proven very promising in reducing these artifacts be-
cause of their capacity for feature detection and extraction, robustness to noise,
flexibility, speed and efficiency, generalization, and other factors.
DL-based techniques have shown promising results in artifact removal in other
imaging applications [33, 34, 35, 36, 37]. A correction method based on a residual
neural network is proposed in [35], where the artifact correction network uses the
complementary information of each wavelet coefficient and the residual mechanism of
the residual block to estimate the artifacts with high precision at low computational
cost. In [36], a general open framework for metal artifact reduction (MAR), which
adopts the CNN to distinguish tissue structures from artifacts, is discussed. The
approach involves two phases: CNN training and MAR. In the CNN training
phase, a database with various CT images is built, and image patches are used
to train a CNN. In the MAR phase, the trained CNN is employed to gener-
ate corrected images with reduced artifacts, followed by additional artifact re-
duction through replacing metal-affected projections and FBP reconstruction.
CNN has been used in [37] to refine the performance of the normalized metal
artifact reduction (NMAR) method. A CNN-based hybrid ring artifact reduction
algorithm for CT images has been proposed in [33], which uses image mutual
correlation to generate a hybrid corrected image by fusing the information from
ring artifact reduction in the sinogram domain with the output given
by CNN. Polar coordinate transformation using a radial basis function neural
network (RBFNN) to remove ring artifacts is proposed in [34]. Ring artifacts
are transformed into linear artifacts by polar coordinate transformation and
smoothing operators are applied to locate them exactly. Subsequently, RBFNN
was operated on each linear artifact.
The biggest problem with SXMCT images is that they contain different backgrounds
(attributed to different samples) and several artifacts, including ring artifacts.
Therefore, studying ring artifact removal in isolation is not possible on such data.
This article puts forward a DL-based ring artifact removal approach that is relatively
fast and has a wide range of applicability. The article also addresses the training
data issue: the design procedure for purpose-specific, hand-crafted synthetic SXMCT
data containing only ring artifacts is described. The procedure assists in generating
hand-crafted SXMCT data of the required volume and diversity (in terms of brightness,
shape, size, and location of the rings) from experimental inputs. The results of the
DL-based approach have also been compared with conventional non-ML techniques to
check the efficiency of our models. The detailed methodology, including synthetic
data generation along with the proposed deep-learning architecture, is presented in
Section 2. Section 3 contains the results and discussion, followed by conclusions in
Section 4.
2 METHODOLOGY
Figure 1 shows an actual SXMCT image containing RAF. As visible in the figure,
the ring thickness, the distance between rings, the ring color, the azimuthal
angle of the rings, and the total number of rings are the important features.
The complete process followed for RAF removal consists of the steps
Figure 2: (a) Flowchart for ring artifact simulation and its removal, where D1 is the
set of micro-CT images without ring artifacts and D2 is the set of synthetic images
having ring artifacts. (b) Flowchart of the non-ML method.
shown in Figure 2. We have explored two methodologies: a DL-based technique
and non-ML techniques. As seen in Figure 2(a), in the DL-based approach, we
begin with synthetic data generation, followed by pre-processing of the
synthetic images, where they are normalized. The network is then trained on
these images and finally evaluated based on several parameters.
In our work, we have used the UNet model for RAF correction. The key idea
behind UNet is to use a Convolutional Neural Network (CNN) with both an
encoding and a decoding path, with skip connections that allow the network to
propagate high-resolution features from the encoder to the decoder. The UNet
architecture can be used to reconstruct high-quality images from an incomplete
or noisy input image. This is usually achieved by training the UNet on a set of
image pairs, where each degraded input image is paired with its corresponding
ground truth image. The main goal is to learn a mapping from the degraded input
images to the high-quality ground truth images. During training, the UNet learns
to identify the relevant features in the input image and uses them to reconstruct
the missing or degraded parts of the picture. The skip connections in the UNet
architecture allow the network to preserve the high-frequency details of the
image, which is essential for reconstructing sharp and accurate images. During
training, the UNet is typically optimized using a loss function such as MSE,
which compares the network's output to the ground truth image and penalizes
their differences. The network weights are updated using backpropagation, which
computes the gradient of the loss with respect to the network parameters and
uses it to update the weights in the direction that minimizes the loss. Once
trained, the network can reconstruct new images by applying the learned mapping
to new degraded inputs.
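A minimal PyTorch sketch of such an encoder-decoder network with skip connections and an MSE training step is shown below. This is an illustrative three-level UNet; the framework, channel widths, depth, and hyperparameters are assumptions and not the exact configuration used in this work:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, preserving spatial size (padding=1).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Illustrative three-level UNet for single-channel CT slices."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        c1, c2, c3 = channels
        self.enc1, self.enc2 = conv_block(1, c1), conv_block(c1, c2)
        self.bottleneck = conv_block(c2, c3)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(c3, c2, kernel_size=2, stride=2)
        self.dec2 = conv_block(c3, c2)      # input: up2 output + skip from enc2
        self.up1 = nn.ConvTranspose2d(c2, c1, kernel_size=2, stride=2)
        self.dec1 = conv_block(c2, c1)      # input: up1 output + skip from enc1
        self.out = nn.Conv2d(c1, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full-resolution features
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# Training-step sketch: MSE loss between the network output and the clean image.
model = MiniUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
noisy = torch.rand(4, 1, 128, 128)   # dummy batch standing in for ringed inputs
clean = torch.rand(4, 1, 128, 128)   # corresponding ground-truth images
loss = loss_fn(model(noisy), clean)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```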
In the non-ML approach, as seen in Figure 2(b), the image with ring artifacts is
converted from the Cartesian to the polar coordinate system. In Cartesian
coordinates, the image is represented in terms of x and y coordinates, forming a
grid-like structure. In polar coordinates, the representation is based on the
radial distance from a central point and the angle of a point relative to that
central point. A filter is then applied to the polar image, and the filtered
image is transformed back from polar to Cartesian coordinates.
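The coordinate round trip can be sketched as follows (an illustration using OpenCV and SciPy; the median filter is only a stand-in for the FFT, Butterworth, bilateral, and stripe filters actually compared in this work):

```python
# Illustrative sketch of the Cartesian-polar-Cartesian pipeline using OpenCV.
import cv2
import numpy as np
from scipy.ndimage import median_filter

def filter_in_polar_domain(image):
    image = image.astype(np.float32)
    h, w = image.shape
    center = (w / 2.0, h / 2.0)
    max_radius = min(w, h) / 2.0

    # Cartesian -> polar: concentric rings become (nearly) straight stripes.
    polar = cv2.warpPolar(image, (w, h), center, max_radius,
                          cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

    # Stand-in filter applied in the polar domain (smooths along the radius).
    polar = median_filter(polar, size=(1, 5))

    # Polar -> Cartesian: map the filtered image back.
    return cv2.warpPolar(polar, (w, h), center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR
                         + cv2.WARP_INVERSE_MAP)
```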
The performance of both approaches has been evaluated using SSIM and MSE.
The dataset used in this work consists of images obtained by performing micro-CT
of an ice-cream stick made of bamboo. By incorporating domain knowledge, we have
synthetically produced images with ring artifacts using the above-mentioned
features.
2.1 Synthetic data generation
Figure 3 shows the high-level process followed in synthetic data generation. We
have an experimental image from the setup that can generate micro-CT images
without RAF. This image is superimposed with synthetic ring artifact masks,
thereby producing a synthetic image with RAF.
To simulate ring artifacts similar to those observed in the sample images, we
create concentric rings on a white background using five features of the ring.
Figure 3: Synthetic Data Generation Process
Figure 4 shows the parameters used for creating the concentric rings. By ran-
domly initializing these parameters, we have created 25 such masks, which are
then used to generate the synthetic training data.
As discussed in the above section, circular ring artifact masks have been created
by varying the five features. The experimental images have been obtained by
performing micro-CT of a bamboo ice-cream stick. We have superimposed these images
with the masks generated synthetically by the above method: 25 masks, generated by
varying the above parameters, were combined with the 101 sample images provided.
Thus, 2525 diverse images are produced; ensuring that the trained network can
handle images beyond the original training dataset requires exposing it to a
diverse dataset during training. Figure 5 shows a few synthetic data samples. The
training dataset consists of 2525 pairs of images, where one image contains RAF
and the other, obtained directly from the experiment, is free of RAF.
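A minimal sketch of the mask generation and superimposition step is given below. The parameter ranges, the blending weight, and the use of arcs to realize the azimuthal-angle feature are assumptions for illustration, not the exact settings used in this work:

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

def make_ring_mask(size=512, n_rings=5):
    """Draw concentric (possibly partial) rings on a white background."""
    mask = np.full((size, size), 255, dtype=np.uint8)      # white background
    center = (size // 2, size // 2)
    radius = int(rng.integers(10, 40))
    for _ in range(n_rings):
        thickness = int(rng.integers(1, 4))                # ring thickness
        intensity = int(rng.integers(0, 256))              # ring brightness ("color")
        start, end = sorted(int(a) for a in rng.integers(0, 360, size=2))
        cv2.ellipse(mask, center, (radius, radius), 0, start, end,
                    intensity, thickness)                  # partial ring (arc)
        radius += int(rng.integers(10, 30))                # spacing to the next ring
    return mask

def superimpose(clean_image, mask, weight=0.35):
    """Blend a ring mask onto a clean slice to create a synthetic training input."""
    return cv2.addWeighted(clean_image, 1.0 - weight, mask, weight, 0.0)

# Dummy clean slice standing in for an experimental micro-CT image.
clean = np.clip(rng.normal(128, 30, (512, 512)), 0, 255).astype(np.uint8)
ringed = superimpose(clean, make_ring_mask())
```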
Figure 6: Proposed encoder-decoder based architecture with skip connections.
Filters are used to modify the information content of the data, and they can be
categorized into various types based on their functions. Each category of filter
is designed to address specific aspects of image processing, such as noise
reduction, edge enhancement, or feature extraction. The choice of filter depends
on the characteristics of the data and the goals of the image processing task.
Here, we have used the Fast Fourier Transform (FFT) filter, the bilateral filter,
the stripe filter, and the Butterworth filter, and compared their results with
those of the DL-based approach.
FFT-based methods rely on the assumption that ring artifacts correspond to
high-frequency components in the Fourier domain; as a result, they can be removed
by damping these components [38]. The Butterworth filter, characterized by its
customizable frequency response, aids in suppression by selectively filtering
specific frequencies associated with ring artifacts, leading to improved image
quality [39]. Bilateral filtering uses a nonlinear combination of adjacent image
values to smooth images while maintaining edges [40, 41]. The stripe filter,
based on wavelet decomposition and Fourier filtering, can be used to eliminate
both stripe and ring artifacts [42].
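As a generic sketch of Fourier-domain damping (not the specific method of [38]), the snippet below suppresses vertical stripes, which is the form ring artifacts take in a sinogram or polar-transformed image; the damping width and strength are arbitrary illustrative choices:

```python
import numpy as np

def damp_vertical_stripes(img, notch_width=2, keep_dc=5):
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = f.shape
    cy, cx = rows // 2, cols // 2

    # Vertical stripes concentrate energy along the horizontal axis of the
    # spectrum (zero vertical frequency). Damp a narrow band around it while
    # leaving the low-frequency region near DC untouched.
    damp = np.ones_like(f, dtype=float)
    damp[cy - notch_width:cy + notch_width + 1, :] = 0.1
    damp[cy - notch_width:cy + notch_width + 1, cx - keep_dc:cx + keep_dc + 1] = 1.0

    return np.real(np.fft.ifft2(np.fft.ifftshift(f * damp)))
```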
The SSIM between two images x and y is defined as
\[
\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)\,(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)} \qquad (3)
\]
where µx , µy are the mean values of the pixels in images x and y, respectively.
σx2 , σy2 are the variances of the pixels in images x and y, respectively. σxy
is the covariance between the pixels in images x and y, and C1 and C2 are
constants added to avoid division by zero. The numerator of the equation measures
the similarity in terms of luminance, contrast, and structure between the two
images, while the denominator normalizes the similarity measure so that the
values lie between -1 and 1. SSIM values range
between -1 and 1, where a value of 1 indicates that the two images are identical.
A value of 0 indicates that the two images are entirely dissimilar, and a value
of -1 indicates perfect anti-correlation.
MSE is a commonly used loss function in image reconstruction tasks because
it measures the average squared difference between predicted and actual pixel
values. MSE penalizes significant errors more heavily than minor errors, and
the goal of image reconstruction is typically to minimize the overall difference
between the predicted and actual images. A higher MSE indicates a larger
difference between the original image and the processed image.
The formula is as follows:
\[
\mathrm{MSE} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left(B_{ij} - A_{ij}\right)^2 \qquad (4)
\]
where Bij and Aij are the predicted and actual pixel values at indices (i, j),
respectively, and N × N is the total number of pixels in the image.
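Both metrics can be computed, for example, with the reference implementations in scikit-image (an illustrative helper, not necessarily the evaluation code used in this work):

```python
from skimage.metrics import structural_similarity, mean_squared_error
import numpy as np

def evaluate(prediction: np.ndarray, ground_truth: np.ndarray):
    """Return (SSIM, MSE) between a reconstructed slice and its ground truth."""
    ssim = structural_similarity(
        ground_truth, prediction,
        data_range=ground_truth.max() - ground_truth.min())
    mse = mean_squared_error(ground_truth, prediction)
    return ssim, mse
```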
Figure 7: Comparison of (I) (a) Input, (b) Actual, and (c) Output images and (II)
a 100×100 patch of the (a) Input, (b) Actual, and (c) Output images obtained
through the trained network
For generating the testing datasets, some of the ring parameters have been kept
static, while the concentric ring radius has been varied for all the datasets.
Firstly, we varied only the radii of the rings and created a testing dataset of
303 images. Similarly, the second dataset has been generated by randomizing
the thickness and radius alone. Finally, we have generated the third testing
dataset by varying the number of rings and their radii. The results obtained for
each of the datasets mentioned above are shown in Table 1.
Table 1: Results of various testing datasets
Figure 8: Row-1 represents feature maps of the output of the six encoder units; Row-2
represents feature maps of the output of the six decoder units. The corresponding (a-f)
images represent the encoder-decoder pairs for the feature output in the spatial dimension
As seen in the feature maps, the network initially learns low-level features followed by high-level features.
Initial encoders retain most of the input image features. This gives us an in-
tuition that these initial filters might be primitive edge detectors. As we go
deeper, the features extracted by the filters become visually less interpretable
due to the pooling operations. The pooling operations reduce the resolution of
the feature maps while increasing the neurons’ receptive field. Consequently,
the information in the deeper layers becomes more abstract and less spatially
localized. As we move on to the decoders, the initial decoder layers receive the
most abstract and downsampled feature maps from the corresponding encoder
layers and hence the feature maps produced are less interpretable. As we move
forward, the upsampling techniques increase the spatial resolution of the feature
maps. It also concatenates them with the corresponding feature maps from the
encoder through skip connections. This merging of feature maps helps to rein-
troduce spatial information and low-level details that are crucial for accurate
reconstruction.
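Such intermediate feature maps can be captured, for example, with PyTorch forward hooks; the sketch below is illustrative (the stand-in model and layer names are not those of the proposed network):

```python
import torch
import torch.nn as nn

feature_maps = {}

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# Any trained network works here; a tiny stand-in model for illustration.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
model[0].register_forward_hook(save_output("first_conv"))

with torch.no_grad():
    model(torch.rand(1, 1, 128, 128))

# feature_maps["first_conv"] is a (1, 8, H, W) tensor; each channel can be
# plotted as an image to inspect what the layer responds to.
```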
3.2.2 Effect of Up-sampling method
Here, we have systematically investigated the impact of employing various
upsampling methods. The upsampling layers follow an interpolation scheme that
increases the spatial dimensions of the input. Various upsampling methods, namely
transposed convolution and bicubic, nearest-neighbor, and bilinear interpolation,
have been used to analyze
their effect on model performance. In Table 3, we have presented the outcomes
for all the upsampling methods. It is evident from the results that the trans-
posed convolution yields the most favorable outcomes, and hence, it has been
used in the proposed architecture of the network.
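The two kinds of decoder up-sampling compared here can be sketched as follows (an illustrative PyTorch helper, not the exact blocks of the proposed network):

```python
import torch.nn as nn

def upsample_block(in_ch, out_ch, mode="transposed"):
    """Return a module that doubles the spatial resolution of its input."""
    if mode == "transposed":
        # Learnable up-sampling: a stride-2 transposed convolution.
        return nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
    # Fixed interpolation ("nearest", "bilinear", or "bicubic") followed by
    # a 1x1 convolution to adjust the number of channels.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode=mode),
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
    )
```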
4 Conclusion
This study introduces a CNN-based deep learning model inspired by UNet with
a series of encoder-decoder units with skip connections to denoise the micro-CT
images with the specific goal of removing ring artifacts. The proposed approach
involves training the model with a dataset consisting of synthetic images, which are
Figure 9: Comparison of ML (UNet) vs. various non-ML (filter-based) techniques
generated by masking the micro-CT images (without ring artifacts) with concentric
ring structures. These ring structures are simulated by varying parameters like
ring thickness, the distance between two concentric rings, azimuthal angle, the
total number of rings in the image, and the color of the rings. The trained
network is then used to reconstruct the synthetic images, which removes the ring
artifacts. The results have also been compared with various non-ML techniques.
The results of both approaches have been evaluated using metrics such as SSIM and
MSE. For the DL-based technique, the average SSIM is about 0.9523, and the average
MSE is about 0.0011. Testing carried out on different datasets has been discussed,
along with ablation studies on the effect of the number of encoder-decoder units
and of different upsampling methods.
To get a detailed idea of how the network removes RAF, feature map visualization
has also been performed. We observe that the DL-based technique offers definite
advantages over non-ML approaches. Because of the end-to-end learning capability
of the DL model, the complex and varied nature of ring artifacts can be captured
effectively without the need for explicit filtering methods or manual feature
design. Moreover, our results show that the machine learning-based approach does
exceptionally well in feature learning, automatically obtaining hierarchical
features that are essential for handling the variety of attributes of ring
artifacts, and it generalizes well to unseen patterns. Since DL models can handle
complexity and non-linear relationships, they perform better than traditional
techniques, which can find it difficult to deal with the complex nature of ring
artifacts. The findings underscore that the deep learning-based approach presented
in this study not only offers a more efficient solution for removing ring
artifacts but also serves as a viable alternative to traditional
non-machine-learning methods.
References
[1] Philip J Withers, Charles Bouman, Simone Carmignato, Veerle Cnudde,
David Grimaldi, Charlotte K Hagen, Eric Maire, Marena Manley, Anton
Du Plessis, and Stuart R Stock. X-ray computed tomography. Nature
Reviews Methods Primers, 1(1):18, 2021.
[2] Douglas P Boyd. Computed tomography: physics and instrumentation.
Academic Radiology, 2:S138–S140, 1995.
[3] Lee W Goldman. Principles of ct and ct technology. Journal of nuclear
medicine technology, 35(3):115–128, 2007.
[8] Charlotta Kämpfe Nordström, Hao Li, Hanif M Ladak, Sumit Agrawal, and
Helge Rask-Andersen. A micro-ct and synchrotron imaging study of the
human endolymphatic duct with special reference to endolymph outflow
and meniere’s disease. Scientific Reports, 10(1):8295, 2020.
[9] Camilla Albeck Neldam, Torsten Lauridsen, Alexander Rack, Tore Tran-
berg Lefolii, Niklas Rye Jørgensen, Robert Feidenhans, and Else Marie
Pinholt. Application of high resolution synchrotron micro-ct radiation in
dental implant osseointegration. Journal of Cranio-Maxillofacial Surgery,
43(5):682–687, 2015.
[10] Christian Norvik, Christian Karl Westöö, Niccolò Peruzzi, Goran Lovric,
Oscar van der Have, Rajmund Mokso, Ida Jeremiasen, Hans Brunnström,
Csaba Galambos, Martin Bech, et al. Synchrotron-based phase-contrast
micro-ct as a tool for understanding pulmonary vascular pathobiology and
the 3-d microanatomy of alveolar capillary dysplasia. American Journal of
Physiology-Lung Cellular and Molecular Physiology, 318(1):L65–L75, 2020.
[11] Kaan Orhan, Karla de Faria Vasconcelos, and Hugo Gaêta-Araujo. Ar-
tifacts in micro-ct. Micro-computed Tomography (micro-CT) in Medicine
and Engineering, pages 35–48, 2020.
[12] Mohamed Elsayed Eldib, Mohamed Hegazy, Yang Ji Mun, Myung Hye Cho,
Min Hyoung Cho, and Soo Yeol Lee. A ring artifact correction method:
Validation by micro-ct imaging with flat-panel detectors and a 2d photon-
counting detector. Sensors, 17(2):269, 2017.
[13] Anton Du Plessis, Chris Broeckhoven, Anina Guelpa, and Stephan Gerhard
Le Roux. Laboratory x-ray micro-computed tomography: a user guideline
for biological samples. Gigascience, 6(6):gix027, 2017.
[14] F Edward Boas, Dominik Fleischmann, et al. Ct artifacts: causes and
reduction techniques. Imaging Med, 4(2):229–240, 2012.
[15] Jakub Šalplachta, Tomáš Zikmund, Marek Zemek, Adam Břínek, Yoshihiro
Takeda, Kazuhiko Omote, and Jozef Kaiser. Complete ring artifacts reduc-
tion procedure for lab-based x-ray nano ct systems. Sensors, 21(1):238,
2021.
[16] Zhouping Wei, Sheldon Wiebe, and Dean Chapman. Ring artifacts removal
from synchrotron ct image slices. Journal of Instrumentation, 8:C06006,
06 2013.
[17] Nghia T Vo, Robert C Atwood, and Michael Drakopoulos. Superior tech-
niques for eliminating ring artifacts in x-ray micro-tomography. Optics
express, 26(22):28396–28412, 2018.
[18] Emran Mohammad Abu Anas, Soo Yeol Lee, and Md Kamrul Hasan. Clas-
sification of ring artifacts for their effective removal using type adaptive
correction schemes. Computers in biology and medicine, 41(6):390–401,
2011.
[19] Emran Anas, Soo Yeol Lee, and Md Kamrul Hasan. Removal of ring ar-
tifacts in ct imaging through detection and correction of stripes in the
sinogram. Physics in medicine and biology, 55:6911–30, 11 2010.
[20] Matthias Ruf and Holger Steeb. An open, modular, and flexible micro
x-ray computed tomography system for research. Review of Scientific In-
struments, 91(11), 2020.
[21] Yining Zhu, Mengliu Zhao, Hongwei Li, and Peng Zhang. Micro-ct arti-
facts reduction based on detector random shifting and fast data inpainting.
Medical physics, 40(3):031114, 2013.
[22] GR Davis and JC Elliott. X-ray microtomography scanner using time-
delay integration for elimination of ring artefacts in the reconstructed im-
age. Nuclear Instruments and Methods in Physics Research Section A:
Accelerators, Spectrometers, Detectors and Associated Equipment, 394(1-
2):157–162, 1997.
[23] Jan Sijbers and Andrei Postnov. Reduction of ring artefacts in high resolu-
tion micro-ct reconstructions. Physics in Medicine & Biology, 49(14):N247,
2004.
[24] Yiannis Kyriakou, Daniel Prell, and Willi A Kalender. Ring artifact
correction for high-resolution micro ct. Physics in medicine & biology,
54(17):N385, 2009.
[25] Xiaokun Liang, Zhicheng Zhang, Tianye Niu, Shaode Yu, Shibin Wu,
Zhicheng Li, Huailing Zhang, and Yaoqin Xie. Iterative image-domain
ring artifact removal in cone-beam ct. Physics in Medicine & Biology,
62(13):5276, 2017.
[26] Luxin Yan, Tao Wu, Sheng Zhong, and Qiude Zhang. A variation-based
ring artifact correction method with sparse constraint for flat-detector ct.
Physics in Medicine & Biology, 61(3):1278, 2016.
[27] ANM Ashrafuzzaman, Soo Yeol Lee, and Md Kamrul Hasan. A self-
adaptive approach for the detection and correction of stripes in the sino-
gram: suppression of ring artifacts in ct imaging. EURASIP Journal on
Advances in Signal Processing, 2011:1–13, 2011.
[28] Dong-Jiang Ji, Gang-Rong Qu, Chun-Hong Hu, Bao-Dong Liu, Jian-Bo
Jian, and Xiao-Kun Guo. Anisotropic total variation minimization ap-
proach in in-line phase-contrast tomography and its application to correc-
tion of ring artifacts. Chinese Physics B, 26(6):060701, 2017.
[29] Carsten Raven. Numerical removal of ring artifacts in microtomography.
Review of scientific instruments, 69(8):2978–2980, 1998.
[30] Beat Münch, Pavel Trtik, Federica Marone, and Marco Stampanoni. Stripe
and ring artifact removal with combined wavelet-Fourier filtering. Optics
express, 17(10):8567–8591, 2009.
[31] Fazle Sadi, Soo Yeol Lee, and Md Kamrul Hasan. Removal of ring artifacts
in computed tomographic imaging using iterative center weighted median
filter. Computers in biology and medicine, 40(1):109–118, 2010.
[32] Mohamed Elsayed Eldib, Mohamed Hegazy, Yang Ji Mun, Myung Hye Cho,
Min Hyoung Cho, and Soo Yeol Lee. A ring artifact correction method:
Validation by micro-ct imaging with flat-panel detectors and a 2d photon-
counting detector. Sensors, 17(2):269, 2017.
[33] Shaojie Chang, Xi Chen, Jiayu Duan, and Xuanqin Mou. A cnn-based
hybrid ring artifact reduction algorithm for ct images. IEEE Transactions
on Radiation and Plasma Medical Sciences, 5(2):253–260, 2021.
[34] Zhen Chao and Hee-Joung Kim. Removal of computed tomography ring
artifacts via radial basis function artificial neural networks. Physics in
Medicine & Biology, 64(23):235015, 2019.
[35] Tianyu Fu, Yan Wang, Kai Zhang, Jin Zhang, Shanfeng Wang, Wanxia
Huang, C Yao, C Zhou, and Q Yuan. Deep-learning-based ring artifact
correction for tomographic reconstruction. Journal of Synchrotron Radia-
tion, 30(3), 2023.
[36] Yanbo Zhang and Hengyong Yu. Convolutional neural network based metal
artifact reduction in x-ray computed tomography. IEEE transactions on
medical imaging, 37(6):1370–1381, 2018.
[37] Lars Gjesteby, Qingsong Yang, Yan Xi, Hongming Shan, Bernhard Claus,
Yannan Jin, Bruno De Man, and Ge Wang. Deep learning methods for ct
image-domain metal artifact reduction. In Developments in X-ray Tomog-
raphy XI, volume 10391, pages 147–152. SPIE, 2017.
[38] Nghia T. Vo, Robert C. Atwood, and Michael Drakopoulos. Superior tech-
niques for eliminating ring artifacts in x-ray micro-tomography. Opt. Ex-
press, 26(22):28396–28412, Oct 2018.
[39] Li Zhongshen. Design and analysis of improved butterworth low pass fil-
ter. In 2007 8th International Conference on Electronic Measurement and
Instruments, pages 1–729–1–732, 2007.
[40] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color im-
ages. In Sixth International Conference on Computer Vision (IEEE Cat.
No.98CH36271), pages 839–846, 1998.