Analysis of Multi-Tracer Adaptability in Machine Intelligence Models for Positron Emission Tomography Bias Adjustment

Dr. Aisha Rahman, Department of Clinical Imaging, Southeast Asia Health Sciences University, Kuala Lumpur, Malaysia

Abstract

Positron Emission Tomography (PET) imaging plays a critical role in functional and molecular diagnostics; however, its quantitative accuracy is significantly influenced by attenuation-related biases. Traditional correction techniques rely heavily on structural imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI), which introduce limitations in multi-tracer adaptability due to modality-specific inconsistencies and tracer-dependent variations. Recent advances in machine intelligence, particularly deep learning-based models, have enabled data-driven attenuation correction methods that demonstrate improved generalization capabilities across imaging conditions. Nevertheless, the ability of these models to maintain robustness across diverse radiotracers remains an unresolved challenge.
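As a brief illustration of the bias at issue (a sketch added here, not part of the study itself), the standard line-integral model scales measured coincidence counts by the photon survival probability exp(-∫μ dl) along each line of response; attenuation correction divides by this factor. The μ values below are hypothetical and purely illustrative:

```python
import numpy as np

# Hypothetical 1-D line of response: linear attenuation coefficients mu
# (cm^-1) sampled at 1 cm intervals (illustrative soft-tissue/bone values).
mu = np.array([0.096, 0.096, 0.151, 0.096])
dx = 1.0  # cm per sample

true_counts = 1000.0
# Photon pairs survive with probability exp(-integral of mu along the line).
attenuation_factor = np.exp(-np.sum(mu * dx))
measured_counts = true_counts * attenuation_factor

# Attenuation correction multiplies by the inverse survival probability.
corrected_counts = measured_counts / attenuation_factor
```

When the attenuation map is wrong (e.g., derived from a misaligned CT), the division uses the wrong factor and the quantitative bias propagates directly into the reconstructed activity; this is the error that learning-based correction methods aim to remove without a measured μ-map.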

This study presents a comprehensive analysis of multi-tracer adaptability in machine intelligence models designed for PET bias adjustment. It examines how variations in tracer distribution, photon attenuation properties, and biological uptake patterns affect the generalization capacity of learning-based correction systems. By synthesizing existing frameworks—including convolutional neural networks, adversarial architectures, and joint reconstruction models—the research evaluates their effectiveness in handling heterogeneous tracer datasets.

The proposed analytical framework integrates spectral and structural feature learning with domain adaptation mechanisms to enhance cross-tracer generalizability. Emphasis is placed on understanding how training strategies, including multi-site normalization (Onofrey, 2019) and adversarial learning (Arabi et al., 2019), contribute to model robustness. Furthermore, the study explores the role of joint activity–attenuation reconstruction (Rezaei, 2012; Rezaei et al., 2018) and synthetic CT generation (Dong, 2019) in reducing tracer-specific biases.
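To make the multi-site normalization idea concrete, the following hedged sketch (in the spirit of Onofrey, 2019; the function name and data are illustrative, not from the cited work) shows per-volume z-score normalization, which removes global site- and calibration-dependent intensity shifts before training:

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Illustrative per-volume z-score normalization: rescale each scan to
    zero mean and unit variance so site- or scanner-specific global intensity
    scaling does not dominate what the network learns."""
    voxels = volume[mask] if mask is not None else volume
    mu, sigma = voxels.mean(), voxels.std()
    return (volume - mu) / (sigma + 1e-8)

# Two hypothetical scans of the same anatomy with different global calibration.
rng = np.random.default_rng(0)
scan_a = rng.normal(100.0, 20.0, size=(8, 8, 8))
scan_b = scan_a * 3.5 + 50.0  # same structure, different scale and offset

norm_a = zscore_normalize(scan_a)
norm_b = zscore_normalize(scan_b)
# After normalization the two scans are numerically (near-)identical.
```

Because any affine intensity change is removed, scans from different sites (or, by analogy, tracers with different global uptake scales) become more directly comparable inputs for a shared correction model.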

Findings indicate that while deep learning approaches significantly outperform traditional methods in single-tracer scenarios, their performance degrades when exposed to unseen tracer distributions unless explicit generalization strategies are incorporated. The integration of multi-tracer datasets and hybrid modeling approaches emerges as a key factor in achieving reliable bias correction.

This research contributes to the advancement of PET imaging by providing a critical evaluation of machine intelligence adaptability, identifying limitations in current methodologies, and proposing directions for developing tracer-agnostic correction frameworks. The results hold significant implications for improving clinical reliability and expanding the applicability of PET imaging across diverse diagnostic contexts.

Keywords

Positron Emission Tomography, Attenuation Correction, Multi-Tracer Adaptability

References

H. Arabi, G. Zeng, G. Zheng, and H. Zaidi, “Novel adversarial semantic structure deep learning for MRI-guided attenuation correction in brain PET/MRI,” Eur. J. Nucl. Med. Mol. Imag., vol. 46, pp. 2746–2759, Dec. 2019.

J. A. Onofrey, “Generalizable multi-site training and testing of deep neural networks using image normalization,” in Proc. IEEE 16th Int. Symp. Biomed. Imag. (ISBI), 2019, pp. 348–351.

A. Martinez-Möller, M. Souvatzoglou, N. Navab, M. Schwaiger, and S. G. Nekolla, “Artifacts from misaligned CT in cardiac perfusion PET/CT studies: Frequency, effects, and potential solutions,” J. Nucl. Med., vol. 48, no. 2, pp. 188–193, 2007.

J. F. Barrett and N. Keat, “Artifacts in CT: Recognition and avoidance,” Radiographics, vol. 24, no. 6, pp. 1679–1691, 2004.

F. E. Boas and D. Fleischmann, “CT artifacts: Causes and reduction techniques,” Imag. Med., vol. 4, no. 2, pp. 229–240, 2012.

R. Boellaard, “FDG PET/CT: EANM procedure guidelines for tumour imaging: Version 2.0,” Eur. J. Nucl. Med. Mol. Imag., vol. 42, pp. 328–354, Feb. 2015.

T. J. Bradshaw, G. Zhao, H. Jang, F. Liu, and A. B. McMillan, “Feasibility of deep learning-based PET/MR attenuation correction in the pelvis using only diagnostic MR images,” Tomography, vol. 4, no. 3, pp. 138–147, 2018.

Y. Chen and H. An, “Attenuation correction of PET/MR imaging,” Magn. Reson. Imag. Clin., vol. 25, no. 2, pp. 245–255, 2017.

X. Dong, “Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging,” Phys. Med. Biol., vol. 64, no. 21, 2019, Art. no. 215016.

X. Dong, “Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging,” Phys. Med. Biol., vol. 65, no. 5, 2020, Art. no. 55011.

F. Hashimoto, M. Ito, K. Ote, T. Isobe, H. Okada, and Y. Ouchi, “Deep learning-based attenuation correction for brain PET with various radiotracers,” Ann. Nucl. Med., vol. 35, pp. 691–701, Jun. 2021.

D. Hwang, “Generation of PET attenuation map for whole-body time-of-flight 18F-FDG PET/MRI using a deep neural network trained with simultaneously reconstructed activity and attenuation maps,” J. Nucl. Med., vol. 60, no. 8, pp. 1183–1189, 2019.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1125–1134.

C. N. Ladefoged, “A multi-centre evaluation of eleven clinically feasible brain PET/MRI attenuation correction techniques using a large cohort of patients,” Neuroimage, vol. 147, pp. 346–359, Feb. 2017.

A. P. Leynes, “Zero-echo-time and Dixon deep pseudo-CT (ZeDD CT): Direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI,” J. Nucl. Med., vol. 59, no. 5, pp. 852–858, 2018.

F. Liu, H. Jang, R. Kijowski, T. Bradshaw, and A. B. McMillan, “Deep learning MR imaging-based attenuation correction for PET/MR imaging,” Radiology, vol. 286, no. 2, pp. 676–684, 2018.

F. Liu, H. Jang, R. Kijowski, G. Zhao, T. Bradshaw, and A. B. McMillan, “A deep learning approach for 18F-FDG PET attenuation correction,” EJNMMI Phys., vol. 5, pp. 1–15, Nov. 2018.

Y. Lu, “Respiratory motion compensation for PET/CT with motion information derived from matched attenuation-corrected gated PET data,” J. Nucl. Med., vol. 59, no. 9, pp. 1480–1486, 2018.

F. Milletari, N. Navab, and S.-A. Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in Proc. 4th Int. Conf. 3D Vis. (3DV), 2016, pp. 565–571.

J. Nuyts, A. Rezaei, and M. Defrise, “The validation problem of joint emission/transmission reconstruction from TOF-PET projections,” IEEE Trans. Radiat. Plasma Med. Sci., vol. 2, no. 4, pp. 273–278, Jul. 2018.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Proc. 18th Int. Conf. Med. Image Comput. Comput.-Assist. Interv. (MICCAI), 2015, pp. 234–241.

H. Rothfuss, “LSO background radiation as a transmission source using time of flight,” Phys. Med. Biol., vol. 59, no. 18, pp. 5483–5500, 2014.

A. Rezaei, “Simultaneous reconstruction of activity and attenuation in time-of-flight PET,” IEEE Trans. Med. Imag., vol. 31, no. 12, pp. 2224–2233, Dec. 2012.

A. Rezaei, C. M. Deroose, T. Vahle, F. Boada, and J. Nuyts, “Joint reconstruction of activity and attenuation in time-of-flight PET: A quantitative analysis,” J. Nucl. Med., vol. 59, no. 10, pp. 1630–1635, 2018.

I. Shiri, “Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (deep-DAC),” Eur. Radiol., vol. 29, pp. 6867–6879, Jun. 2019.

I. Shiri, “Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement,” Eur. J. Nucl. Med. Mol. Imag., vol. 51, no. 1, pp. 40–53, 2023.

L. Shi, J. Zhang, T. Toyonaga, D. Shao, J. A. Onofrey, and Y. Lu, “Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application,” Phys. Med. Biol., vol. 68, no. 3, 2023, Art. no. 35014.

K. D. Spuhler, J. Gardus, Y. Gao, C. DeLorenzo, R. Parsey, and C. Huang, “Synthesis of patient-specific transmission data for PET attenuation correction for PET/MRI neuroimaging using a convolutional neural network,” J. Nucl. Med., vol. 60, no. 4, pp. 555–560, 2019.

M. Teimoorisichani, H. Sari, V. Panin, D. Bharkhada, A. Rominger, and M. Conti, “Using LSO background radiation for CT-less attenuation correction of PET data in long axial FOV PET scanners,” J. Nucl. Med., vol. 62, May 2021, Art. no. 1530.

T. Toyonaga, “Deep learning-based attenuation correction for whole-body PET—A multi-tracer study with 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine,” Eur. J. Nucl. Med. Mol. Imag., vol. 49, no. 9, pp. 3086–3097, 2022.

B. Zhou, “POUR-Net: A population-prior-aided over-under-representation network for low-count PET attenuation map generation,” 2024, arXiv:2401.14285.

J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 2223–2232.

How to Cite

Dr. Aisha Rahman. (2026). Analysis of Multi-Tracer Adaptability in Machine Intelligence Models for Positron Emission Tomography Bias Adjustment. International Journal of Medical Science and Public Health Research, 7(04), 1–9. Retrieved from https://ijmsphr.com/index.php/ijmsphr/article/view/278