A generative adversarial network with “zero-shot” learning for positron image denoising


Scientific Reports volume 13, Article number: 1051 (2023)
Positron imaging technology has shown good practical value in industrial non-destructive testing, but the noise and artifacts generated during imaging of flow field images directly affect the accuracy of industrial fault diagnosis. Obtaining high-quality reconstructed images of the positron flow field is therefore a challenging problem. Among existing image denoising methods, the denoising performance on positron images of industrial flow fields in this specialized domain still needs to be strengthened. Considering that positron flow field images offer few samples but exhibit strong regularity, in this work we propose a new method for positron flow field image denoising based on a generative adversarial network with zero-shot learning. The method realizes image denoising under the condition of small sample data and constrains image generation by constructing a model that extracts the internal features of the image. The experimental results show that the proposed method reduces noise while retaining the key information of the image. It also achieves good performance in the practical application of industrial flow field positron imaging.
Positron Emission Tomography (PET) can be used to detect the flow field in the cavity of complex industrial parts. The reconstructed flow field image describes the internal state of the cavity and helps experts to judge and eliminate faults. In practice, due to the inherent characteristics of the imaging technology, noise and artifacts are inevitable. The main reasons are as follows: (1) limitations of the imaging system hardware: this type of noise cannot be avoided or eliminated, and the process is not subject to human intervention; (2) a large number of response lines in the original PET sampling data have zero counts: these response lines generate speckle background noise during the reconstruction process; (3) the reconstruction process is also affected by other factors such as the choice of algorithm and parameters.
The noise in a positron flow field image is unnecessary or redundant interference information, which directly interferes with the judgment of industrial failures. Therefore, to obtain a clean positron flow field image, it is necessary to denoise the original image.
In recent years, researchers have devoted themselves to studying the application of positron imaging technology in industrial non-destructive testing. Maximum Likelihood Expectation Maximization (MLEM) is the current general algorithm for positron image reconstruction1. Optimizing the loss function2, improving the statistical model3 and introducing prior knowledge4 have improved the quality of reconstructed images to varying degrees, but loss of data details and artifacts remain. In addition, existing iterative reconstruction algorithms have relatively high computational cost, which does not suit actual industrial application scenarios.
On the other hand, to improve image quality, many studies pre-filter the sinogram data in the original sampling directly, modeling the sinogram to obtain noise characteristics for filtering5,6.
Therefore, in the image post-processing stage, improving image quality by denoising or suppressing artifacts in the reconstructed flow field image has high research significance and practical application value. Adaptive estimation of image noise has been realized by using the non-local means algorithm to study image redundancy information and optimize the non-local weights7; a batch dictionary learning method based on sparse learning has been proposed to suppress speckle noise and fringe artifacts in reconstructed images8; and a fast 3D matched filter has been realized to remove random noise and obtain a better signal-to-noise ratio in reconstructed images9.
Although some progress has been made in related research, there is still a lack of targeted research on industrial reconstructed images. In practical applications, the sampling data for flow field positron images is sparse and the requirement for texture detail is high, so existing denoising methods cannot adequately process such industrial positron images.
Therefore, to address the above issues, we propose a denoising model based on a generative adversarial network incorporating zero-shot learning, for denoising reconstructed images of positron flow fields in closed cavities and obtaining higher-quality images. Specifically, the contributions of this paper are as follows:
To the best of our knowledge, this is the first generative network model in this domain for image denoising of industrial positron flow fields.
It realizes image denoising under the condition of scarce data by learning additional feature inputs from the pixel information inside the images.
It constructs a new loss function, which combines perceptual loss and edge loss to preserve image details as much as possible.
It provides state-of-the-art denoising results on positron images of industrial flow fields.
Deep convolutional neural networks (CNNs) are the most popular networks in current image processing tasks. With the proposal of seminal architectures10,11, CNN design matured in both depth and width, and CNNs show good performance in image denoising. A set of fast and effective CNN denoiser modules was trained based on prior knowledge, which is effective not only for Gaussian noise but also for other low-level vision applications12. Deep convolutional networks have been used to learn an end-to-end image mapping and improve image quality13. The denoising convolutional neural network (DnCNN) was proposed, which uses residual learning and batch-normalized training for blind denoising and handles Gaussian noise of different levels with a single network model14.
The generative adversarial network15 consists of two parts: the generative model G and the discriminative model D. The input random noise is trained through the generative network to produce realistic images, while the discriminative network distinguishes real images from generated ones. The mathematical model is shown in Eq. (1).
Here, $\mathbb{E}(\bullet)$ represents the expectation operator; x represents the real data and z represents the input random noise. When D is trained to be the optimal discriminator, the JS divergence is minimized; once the training of G is also completed, the optimal data generation network is obtained.
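For reference, the standard minimax objective of15, to which Eq. (1) corresponds, can be written as:

```latex
\min_{G}\max_{D} V(D,G)=
  \mathbb{E}_{x\sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  +\mathbb{E}_{z\sim p_{z}(z)}\left[\log\left(1-D(G(z))\right)\right]
```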
Generative adversarial models can supplement training data and have achieved good performance in few-sample tasks. The papers16,17 further provided the basis for specific implementations of the model, including research on network convergence, model collapse, and optimization of the loss function. These studies achieved good performance in many fields such as image super-resolution18, image-to-image translation19 and image style transfer20. The combination of this model with convolutional neural networks also shows excellent performance in image denoising: the two networks were trained jointly with a voxel-wise loss function to realize image denoising and obtain a high peak signal-to-noise ratio21; highly under-sampled data were processed under the framework of a conditional generative network to reduce artifacts and improve image quality22; a GAN was used to model the noise distribution and generate noise samples, forming a paired clean/noisy image set as training data, and the network trained for blind denoising achieved good results23; and small rendered pixel samples were processed using the features of a GAN, obtaining higher-quality images by training on the noisy images24.
The limitation of deep neural networks is that they need enough sample data to train a good model. Therefore, when dealing with small sample data, to obtain a good model through training, we consider learning the attributes of existing samples and then using partial transfer learning to identify the type attributes of unknown data. “Zero-shot” learning25 is an unsupervised learning approach based on zero samples, and its original core idea is to realize transfer learning to unknown data by learning the attributes and labels of existing samples. In recent years, great progress has been made in related networks: the original model was improved by directly training a label-embedding framework, which realized the prediction of data categories26; and the problem of domain drift was solved by adding new constraints to the network, preserving the original visual feature information during semantic embedding27.
At the same time, due to the excellent performance of “zero-shot” learning in the unsupervised field, more and more research on models that integrate “zero-shot” learning under the generative adversarial framework has gradually been carried out: a gradient-matching loss was proposed to solve the zero-shot learning problem by generating data samples that simulate unsupervised learning within a generative adversarial network28; and coupled GANs were extended as conditional GANs to capture the joint distribution of domain-adaptation samples across different tasks and complete adaptive-domain training29. These works show that embedding a zero-shot learning module in the GAN framework is feasible, and some achievements have been made.
Therefore, we consider denoising the industrial positron flow field image by integrating “zero-shot” learning into the framework of a generative adversarial network. The rest of this paper is organized as follows. The proposed method is introduced in the “Method” section. The experiments and results are presented in the “Experiments” section. Finally, relevant issues are discussed and conclusions are drawn in the “Conclusions” section.
Natural image datasets are easy to obtain, and the performance of a neural network model can be improved through large-scale training. However, positron emission tomography images of industrial flow fields are a scarce-data research object. The application of PET imaging to industrial flow field detection is still at a preliminary stage; the data have strong domain characteristics and are difficult to sample, so sample data are few and hard to obtain. Therefore, in the absence of a sufficient number of training samples, we plan to divide the image according to the repeatability of its internal pixels and extract internal features of the image at a sufficiently small scale. In actual industrial application scenarios, the positron flow field image is strongly affected by the environment that generates the flow field. Even for the same industrial part, different reconstructed images will be obtained under different usage scenarios. In this case, a considerable part of the image information cannot easily be obtained from external image data. At the same time, considering the regularity and repeatability of the industrial flow field image itself, it is necessary to obtain more image features by learning the internal information of the image, so as to avoid the loss of details during denoising. The experiments in30 show that the internal information entropy of a single image is smaller than the entropy measured over an external data set; furthermore, by observing the internal statistics of the image, more accurate predictions can be obtained than from external statistics.
Considering the above factors, we establish a feature extraction model for the internal image information. The principle is based on “zero-shot” learning, and the purpose is to use a small number of single images for feature extraction under small-sample conditions. A prerequisite for the feasibility of this model is that the flow field image, unlike general images, has strong internal regularity. The specific model construction process is as follows: first, a convolutional neural network is trained, using small-scale image samples extracted from the flow field image as training samples. The image samples here are obtained by randomly slicing the flow field image. Then, by learning the mapping between regions of the image with high noise and positions with low noise, a convolutional network that learns the internal information of the flow field image is obtained. The network adopts a fully convolutional structure, and each layer is activated with the ReLU (Rectified Linear Unit) function. The corresponding relationship is shown in Fig. 1, and the aim is to obtain the feature-space correspondence of the same category of images: $f_{zsl}: X \rightarrow X'$. Since the network is trained on a single image, it greatly reduces training time and complexity while realizing internal feature extraction; its output is used as a conditional input of the generative adversarial network to construct the denoising model for specific images.
X represents the noisy image part with lower sampled data, $X'$ represents a clearer part of the image, and Y represents the test images. The proposed network model learns a corresponding mapping relationship by extracting pixel information inside the image, which is then applied to the test images on the right to obtain clearer output images.
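A minimal sketch of such a single-image feature extractor is given below, assuming a TensorFlow implementation as reported in the experiments. The layer count, filter width, and random patch-pairing strategy are illustrative assumptions, since the paper only specifies a fully convolutional, ReLU-activated network trained on random slices of one flow field image.

```python
import numpy as np
import tensorflow as tf

def build_internal_feature_net(channels=1, depth=5, width=64):
    """Fully convolutional mapping f_zsl: X -> X' learned from a single image.
    depth/width are assumptions; the paper only states a fully convolutional,
    ReLU-activated structure."""
    inputs = tf.keras.Input(shape=(None, None, channels))
    x = inputs
    for _ in range(depth - 1):
        x = tf.keras.layers.Conv2D(width, 3, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2D(channels, 3, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

def random_patch_pairs(noisy_part, cleaner_part, patch=32, n=256, rng=np.random):
    """Randomly slice paired patches from the high-noise and low-noise regions
    of the same flow-field image (hypothetical pairing; not detailed in the paper)."""
    h, w = noisy_part.shape
    xs = rng.randint(0, h - patch, size=n)
    ys = rng.randint(0, w - patch, size=n)
    X = np.stack([noisy_part[i:i + patch, j:j + patch] for i, j in zip(xs, ys)])
    Y = np.stack([cleaner_part[i:i + patch, j:j + patch] for i, j in zip(xs, ys)])
    return X[..., None], Y[..., None]

# Usage sketch: train on one image, then apply f_zsl to unseen test images.
# f_zsl = build_internal_feature_net()
# f_zsl.compile(optimizer="adam", loss="mse")
# f_zsl.fit(*random_patch_pairs(noisy_part, cleaner_part), epochs=50, verbose=0)
```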
After extracting the internal features of the flow field image, a positron flow field image denoising model is built with the generative adversarial network as the framework. The input of the network is the feature vector extracted by the convolutional network in the previous section together with random noise, and the overall network structure is built on the residual network (ResNet)31. The specific implementation is shown in Fig. 2: the generator consists of convolutional layers, residual blocks and deconvolutional layers. The kernel is $3 \times 3$, the output is a separate $3 \times 3$ feature map, the stride is 1, the padding is 1, and the activation function is the Rectified Linear Unit (ReLU). The discriminator consists of six convolutional layers and adopts a fully convolutional structure; the kernel is $3 \times 3$, the activation function of the first five convolutional layers is ReLU, and the last output layer uses the sigmoid function.
n and s denote the number of kernels and the stride of each convolutional layer.
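The following sketch shows one plausible TensorFlow realization of the generator and discriminator described above ($3\times3$ kernels, stride 1, ReLU activations, sigmoid output). The number of residual blocks and the filter counts are assumptions not specified in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    # 3x3 conv residual block, stride 1, ReLU (as described for the generator).
    y = layers.Conv2D(filters, 3, strides=1, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, strides=1, padding="same")(y)
    return layers.ReLU()(layers.Add()([x, y]))

def build_generator(channels=1, n_res=6, filters=64):
    # filters and n_res are assumptions; the paper specifies conv layers,
    # residual blocks and deconvolution layers with 3x3 kernels and stride 1.
    inp = tf.keras.Input(shape=(None, None, channels))
    x = layers.Conv2D(filters, 3, strides=1, padding="same", activation="relu")(inp)
    for _ in range(n_res):
        x = residual_block(x, filters)
    x = layers.Conv2DTranspose(filters, 3, strides=1, padding="same", activation="relu")(x)
    out = layers.Conv2D(channels, 3, strides=1, padding="same")(x)
    return tf.keras.Model(inp, out, name="generator")

def build_discriminator(channels=1, filters=(64, 64, 128, 128, 256)):
    # Six conv layers in total: five ReLU-activated 3x3 convs, then a sigmoid output.
    inp = tf.keras.Input(shape=(None, None, channels))
    x = inp
    for f in filters:
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inp, out, name="discriminator")
```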
Through the above steps, positron flow field images with high noise are converted into clearer images. In addition to the basic adversarial loss function of the network, to obtain a better denoising model for positron flow field images, we construct a new loss function to measure the performance of the denoising model. First, to preserve the image information and detail features as much as possible during denoising, we add a perceptual loss function as shown in Eq. (2).
Here $\Vert \bullet \Vert$ represents the Frobenius norm, and $w$, $h$ and $d$ respectively represent the width, height, and depth of the feature space.
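A sketch of how such a perceptual loss could be computed is shown below; the choice of a pretrained VGG16 layer as the feature extractor is an assumption, since the paper does not name the feature network behind Eq. (2).

```python
import tensorflow as tf

# Feature extractor phi for the perceptual term; VGG16/block3_conv3 is an
# assumption, not the paper's stated choice.
_vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
_phi = tf.keras.Model(_vgg.input, _vgg.get_layer("block3_conv3").output)

def perceptual_loss(denoised, reference):
    # Frobenius-norm distance in feature space, normalized by the feature
    # volume w*h*d, in the spirit of Eq. (2).
    d3 = tf.image.grayscale_to_rgb(denoised)   # flow-field images are single channel
    r3 = tf.image.grayscale_to_rgb(reference)
    fd, fr = _phi(d3), _phi(r3)
    shape = tf.shape(fd)
    vol = tf.cast(shape[1] * shape[2] * shape[3], tf.float32)
    return tf.reduce_sum(tf.square(fd - fr), axis=[1, 2, 3]) / vol
```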
Additionally, to avoid excessive smoothing of the edges of the denoised image, we introduce an edge loss function, whose mathematical expression is shown in Eq. (3). Here, $\hat{x}$ represents the original image containing the pixel feature information.
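A corresponding sketch of the edge term is given below; using Sobel gradients is an assumption, as the paper does not specify the edge operator behind Eq. (3).

```python
import tensorflow as tf

def edge_loss(denoised, reference):
    # Penalize differences between the gradient maps of the denoised output
    # and the reference x_hat, discouraging over-smoothed edges (Eq. 3 style).
    e_d = tf.image.sobel_edges(denoised)   # shape [batch, H, W, C, 2]
    e_r = tf.image.sobel_edges(reference)
    return tf.reduce_mean(tf.abs(e_d - e_r))
```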
Therefore, combining the above two loss functions with the original adversarial loss of the generative adversarial network, the overall joint loss function constructed in this paper is shown in Eq. (4).
Here, $\lambda_1$ and $\lambda_2$ are weighting parameters that balance the three loss functions; their specific values are determined by the training performance of the network model in practice.
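Combining the terms, a generator objective in the spirit of Eq. (4) might look as follows, reusing the perceptual_loss and edge_loss sketches above; the form of the adversarial term and the default weights are illustrative assumptions.

```python
import tensorflow as tf

def generator_loss(disc_fake, denoised, reference, lam1=1.0, lam2=1.0):
    # Adversarial term plus lambda_1 * perceptual and lambda_2 * edge terms.
    # lam1/lam2 are placeholders; the paper tunes them empirically in training.
    adv = tf.reduce_mean(tf.keras.losses.binary_crossentropy(
        tf.ones_like(disc_fake), disc_fake))
    return (adv
            + lam1 * tf.reduce_mean(perceptual_loss(denoised, reference))
            + lam2 * edge_loss(denoised, reference))
```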
The generative adversarial model with zero-shot learning proposed in this paper focuses on positron flow field images in the industrial field, and the overall framework is shown in Fig. 3. The model consists of three parts. The first part is the image feature learning network proposed in the “Image feature learning” section, which is a fully convolutional neural network. The second part is the generative adversarial network constructed in the “Generative adversarial networks” section. Here, the input of the generative network is the mapping relationship between the higher-quality image and the noisy image obtained in Fig. 1, which acts as a prior constraint. Then, we train on the lower sampled noisy images with the discriminative network and use the loss function defined above to characterize the model. Finally, we obtain a neural-network denoising model for positron flow field images.
Network framework diagram of denoising model.
The image data used in the experiments are positron flow field images from the industrial field. The data are obtained through GATE (Geant4 Application for Tomographic Emission) simulation; GATE is a dedicated PET simulation toolkit based on Monte Carlo methods. The specific simulation process is as follows: construct the geometric model of the detector; construct the geometric model of the scanned object; set the particle transport parameters; set the front-end electronic characteristics; set the data output format and obtain the data. The positron images are then reconstructed; the reconstruction algorithm we used is MLEM.
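For context, the MLEM update follows the classical form of Shepp and Vardi1. A minimal NumPy sketch with a generic placeholder system matrix is shown below; the actual system model comes from the detector geometry simulated in GATE and is not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Standard MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).

    A : (n_lors, n_voxels) system matrix (dense placeholder; the real matrix
        is defined by the GATE-simulated detector geometry)
    y : (n_lors,) measured coincidence counts (sinogram)
    """
    x = np.ones(A.shape[1])            # uniform initial image
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        ratio = y / np.maximum(proj, eps)          # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps) # multiplicative update
    return x
```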
We obtained twenty kinds of flow field images under different scenes, covering different water-medium equipment as far as possible, including water tunnels, tanks, and pipe-flow devices of different specifications. At the same time, in the simulation we set two different standards of reagent dose and sampling time. In principle, the longer the sampling time and the higher the activity, the better the quality of the reconstructed images.
In our experiments, the model is as described in the “Method” section. All networks were optimized using the Adam algorithm32, with hyper-parameters $\alpha = 1\times10^{-5}$, $\beta_1 = 0.2$, $\beta_2 = 0.9$. The networks were implemented in Python with TensorFlow, and the GPU used for training is an NVIDIA 2080Ti. The mini-batch size is set to 64 during training. The loss curves are shown in Fig. 4, from which the convergence of the denoising network for positron flow field images can be seen clearly.
100 epochs are set for training: (a) the loss function of the generative model; (b) the loss function of the discriminative model. The whole model tends to converge at around 80 epochs.
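A sketch of the corresponding training configuration is shown below. It matches the reported Adam hyper-parameters and mini-batch size, while the structure of the training step itself (and the reuse of generator_loss from the sketch above) is an illustrative assumption.

```python
import tensorflow as tf

# Adam with lr = 1e-5, beta_1 = 0.2, beta_2 = 0.9, mini-batch size 64 (as reported).
gen_opt = tf.keras.optimizers.Adam(learning_rate=1e-5, beta_1=0.2, beta_2=0.9)
disc_opt = tf.keras.optimizers.Adam(learning_rate=1e-5, beta_1=0.2, beta_2=0.9)
BATCH_SIZE = 64

@tf.function
def train_step(noisy_batch, reference_batch, generator, discriminator):
    # noisy_batch stands in for the conditioned generator input; in the paper the
    # generator is conditioned on the internal-feature mapping of Fig. 1.
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        denoised = generator(noisy_batch, training=True)
        d_real = discriminator(reference_batch, training=True)
        d_fake = discriminator(denoised, training=True)
        d_loss = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(tf.ones_like(d_real), d_real) +
            tf.keras.losses.binary_crossentropy(tf.zeros_like(d_fake), d_fake))
        g_loss = generator_loss(d_fake, denoised, reference_batch)
    gen_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                                generator.trainable_variables))
    disc_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                                 discriminator.trainable_variables))
    return g_loss, d_loss
```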
At the same time, to evaluate the denoising model, two quantitative indicators are used to measure its performance: peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). These are currently common indicators; their mathematical expressions are given in Eq. (5), and the changes of their values during training are shown in Fig. 5.
The two line charts show how PSNR and SSIM change during training. The reference image is a randomly selected image in the data set, which can broadly reflect the changes of the image during training.
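PSNR and SSIM can be computed directly with TensorFlow's built-in image metrics, as in the sketch below; the assumed intensity range of [0, 1] depends on how the reconstructed images are scaled.

```python
import tensorflow as tf

def evaluate(denoised, reference, max_val=1.0):
    # PSNR and SSIM as used for quantitative evaluation (Eq. 5 refers to their
    # standard definitions). max_val assumes images scaled to [0, 1].
    psnr = tf.image.psnr(denoised, reference, max_val=max_val)
    ssim = tf.image.ssim(denoised, reference, max_val=max_val)
    return float(tf.reduce_mean(psnr)), float(tf.reduce_mean(ssim))
```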
To show the denoising effect of the proposed model on positron flow field images, we selected two different flow field slice images: laminar flow and turbulent flow. The model simulates two different fluid states caused by the change of velocity in the same space: when the velocity is very small, the fluid flows in layers and does not mix; when the velocity increases, vortices are produced in the flow field. We compare the two flow field images under different denoising models. The basic description of the models used for comparison is given in Table 1. The first three are current general denoising models, and the last one is the ablation model without the modified loss function.
The denoising results obtained under the different network models are shown in Figs. 6 and 7, respectively. All methods have a certain denoising effect on the positron flow field images. Compared with the original reconstructed images, the quality of the obtained images is improved to varying degrees, but the method proposed in this paper clearly yields the best image quality after denoising.
The positron flow field images are the laminar flow images obtained by simulation. The specific parameters are a sampling time of 1 s and a nuclide activity of 800 Bq.
The positron flow field images are the vortex images obtained by simulation. The specific parameters are a sampling time of 1 s and a nuclide activity of 800 Bq.
Positron flow field images are gray-scale images obtained from the reconstructed sampling data, so the image quality cannot be accurately evaluated by the human eye alone; there may be visual bias, especially in image details. For quantitative analysis, we calculated the PSNR and SSIM, and the summary data are in Table 2.
From the values of the two indicators given in the table, it can be seen that the method proposed in this paper performs well. However, using only the generative adversarial network to reduce image noise is likely to generate visually pleasing positron images that do not conform to the characteristics of the industrial field, which cannot achieve the practical application effect. Therefore, the results further prove the necessity of constructing the new loss function.
At the same time, due to the objective conditions of existing industrial positron imaging technology, image blur (blocky or smooth artifacts) may occur, which also affects the quantitative evaluation indicators. Therefore, unlike natural images, the application of positron flow field images in industrial non-destructive testing requires more expert experience for judgment.
To further verify the advancement of the model, we designed the following experiment, which simulated the flow state of liquid in an engine pipe. The annular detector consists of 64 detector rings with a radius of about 64 mm, and each detector ring is composed of 23 pairs of detector heads; the carrier solvent is water-soluble hydraulic oil containing $^{18}\mathrm{F}$ with a radioactivity of about 1 mCi. The sampling lasted 60 s, and we took the sampling data at equal time intervals for image reconstruction to obtain fluid images in a more stable state. We denoised the images and the results are shown in Fig. 8.
Diagram of the internal fluid in the engine pipeline.
In practical application, the fluid state (including whether there are cracks, irregular sections, etc.) can be observed through images to judge the internal conditions of the pipeline. In principle, the better the image quality, the higher the accuracy of the detection results.
From the different image effects under each model in the figure, we can see that the method proposed in this paper has a good denoising effect, and the image quality is significantly improved. This method fuses the extracted internal image features into the generative adversarial network to denoise the image. Theoretically, the more complex the internal structure of the industrial part cavity and the more complex the flow field image, the better the denoising effect of this model.
The main goal of this paper is to denoise reconstructed industrial positron flow field images, targeting small sample data in a specialized field. The results of the simulation and field experiments show that the proposed model has a good denoising effect; in particular, in the actual application scenario the denoising process preserves image details and accomplishes the denoising task well.
The image resolution currently used is $128 \times 128$. In the experiments, we tried to increase the resolution of the image during reconstruction and denoise the higher-resolution images. The model can still achieve a denoising effect, but the result is not satisfactory. The main reason is that there is too little sampling data, which leaves less pixel information in the reconstructed image at higher resolution, so information loss and image distortion occur after denoising. Therefore, how to obtain higher-resolution images is also a direction for future research.
In conclusion, we have proposed a generative adversarial network for positron flow field image denoising based on “zero-shot” learning, which is dedicated to solving the problem of poor image quality under scarce samples in industrial positron detection and making the denoised images more readable. The experimental results also prove the feasibility of the proposed method. In the future, we plan to segment the image and try to directly process the region of interest (ROI), or to fuse different neural networks to directly process the reconstructed original data, so as to further improve the image quality of the industrial positron flow field.
The data used in this study come from two parts: the simulation data come from GATE, a dedicated simulation toolkit for PET/SPECT equipment based on Monte Carlo methods; the real data come from cooperating enterprises. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Shepp, L. A. & Vardi, Y. Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imaging 1(2), 113–122 (1982).
De Man, B. & Basu, S. Distance-driven projection and backprojection in three dimensions. Phys. Med. Biol. 49(11), 2463 (2004).
Elbakri, I. A. & Fessler, J. A. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE Trans. Med. Imaging 21(2), 89–99 (2002).
Liu, Y., Ma, J., Fan, Y. & Liang, Z. Adaptive-weighted total variation minimization for sparse data toward low-dose X-ray computed tomography image reconstruction. Phys. Med. Biol. 57(23), 7923 (2012).
Wang, J., Lu, H., Li, T. & Liang, Z. Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters. Proc. SPIE 5747, 2058–2066 (2005).
Wang, J., Li, T., Lu, H. & Liang, Z. Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography. IEEE Trans. Med. Imaging 25, 1272–1283 (2006).
Ma, J. et al. Low-dose computed tomography image restoration using previous normal-dose scan. Med. Phys. 38(10), 5713–5731 (2011).
Chen, Y. et al. Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing. Phys. Med. Biol. 58(16), 5803 (2013).
Feruglio, P. F., Vinegoni, C., Gros, J., Sbarbati, A. & Weissleder, R. Block matching 3D random noise filtering for absorption optical projection tomography. Phys. Med. Biol. 55(18), 5401 (2010).
LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).
Zhang, K., Zuo, W., Gu, S. & Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 3929–3938 (2017).
Chen, H. et al. Low-dose CT denoising with convolutional neural network. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), 143–146 (IEEE, 2017).
Zhang, K., Zuo, W., Chen, Y., Meng, D. & Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
Goodfellow, I. et al. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 27, 139–144 (2014).
Mirza, M. & Osindero, S. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014).
Radford, A., Metz, L. & Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015).
Ledig, C. et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 4681–4690 (2017).
Isola, P., Zhu, J.-Y., Zhou, T. & Efros, A. A. Image-to-image translation with conditional adversarial networks. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017).
Karras, T., Laine, S. & Aila, T. A style-based generator architecture for generative adversarial networks. In Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pp. 4401–4410 (2019).
Wolterink, J. M., Leiner, T., Viergever, M. A. & Išgum, I. Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans. Med. Imaging 36(12), 2536–2545 (2017).
Yu, S. et al. Deep de-aliasing for fast compressive sensing mri. arXiv preprint arXiv:1705.07137 (2017).
Chen, J., Chen, J., Chao, H. & Yang, M. Image blind denoising with generative adversarial network based noise modeling. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 3155–3164 (2018).
Alsaiari, A., Rustagi, R., Thomas, M. M., Forbes, A. G. et al. Image denoising using a generative adversarial network. In 2019 IEEE 2nd International Conf. on Information and Computer Technologies (ICICT). IEEE, Amsterdam, pp. 126–132 (2019).
Lampert, C. H., Nickisch, H. & Harmeling, S. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conf. on Computer Vision and Pattern Recognition. IEEE, Amsterdam, pp. 951–958 (2009).
Akata, Z., Perronnin, F., Harchaoui, Z. & Schmid, C. Label-embedding for attribute-based classification. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 819–826 (2013).
Fu, Y., Hospedales, T. M., Xiang, T. & Gong, S. Transductive multi-view zero-shot learning. IEEE Trans. Pattern Anal. Mach. Intell. 37(11), 2332–2345 (2015).
Sariyildiz, M. B. & Cinbis, R. G. Gradient matching generative networks for zero-shot learning. In Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pp. 2168–2178 (2019).
Wang, J. & Jiang, J. Conditional coupled generative adversarial networks for zero-shot domain adaptation. In Proc. of the IEEE/CVF International Conf. on Computer Vision, pp. 3375–3384 (2019).
Zontak, M. & Irani, M. Internal statistics of a single natural image. In CVPR 2011, 977–984 (IEEE, 2011).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. of the IEEE Conf. on computer vision and pattern recognition, pp. 770–778 (2016).
Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
This work was supported in part by The National Natural Science Foundation of China (G. No. 61873124, 51875289, 62071229), The Aeronautical Science Foundation of China (G. No. 2020Z060052001, No. 20182952029), The Fundamental Research Funds for the Central Universities (G. No. NJ2020014, NS2019017), Nondestructive Detection and Monitoring Technology for High Speed Transportation Facilities, Key Laboratory of Ministry of Industry and Information Technology, and the Graduate Innovation Base (Laboratory) Open Fund (G. No. kfjj20200318, G. No. kfjj20200303).
College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, CO, Nanjing, 211106, People’s Republic of China
Mingwei Zhu, Min Zhao, Min Yao & Ruipeng Guo
M.Z. wrote the main manuscript text, including the conceptualization and methodology. M.Z. provided the experimental environment and edited the original draft. All authors reviewed the manuscript.
Correspondence to Mingwei Zhu.
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Zhu, M., Zhao, M., Yao, M. et al. A generative adversarial network with “zero-shot” learning for positron image denoising. Sci Rep 13, 1051 (2023). https://doi.org/10.1038/s41598-023-28094-1
Received: 24 August 2022
Accepted: 12 January 2023
Published: 19 January 2023