参考文献/References:
[1] REINHARD E, ASHIKHMIN M, GOOCH B, et al. Color transfer between images[J]. IEEE Computer Graphics and Applications, 2001, 21(5): 34-41.
[2] SUNKAVALLI K, JOHNSON M K, MATUSIK W, et al. Multi-scale image harmonization[J]. ACM Transactions on Graphics, 2010, 29(4): 125.
[3] GATYS L A, ECKER A S, BETHGE M. Image style transfer using convolutional neural networks[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016: 2414-2423.
[4] LI Chuan, WAND M. Combining Markov random fields and convolutional neural networks for image synthesis[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016: 2479-2486.
[5] FIŠER J, JAMRIŠKA O, SIMONS D, et al. Example-based synthesis of stylized facial animations[J]. ACM Transactions on Graphics, 2017, 36(4): 155.
[6] GATYS L A, ECKER A S, BETHGE M. A neural algorithm of artistic style[J]. Computer Science. (2015-09-02)[2018-01-01]. https://arxiv.org/pdf/1508.06576v2.pdf
[7] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: Curran Associates Inc, 2012: 1097-1105.
[8] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. Computer Science. (2014-09-04)[2018-01-01]. https://arxiv.org/abs/1409.1556
[9] GATYS L A, ECKER A S, BETHGE M. Texture synthesis using convolutional neural networks[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. [S.l.: s.n.], 2015, 1: 262-270.
[10] SELIM A, ELGHARIB M, DOYLE L. Painting style transfer for head portraits using convolutional neural networks[J]. ACM Transactions on Graphics, 2016, 35(4): 1-18.
[11] SARAGIH J M, LUCEY S, COHN J F. Face alignment through subspace constrained mean-shifts[C]// IEEE 12th International Conference on Computer Vision. Kyoto, Japan: IEEE, 2009: 1034-1041.
[12] BEIER T, NEELY S. Feature-based image metamorphosis[C]// Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques. New York, USA: ACM, 1992: 35-42.
[13] LIU Ce, YUEN J, TORRALBA A. SIFT flow: dense correspondence across scenes and its applications[M]// Dense Image Correspondences for Computer Vision. Berlin: Springer, 2016: 15-49.
[14] SHIH Y C, PARIS S, BARNES C, et al. Style transfer for headshot portraits[J]. ACM Transactions on Graphics, 2014, 33(4): 1-14.
[15] LUAN F, PARIS S, SHECHTMAN E, et al. Deep photo style transfer[C]// IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017: 6997-7005.
[16] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848.
[17] LEVIN A, LISCHINSKI D, WEISS Y. A closed form solution to natural image matting[C]// Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington D C: IEEE Computer Society, 2006, 1: 61-68.
[18] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[J]. Computer Science. (2015-06-05)[2018-01-01]. https://arxiv.org/pdf/1506.02025v2.pdf
[19] KINGMA D P, BA J. Adam: a method for stochastic optimization[J]. Computer Science. (2014-12-22)[2018-01-01]. https://arxiv.org/pdf/1412.6980v8.pdf