
[1] R. Ulichney, Digital Halftoning. MIT Press, 1987.
[2] D. L. Lau and G. R. Arce, Modern Digital Halftoning. CRC Press, 2008.
[3] V. Ostromoukhov, “A simple and efficient error-diffusion algorithm,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, 2001, pp. 567–572.
[4] D. E. Knuth, “Digital halftones by dot diffusion,” ACM Trans. Graph., vol. 6, no. 4, pp. 245–273, 1987.
[5] D. J. Lieberman and J. P. Allebach, “Efficient model based halftoning using direct binary search,” in Proceedings of the International Conference on Image Processing, 1997, vol. 1, pp. 775–778.
[6] T. Silva, “An intuitive introduction to Generative Adversarial Networks (GANs).” [Online].
[7] P.-C. Chang and C.-S. Yu, “Neural net classification and LMS reconstruction to halftone images,” in Visual Communications and Image Processing ’98, 1998, vol. 3309, pp. 592–603.
[8] M. Mese and P. P. Vaidyanathan, “Look-up table (LUT) method for inverse halftoning,” IEEE Trans. Image Process., vol. 10, no. 10, pp. 1566–1578, 2001.
[9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[10] J.-M. Guo and S. Sankarasrinivasan, “Digital Halftone Database (DHD): A comprehensive analysis on halftone types,” in 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2018, pp. 1091–1099.
[11] O. Gustavsson, “AM halftoning and FM halftoning.” [Online].
[12] B. E. Bayer, “An optimum method for two-level rendition of continuous tone pictures,” in IEEE International Conference on Communications, Jun. 1973, vol. 26.
[13] R. W. Floyd and L. Steinberg, “An adaptive algorithm for spatial gray scale,” 1975.
[14] J. F. Jarvis and C. S. Roberts, “A new technique for displaying continuous tone images on a bilevel display,” IEEE Trans. Commun., 1976.
[15] P. Stucki, “A multiple-error correction computation algorithm for bi-level image hardcopy reproduction,” 1981.
[16] R. Neelamani, R. D. Nowak, and R. G. Baraniuk, “WInHD: Wavelet-based inverse halftoning via deconvolution,” IEEE Trans. Image Process., 2002.
[17] T. D. Kite, N. Damera-Venkata, B. L. Evans, and A. C. Bovik, “A fast, high-quality inverse halftoning algorithm for error diffused halftones,” IEEE Trans. Image Process., vol. 9, no. 9, pp. 1583–1592, 2000.
[18] M. Mese and P. P. Vaidyanathan, “Optimized halftoning using dot diffusion and methods for inverse halftoning,” IEEE Trans. Image Process., vol. 9, no. 4, pp. 691–709, Apr. 2000.
[19] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, “Inverse halftoning based on the Bayesian theorem,” IEEE Trans. Image Process., vol. 20, no. 4, pp. 1077–1084, 2011.
[20] P. W. Wong, “Inverse halftoning and kernel estimation for error diffusion,” IEEE Trans. Image Process., vol. 4, no. 4, pp. 486–498, 1995.
[21] Y.-T. Kim, G. R. Arce, and N. Grabowski, “Inverse halftoning using binary permutation filters,” IEEE Trans. Image Process., vol. 4, no. 9, pp. 1296–1311, 1995.
[22] M. Mese and P. P. Vaidyanathan, “Recent advances in digital halftoning and inverse halftoning methods,” IEEE Trans. Circuits Syst. I: Fundam. Theory Appl., vol. 49, no. 6, pp. 790–805, 2002.
[23] C.-H. Son and H. Choo, “Local learned dictionaries optimized to edge orientation for inverse halftoning,” IEEE Trans. Image Process., vol. 23, no. 6, pp. 2542–2556, 2014.
[24] J. Luo, R. de Queiroz, and Z. Fan, “A robust technique for image descreening based on the wavelet transform,” IEEE Trans. Signal Process., 1998.
[25] X. Zhang, F. Liu, and L. Jiao, “An effective image halftoning and inverse halftoning technique based on HVS,” in Proceedings of the Fifth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2003), pp. 441–445.
[26] R. Girshick, J. Donahue, T. Darrell, and J.
Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” Nov. 2013.
[27] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, “Locality-constrained linear coding for image classification,” in IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[28] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst., Man, Cybern., 1973.
[29] O. Stenroos, “Object detection from images using convolutional neural networks.”
[30] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in IEEE Conference on Computer Vision and Pattern Recognition, 2001.
[31] U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[32] Y. Du, W. Wang, and L. Wang, “Hierarchical recurrent neural network for skeleton based action recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[33] S. Ji, W. Xu, M. Yang, and K. Yu, “3D convolutional neural networks for human action recognition,” IEEE Trans. Pattern Anal. Mach. Intell., 2012.
[34] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015.
[35] S. Xie and Z. Tu, “Holistically-nested edge detection,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1395–1403.
[36] G. Larsson, M. Maire, and G. Shakhnarovich, “Learning representations for automatic colorization,” in European Conference on Computer Vision, 2016, pp. 577–593.
[37] S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Let there be color!: Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification,” ACM Trans. Graph., vol. 35, no. 4, p. 110, 2016.
[38] “Convolutional Neural Network.” [Online].
[39] F.-F. Li, J. Johnson, and S. Yeung, “Pooling layer,” CS231n course notes. [Online].
[40] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[41] C. Ledig et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[42] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” 2015.
[43] L. Zhang, Q. Wang, H. Lu, and Y. Zhao, “End-to-end learning of multi-scale convolutional neural network for stereo matching,” arXiv preprint arXiv:1906.10399, 2019.
[44] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[45] M. Xia and T.-T. Wong, “Deep inverse halftoning via progressively residual learning,” in Asian Conference on Computer Vision, 2018.
[46] A. A. Rusu et al., “Progressive neural networks,” Jun. 2016.
[47] T. Gao, J. Du, L.-R. Dai, and C.-H. Lee, “SNR-based progressive learning of deep neural network for speech enhancement,” in INTERSPEECH, 2016.
[48] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” Jul. 2017.
[49] L. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis using convolutional neural networks,” in Advances in Neural Information Processing Systems, 2015, pp. 262–270.
[50] J. Bruna, P. Sprechmann, and Y. LeCun, “Super-resolution with deep convolutional sufficient statistics,” arXiv preprint arXiv:1511.05666, 2015.
[51] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision, 2016, pp. 694–711.
[52] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[53] H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comput. Imaging, vol. 3, no. 1, pp. 47–57, 2016.
[54] T.-Y.
Lin et al., “Microsoft COCO: Common objects in context,” in European Conference on Computer Vision, 2014, pp. 740–755.
[55] M. Everingham et al., “Visual Object Classes Challenge 2012,” 2012.
[56] MIT, “Places365-Challenge.” [Online].
[57] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
[58] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[59] R. Neelamani, R. Nowak, and R. Baraniuk, “Model-based inverse halftoning with wavelet-vaguelette deconvolution,” in Proceedings 2000 International Conference on Image Processing, 2000, vol. 3, pp. 973–976.
[60] Z. Xiong, M. T. Orchard, and K. Ramchandran, “Inverse halftoning using wavelets,” IEEE Trans. Image Process., vol. 8, no. 10, pp. 1479–1483, 1999.
[61] P.-C. Chang, C.-S. Yu, and T.-H. Lee, “Hybrid LMS-MMSE inverse halftoning technique,” IEEE Trans. Image Process., vol. 10, no. 1, pp. 95–103, 2001.
[62] J.-M. Guo, Y.-F. Liu, J.-H. Chen, and J.-D. Lee, “Inverse halftoning with context driven prediction,” IEEE Trans. Image Process., vol. 23, no. 4, pp. 1923–1924, 2014.
[63] I. Goodfellow et al., “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
