Automatic Acquisition of Foreground and Background Based on DCT Features for Image Matting

Authors

  • Meidya Koeshardianto (Institut Teknologi Sepuluh Nopember, Surabaya; Universitas Trunojoyo Madura)
  • Eko Mulyanto Yuniarno (Institut Teknologi Sepuluh Nopember, Surabaya)
  • Mochamad Hariadi (Institut Teknologi Sepuluh Nopember, Surabaya)

DOI:

https://doi.org/10.25126/jtiik.2020732195

Abstract

Separating the foreground from the background in a still image is a fundamental task in computer vision. Image segmentation is the technique most often used, but its extraction results are frequently inaccurate; image matting is one way to refine them. In supervised matting, scribbles or a trimap must be supplied manually as a constraint that labels regions as foreground or background. This paper proposes an unsupervised method that acquires the foreground and background constraints automatically. The background is acquired from the variance of DCT (Discrete Cosine Transform) features, clustered with the k-means algorithm; the foreground is acquired from a subset of the DCT clusters combined with edge-detection features. Together, the two acquisitions form an automatic constraint that is then passed to a supervised matting method. Accuracy was measured using the MAE (Mean Absolute Error) of the resulting alpha matte against both a supervised matting method and other unsupervised matting methods. Experimental results show an MAE difference of 0.0336 and a processing-time difference of 0.4 seconds relative to supervised matting. All image data come from images used by previous researchers.
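The pipeline described in the abstract (blockwise DCT features, variance as a texture measure, and k-means clustering to separate background-like from foreground-like blocks, plus the MAE evaluation metric) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the block size, the use of AC-coefficient variance, and k = 2 are assumptions, and the edge-detection refinement step is omitted. A real implementation would use an image/FFT library; a standard-library toy version shows the idea.

```python
import math
import random

def dct2(block):
    """Orthonormal 2-D DCT-II of a small square block (list of lists)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

def dct_variance(block):
    """Variance of the AC coefficients: a texture-energy feature per block."""
    coeffs = [c for row in dct2(block) for c in row]
    ac = coeffs[1:]  # drop the DC (mean brightness) term
    mean = sum(ac) / len(ac)
    return sum((c - mean) ** 2 for c in ac) / len(ac)

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal 1-D k-means; returns one cluster label per value."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def mae(alpha_a, alpha_b):
    """Mean Absolute Error between two alpha mattes given as flat lists."""
    return sum(abs(x - y) for x, y in zip(alpha_a, alpha_b)) / len(alpha_a)

# Toy demo: a flat (background-like) block vs. a textured block.
flat = [[10.0] * 4 for _ in range(4)]
textured = [[float((x * 37 + y * 91) % 255) for y in range(4)]
            for x in range(4)]
feats = [dct_variance(flat), dct_variance(textured)]
labels = kmeans_1d(feats, k=2)  # low-variance cluster ~ background candidate
```

The cluster containing the lowest-variance blocks would serve as the background constraint; the foreground would then be taken from the remaining clusters intersected with strong edge responses, as the paper describes.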




Published

22-05-2020

Section

Ilmu Komputer (Computer Science)

How to Cite

Akuisisi Foreground dan Background Berbasis Fitur DTC pada Matting Citra secara Otomatis. (2020). Jurnal Teknologi Informasi dan Ilmu Komputer, 7(3), 547–554. https://doi.org/10.25126/jtiik.2020732195