Big Data Efficiency Using an Improved Nearest Neighbor

Authors

Aditya Hari Bawono, Ahmad Afif Supianto

Abstract

Classification is one of the important methods in data mining, and one of the most popular and fundamental classification methods is k-nearest neighbor (kNN). In kNN, the relationship between samples is measured by their degree of similarity, represented as a distance. In many cases, especially on big data, several samples may lie at the same distance yet fail to be selected as neighbors, so the choice of the parameter k strongly affects kNN's classification results. In addition, kNN's sorting phase becomes a computational bottleneck on big data. Classifying big data therefore calls for a more accurate and efficient method. Dependent Nearest Neighbor (dNN), the method proposed in this study, uses no parameter k and performs no sample sorting. Experiments showed that dNN ran 3 times faster than kNN and achieved 13% better accuracy.
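For context, the standard kNN procedure that the abstract critiques can be sketched as follows. This is ordinary kNN, not the proposed dNN (whose details are in the full paper); the data and function names here are illustrative only:

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples.

    Illustrative sketch of standard kNN, not the authors' dNN.
    """
    # Similarity between samples is represented as (Euclidean) distance.
    dists = [(math.dist(x, query), y) for x, y in zip(train, labels)]
    # The full sort below is the O(n log n) step that becomes costly on big
    # data. Note also that ties at the k-th smallest distance mean equally
    # close samples can be excluded, so the result depends strongly on k.
    dists.sort(key=lambda t: t[0])
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: a cluster of class "a" near the origin, class "b" farther away.
train = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (5.0, 5.0), (6.0, 5.0)]
labels = ["a", "a", "a", "b", "b"]
print(knn_classify(train, labels, (0.5, 0.5), k=3))  # prints "a"
```

In practice the full sort can be replaced by partial selection (e.g. a heap or `numpy.argpartition`), but the k-dependence and distance ties remain, which is the motivation the abstract gives for dNN.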


Full Text:

PDF

References


AFANDIE, M. N., CHOLISSODIN, I., & SUPIANTO, A. A. (2014). Implementasi metode k-nearest neighbor untuk pendukung keputusan pemilihan menu makanan sehat dan bergizi [Implementation of the k-nearest neighbor method for decision support in selecting healthy and nutritious food menus]. Repositori Jurnal Mahasiswa PTIIK UB, 3(1).

BOHACIK, J., & ZABOVSKY, M. (2017). Nearest Neighbor Method Using Non-nested Generalized Exemplars in Breast Cancer Diagnosis, 40–44.

CATTRAL, R., & OPPACHER, F. (2007). Discovering rules in the poker hand dataset. Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation - GECCO '07, 1870. https://doi.org/10.1145/1276958.1277329

EKARISTIO, I., SOEBROTO, A. A., & SUPIANTO, A. A. (2015). Pengembangan sistem pendukung keputusan pemilihan bibit unggul sapi bali menggunakan metode k-nearest neighbor [Development of a decision support system for selecting superior Bali cattle breeds using the k-nearest neighbor method]. Journal of Environmental Engineering and Sustainable Technology, 02(01), 49–57.

ERTUĞRUL, Ö. F., & TAĞLUK, M. E. (2017). A novel version of k nearest neighbor: Dependent nearest neighbor. Applied Soft Computing Journal, 55, 480–490. https://doi.org/10.1016/j.asoc.2017.02.020

HAN, J., KAMBER, M., & PEI, J. (2012). Data Mining: Concepts and Techniques. San Francisco, CA: Morgan Kaufmann. https://doi.org/10.1016/B978-0-12-381479-1.00001-0

MINELLI, M., CHAMBERS, M., & DHIRAJ, A. (2013). Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today's Businesses. Hoboken, NJ: John Wiley & Sons.

MULLICK, S. S., DATTA, S., & DAS, S. (2018). Adaptive learning-based k-nearest neighbor classifiers with resilience to class imbalance, 1–13.

NEO, T. K. C., & VENTURA, D. (2012). A direct boosting algorithm for the k-nearest neighbor classifier via local warping of the distance metric. Pattern Recognition Letters, 33(1), 92–102. https://doi.org/10.1016/j.patrec.2011.09.028

PAN, Z., WANG, Y., & KU, W. (2017). A new general nearest neighbor classification based on the mutual neighborhood information. Knowledge-Based Systems, 121, 142–152. https://doi.org/10.1016/j.knosys.2017.01.021

SONG, Y., LIANG, J., LU, J., & ZHAO, X. (2017). An efficient instance selection algorithm for k nearest neighbor regression. Neurocomputing, 251, 26–34. https://doi.org/10.1016/j.neucom.2017.04.018




DOI: http://dx.doi.org/10.25126/jtiik.2019662085