Constrained Density Peak Clustering

Viet-Thang Vu, T. T. Quyen Bui, Tien Loi Nguyen, Doan-Vinh Tran, Hong-Quan Do, Viet-Vu Vu, Sergey M. Avdoshin
Copyright: © 2023 | Pages: 19
DOI: 10.4018/IJDWM.328776

Abstract

Clustering is a commonly used tool for discovering knowledge in data mining. Density peak clustering (DPC) has recently gained attention for its ability to detect clusters with various shapes and noise, using just one parameter. DPC has shown advantages over other methods, such as DBSCAN and K-means, but it struggles with datasets that have both high and low-density clusters. To overcome this limitation, the paper introduces a new semi-supervised DPC method that improves clustering results with a small set of constraints expressed as must-link and cannot-link. The proposed method combines constraints and a k-nearest neighbor graph to filter out peaks and find the center for each cluster. Constraints are also used to support label assignment during the clustering procedure. The efficacy of this method is demonstrated through experiments on well-known data sets from UCI and benchmarked against contemporary semi-supervised clustering techniques.
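The abstract builds on the standard DPC procedure of ranking points by two quantities: local density and separation from denser points. As a point of reference, the following is a minimal sketch of those two quantities, following the original DPC formulation rather than the semi-supervised variant proposed in this paper; the function and variable names are illustrative only.

```python
import numpy as np

def dpc_scores(X, d_c):
    """Compute the two quantities standard DPC ranks points by:
    rho  - local density: number of other points within cutoff d_c
    delta - distance to the nearest point of strictly higher density
            (for the densest point: distance to the farthest point).
    Points with both large rho and large delta are candidate centers."""
    n = len(X)
    # pairwise Euclidean distance matrix
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (dist < d_c).sum(axis=1) - 1  # subtract 1 to exclude the point itself
    delta = np.zeros(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        if len(higher) == 0:
            delta[i] = dist[i].max()       # densest point: max distance
        else:
            delta[i] = dist[i, higher].min()
    return rho, delta
```

The limitation the paper targets is visible in this sketch: a single global cutoff d_c makes peaks in a low-density cluster hard to separate from ordinary points of a high-density cluster, which is where the constraints and the k-nearest neighbor graph come in.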

Introduction

The goal of clustering is to group a collection of objects together in a way that maximizes similarity within a cluster and dissimilarity between clusters (Ezugwu et al., 2022; Xu & Wunsch, 2005). It is a popular machine learning technique used in various fields, such as image processing, text mining, social science, and big data analysis, to mention just a few (Ezugwu et al., 2022; Krishnaswamy et al., 2023; Saha & Mukherjee, 2021; Chen et al., 2022; Liang & Chan, 2021; Hoi et al., 2022). Clustering can reveal the underlying structure of data, identify relationships between objects, and even detect outliers. Since it is an unsupervised learning task, clustering does not rely on prior knowledge of the data. Nevertheless, recent advances in machine learning have given rise to semi-supervised clustering as a promising research area. Semi-supervised clustering algorithms can leverage side information, such as labeled data or constraints, to enhance clustering quality and efficiency (Basu et al., 2008).

According to Jonschkowski et al. (2015), side information refers to additional data that are not part of the input or output space but can be helpful in the learning process. It is also used in other machine learning models, including support vector machines, multiview learning, and deep learning (Jonschkowski et al., 2015; Geoffrey et al., 2011). Generally speaking, side information can be expressed as constraints or labeled data, also known as seeds. In this paper, the following constraints are used to guide the clustering process for a given data set X = {x1, x2, …, xn}:

Must-Link: A Must-Link constraint between two data points xi and xj indicates that they must be placed in the same cluster.

Cannot-Link: A Cannot-Link constraint between two data points xi and xj indicates that they must be placed in separate clusters and should not be grouped together.
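These two constraint types can be represented concretely as pairs of point indices over X. The sketch below is illustrative only, not the paper's implementation: it uses a union-find structure to propagate must-link transitivity (if x1 must link with x2 and x2 with x3, then x1 and x3 belong together) and to detect an infeasible constraint set.

```python
class MustLinkGroups:
    """Union-find over point indices 0..n-1. Points joined by
    must-link constraints end up in the same group."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        # path-halving lookup of the group representative
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def check_constraints(n, must_link, cannot_link):
    """Return True if the constraint set is consistent, i.e. no
    cannot-link pair falls inside the same must-link group."""
    groups = MustLinkGroups(n)
    for i, j in must_link:
        groups.union(i, j)
    return all(groups.find(i) != groups.find(j) for i, j in cannot_link)
```

For example, must-links (x0, x1) and (x1, x2) combined with cannot-link (x0, x2) are inconsistent, since transitivity forces x0 and x2 into the same cluster.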

Figure 1 graphically illustrates the various types of side information that can be incorporated for data classification. Over the last 20 years, several semi-supervised clustering techniques have been developed in the literature. Typically, these methods are derived from unsupervised algorithms and aim to incorporate side information to improve clustering performance. Some of the significant techniques in this category include semi-supervised K-means (Pelleg & Baras, 2007; Basu et al., 2002, 2004; Bilenko et al., 2004; Davidson & Ravi, 2005), semi-supervised fuzzy clustering (Bensaid et al., 1996; Maraziotis, 2012; Abin, 2016; Grira et al., 2008), semi-supervised spectral clustering (Mavroeidis, 2010; Wang et al., 2014; Mavroeidis & Bingham, 2010), semi-supervised density-based clustering (Böhm & Plant, 2008; Lelis & Sander, 2009; Ruiz et al., 2010; Vu et al., 2019), semi-supervised hierarchical clustering (Davidson & Ravi, 2009), and semi-supervised graph-based clustering (Kulis et al., 2009; Anand & Reddy, 2011; Vu, 2018), to mention a few.
