How is Generalized DBSCAN (GDBSCAN) in ELKI implemented in Java/Scala? I am currently trying to find an efficient way to implement weighted DBSCAN on top of ELKI, to offset the inefficiency of sklearn's weighted DBSCAN implementation.
The reason I am doing this is that sklearn's DBSCAN performs too poorly when clustering terabyte-scale datasets (on the cloud, which is where I work).
For example, I wrote the following code with a database-creation function and a DBSCAN function that reads an array of arrays and spits out the indices of the points in each cluster.
/* Libraries imported from the ELKI library - https://elki-project.github.io/releases/current/doc/overview-summary.html */
import de.lmu.ifi.dbs.elki.algorithm.clustering.kmeans.KMeansElkan
import de.lmu.ifi.dbs.elki.data.model.{ClusterModel, DimensionModel, KMeansModel, Model}
import de.lmu.ifi.dbs.elki.data.model
import de.lmu.ifi.dbs.elki.data.{Clustering, DoubleVector, NumberVector}
import de.lmu.ifi.dbs.elki.database.{Database, StaticArrayDatabase}
import de.lmu.ifi.dbs.elki.datasource.ArrayAdapterDatabaseConnection
import de.lmu.ifi.dbs.elki.distance.distancefunction.minkowski.SquaredEuclideanDistanceFunction
import de.lmu.ifi.dbs.elki.distance.distancefunction.minkowski.EuclideanDistanceFunction
import de.lmu.ifi.dbs.elki.distance.distancefunction.NumberVectorDistanceFunction
import de.lmu.ifi.dbs.elki.algorithm.clustering.DBSCAN
// Imports for generalized DBSCAN
import de.lmu.ifi.dbs.elki.algorithm.clustering.gdbscan // Package with the generalized DBSCAN machinery required for weighted DBSCAN
import de.lmu.ifi.dbs.elki.algorithm.clustering.gdbscan.CorePredicate // The extension point for a custom (weighted) core predicate
import de.lmu.ifi.dbs.elki.algorithm.clustering.gdbscan.GeneralizedDBSCAN
import de.lmu.ifi.dbs.elki.utilities.ELKIBuilder
import de.lmu.ifi.dbs.elki.database.relation.Relation
import de.lmu.ifi.dbs.elki.datasource.DatabaseConnection
import de.lmu.ifi.dbs.elki.database.ids.{DBIDIter, DBIDUtil}
import de.lmu.ifi.dbs.elki.index.tree.metrical.covertree.SimplifiedCoverTree
import de.lmu.ifi.dbs.elki.data.{`type` => TYPE} // Must be renamed on import because 'type' is a reserved keyword in Scala
import de.lmu.ifi.dbs.elki.index.tree.spatial.rstarvariants.rstar.RStarTreeFactory // Alternative spatial index to the cover tree
import scala.collection.JavaConverters._ // Required for .asScala on ELKI's Java collections below
def createDatabaseWeighted(data: Array[Array[Double]], distanceFunction: NumberVectorDistanceFunction[NumberVector]): Database = {
val indexFactory = new SimplifiedCoverTree.Factory[NumberVector](distanceFunction, 0, 30) // Cover tree index to accelerate the range queries issued by DBSCAN
// Create a database over the in-memory array, attaching the index factory
val db = new StaticArrayDatabase(new ArrayAdapterDatabaseConnection(data), java.util.Arrays.asList(indexFactory))
// Load the data into the database; ELKI requires an explicit initialize() before the database can be queried
db.initialize()
db
}
def dbscanClusteringOriginalTest(data: Array[Array[Double]], distanceFunction: NumberVectorDistanceFunction[NumberVector] = SquaredEuclideanDistanceFunction.STATIC, epsilon: Double = 10, minpts: Int = 10) = {
// Use the same `distanceFunction` for the database index and for DBSCAN, so that the index can actually accelerate the range queries
val db = createDatabaseWeighted(data, distanceFunction)
val rel = db.getRelation(TYPE.TypeUtil.NUMBER_VECTOR_FIELD) // Fetch the vector relation backing the database
val dbscan = new DBSCAN[DoubleVector](distanceFunction, epsilon, minpts) // epsilon and minpts come from the function parameters or their defaults
val result: Clustering[Model] = dbscan.run(db)
var clusterCounter = 0 // Counts the partitions (clusters plus the noise set) returned by DBSCAN
result.getAllClusters.asScala.zipWithIndex.foreach { case (cluster, idx) =>
println("The type is " + cluster.getNameAutomatic)
/* Keep the clusters and the noise partition from the DBSCAN result */
if (cluster.getNameAutomatic == "Cluster" || cluster.getNameAutomatic == "Noise") {
clusterCounter += 1
// TODO: compute and store the per-cluster median here (left as a placeholder)
println(s"# $idx: ${cluster.getNameAutomatic}")
println(s"Size: ${cluster.size()}")
println(s"Model: ${cluster.getModel}")
println(s"ids: ${DBIDUtil.toString(cluster.getIDs)}") // Render the member DBIDs; calling toString on the iterator would only print the iterator object
}
}
}
I can get this to run very efficiently, but I am currently struggling with how to achieve the same result through the GeneralizedDBSCAN function. For example, one answer suggests that weighted DBSCAN (the equivalent of the sample_weight option in sklearn's DBSCAN implementation) can be achieved by modifying the CorePredicate in ELKI, but I am not sure how to implement that. For reference, my understanding of the unweighted GeneralizedDBSCAN call is sketched below.
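Here is a minimal sketch, assuming the ELKI 0.7.x API, where GeneralizedDBSCAN wired with the stock EpsilonNeighborPredicate and MinPtsCorePredicate should reproduce the plain DBSCAN call above (it reuses createDatabaseWeighted and the parameter names from my code):

import de.lmu.ifi.dbs.elki.algorithm.clustering.gdbscan.{EpsilonNeighborPredicate, GeneralizedDBSCAN, MinPtsCorePredicate}

def gdbscanClusteringTest(data: Array[Array[Double]], distanceFunction: NumberVectorDistanceFunction[NumberVector] = SquaredEuclideanDistanceFunction.STATIC, epsilon: Double = 10, minpts: Int = 10) = {
  val db = createDatabaseWeighted(data, distanceFunction)
  // GeneralizedDBSCAN factors DBSCAN into a neighbor predicate (the range query)
  // and a core predicate (the minpts test); with the stock predicates it should
  // behave exactly like the plain DBSCAN call above
  val gdbscan = new GeneralizedDBSCAN(
    new EpsilonNeighborPredicate(epsilon, distanceFunction), // epsilon range query
    new MinPtsCorePredicate(minpts),                         // core point iff at least minpts neighbors
    false)                                                   // coremodel flag: no need to track core points in the output
  val result: Clustering[Model] = gdbscan.run(db)
  result
}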
Any pointers would be greatly appreciated!
1 Answer
Implement your own GDBSCAN core predicate.
Instead of counting the neighbors as in the standard implementation, sum up their weights.
Then you are done. A sketch of such a predicate follows.
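A minimal sketch against the de.lmu.ifi.dbs.elki 0.7.x API: the names WeightedCorePredicate, weights, and minWeight are illustrative, not part of ELKI, and the exact CorePredicate method signatures changed between ELKI releases, so mirror MinPtsCorePredicate from your version rather than copying this verbatim.

import de.lmu.ifi.dbs.elki.algorithm.clustering.gdbscan.{CorePredicate, EpsilonNeighborPredicate, GeneralizedDBSCAN}
import de.lmu.ifi.dbs.elki.data.`type`.{SimpleTypeInformation, TypeUtil}
import de.lmu.ifi.dbs.elki.database.Database
import de.lmu.ifi.dbs.elki.database.ids.{DBIDRange, DBIDRef, DBIDs}

// Core predicate that declares a point "core" once the summed weight of its
// neighborhood reaches minWeight, instead of comparing a plain neighbor count to minpts
class WeightedCorePredicate(weights: Array[Double], minWeight: Double, ids: DBIDRange) extends CorePredicate {

  override def instantiate[T](database: Database, typ: SimpleTypeInformation[_]): CorePredicate.Instance[T] =
    new WeightedCorePredicate.Instance(weights, minWeight, ids).asInstanceOf[CorePredicate.Instance[T]]

  // Accept the same neighbor types that MinPtsCorePredicate accepts
  override def acceptsType(typ: SimpleTypeInformation[_]): Boolean =
    TypeUtil.DBIDS.isAssignableFromType(typ) || TypeUtil.NEIGHBORLIST.isAssignableFromType(typ)
}

object WeightedCorePredicate {
  class Instance(weights: Array[Double], minWeight: Double, ids: DBIDRange) extends CorePredicate.Instance[DBIDs] {
    override def isCorePoint(point: DBIDRef, neighbors: DBIDs): Boolean = {
      // Sum the neighbors' weights; ids.getOffset maps each DBID back to the row
      // index of the input array, which is valid because ArrayAdapterDatabaseConnection
      // + StaticArrayDatabase assign a single contiguous DBID range
      var sum = 0.0
      val iter = neighbors.iter()
      while (iter.valid()) {
        sum += weights(ids.getOffset(iter))
        iter.advance()
      }
      sum >= minWeight
    }
  }
}

Wiring it up then mirrors the plain DBSCAN call from the question, with weights holding one weight per input row and minWeight playing the role that minpts plays in the unweighted case; this matches the semantics of sklearn's sample_weight, where a point is core when the weights in its epsilon-neighborhood sum to at least min_samples:

val db = createDatabaseWeighted(data, distanceFunction)
val range = db.getRelation(TypeUtil.NUMBER_VECTOR_FIELD).getDBIDs.asInstanceOf[DBIDRange]
val gdbscan = new GeneralizedDBSCAN(
  new EpsilonNeighborPredicate(epsilon, distanceFunction),
  new WeightedCorePredicate(weights, minWeight, range),
  false)
val clustering = gdbscan.run(db)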