MLlib's SVM does not support parameter selection. By itself that is not a big problem, since you can write your own for loop to search over the weight C in C-SVM; still, integrating such a search into the existing MLlib does not seem like it would be much trouble.
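Such a for loop might look like the following sketch. `selectRegParam` and `trainAndScore` are hypothetical names, not MLlib APIs; in MLlib the C-style weight appears (in roughly inverted form) as the regularization parameter `regParam`, and the Spark-specific training call is shown only in the comment:

```scala
// Sketch of the for-loop parameter search mentioned above (a hypothetical
// helper, not part of MLlib): try each candidate regularization weight,
// score it on held-out data, and keep the best one.
def selectRegParam(candidates: Seq[Double],
                   trainAndScore: Double => Double): (Double, Double) = {
  val scored = for (regParam <- candidates) yield (regParam, trainAndScore(regParam))
  scored.maxBy(_._2) // (best regParam, best validation AUC)
}

// With Spark MLlib, `trainAndScore` would train and evaluate a model, e.g.:
//   regParam => {
//     val model = SVMWithSGD.train(training, 100, 1.0, regParam, 1.0)
//     model.clearThreshold()
//     val scoreAndLabels = validation.map(p => (model.predict(p.features), p.label))
//     new BinaryClassificationMetrics(scoreAndLabels).areaUnderROC()
//   }
```

Keeping the search loop separate from the training call is what makes it easy to later fold into MLlib: only the scoring function touches Spark.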
The MLlib SVM Classifier
In Spark, SVM is implemented in MLlib. Let's start from the example it provides.
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// scalastyle:off println
package org.apache.spark.examples.mllib

import org.apache.spark.{SparkConf, SparkContext}
// $example on$
import org.apache.spark.mllib.classification.{SVMModel, SVMWithSGD}
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.mllib.util.MLUtils
// $example off$

object SVMWithSGDExample {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SVMWithSGDExample")
    val sc = new SparkContext(conf)

    // $example on$
    // Load training data in LIBSVM format.
    val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")

    // Split data into training (60%) and test (40%).
    val splits = data.randomSplit(Array(0.6, 0.4), seed = 11L)
    val training = splits(0).cache()
    val test = splits(1)

    // Run training algorithm to build the model
    val numIterations = 100
    val model = SVMWithSGD.train(training, numIterations)

    // Clear the default threshold.
    model.clearThreshold()

    // Compute raw scores on the test set.
    val scoreAndLabels = test.map { point =>
      val score = model.predict(point.features)
      (score, point.label)
    }

    // Get evaluation metrics.
    val metrics = new BinaryClassificationMetrics(scoreAndLabels)
    val auROC = metrics.areaUnderROC()

    println(s"Area under ROC = $auROC")

    // Save and load model
    model.save(sc, "target/tmp/scalaSVMWithSGDModel")
    val sameModel = SVMModel.load(sc, "target/tmp/scalaSVMWithSGDModel")
    // $example off$

    sc.stop()
  }
}
// scalastyle:on println
To run the example locally instead of submitting it to a cluster, set the master explicitly when building the configuration:

val conf = new SparkConf().setAppName("SVMWithSGDExample")
  .setMaster("local")

I may explain the differences between the two modes in a later post.
In this program, the only place SVM parameters are set is the following part:
// Run training algorithm to build the model
val numIterations = 100
val model = SVMWithSGD.train(training, numIterations)
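`train(training, numIterations)` leaves every other knob at its default (in MLlib: stepSize = 1.0, regParam = 0.01, miniBatchFraction = 1.0). To tune more than the iteration count, the Spark MLlib guide shows a builder-style alternative that configures the underlying optimizer directly; the parameter values below are illustrative, and `training` is the RDD from the example above:

```scala
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.optimization.L1Updater

val svmAlg = new SVMWithSGD()
svmAlg.optimizer
  .setNumIterations(200)     // more SGD iterations than the default 100
  .setRegParam(0.1)          // stronger regularization than the default 0.01
  .setUpdater(new L1Updater) // switch from the default L2 penalty to L1
val modelL1 = svmAlg.run(training)
```

This is also the natural place to hook in the parameter-selection loop mentioned at the start: each candidate setting just configures `svmAlg.optimizer` before calling `run`.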
Extending to Multi-class SVM
Since an SVM is by design a binary (two-class) classifier, handling multi-class problems requires a small extension. LIBSVM adopts the One-vs-One approach: with three classes A, B, C, One-vs-One builds 3 (= 3*2/2) binary classifiers, {A vs B}, {A vs C}, and {B vs C}. A point to be classified is run through all 3 classifiers, and the final label is decided by their majority vote. (Note that Spark's SVMWithSGD itself supports only binary classification; for multi-class problems the spark.ml API instead provides a One-vs-Rest wrapper, OneVsRest.) As the LIBSVM documentation describes it:
LIBSVM implements the "one-against-one" approach for multi-class classification. If k is the number of classes, then k(k-1)/2 classifiers are constructed and each one trains data from two classes.
In classification we use a voting strategy: each binary classification is considered to be a voting where votes can be cast for all data points x - in the end a point is designated to be in a class with the maximum number of votes.
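The voting scheme described above can be sketched in plain Scala. This is a toy illustration, independent of LIBSVM and Spark: `pairwiseVote` stands in for a trained binary classifier on a given class pair.

```scala
// Toy sketch of one-against-one voting (not LIBSVM's actual code).
// For k classes there are k * (k - 1) / 2 unordered pairs,
// one binary classifier per pair.
def classPairs(k: Int): Seq[(Int, Int)] =
  for {
    i <- 0 until k
    j <- (i + 1) until k
  } yield (i, j)

// `pairwiseVote(i, j)` plays the role of the binary classifier trained on
// classes i and j: for the current point, it returns whichever of the two
// classes wins. The final prediction is the class with the most votes.
def oneVsOnePredict(k: Int, pairwiseVote: (Int, Int) => Int): Int = {
  val votes = Array.fill(k)(0)
  for ((i, j) <- classPairs(k)) votes(pairwiseVote(i, j)) += 1
  votes.indexOf(votes.max) // ties resolved toward the lower class index
}
```

With k = 3 this builds exactly the 3 classifiers from the A/B/C example; the quadratic growth in k is the usual argument against One-vs-One for problems with many classes.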