go-iforest
Go implementation of the Isolation Forest algorithm.
Isolation Forest is an unsupervised learning algorithm that detects anomalies: data patterns that differ from normal instances. Detection is performed by recursive data partitioning, which can be represented as a tree structure. At each step the data is split on a randomly chosen feature at a random value between that feature's minimum and maximum. Because anomalies are rare and different from other instances, fewer partitions are needed to isolate them, which corresponds to a shorter path length in the resulting tree. A shorter path therefore suggests that the instance may be an anomaly. To improve accuracy, an ensemble of such trees is built and the result is averaged over all trees.
For more information about the algorithm, please refer to this paper: IFOREST.
A stable release can be downloaded by issuing the following command:
go get -u gopkg.in/e-XpertSolutions/go-iforest.v1
This example shows how to use the Isolation Forest. Load the data, initialize the forest with the proper parameters, and call two functions, Train() and Test(), to create the model. The first builds the trees; the second finds the "anomaly threshold" and detects anomalies in the given data. After that you can pass new instances to Predict(), which labels them as normal ("0") or anomalous ("1"). Parallel versions of the testing and prediction functions are available; they use multiple goroutines to speed up computation. Created models can be saved to and loaded from files using the Save() and Load() methods.
package main

import (
	"fmt"

	"github.com/e-XpertSolutions/go-iforest/iforest"
)

func main() {
	// Input data must be loaded into a two-dimensional slice of float64.
	// Please note: loadData() is a custom function, not included in the
	// library.
	var inputData [][]float64
	inputData = loadData("filename")

	// Input parameters.
	treesNumber := 100
	subsampleSize := 256
	outliersRatio := 0.01
	routinesNumber := 10

	// Model initialization.
	forest := iforest.NewForest(treesNumber, subsampleSize, outliersRatio)

	// Training stage - creating the trees.
	forest.Train(inputData)

	// Testing stage - finding anomalies. Either Test or TestParallel can be
	// used; the concurrent version takes one additional parameter.
	forest.Test(inputData)
	forest.TestParallel(inputData, routinesNumber)

	// After testing it is possible to access the anomaly scores, the anomaly
	// bound (threshold) and the labels for the input dataset.
	threshold := forest.AnomalyBound
	anomalyScores := forest.AnomalyScores
	labelsTest := forest.Labels

	// To classify new instances, pass them to the Predict function; a
	// concurrent version is available to speed up computation.
	var newData [][]float64
	newData = loadData("someNewInstances")
	labels, scores := forest.Predict(newData)

	fmt.Println(threshold, anomalyScores, labelsTest, labels, scores)
}
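The labelling rule behind the prediction step can be sketched on its own: scores above the anomaly threshold are flagged as "1", the rest as "0". A minimal, self-contained illustration (the `label` helper is hypothetical, not part of the library):

```go
package main

import "fmt"

// label assigns 1 (anomaly) to scores above the threshold and 0 (normal)
// otherwise, mirroring the labelling described for Predict.
func label(scores []float64, threshold float64) []int {
	labels := make([]int, len(scores))
	for i, s := range scores {
		if s > threshold {
			labels[i] = 1
		}
	}
	return labels
}

func main() {
	// Example anomaly scores and a threshold of the kind found by Test().
	scores := []float64{0.41, 0.72, 0.48, 0.66}
	fmt.Println(label(scores, 0.6)) // prints [0 1 0 1]
}
```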
Contributions are greatly appreciated. The project follows the typical GitHub pull request model for contributions.
The sources are released under a BSD 3-Clause License. The full terms of that license can be found in the LICENSE file of this repository.