Boosting is a meta-learning approach that combines an ensemble of weak classifiers into a strong classifier. Adaptive Boosting (AdaBoost) implements this idea as a greedy search for a linear combination of classifiers, iteratively reweighting the training examples so that those misclassified by the current classifier receive more weight in the next round.
The icsiboost project implements AdaBoost over stumps (one-level decision trees) on discrete and continuous attributes (words and real values). See http://en.wikipedia.org/wiki/AdaBoost and the papers by Y. Freund and R. Schapire for more details. This approach is one of the most efficient and simplest ways to combine continuous and nominal values. Our implementation aims to allow training from millions of examples with hundreds of features in reasonable time and memory.
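To make the idea concrete, here is a minimal sketch of AdaBoost over decision stumps on a single continuous feature, assuming binary labels in {-1, +1}. This is an illustration of the algorithm only, not icsiboost's implementation: icsiboost handles multi-class problems, text features, and large-scale data, none of which are shown here.

```python
import math

def train_stump(xs, ys, ws):
    """Pick the (threshold, polarity) pair minimizing weighted error."""
    best = (float("inf"), None, None)  # (error, threshold, polarity)
    for thr in sorted(set(xs)):
        for pol in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, ws)
                      if (pol if x >= thr else -pol) != y)
            if err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(xs, ys, rounds=10, smoothing=1e-10):
    n = len(xs)
    ws = [1.0 / n] * n          # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, ws)
        # Classifier weight; smoothing avoids division by zero on perfect stumps.
        alpha = 0.5 * math.log((1 - err + smoothing) / (err + smoothing))
        ensemble.append((alpha, thr, pol))
        # Overweight misclassified examples, then renormalize.
        ws = [w * math.exp(-alpha * y * (pol if x >= thr else -pol))
              for x, y, w in zip(xs, ys, ws)]
        z = sum(ws)
        ws = [w / z for w in ws]
    return ensemble

def classify(ensemble, x):
    """Sign of the weighted vote of all stumps."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

For example, `adaboost([1, 2, 3, 10, 11, 12], [-1, -1, -1, 1, 1, 1], rounds=3)` learns a threshold separating the two groups, after which `classify` labels new values accordingly.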
USAGE: icsiboost [options] -S <stem>
--version print version information
-S <stem> defines the model/data/names file stem
-n <iterations> number of boosting iterations
-E <smoothing> set the smoothing value (default=0.5)
-V verbose mode
-C classification mode -- reads examples from <stdin>
-o long output in classification mode
--cutoff <freq> ignore nominal features occurring infrequently
--jobs <threads> number of threaded weak learners
--do-not-pack-model do not pack the model (to keep individual training steps)
--output-weights output training example weights at each iteration
--model <model> save/load the model to/from this file instead of <stem>.shyp
--train <file> bypass the .data filename to specify training examples
--test <file> report an additional error rate from another file during training (can be used multiple times; not implemented)
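The `-S <stem>` option points icsiboost at a pair of files, `<stem>.names` (class and attribute declarations) and `<stem>.data` (training examples). The helper below writes such a pair in the BoosTexter-style format; the exact syntax assumed here (classes on the first line, `text`/`continuous` attribute types, trailing periods) should be checked against the icsiboost documentation for your version.

```python
def write_stem(stem, classes, attributes, examples):
    """Write <stem>.names and <stem>.data (assumed BoosTexter-style syntax)."""
    # <stem>.names: classes on the first line, then one attribute per line.
    with open(stem + ".names", "w") as f:
        f.write(", ".join(classes) + ".\n")
        for name, kind in attributes:
            f.write(f"{name}: {kind}.\n")
    # <stem>.data: comma-separated attribute values, then the class label.
    with open(stem + ".data", "w") as f:
        for values, label in examples:
            f.write(", ".join(str(v) for v in values) + f", {label}.\n")

write_stem(
    "demo",
    classes=["spam", "ham"],
    attributes=[("subject", "text"), ("length", "continuous")],
    examples=[(["buy now", 120], "spam"), (["meeting notes", 45], "ham")],
)
```

With the files in place, training and classification would then use the documented flags, e.g. `icsiboost -S demo -n 100` to train and `icsiboost -S demo -C < new_examples` to classify.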
What's New in This Release:
· This release fixes several bugs in the training and test procedures and in error-rate reporting on multi-class problems.
· Optimizing the most frequently called functions yielded substantial training speedups.
· The documentation has been updated and the handling of rare corner cases improved.
· The F-measure framework has been tested extensively on diverse classification problems.