SHOGUN Changelog

New in SHOGUN 3.2.0 (Feb 18, 2014)

  • Features:
  • Fully support Python 3
  • Add mini-batch k-means [Parijat Mazumdar]
  • Add k-means++ (see the sketch below) [Parijat Mazumdar]
  • Add subsequence string kernel [lambday]
  • Bugfixes:
  • Compile fixes for the upcoming SWIG 3.0
  • Speedup for Gaussian process apply()
  • Improve unit / integration test checks
  • libbmrm uninitialized memory reads
  • libocas uninitialized memory reads
  • Octave 3.8 compile fixes [Orion Poplawski]
  • Fix java modular compile error [Bjoern Esser]
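
  A minimal, hedged sketch of the new k-means++ seeding from the Python
  modular interface; the use_kmeanspp constructor flag and the exact call
  signatures are assumptions rather than verbatim 3.2.0 API:

      # Sketch only: k-means with k-means++ seeding via the modular interface.
      # The third constructor argument (use_kmeanspp) is an assumption.
      from numpy import random
      from modshogun import RealFeatures, EuclideanDistance, KMeans

      data = random.rand(2, 100)              # 2 dimensions, 100 points
      feats = RealFeatures(data)
      distance = EuclideanDistance(feats, feats)

      kmeans = KMeans(3, distance, True)      # True -> k-means++ seeding (assumed)
      kmeans.train()
      centers = kmeans.get_cluster_centers()  # 2 x 3 matrix of centroids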

New in SHOGUN 3.1.1 (Jan 6, 2014)

  • Fix compile error occurring with CXX0X
  • Bump data version to required version

New in SHOGUN 3.1.0 (Jan 6, 2014)

  • This version contains mostly bugfixes, but also feature enhancements.
  • Most importantly, a couple of memory leaks related to apply() have been fixed.
  • Writing and reading of shogun features as protobuf objects is now possible.
  • Custom Kernel Matrices can now be 2^31-1 * 2^31-1 in size.
  • Multiclass IPython notebooks were added, and the existing ones were improved.
  • Leave-one-out crossvalidation is now conveniently supported.
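
  A rough, hedged sketch of leave-one-out cross-validation through the
  modular Python interface (one fold per sample); class names and
  constructor arguments are assumptions, not guaranteed 3.1.0 API:

      # Sketch only: leave-one-out == n-fold cross-validation with n == number of samples.
      from numpy import random, sign
      from modshogun import (RealFeatures, BinaryLabels, GaussianKernel, LibSVM,
                             CrossValidation, CrossValidationSplitting,
                             ContingencyTableEvaluation, ACCURACY)

      n = 30
      feats = RealFeatures(random.rand(2, n))
      labels = BinaryLabels(sign(random.randn(n)))

      svm = LibSVM(1.0, GaussianKernel(feats, feats, 1.0), labels)
      splitting = CrossValidationSplitting(labels, n)   # n subsets -> leave-one-out
      criterion = ContingencyTableEvaluation(ACCURACY)

      cross = CrossValidation(svm, feats, labels, splitting, criterion)
      result = cross.evaluate()                         # mean accuracy over n folds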

New in SHOGUN 2.0.0 (Sep 6, 2012)

  • It includes everything that was carried out before and during Google Summer of Code 2012.
  • Students implemented various new features such as structured output learning, Gaussian processes, latent variable SVM (and structured output learning), statistical tests in reproducing kernel Hilbert spaces, various multitask learning algorithms, and various usability improvements, to name a few.

New in SHOGUN 1.1.0 (Dec 13, 2011)

  • This version introduced the concept of 'converters', which enables you to construct embeddings of arbitrary features (see the sketch below).
  • It also includes a few new dimension reduction techniques and significant performance improvements in the dimensionality reduction toolkit.
  • Other improvements include a significant compilation speed-up, various bugfixes for modular interfaces and algorithms, and improved Cygwin, Mac OS X, and clang++ compatibility.
  • Github Issues is now used for tracking bugs and issues.
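
  As a hedged illustration of the converter concept, here is roughly what
  constructing an embedding could look like in the Python modular interface;
  LocallyLinearEmbedding, set_target_dim, set_k and embed() are assumed
  names modeled on shogun's dimension reduction classes:

      # Sketch only: using a converter to embed features into a lower-dimensional space.
      from numpy import random
      from modshogun import RealFeatures, LocallyLinearEmbedding

      feats = RealFeatures(random.rand(10, 200))   # 10-dimensional data, 200 samples

      converter = LocallyLinearEmbedding()
      converter.set_target_dim(2)                  # embed into 2 dimensions
      converter.set_k(10)                          # neighborhood size
      embedded = converter.embed(feats)            # returns a new feature object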

New in SHOGUN 1.0.0 (Sep 2, 2011)

  • This version features interfaces to new languages including Java, C#, Ruby, and Lua, a model selection framework, many dimension reduction techniques, Gaussian Mixture Model estimation, and a full-fledged online learning framework.
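
  A hedged sketch of the Gaussian Mixture Model estimation mentioned above;
  the GMM class, train_em() and get_nth_mean() are assumed modular-interface
  names, so treat this as an illustration rather than documented API:

      # Sketch only: fitting a 2-component GMM with EM.
      from numpy import random
      from modshogun import RealFeatures, GMM

      feats = RealFeatures(random.rand(2, 100))   # 2 dimensions, 100 samples

      gmm = GMM(2)                # two mixture components
      gmm.set_features(feats)
      gmm.train_em()              # expectation-maximization (assumed defaults)
      mean0 = gmm.get_nth_mean(0) # mean of the first component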

New in SHOGUN 0.10.0 (Dec 7, 2010)

  • Features:
  • Serialization of objects deriving from CSGObject, i.e. all shogun objects (SVM, Kernel, Features, Preprocessors, ...), as ASCII, JSON, XML and HDF5 (see the sketch below)
  • Create SVMLightOneClass
  • Add CustomDistance in analogy to custom kernel
  • Add HistogramIntersectionKernel (thanks Koen van de Sande for the patch)
  • Matlab 2010a support
  • SpectrumMismatchRBFKernel modular support (thanks Rob Patro for the patch)
  • Add ZeroMeanCenterKernelNormalizer (thanks Gorden Jemwa for the patch)
  • Swig 2.0 support
  • Bugfixes:
  • Custom Kernels can now be > 4G (thanks Koen van de Sande for the patch)
  • Set C locale on startup in init_shogun to prevent incompatibilities with ASCII floats and fprintf
  • Compile fix when reference counting is disabled
  • Fix set_position_weights for the WD kernel (reported by Dave duVerle)
  • Fix set_wd_weights for the WD kernel
  • Fix crasher in SVMOcas (reported by Yaroslav)
  • Cleanup and API Changes:
  • Renamed SVM_light/SVR_light to SVMLight etc.
  • Remove C prefix in front of non-serializable class names
  • Drop CSimpleKernel and introduce CDotKernel as its base class. This way all dot-product based kernels can be applied on top of DotFeatures and only a single implementation for such kernels is needed.
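
  A hedged sketch of the new serialization support; SerializableAsciiFile
  and the save_serializable()/load_serializable() calls are the names I
  would expect the modular interface to expose, so treat them as assumptions:

      # Sketch only: saving and restoring a shogun object in ASCII form.
      from modshogun import GaussianKernel, SerializableAsciiFile

      kernel = GaussianKernel()
      kernel.set_width(2.0)

      fout = SerializableAsciiFile("kernel.txt", "w")   # JSON, XML and HDF5 variants also exist
      kernel.save_serializable(fout)
      fout.close()

      fin = SerializableAsciiFile("kernel.txt", "r")
      restored = GaussianKernel()
      restored.load_serializable(fin)
      fin.close()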

New in SHOGUN 0.9.3 (May 31, 2010)

  • Features:
  • Experimental lp-norm MCMKL
  • New Kernels: SpectrumRBFKernel, SpectrumMismatchRBFKernel, WeightedDegreeRBFKernel
  • WDK kernel supports amino acids
  • String Features now support append operations (and creation of multiple ordered string features)
  • python-dbg support
  • Allow floats as input for the custom kernel (and matrices > 4GB in size); see the sketch below
  • Bugfixes:
  • Static linking fix.
  • Fix sparse linear kernel's add_to_normal
  • Cleanup and API Changes:
  • Remove init() function in Performance Measures
  • Adjust .so suffix for python and use python distutils to figure out install paths
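
  A hedged sketch of feeding a precomputed (float) kernel matrix into a
  custom kernel; set_full_kernel_matrix_from_full is the method name I
  believe the modular interface uses, but treat it as an assumption:

      # Sketch only: wrapping a precomputed kernel matrix in a CustomKernel.
      from numpy import random, float64
      from modshogun import CustomKernel

      K = random.rand(50, 50).astype(float64)
      K = 0.5 * (K + K.T)                       # symmetrize the toy matrix

      kernel = CustomKernel()
      kernel.set_full_kernel_matrix_from_full(K)
      km = kernel.get_kernel_matrix()           # read it back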

New in SHOGUN 0.9.2 (Mar 31, 2010)

  • Features:
  • Direct reading and writing of ASCII/binary/HDF5-based files.
  • Implement multitask kernel normalizer.
  • Implement SNP kernel.
  • Implement time limit for libsvm/libsvr.
  • Integrate Elastic Net MKL (thanks Ryota Tomioka for the patch).
  • Implement Hashed WD Features.
  • Implement Hashed Sparse Poly Features.
  • Integrate liblinear 1.51 (see the sketch below)
  • LibSVM can now be trained with bias disabled.
  • Add functions to set/get global and local io/parallel/... objects.
  • Bugfixes:
  • Fix set_w() for linear classifiers.
  • The static Octave, Python, Cmdline and modular Python interfaces compile cleanly under Windows/Cygwin again.
  • In the static interfaces, testing could fail when not performed directly after training.
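
  A hedged sketch of training a linear classifier with the integrated
  liblinear solver; the constructor arguments and label class follow the
  current modular interface and are assumptions, not 0.9.2-era API:

      # Sketch only: binary classification with the liblinear solver.
      from numpy import random, sign
      from modshogun import RealFeatures, BinaryLabels, LibLinear

      feats = RealFeatures(random.rand(5, 100))        # 5 features, 100 examples
      labels = BinaryLabels(sign(random.randn(100)))

      svm = LibLinear(1.0, feats, labels)              # C, features, labels
      svm.set_bias_enabled(True)
      svm.train()
      w = svm.get_w()                                  # learned weight vector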

New in SHOGUN 0.8.0 (Aug 17, 2009)

  • This release contains quite a large number of bugfixes and documentation updates (tutorials and a method overview are now available for C++ developers and for the static and modular interfaces).
  • Multiple Kernel Learning has been reworked: it now works for regression and for one- and two-class classification, using either interleaved optimization via SVMLight or the wrapper algorithm on top of any SVM such as LibSVM.
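
  A hedged sketch of the reworked MKL with interleaved optimization;
  MKLClassification, CombinedKernel/CombinedFeatures and
  set_interleaved_optimization_enabled are assumed modular-interface names:

      # Sketch only: multiple kernel learning with interleaved optimization.
      from numpy import random, sign
      from modshogun import (RealFeatures, BinaryLabels, CombinedFeatures,
                             CombinedKernel, GaussianKernel, MKLClassification)

      data = random.rand(4, 60)
      feats = CombinedFeatures()
      feats.append_feature_obj(RealFeatures(data))     # one feature object per sub-kernel
      feats.append_feature_obj(RealFeatures(data))

      kernel = CombinedKernel()
      kernel.append_kernel(GaussianKernel(10, 0.5))    # cache size, width
      kernel.append_kernel(GaussianKernel(10, 2.0))
      kernel.init(feats, feats)

      mkl = MKLClassification()
      mkl.set_interleaved_optimization_enabled(True)   # interleaved mode via SVMLight
      mkl.set_kernel(kernel)
      mkl.set_labels(BinaryLabels(sign(random.randn(60))))
      mkl.train()
      weights = kernel.get_subkernel_weights()         # learned kernel weights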

New in SHOGUN 0.7.3 (May 4, 2009)

  • Features:
  • Improve libshogun/developer tutorial.
  • Implement convenience function for parallel quicksort.
  • FASTA/FASTQ file loading for StringFeatures.
  • Bugfixes:
  • The get_name function was undefined in Evaluation, leaving the PerformanceMeasures class defunct.
  • Work around bugs in the standard template library's math functions.
  • Compiles cleanly under OSX now, thanks to James Kyle.
  • Cleanup and API Changes:
  • Make sure that all destructors are declared virtual.

New in SHOGUN 0.7.2 (Mar 23, 2009)

  • This release contains several cleanups and enhancements.
  • The python_modular interface now supports all data types: dense matrices, SciPy csc_sparse sparse matrices, and strings of type bool, char, (u)int{8,16,32,64}, and float{32,64,96}.
  • In addition, individual vectors and strings can now be obtained and even changed; see examples/python_modular/features_*.py and the sketch below.
  • Now AUC maximization works with arbitrary kernel SVMs.
  • Further documentation updates, polished examples, and bugfixes were made.
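
  A hedged sketch of the feature handling described above; method names
  follow examples/python_modular and should be treated as assumptions:

      # Sketch only: dense and string features, fetching and changing individual vectors.
      from numpy import array, float64
      from modshogun import RealFeatures, StringCharFeatures, DNA

      dense = RealFeatures(array([[1.0, 2.0], [3.0, 4.0]], dtype=float64))
      vec0 = dense.get_feature_vector(0)               # fetch one column vector
      dense.set_feature_vector(array([7.0, 8.0]), 0)   # ...and change it (assumed setter)

      strings = StringCharFeatures(["ACGT", "GATTACA"], DNA)
      num = strings.get_num_vectors()                  # 2 strings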

New in SHOGUN 0.7.1 (Mar 9, 2009)

  • This release contains several cleanups, feature enhancements, and bugfixes.
  • The configure script has been improved to smooth installation.
  • The documentation and the tutorial on how to develop with libshogun were improved.
  • The elwms (eierlegendewollmilchsau) interface was added, which interfaces to Python, Octave, R, and Matlab in one file and provides the run_{octave,python,r} commands to run code from other languages transparently, making variables available to the target language and avoiding file I/O.
  • The Custom Kernel no longer requires features or initialization, not even when used in a CombinedKernel.

New in SHOGUN 0.7.0 (Feb 20, 2009)

  • This release introduces several drastic changes.
  • Most importantly, shogun has been split into libshogun and libshogunui, multiple kernel learning (for classification) is now available via GLPK, support has been added for "dotfeatures", which enable learning of linear classifiers with mixed datatypes, and shogun now runs on the iPhone.

New in SHOGUN 0.6.7 (Nov 26, 2008)

  • This release contains several cleanups and bugfixes.
  • The ambiguous self-defined data types for char, int, float, and so on were replaced with "standardized" types.
  • It fixes non-contiguous arrays and vectors in the Python modular interface.
  • Improper assignment of labels in the constructor of WDSVMOcas was fixed.
  • This problem led to segfaults on destruction in the (Python) modular interface.
  • A potential segfault in MultiClassSVM was fixed.

New in SHOGUN 0.6.6 (Oct 31, 2008)

  • This release contains several cleanups and bugfixes:
  • Implement a KernelNormalizer class providing several normalization functions that can now be attached to almost any kernel via set_normalizer() in the modular interfaces and set_kernel_normalization in the static interfaces (see the sketch below). This fixes a long-standing bug in the WeightedDegreePositionStringKernel normalization. WARNING: this breaks compatibility with all previously trained WD-shift kernel models; use FIRSTELEMENT / CFirstElementKernelNormalizer for an approximation of the previous buggy behaviour. It also breaks WordMatchKernel, since normalization is now enabled by default for that kernel.
  • The custom kernel no longer requires lhs/rhs features (it will create its own dummy features)
  • Linear kernels no longer use the kernel cache (it only slows things down)
  • Integrate the Oligo string kernel (from Meinicke et al., 2004)
  • Remove use_precompute hack from SVMLight.
  • Add precompute_kernels function to turn kernels appended to a combined kernel into CustomKernels (i.e. precomputed ones).
  • Add distances BrayCurtis, ChiSquare, Cosine and Tanimoto.
  • Bugfixes:
  • Support Intel MKL on 32bit archs.
  • Fix compilation when atlas/lapack is not available.
  • Include missing file regression/Regression.h.
  • Fix formula in CosineDistance.
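
  A hedged sketch of attaching a normalizer through set_normalizer();
  SqrtDiagKernelNormalizer is one of the normalizers I believe ships with
  shogun, but treat the exact names as assumptions:

      # Sketch only: attaching a kernel normalizer in the modular interface.
      from numpy import random
      from modshogun import RealFeatures, GaussianKernel, SqrtDiagKernelNormalizer

      feats = RealFeatures(random.rand(3, 20))
      kernel = GaussianKernel(10, 1.5)                 # cache size, width
      kernel.set_normalizer(SqrtDiagKernelNormalizer())
      kernel.init(feats, feats)
      km = kernel.get_kernel_matrix()                  # normalized kernel matrix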

New in SHOGUN 0.6.5 (Oct 11, 2008)

  • This release contains several cleanups and bugfixes.
  • It implements a general kernel normalizer that can be attached to kernels via set_normalizer() in the modular interfaces and set_kernel_normalization in the static interfaces.
  • This fixes a long standing bug in the WeightedDegreePositionStringKernel normalization, but also breaks compatibility for a few kernels.
  • This release integrates the Oligo string kernel, and adds the distances BrayCurtis, ChiSquare, Cosine, and Tanimoto.
  • It supports Intel MKL on 32-bit architectures.
  • It fixes compilation when Atlas/Lapack is not available.