Releases: RubixML/ML
0.0.8-alpha
- Added Model Orchestra meta estimator
- Added Stop Word Filter transformer
- Added document frequency smoothing to TF-IDF Transformer
- Added Uniform neural net weight initializer
- Improved Gaussian Mixture numerical stability
- Fixed missing probabilities in Classification Tree
- Removed MetaEstimator interface
- Added model Wrapper interface
- AdaBoost is now probabilistic
- Added Constant guessing strategy
- Added N-Gram word tokenizer
- Added Skip-Gram word tokenizer
- Changed FCM and K Means default max epochs
- Added zip method to Labeled dataset
- Removed stop word filter from Word Count Vectorizer
- Changed order of t-SNE hyper-parameters
- Grid search now has automatic default Metric
- Base k-D Tree now uses highest variance splits
- Renamed Raw Pixel Encoder to Image Vectorizer
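The N-Gram and Skip-Gram word tokenizers added in this release emit runs of adjacent words as single tokens. As a minimal illustrative sketch of the contiguous n-gram idea in Python (not the library's PHP API):

```python
def word_ngrams(text, n=2):
    """Return every contiguous run of n words in the text as one token."""
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
```

A skip-gram tokenizer generalizes this by also allowing a bounded number of skipped words between the members of each gram.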
0.0.7-alpha
- Added Support Vector Machine classifier and regressor
- Added One Class SVM anomaly detector
- Added Verbose interface for logging
- Added Linear Discriminant Analysis (LDA) transformer
- Manifold learners are now considered Estimators
- Transformers can now transform labels
- Added Cyclic neural net Optimizer
- Added k-d neighbors search with pruning
- Added post pruning to CART estimators
- Estimators with explicit loss functions are now Verbose
- Grid Search: Added option to retrain best model on full dataset
- Filesystem Persister now keeps backups of latest models
- Added loading backup models to Persister API
- Added PSR-3 compatible screen logger
- Grid Search is now Verbose
- t-SNE embedder is now Verbose
- Added Serializer interface
- Added Native and Binary serializers
- Fixed Naive Bayes resetting category counts during partial training
- Pipeline and Persistent Model are now Verbose
- Classification and Regression Trees are now Verbose
- Random Forest can now return feature importances
- Gradient Boost now accepts base and booster estimators
- Blurry Median strategy is now Blurry Percentile
- Added Mean strategy
- Removed dataset save and load methods
- Subsumed Extractor API into Transformer
- Removed Concentration metric
- Changed Metric and Report API
- Added Text Normalizer transformer
- Added weighted predictions to KNN estimators
- Added HTML Stripper transformer
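Weighted predictions in KNN estimators let closer neighbors count for more than distant ones. A common scheme, shown here as an illustrative Python sketch rather than the library's PHP implementation, weights each neighbor's vote by the inverse of its distance:

```python
from collections import defaultdict

def weighted_knn_predict(neighbors):
    """neighbors: list of (distance, label) pairs for the k nearest samples.
    Each vote is weighted by inverse distance so nearer neighbors dominate."""
    votes = defaultdict(float)
    for dist, label in neighbors:
        votes[label] += 1.0 / (dist + 1e-8)  # epsilon guards divide-by-zero
    return max(votes, key=votes.get)
```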
0.0.6-alpha
- Added Gradient Boost regressor
- Added t-SNE embedder
- AdaBoost now uses SAMME multiclass algorithm
- Added Redis persister
- Added Max Absolute Scaler
- Added Principal Component Analysis transformer
- Pipeline is now Online and has an elastic option
- Added Elastic interface for transformers
- Z Scale Standardizer is now Elastic
- Min Max Normalizer is now Elastic
- TF-IDF Transformer is now Elastic
- Added Huber Loss cost function
- Added Swiss Roll generator
- Moved Generators to the Datasets directory
- Added Persister interface for Persistable objects
- Added overwrite protection to Persistent Model meta estimator
- Multiclass Breakdown report now breaks down user-defined classes
- Renamed restore method to load on Datasets and Persisters
- Random Forest now accepts a base estimator instance
- CARTs now use max features heuristic by default
- Added build/quick factory methods to Datasets
- Added Interval Discretizer transformer
- GaussianNB and Naive Bayes now accept class prior probabilities
- Single layer neural net estimators now use snapshotting
- Removed Image Patch Descriptor
- Added Learner interface for trainable estimators
- Added smart cluster initialization to K Means and Fuzzy C Means
- Circle and Half Moon generators now generate Labeled datasets
- Gaussian Mixture now uses K Means initialization
- Removed Isolation Tree anomaly detector
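The Elastic interface introduced in this release lets transformers such as the Z Scale Standardizer keep fitting as new batches stream in. One standard way to do that, sketched here in Python for illustration (assuming Welford's online algorithm, not the library's PHP code), is to maintain running mean and variance:

```python
class OnlineZScale:
    """Elastic z-score standardizer: running mean/variance via Welford's
    algorithm, so fitting can continue across partial batches."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def transform(self, x):
        std = (self.m2 / self.n) ** 0.5 if self.n > 1 else 1.0
        return (x - self.mean) / (std or 1.0)
```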
0.0.5-alpha
- Added Gaussian Mixture clusterer
- Added Batch Norm hidden layer
- Added PReLU hidden layer
- Added Relative Entropy cost function to the neural net
- Added random weighted subset to datasets
- Committee Machine is now a classifier only; added expert influence
- Added type method to Estimator API
- Removed classifier, detector, clusterer, regressor interfaces
- Added epsilon smoothing to Gaussian Naive Bayes
- Added option to fit priors in Naive Bayes classifiers
- Added Jaccard distance kernel
- Fixed Hamming distance calculation
- Added Alpha Dropout layer
- Fixed divide by 0 in Cross Entropy cost function
- Added scaling parameter to Exponential cost function
- Added Image Patch Descriptor extractor
- Added Texture Histogram descriptor
- Added Average Color descriptor
- Removed parameters from Dropout and Alpha Dropout layers
- Added option to remove biases in Dense and Placeholder layers
- Optimized Dataset objects
- Optimized matrix and vector operations
- Added grid params to Param helper
- Added Gaussian RBF activation function
- Renamed Quadratic cost function to Least Squares
- Added option to stratify dataset in Hold Out and K Fold
- Added Monte Carlo cross validator
- Implemented noise as layer instead of activation function
- Removed Identity activation function
- Added Xavier 1 and 2 initializers
- Added He initializer
- Added Le Cun initializer
- Added Normal (Gaussian) initializer
0.0.4-alpha
- Added Dropout hidden layer
- Added K-d Neighbors classifier and regressor
- Added Extra Tree Regressor
- Added Adaline regressor
- Added sorting by column to Dataset
- Added sort by label to Labeled Dataset
- Added appending and prepending to Dataset
- Added Dataset Generators
- Added Noisy ReLU activation function
- Fixed bug in dataset stratified fold
- Added stop word filter to Word Count Vectorizer
- Added centering and scaling options for standardizers
- Added min dimensionality estimation on random projectors
- Added Gaussian Random Projector
- Removed Ellipsoidal distance kernel
- Added Thresholded ReLU activation function
- Changed API of Raw Pixel Encoder
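The stratified fold fix above concerns splitting a labeled dataset into folds that preserve class proportions. A minimal Python sketch of the technique (round-robin within each class; illustrative, not the library's PHP implementation):

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Split sample indices into k folds, each preserving the label
    proportions of the full set."""
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    folds = [[] for _ in range(k)]
    for indices in by_label.values():
        for j, i in enumerate(indices):
            folds[j % k].append(i)  # deal each class out round-robin
    return folds
```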
0.0.3-alpha
- Added Extra Tree classifier
- Random Forest now supports Extra Trees
- New Decision Tree implementation
- Added Canberra distance kernel
- Committee Machine is now a Meta Estimator Ensemble
- Added Bootstrap Aggregator Meta Estimator Ensemble
- Added Gaussian Naive Bayes
- Naive Bayes classifiers are now Online Estimators
- Added tolerance to Robust Z Score detector
- Added Concentration clustering metric (Calinski Harabasz)
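The Canberra distance kernel added above sums the term-wise ratio of absolute difference to absolute magnitude, which makes it sensitive to differences near zero. An illustrative Python sketch (the zero-denominator convention shown is the usual one, assumed here):

```python
def canberra(a, b):
    """Canberra distance: sum of |x - y| / (|x| + |y|) over coordinates.
    Coordinates where both values are zero contribute nothing."""
    total = 0.0
    for x, y in zip(a, b):
        denom = abs(x) + abs(y)
        if denom > 0.0:
            total += abs(x - y) / denom
    return total
```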
0.0.2-alpha
Core neural net update, anomaly detection, and much more
0.0.1-alpha
Implemented head method on Supervised and Unsupervised datasets.