
Releases: tensorflow/adanet

AdaNet v0.9.0

09 Jul 20:53
  • Drop support for TensorFlow 1.x. Only TensorFlow >= 2.1 is supported.
  • Drop support for Python 2.x. Only Python >= 3.6 is supported.
  • Preserve the outputs in the PredictionOutput that are not in the best_export_outputs.
  • Add warm_start support to adanet Estimators (see the sketch after this list).
  • Add support for predicting/serving on TPU.
  • Introduce support for AutoEnsembleTPUEstimator.
  • Introduce experimental adanet.experimental Keras ModelFlow APIs.
  • Replace reports.proto with simple serialized JSON. AdaNet no longer has proto dependencies.
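
A minimal sketch of warm-starting under these changes, assuming the new support is exposed through the standard tf.estimator warm_start_from constructor argument; the candidate pool, feature columns, and checkpoint paths are illustrative:

```python
import adanet
import tensorflow as tf

# Sketch of warm-starting an AdaNet estimator (v0.9.0+). Assumes the
# standard tf.estimator `warm_start_from` argument is supported; the
# feature columns, candidate pool, and paths below are illustrative.
head = tf.estimator.BinaryClassHead()
feature_columns = [tf.feature_column.numeric_column("x", shape=[2])]

estimator = adanet.AutoEnsembleEstimator(
    head=head,
    candidate_pool={
        "linear": tf.estimator.LinearEstimator(
            head=head, feature_columns=feature_columns),
        "dnn": tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns, hidden_units=[64]),
    },
    max_iteration_steps=1000,
    model_dir="/tmp/adanet_run_2",
    # Reuse weights from an earlier run's checkpoint directory:
    warm_start_from="/tmp/adanet_run_1",
)
```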

AdaNet v0.8.0

02 Oct 17:31
  • Add support for TensorFlow 2.0.
  • Begin developing experimental Keras API for auto-ensembling.
  • Support advanced subnetworks and subestimators that need to read from and write to disk by giving them a dedicated subdirectory in model_dir.
  • Fix race condition in parallel evaluation during distributed training.
  • Support subnetwork hooks requesting early stopping.
  • Add AdaNet replay: the ability to rerun training without having to re-determine the best candidate for each iteration. A list of best indices from the previous run is provided and honored by AdaNet.
  • Introduce adanet.ensemble.MeanEnsembler with a basic implementation for taking the mean of the logits of subnetworks. It can also include the mean of last_layer (helpful if subnetworks have the same configuration) in the predictions and export_outputs of the EstimatorSpec (see the first sketch after this list).
  • BREAKING CHANGE: AdaNet now supports arbitrary metrics when choosing the best ensemble. To achieve this, the interface of adanet.Evaluator has changed: the Evaluator.evaluate_adanet_losses(sess, adanet_losses) function is replaced with Evaluator.evaluate(sess, ensemble_metrics). The ensemble_metrics parameter contains all computed metrics for each candidate ensemble as well as the adanet_loss. Code that overrides evaluate_adanet_losses must migrate to the new evaluate method (we suspect that such cases are very rare); see the migration sketch after this list.
  • Allow user to specify a maximum number of AdaNet iterations.
  • BREAKING CHANGE: When supplied, the adanet.Evaluator now runs before Estimator#evaluate, Estimator#predict, and Estimator#export_saved_model, which can change the best candidate chosen in the final round. Previously, these methods selected the best candidate using the adanet_loss moving average collected during training: while ensembles from completed iterations had been ranked by the Evaluator, candidates still in training were ranked by adanet_loss. Now, when a user passes an Evaluator that, for example, uses a hold-out set, AdaNet runs it before making predictions or exporting a SavedModel, so the best new candidate is chosen according to the hold-out set.
  • Support tf.keras.metrics.Metrics during evaluation.
  • Allow users to disable summaries to reduce memory and disk footprint.
  • Stop individual subnetwork training on OutOfRangeError raised during bagging.
  • Train forever if max_steps and steps are both None.
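
A minimal sketch of the new MeanEnsembler; treat the exact constructor arguments, including add_mean_last_layer_predictions, as assumptions based on the description above:

```python
import adanet

# Sketch of adanet.ensemble.MeanEnsembler (v0.8.0+), which ensembles by
# taking the mean of subnetwork logits. The flag below additionally puts
# the mean of last_layer into predictions and export_outputs (useful when
# subnetworks share the same configuration).
mean_ensembler = adanet.ensemble.MeanEnsembler(
    add_mean_last_layer_predictions=True)

# Passed to an estimator via the `ensemblers` argument, e.g.:
# estimator = adanet.Estimator(..., ensemblers=[mean_ensembler])
```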
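
And a migration sketch for the Evaluator interface change; the exact structure of ensemble_metrics is not spelled out above, so the per-candidate dict below is an assumption:

```python
import adanet


class MyEvaluator(adanet.Evaluator):
  """Migration sketch for the v0.8.0 interface change.

  Assumes `ensemble_metrics` holds one dict of computed metric tensors per
  candidate ensemble, including "adanet_loss"; treat that structure as an
  assumption, not documented API.
  """

  # Before v0.8.0, subclasses overrode:
  #
  #   def evaluate_adanet_losses(self, sess, adanet_losses):
  #     return sess.run(adanet_losses)

  # From v0.8.0 on, override evaluate() and rank candidates by any metric:
  def evaluate(self, sess, ensemble_metrics):
    return sess.run([metrics["adanet_loss"] for metrics in ensemble_metrics])
```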

AdaNet v0.7.0

26 Jun 20:41
  • Add embeddings support on TPU via TPUEmbedding.
  • Train the current iteration forever when max_iteration_steps=None.
  • Introduce adanet.AutoEnsembleSubestimator for training subestimators on different training-data partitions and implementing ensemble methods like bootstrap aggregating (a.k.a. bagging).
  • Fix bug when using Gradient Boosted Decision Tree Estimators with AutoEnsembleEstimator during distributed training.
  • Allow AutoEnsembleEstimator's candidate_pool argument to be a lambda in order to create Estimators lazily (see the sketch after this list).
  • Remove adanet.subnetwork.Builder#prune_previous_ensemble from the abstract class. This behavior is now specified using adanet.ensemble.Strategy subclasses.
  • BREAKING CHANGE: Only support TensorFlow >= 1.14 to better support TensorFlow 2.0. Drop support for versions < 1.14.
  • Correct eval metric computations on CPU and GPU.
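
A sketch of the lazy candidate_pool, written against a recent tf.estimator API for brevity; the head, feature columns, and hyperparameters are illustrative:

```python
import adanet
import tensorflow as tf

# Sketch of a lazy candidate_pool (v0.7.0+): passing a lambda defers
# Estimator construction until AdaNet calls it with a RunConfig.
head = tf.estimator.BinaryClassHead()
feature_columns = [tf.feature_column.numeric_column("x", shape=[2])]

estimator = adanet.AutoEnsembleEstimator(
    head=head,
    candidate_pool=lambda config: {
        "linear": tf.estimator.LinearEstimator(
            head=head, feature_columns=feature_columns, config=config),
        "dnn": tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns,
            hidden_units=[128, 64], config=config),
    },
    max_iteration_steps=1000,
)
```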

AdaNet v0.6.2

29 Apr 23:01
  • Fix n+1 global-step increment bug in adanet.AutoEnsembleEstimator. This bug incremented the global_step by n+1 for n canned Estimators like DNNEstimator.

AdaNet v0.6.1

29 Mar 15:47
  • Maintain compatibility with TensorFlow versions >=1.9.

AdaNet v0.6.0

28 Mar 02:22
  • Officially support AdaNet on TPU using adanet.TPUEstimator with adanet.Estimator feature parity.
  • Support dictionary candidate pools in adanet.AutoEnsembleEstimator constructor to specify human-readable candidate names.
  • Improve AutoEnsembleEstimator's ability to handle custom tf.estimator.Estimator subclasses.
  • Introduce adanet.ensemble which contains interfaces and examples of ways to learn ensembles using AdaNet. Users can now extend AdaNet to use custom ensemble-learning methods.
  • Record TensorBoard scalar, image, histogram, and audio summaries on TPU during training.
  • Add debug mode to help detect NaNs and Infs during training.
  • Improve subnetwork tf.train.SessionRunHook support to handle more edge cases.
  • Maintain compatibility with TensorFlow versions 1.9 through 1.13. (In practice this release only works with TensorFlow >= 1.13; fixed in AdaNet v0.6.1.)
  • Improve documentation including adding 'Getting Started' documentation to adanet.readthedocs.io.
  • BREAKING CHANGE: Importing the adanet.subnetwork package using from adanet.core import subnetwork will no longer work, because the package was moved to the adanet/subnetwork directory. Most users should already be using adanet.subnetwork or from adanet import subnetwork, and should not be affected (see the sketch below).
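
A before/after sketch of the import change:

```python
# v0.6.0 moved the subnetwork package out of adanet/core, so this import
# no longer works:
#
#   from adanet.core import subnetwork
#
# Use the public path instead:
from adanet import subnetwork  # or: import adanet.subnetwork
```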

AdaNet v0.5.0

17 Dec 23:18
  • Support training on TPU using adanet.TPUEstimator.
  • Allow subnetworks to specify tf.train.SessionRunHook instances for training with adanet.subnetwork.TrainOpSpec (see the sketch after this list).
  • Add API documentation generation with Sphinx.
  • Fix bug preventing subnetworks with Resource variables from working beyond the first iteration.
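
A sketch of returning adanet.subnetwork.TrainOpSpec from a Builder so the subnetwork trains with its own hooks; the optimizer and logging hook are illustrative, only the relevant method is shown, and the chief_hooks/hooks field names are assumptions:

```python
import adanet
import tensorflow as tf


class _SimpleBuilder(adanet.subnetwork.Builder):
  # name and build_subnetwork are omitted here for brevity; this sketch
  # focuses on attaching hooks to the subnetwork's train op.

  def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,
                                iteration_step, summary, previous_ensemble):
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3)
    train_op = optimizer.minimize(loss, var_list=var_list)
    # A SessionRunHook that runs only while this subnetwork trains:
    logging_hook = tf.compat.v1.train.LoggingTensorHook(
        {"loss": loss}, every_n_iter=100)
    return adanet.subnetwork.TrainOpSpec(
        train_op=train_op, chief_hooks=(), hooks=(logging_hook,))
```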

AdaNet v0.4.0

30 Nov 00:05
  • Add a shared field to adanet.Subnetwork that deprecates and replaces persisted_tensors with a more flexible mechanism (see the sketch after this list).
  • Officially support multi-head learning with or without dict labels.
  • Rebuild the ensemble across iterations in Python without a frozen graph. This allows users to share more than Tensors between iterations, including Python primitives, objects, and lambdas, for greater flexibility. Eliminating reliance on a MetaGraphDef proto also eliminates I/O, allowing for faster training and better future-proofing.
  • Allow users to pass custom eval metrics when constructing an adanet.Estimator.
  • Add adanet.AutoEnsembleEstimator for learning to ensemble tf.estimator.Estimator instances.
  • Pass labels to adanet.subnetwork.Builder's build_subnetwork method.
  • The TRAINABLE_VARIABLES collection will only contain variables relevant to the current adanet.subnetwork.Builder, so not passing var_list to the optimizer.minimize will lead to the same behavior as passing it in by default.
  • Using tf.summary inside adanet.subnetwork.Builder is now equivalent to using the adanet.Summary object.
  • Accessing the global_step from within an adanet.subnetwork.Builder will return the iteration_step variable instead, so that the step starts at zero at the beginning of each iteration. One subnetwork incrementing the step will not affect other subnetworks.
  • Summaries will automatically scope themselves to the current subnetwork's scope. Similar summaries will now be grouped together correctly across subnetworks in TensorBoard. This eliminates the need for the tf.name_scope("") hack.
  • Provide an override to force the AdaNet ensemble to grow at the end of each iteration.
  • Correctly seed TensorFlow graph between iterations. This breaks some tests that check the outputs of adanet.Estimator models.
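
A sketch of the shared field, with illustrative shapes and values; unlike persisted_tensors, it can carry arbitrary Python objects between iterations:

```python
import adanet
import tensorflow as tf

# Illustrative inputs; shapes and values are placeholders.
inputs = tf.random.normal([8, 4])
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
logits = tf.keras.layers.Dense(1)(hidden)

subnetwork = adanet.Subnetwork(
    last_layer=hidden,
    logits=logits,
    complexity=tf.constant(1.0),
    # Unlike persisted_tensors, `shared` may hold arbitrary Python objects,
    # e.g. letting the next iteration's builder read this network's depth:
    shared={"num_layers": 1},
)
```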

AdaNet v0.3.0

07 Nov 19:32
  • Add official support for tf.keras.layers.
  • Fix bug that incorrectly pruned colocation constraints between iterations.

AdaNet v0.2.0

02 Nov 15:12
  • Estimator no longer creates eval metric ops in train mode.
  • Freezer no longer converts Variables to constants, allowing AdaNet to handle Variables larger than 2GB.
  • Fix some errors with Python 3.