
Releases: denizyuret/Knet.jl

Knet v1.1.1 Performance Improvements

01 Oct 03:13
  • General performance improvements.
  • New GPU memory manager. (with @ekinakyurek)
  • New logging system using Base.CoreLogging.
  • New cuda macros and profiling system using TimerOutputs.
  • Tutorial available on Colab. (with @jekbradbury)
  • Added cpucopy, gpucopy serialization. (with @ekinakyurek)
  • Added softmax, logsoftmax, logistic loss and binary cross-entropy. (@CarloLucibello, @ekinakyurek)
  • Added elu and selu. (with @CarloLucibello; see the usage sketch after this list)
  • Speed up matmul gradient avoiding transpose.
  • Defined permutedims(::KnetMatrix)
  • Fixed scripts under Knet/prof, added new results.
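
For reference, a minimal usage sketch of the new activations listed above; the default normalization dimensions and the elementwise broadcasting shown here are assumptions, not a spec:

using Knet
x  = randn(Float32, 10, 4)   # 10 scores for each of 4 instances
p  = softmax(x)              # probabilities (normalization dims left at their default)
lp = logsoftmax(x)           # log-probabilities, numerically safer than log.(p)
a  = elu.(x)                 # exponential linear unit, applied elementwise
s  = selu.(x)                # scaled (self-normalizing) elu, applied elementwise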

Knet v1.1.0 with the callable object interface.

12 Sep 17:22

The new suggested way to define models/layers is as callable objects.

struct Linear; w; b; end
(m::Linear)(x) = m.w * x .+ m.b

This way a model acts as a (predict) function as well as a collection of parameters:

m = Linear(randn(10,784), zeros(10))
y = m(x)              # gives the prediction
for p in params(m)    # iterates over parameters
    println(size(p))
end

For training, the parameters should be marked as AutoGrad.Param objects:

m = Linear(Param(randn(10,784)), Param(zeros(10)))
y = m(x)             # returns the same y value as above (test mode)
y = @diff m(x)       # gives an object with prediction as well as grad info
value(y)             # gives the prediction value
grad(y, m.w)         # gives the gradient of value(y) wrt m.w
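
Putting the pieces together, a minimal training-step sketch could look like the following; the loss definition, learning rate, and manual SGD update are illustrative assumptions, not part of the release:

loss(m, x, ygold) = nll(m(x), ygold)              # negative log likelihood (see the v0.9.0 notes)
function sgdstep!(m, x, ygold; lr=0.1)
    J = @diff loss(m, x, ygold)                   # tape holding the loss value and gradient info
    for p in params(m)
        g = grad(J, p)                            # gradient of the loss with respect to p
        g === nothing || (value(p) .-= lr .* g)   # in-place SGD update on the underlying array
    end
    return value(J)                               # loss value for monitoring
end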

This interface is not mandatory; everything should be backward compatible and old Knet
code should continue to work. However, the new interface should allow people to easily
define their own layer/model collections and thus address Issues #144, #147 and #341.

I am working on a minimal set of utilities for the new interface on the dy/1.1 branch:

  • A new train! function that works with the new interface.
  • param and param0 make declaring parameters easier (see the sketch after this list).
  • params recursively finds all Params in a given object.
  • Additional loss and update methods can handle callable objects.
  • Better RNN interface: m=LSTM(input,hidden); m(x) => y
  • Possibly other layers/models defined for MLP and CNNs.
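
As an illustration of where this is heading, a hypothetical layer built on these utilities might look like the sketch below; Dense, the param/param0 signatures, and the initialization behavior are assumptions based on the list above, not a finalized API:

struct Dense; w; b; f; end
Dense(i::Int, o::Int; f=relu) = Dense(param(o, i), param0(o), f)   # param: random init, param0: zeros (assumed)
(d::Dense)(x) = d.f.(d.w * x .+ d.b)

m = Dense(784, 10)
y = m(randn(Float32, 784, 100))   # 10x100 predictions
for p in params(m)                # finds both Params recursively
    println(summary(p))
end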

I am not sure about the last item because I'd rather keep the Knet interface minimal and let
people work on their own model/layer collections. If you want to see examples, I am updating
the Knet/examples/dl-tutorial notebooks as I work on the new interface.

Knet v1.0.1 Release Notes

31 Aug 15:17
  • Improved gpu diagnostics.
  • build.jl no longer depends on Knet.
  • AutoGrad 1.0.1 compatibility fixes.
  • Fixed some examples and notebooks.
  • Fixed Documenter, avoiding python dependency.
  • JLD2 FileIO interface (@ekinakyurek).

Knet v1.0.0: Julia 1.0 compatibility release.

20 Aug 03:36

Knet v0.9.2 Release Notes

14 Aug 12:15
  • Bounded package requirements; in particular REQUIRE now restricts Julia to 0.6 (at least 0.6 and below 0.7).
  • Updated dl-tutorial.
  • CUDNN 7.1.4 compatibility fixes.

Knet v0.9.1 Release Notes

28 May 13:47

Compatibility

  • Library discovery now done using CUDAapi.
  • GPU direct peer access support (@cangumeli).
  • Removed gpu-architecture compiler flags from build.jl to support machines with heterogeneous GPU types.
  • Added JuliaBox compatibility to Jupyter notebooks.

General

  • Fixed the default dropout behavior, which was ignoring the pdrop argument and not applying dropout to the input.
  • Added support for mean(f::Function,x::KnetArray).
  • Added vcat support for scalar arguments.
  • Fixed batchnorm CPU backward pass (@CarloLucibello).

Documentation and Examples

Knet v0.9.0 speed release

26 Dec 14:08

Compatibility

  • Windows GPU support implemented.
  • MacOS GPU support improved: nvml only used when available.
  • CUDA up to v"9.1" and cuDNN up to v"7.0.5" are tested.
  • Pre-0.6 Julia versions no longer supported.

General

  • rnninit and rnnforw implement cudnn RNNs (with @cangumeli).
  • conv4 performance significantly improved using cudnnFind.
  • batchnorm implemented using CUDNN (@cangumeli).
  • logp performance significantly improved using cudnnSoftmaxForward.
  • DBGFLAGS and PROFILING constants defined in Knet.jl.
  • optimizers creates optimization structs for the whole model.
  • dropout now detects training mode automatically.
  • nll returns negative log likelihood given score matrix and answer index vector.
  • accuracy returns ratio of correct answers given score matrix and answer index vector.
  • minibatch(x,y,b) returns a batch iterator (see the sketch after this list).
  • knetgc is now exported; it calls cudaFree on garbage-collected pointers.
  • randn!, mean(a,dims), and reshape with Colon are now supported by KnetArray (@CarloLucibello).
  • build.jl uses CUDAapi and CUDAdrv if they are installed.
  • Got rid of the Combinatorics dependency in test.
  • curandInit called at initialization to prevent memory fill before first dropout.
  • deconv4 bug fixed (@ilkerkesen).
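
To illustrate the new data and evaluation utilities (minibatch, nll, accuracy, referenced above), here is a hedged sketch with a random linear scorer standing in for a model; the data and scorer are placeholders:

using Knet
x = rand(Float32, 784, 1000)            # 1000 instances with 784 features each
y = rand(1:10, 1000)                    # integer class labels
data = minibatch(x, y, 100)             # iterator over (x, y) batches of size 100
w = 0.01f0 * randn(Float32, 10, 784)    # random linear scorer standing in for a trained model
for (xb, yb) in data
    scores = w * xb                     # 10x100 score matrix
    println((nll(scores, yb), accuracy(scores, yb)))
end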

Documentation and Examples

  • New benchmarking notebooks under examples/DeepLearningFrameworks (with @kirnap, @ilkarman).
  • Knet/data now has download utilities: cifar.jl, fashion-mnist.jl, gutenberg.jl, housing.jl, imagenet.jl, imdb.jl, mikolovptb.jl, mnist.jl, treebank.jl, wikiner.jl
  • All examples updated to use the new RNNs and replaced/supported with IJulia notebooks.
  • New variational-autoencoder example (@CarloLucibello).
  • DyNet benchmark examples added (@ilkerkesen).
  • Deep Convolutional Generative Adversarial Networks example added (@ilkerkesen).

Knet v0.8.5 bugfix release

21 Oct 07:15

Knet v0.8.5 Release Notes

308ab57 2017-10-20

General

  • Fixed memory leak with certain broadcast kernels (@ilkerkesen).
  • Fixed dropout efficiency bug introduced in 0.8.4.
  • Added conditional support for SpecialFunctions.
  • Added Nesterov optimizer (@CarloLucibello).
  • Removed Compat dependency.
  • Proper handling of concatenating KnetArrays with incompatible eltypes (#175).
  • Fixed dotted function handling in Julia 0.5 (#173).

Documentation and Examples

  • Fixed a Julia 0.6 compatibility problem in examples/mnist.jl.
  • charlm.jl can now save generated text (@DoguD).
  • Added fashion-mnist.jl example (@quaertym).
  • Added missing MNIST.loaddata() to tutorial.jl.
  • Fixed a Julia 0.4 compatibility problem in examples/vgg.jl.

Knet v0.8.4 Julia6 compat release

09 Sep 13:49
  • Julia 0.6 compatibility fixes.
  • Fixed compiler flags in Makefile for compatibility.
  • Fixed a Unicode character problem in charlm.

Knet v0.8.3 Release Notes

18 May 06:58

General

  • KnetArray support for general broadcasting operations (@EnisBerk).
  • KnetArray support for general reduction operations (@ilkerkesen).
  • KnetArray support for permutedims up to 5D (@ekyurdakul).
  • KnetArray indexing support for Int, Colon, UnitRange, StepRange, CartesianIndex, Array{Int}, Array{Bool}, Array{CartesianIndex}. Most combinations work for 2-D. N-D indexing incomplete. See @doc KnetArray for details.
  • KnetArray support for multi-argument hcat and vcat.
  • KnetArray support for saving to and loading from JLD files.
  • Implemented hyperband and goldensection hyperparameter optimization algorithms.
  • Added per-weight-array gradient clipping to update!.
  • Fixed update! issues with grad::Void and other mismatched w,grad types.
  • Fixed update! issues with grad::Dict missing keys.
  • Added setseed to call srand on both the CPU and GPU.
  • Added dropout(a,p) as a Knet primitive (see the sketch after this list).
  • Implemented mean(::KnetArray,r).
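
A short sketch of the new dropout primitive mentioned above; the array and drop probability are illustrative:

using Knet
x = rand(Float32, 100, 10)
y = dropout(x, 0.5)   # zeroes each entry with probability 0.5 and rescales the survivors (inverted dropout)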

Testing and Benchmarking

  • Benchmarks added under prof/ for reduction, broadcast, concat, and conv operations, and for the rnnlm and s2s models.

Documentation and Examples

  • RNN chapter and IJulia notebook added.
  • Updated the CNN chapter.
  • Solutions to installation problems documented.
  • Fixed vgg and resnet demos to use the new version of Images.jl and to work on CPU-only machines. Fixed batch normalization bug in resnet. (@ilkerkesen)
  • Fixed charlm demo to use indexing operations, Adam, and dropout.
  • Added rnnlm demo.