
Releases: eth-sri/diffai

DiffAI Version 3

01 Apr 11:10

The version from the arXiv paper https://arxiv.org/abs/1903.12519

Updates

  • Added a DSL for specifying complex objectives and complex training schedules.
  • Added abstract layers for increasing precision in deeper networks.
  • Added ONNX export.
  • Included examples of trained networks such as ResNet-34.

Abstract

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, (ii) a flexible domain specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system.
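The core idea, propagating sound bounds through the network so a worst-case loss can be differentiated, can be illustrated with a minimal interval (box) sketch in plain Python. All function names below are illustrative assumptions for this sketch; the actual DiffAI system is built on PyTorch and uses its own DSL and abstract domains.

```python
# Minimal sketch of abstract interpretation with interval (box) bounds.
# Illustrative only: the real DiffAI implementation differs.

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b.

    For each output, pairing positive weights with the matching bound
    yields an interval that soundly encloses every concrete output."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        out_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j])
                                 for j, w in enumerate(row)))
        out_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j])
                                 for j, w in enumerate(row)))
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

def robust_margin(x, eps, W, b, target):
    """Worst-case margin of the target logit over the eps-box around x."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    lo, hi = affine_interval(lo, hi, W, b)
    lo, hi = relu_interval(lo, hi)
    worst_other = max(h for i, h in enumerate(hi) if i != target)
    return lo[target] - worst_other
```

A training objective in the spirit of the abstract would then mix a concrete loss on the point `x` with an abstract loss derived from this worst-case margin; the DSL mentioned above is what lets such combinations be stated declaratively.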

Version 1.0

14 Nov 15:57
590856a

The initial version, used to reproduce the results in the ICML paper.