# -*- tab-width: 2; indent-tabs-mode: nil; coding: utf-8 -*-
===============
PMDA CHANGELOG
===============
The rules for this file:
  * entries are sorted newest-first.
  * summarize sets of changes - don't reproduce every git log comment here.
  * don't ever delete anything.
  * keep the format consistent (79 char width, M/D/Y date format) and do not
    use tabs but use spaces for formatting
  * accompany each entry with github issue/PR number (Issue #xyz)
  * release numbers follow "Semantic Versioning" http://semver.org
------------------------------------------------------------------------------
2019/**/** VOD555, nawtrey

  * 0.3.0

Enhancements
  * add timers for block-IO and block-compute
  * store block information in `_block` attribute (Issue #89)
  * add parallel density class (Issue #8)
  * add parallel RMSF class (Issue #90)

Fixes
  * default _conclude() in pmda.custom.AnalysisFromFunction fails with
    scalar per-frame data (Issue #87)

Changes
  * update all docs with the SciPy paper reference (Issue #98)
2019/05/23 VOD555

  * 0.2.1

Enhancements (internal)
  * add timer for the time to start the workers
11/02/18 VOD555, richardjgowers, mimischi, iparask, orbeckst, kain88-de

  * 0.2.0

Enhancements
  * add timing for _conclude and _prepare (Issue #49)
  * add parallel particle-particle RDF calculation module pmda.rdf (Issue #41)
  * add readonly_attributes context manager to ParallelAnalysisBase
  * add parallel implementation of Leaflet Finder (Issue #47)
  * add parallel site-specific RDF calculation module pmda.rdf.InterRDF_s
    (Issue #60)

Fixes
  * stacking results failed with an odd number of frames (Issue #58)
  * always distribute frames over blocks so that no empty blocks are
    created ("balanced blocks", Issue #71)

Changes
  * requires dask >= 0.18.0 and respects/requires the globally set dask
    scheduler (Issue #48)
  * removed the 'scheduler' keyword from the run() method; use
    dask.config.set(scheduler=...) as recommended in the dask docs
    (see the sketch below)
  * uses the single-threaded scheduler if n_jobs=1 (Issue #17)
  * n_jobs=1 is now the default for run() (used to be n_jobs=-1)
  * dask.distributed is now a dependency
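
  A minimal sketch of the recommended usage after this change (assuming an
  existing MDAnalysis Universe `u` and, purely for illustration, the parallel
  RMSD class from pmda.rms as the analysis):

    import dask
    from pmda.rms import RMSD

    # select the dask scheduler globally instead of per run() call
    dask.config.set(scheduler='processes')

    # n_jobs now defaults to 1; request workers explicitly
    rmsd = RMSD(u.atoms, u.atoms).run(n_jobs=4)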
06/07/18 orbeckst

  * 0.1.1

Fixes
  * the 0.1.0 release was not pip-installable and did not ship code (d'oh);
    this release is pip-installable (Issue #42)
05/11/18 kain88-de, orbeckst

  * 0.1.0

Enhancements
  * add base class for parallel analysis
  * add parallel RMSD class (with superposition)
  * add parallel contacts class
  * add parallel AnalysisFromFunction class

Deprecations

Fixes

Changes