<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>ABAGAIL</title>
</head>
<body bgcolor="#D4D9C3">
<font face="Arial,Helvetica">
<a href="index.html">Go back to Andrew Guillory's home page</a>
<h1>ABAGAIL</h1>
<b>the <i>Absolute Best</i> Andrew Guillory Artificial Intelligence Library</b>
<br>
<p>This library is the result of close to a year of research
and class work in artificial intelligence (AI). It contains
a number of interconnected Java packages that implement
machine learning and artificial intelligence algorithms.
These are artificial intelligence
algorithms implemented for the kind of people who
like to implement algorithms themselves.</p>
<b>June 27, 2005</b><p>Fixed a bug in the SVM code and expanded
the reinforcement learning package.</p>
<a href="ABAGAIL.zip">Download Java Source Code</a>
<br>
<h2>Features</h2>
<h3>Hidden Markov Models</h3>
<ul>
<li>Baum-Welch reestimation algorithm, scaled forward-backward
algorithm, Viterbi algorithm</li>
<li>Support for Input-Output Hidden Markov Models</li>
<li>Write your own output or transition probability distribution
or use the provided distributions, including neural network
based conditional probability distributions</li>
</ul>
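<p>To give a flavor of what the Viterbi algorithm does, here is a minimal standalone sketch (not the library's classes): it finds the most likely hidden state sequence of a discrete HMM, working in log space to sidestep the underflow that the scaled forward-backward algorithm also addresses. The class name and array layout are illustrative choices.</p>

```java
// Standalone Viterbi sketch: pi[i] initial probs, a[i][j] transition
// probs, b[i][o] emission probs, obs the observation sequence.
public class ViterbiSketch {
    public static int[] viterbi(double[] pi, double[][] a, double[][] b, int[] obs) {
        int n = pi.length, t = obs.length;
        double[][] delta = new double[t][n]; // best log-prob ending in state j at time k
        int[][] psi = new int[t][n];         // backpointers
        for (int i = 0; i < n; i++)
            delta[0][i] = Math.log(pi[i]) + Math.log(b[i][obs[0]]);
        for (int k = 1; k < t; k++)
            for (int j = 0; j < n; j++) {
                double best = Double.NEGATIVE_INFINITY; int arg = 0;
                for (int i = 0; i < n; i++) {
                    double s = delta[k - 1][i] + Math.log(a[i][j]);
                    if (s > best) { best = s; arg = i; }
                }
                delta[k][j] = best + Math.log(b[j][obs[k]]);
                psi[k][j] = arg;
            }
        int[] path = new int[t];
        for (int i = 1; i < n; i++)
            if (delta[t - 1][i] > delta[t - 1][path[t - 1]]) path[t - 1] = i;
        for (int k = t - 1; k > 0; k--) path[k - 1] = psi[k][path[k]];
        return path;
    }
}
```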
<h3>Neural Networks</h3>
<ul>
<li>Feed-forward backpropagation neural networks of
arbitrary topology</li>
<li>Configurable error functions with sum of squares,
weighted sum of squares</li>
<li>Multiple activation functions with logistic sigmoid,
linear, tanh, and softmax</li>
<li>Choose your weight update rule with standard update rule,
standard update rule with momentum, Quickprop, RPROP</li>
<li>Online and batch training</li>
</ul>
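<p>As a toy illustration of the standard online update rule (not the library's network classes), here is a single logistic-sigmoid neuron trained by gradient descent on squared error; the names are hypothetical.</p>

```java
// One logistic neuron: weights[0..n-1] are input weights, weights[n] the bias.
public class NeuronSketch {
    public static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static double forward(double[] w, double[] x) {
        double s = w[w.length - 1]; // bias term
        for (int i = 0; i < x.length; i++) s += w[i] * x[i];
        return sigmoid(s);
    }

    // One online step on squared error; sigmoid'(s) = y * (1 - y)
    public static void train(double[] w, double[] x, double target, double rate) {
        double y = forward(w, x);
        double delta = (target - y) * y * (1.0 - y);
        for (int i = 0; i < x.length; i++) w[i] += rate * delta * x[i];
        w[w.length - 1] += rate * delta;
    }
}
```

<p>Repeating the update over a small data set (for example the AND function, which a single neuron can separate) drives the outputs toward the targets.</p>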
<h3>Support Vector Machines</h3>
<ul>
<li>Fast training with the sequential minimal optimization
algorithm</li>
<li>Support for linear, polynomial, tanh, radial basis
function kernels</li>
</ul>
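<p>The SMO trainer itself is involved, but the decision function it produces is simple: f(x) = sum over support vectors of alpha_i * y_i * K(x_i, x), plus a bias. The sketch below (illustrative, not the library's API) shows that function with the linear and radial basis function kernels mentioned above.</p>

```java
// Kernel evaluation and the SVM decision function over support vectors.
public class KernelSketch {
    public static double linear(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }
    public static double rbf(double[] a, double[] b, double gamma) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.exp(-gamma * d);
    }
    // alphaY[i] = alpha_i * y_i for support vector sv[i]; linear kernel here
    public static int classify(double[][] sv, double[] alphaY, double bias, double[] x) {
        double f = bias;
        for (int i = 0; i < sv.length; i++) f += alphaY[i] * linear(sv[i], x);
        return f >= 0 ? 1 : -1;
    }
}
```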
<h3>Decision Trees</h3>
<ul>
<li>Information gain or GINI index split criteria</li>
<li>Binary or all attribute value splitting</li>
<li>Chi-square significance test pruning with configurable
confidence levels</li>
<li>Boosted decision stumps with AdaBoost</li>
</ul>
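<p>The two split criteria reduce to short formulas over the class counts at a node; here they are as a standalone sketch (not the library's tree classes).</p>

```java
// Split criteria from class counts: entropy (information gain) and GINI.
public class SplitCriteria {
    public static double entropy(int[] counts) {
        int total = 0;
        for (int c : counts) total += c;
        double h = 0;
        for (int c : counts) {
            if (c == 0) continue;
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2)); // log base 2
        }
        return h;
    }
    public static double gini(int[] counts) {
        int total = 0;
        for (int c : counts) total += c;
        double g = 1.0;
        for (int c : counts) {
            double p = (double) c / total;
            g -= p * p;
        }
        return g;
    }
}
```

<p>A perfectly mixed node (half each of two classes) has entropy 1 bit and GINI 0.5; a pure node scores 0 under both, which is what the splitter tries to reach.</p>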
<h3>K Nearest Neighbors</h3>
<ul>
<li>Fast kd-tree implementation for instance based algorithms of
all kinds</li>
<li>KNN Classifier with weighted or non-weighted classification,
customizable distance function</li>
</ul>
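<p>The kd-tree only speeds up the neighbor search; the classification itself is a vote among the k nearest points. This brute-force linear-scan sketch (illustrative names) gives the same answers on small data.</p>

```java
// Brute-force k-NN: majority vote among the k nearest points
// under Euclidean distance.
public class KnnSketch {
    public static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }
    public static int classify(double[][] xs, int[] ys, int numClasses, int k, double[] q) {
        Integer[] idx = new Integer[xs.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // sort training indices by distance to the query
        java.util.Arrays.sort(idx, (i, j) -> Double.compare(dist(xs[i], q), dist(xs[j], q)));
        int[] votes = new int[numClasses];
        for (int i = 0; i < k; i++) votes[ys[idx[i]]]++;
        int best = 0;
        for (int c = 1; c < numClasses; c++)
            if (votes[c] > votes[best]) best = c;
        return best;
    }
}
```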
<h3>Linear Algebra Algorithms</h3>
<ul>
<li>Basic matrix and vector math,
a variety of matrix decompositions based on
the standard algorithms</li>
<li>Solve square systems, upper triangular systems,
lower triangular systems, least squares</li>
<li>Singular Value Decomposition, QR Decomposition,
LU Decomposition, Schur Decomposition, Symmetric Eigenvalue Decomposition,
Cholesky Factorization</li>
<li>Make your own matrix decomposition with the easy to use
Householder Reflection and Givens Rotation classes</li>
</ul>
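<p>As an example of the triangular solvers, here is back substitution for an upper triangular system Ux = y, the final step of both the QR and LU solve paths; the code is an illustrative sketch rather than the library's matrix classes.</p>

```java
// Back substitution: solve Ux = y for upper triangular U.
public class BackSubstitution {
    public static double[] solve(double[][] u, double[] y) {
        int n = y.length;
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = y[i];
            for (int j = i + 1; j < n; j++) s -= u[i][j] * x[j]; // subtract known terms
            x[i] = s / u[i][i];
        }
        return x;
    }
}
```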
<h3>Optimization Algorithms</h3>
<ul>
<li>Randomized hill climbing, simulated annealing,
genetic algorithms, and discrete dependency tree MIMIC</li>
<li>Make your own crossover functions, mutation functions, neighbor
functions, probability distributions, or use the provided ones.</li>
<li>Optimize the weights of neural networks and solve
travelling salesman problems</li>
</ul>
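<p>The core loop shared by these randomized optimizers is short. Here is a generic simulated-annealing sketch (not the library's optimizer API) maximizing a toy fitness, the number of ones in a bit string: uphill moves are always kept, downhill moves are kept with probability exp(delta / T) as the temperature T cools.</p>

```java
// Simulated annealing over bit strings with one-bit-flip neighbors.
public class AnnealingSketch {
    // Toy fitness: count of ones in the bit string
    public static int ones(int[] bits) {
        int s = 0;
        for (int b : bits) s += b;
        return s;
    }
    public static int[] anneal(int n, double t, double cooling, int iters, long seed) {
        java.util.Random rnd = new java.util.Random(seed);
        int[] cur = new int[n];
        for (int i = 0; i < iters; i++) {
            int flip = rnd.nextInt(n);
            cur[flip] ^= 1;                        // propose a one-bit neighbor
            int delta = cur[flip] == 1 ? 1 : -1;   // change in the ones count
            if (delta < 0 && rnd.nextDouble() >= Math.exp(delta / t))
                cur[flip] ^= 1;                    // reject: undo the flip
            t *= cooling;                          // cool the temperature
        }
        return cur;
    }
}
```

<p>With the cooling factor below 1, late iterations behave like randomized hill climbing, which is why the two algorithms sit naturally in the same package.</p>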
<h3>Graph Algorithms</h3>
<ul>
<li>Kruskal's MST and DFS</li>
</ul>
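<p>For the DFS half of that pair, an iterative sketch over an adjacency list (illustrative, not the library's graph classes) looks like this:</p>

```java
// Iterative depth-first search returning the visit order.
public class DfsSketch {
    public static java.util.List<Integer> dfs(int[][] adj, int start) {
        java.util.List<Integer> order = new java.util.ArrayList<>();
        boolean[] seen = new boolean[adj.length];
        java.util.Deque<Integer> stack = new java.util.ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int u = stack.pop();
            if (seen[u]) continue;
            seen[u] = true;
            order.add(u);
            // push neighbors in reverse so lower-numbered ones are visited first
            for (int i = adj[u].length - 1; i >= 0; i--) stack.push(adj[u][i]);
        }
        return order;
    }
}
```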
<h3>Clustering Algorithms</h3>
<ul>
<li>EM with Gaussian mixtures, K-means</li>
</ul>
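<p>K-means is the simpler of the two; this one-dimensional sketch (illustrative, and with EM over Gaussian mixtures left out) shows the alternation it shares with EM: assign points to the nearest mean, then recompute each mean, until the means settle.</p>

```java
// One-dimensional k-means: fixed number of sweeps for simplicity.
public class KMeansSketch {
    public static double[] kmeans(double[] xs, double[] means, int iters) {
        double[] m = means.clone();
        for (int it = 0; it < iters; it++) {
            double[] sum = new double[m.length];
            int[] count = new int[m.length];
            for (double x : xs) {
                int best = 0; // index of the nearest mean
                for (int k = 1; k < m.length; k++)
                    if (Math.abs(x - m[k]) < Math.abs(x - m[best])) best = k;
                sum[best] += x;
                count[best]++;
            }
            for (int k = 0; k < m.length; k++)
                if (count[k] > 0) m[k] = sum[k] / count[k];
        }
        return m;
    }
}
```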
<h3>Data Preprocessing</h3>
<ul>
<li>PCA, ICA, LDA, Randomized Projections</li>
<li>Convert from continuous to discrete, discrete to binary</li>
</ul>
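<p>The two conversions in that last bullet are simple on their own; here is an illustrative sketch (not the library's filter classes) of equal-width binning for continuous-to-discrete and one-hot encoding for discrete-to-binary.</p>

```java
// Attribute conversions: equal-width binning and one-hot encoding.
public class DiscretizeSketch {
    public static int toDiscrete(double x, double min, double max, int bins) {
        int b = (int) ((x - min) / (max - min) * bins);
        return Math.min(Math.max(b, 0), bins - 1); // clamp the top edge into range
    }
    public static int[] toBinary(int value, int numValues) {
        int[] oneHot = new int[numValues];
        oneHot[value] = 1;
        return oneHot;
    }
}
```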
<h3>Reinforcement Learning</h3>
<ul>
<li>Value and policy iteration for Markov decision processes</li>
</ul>
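<p>Value iteration is just the Bellman backup V(s) = max over actions of [ R(s,a) + gamma * V(s') ] applied until the values converge. This sketch for a tiny deterministic MDP (illustrative names and layout, not the library's API) shows the loop:</p>

```java
// Value iteration for a deterministic MDP:
// next[s][a] is the successor state, reward[s][a] the immediate reward.
public class ValueIterationSketch {
    public static double[] iterate(int[][] next, double[][] reward, double gamma, int sweeps) {
        int n = next.length;
        double[] v = new double[n];
        for (int k = 0; k < sweeps; k++) {
            double[] nv = new double[n];
            for (int s = 0; s < n; s++) {
                double best = Double.NEGATIVE_INFINITY;
                for (int a = 0; a < next[s].length; a++)
                    best = Math.max(best, reward[s][a] + gamma * v[next[s][a]]);
                nv[s] = best; // Bellman backup for state s
            }
            v = nv;
        }
        return v;
    }
}
```

<p>Policy iteration alternates the same backup (for a fixed policy) with a greedy policy improvement step instead of sweeping to convergence.</p>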
</font>
</body>
</html>