Exponential and logarithm optimisation resources
Optimising exponential and logarithm functions is critical for machine learning, as many activation and loss functions rely on them, especially:
- Negative log-likelihood and cross-entropy loss
- Sigmoid
- Softmax
- Softmax cross-entropy (using the log-sum-exp technique)
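
As a reminder of what the log-sum-exp technique buys, here is a minimal C sketch (the function name is illustrative): subtracting the maximum before exponentiating keeps every `exp` argument non-positive, so large logits cannot overflow.

```c
#include <math.h>
#include <stddef.h>

/* Numerically stable log(sum_i exp(x[i])).
   Subtracting the maximum keeps every exp() argument <= 0,
   so large inputs cannot overflow to infinity. */
double logsumexp(const double *x, size_t n) {
    double maxv = x[0];
    for (size_t i = 1; i < n; ++i)
        if (x[i] > maxv) maxv = x[i];

    double sum = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum += exp(x[i] - maxv);

    return maxv + log(sum);
}
```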
The default implementations in `<math.h>` are very slow. The usual way to implement faster versions is via polynomial approximation (see the sketches after the list below):
- Taylor Expansion: https://en.wikipedia.org/wiki/Taylor_series#Exponential_function
- Remez algorithm and Chebyshev approximation: https://en.wikipedia.org/wiki/Remez_algorithm
- Euler's continued fractions: https://en.wikipedia.org/wiki/Euler%27s_continued_fraction_formula#The_exponential_function
- Padé approximant: https://en.wikipedia.org/wiki/Padé_table#An_example_–_the_exponential_function
- Range reduction: using the fact that e^x = 2^(x / ln 2), split x / ln 2 into an integer part, which can be computed with shifts directly into the floating-point exponent bits, and a fractional part handled by a small polynomial
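
To make the polynomial route concrete, here is a minimal C sketch of the simplest option, a truncated Taylor series for e^x evaluated with Horner's scheme (the function name and the degree 7 are choices of this sketch). It is only accurate for small |x|, which is exactly why it is paired with the range reduction from the last bullet.

```c
/* Degree-7 truncated Taylor series of exp around 0, in Horner form.
   Coefficients are 1/k!; only accurate for small |x|. */
static double exp_taylor7(double x) {
    return 1.0 + x * (1.0 + x * (1.0/2 + x * (1.0/6 + x * (1.0/24
               + x * (1.0/120 + x * (1.0/720 + x * (1.0/5040)))))));
}
```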
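
And a minimal C sketch tying the two ideas together, combining such a polynomial with the range reduction above (assumes IEEE-754 doubles and ignores overflow, underflow, NaN, and infinity; `fast_exp` and the degree-5 polynomial are illustrative choices, not a production implementation):

```c
#include <math.h>
#include <stdint.h>
#include <string.h>

/* e^x = 2^(x / ln 2) = 2^n * 2^f, with n an integer and small |f|. */
static double fast_exp(double x) {
    const double log2e = 1.4426950408889634;  /* 1 / ln(2) */
    const double ln2   = 0.6931471805599453;

    double t = x * log2e;          /* x / ln(2)                         */
    double n = floor(t + 0.5);     /* nearest integer                   */
    double f = (t - n) * ln2;      /* 2^(t - n) = e^f, |f| <= ln(2)/2   */

    /* Degree-5 truncated Taylor series for e^f on this small range. */
    double p = 1.0 + f * (1.0 + f * (1.0/2 + f * (1.0/6
                   + f * (1.0/24 + f * (1.0/120)))));

    /* 2^n: shift n straight into the exponent bits of a double. */
    uint64_t bits = (uint64_t)((int64_t)n + 1023) << 52;
    double pow2n;
    memcpy(&pow2n, &bits, sizeof pow2n);

    return pow2n * p;
}
```

At this degree the relative error is on the order of 1e-6; a Remez-optimised polynomial of the same degree would be more accurate still.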