Fix automated conversion in adaptive solve #317

Open · wants to merge 2 commits into master
Conversation

utkarsh530 (Member)

No description provided.

utkarsh530 (Member, Author)

@ChrisRackauckas, I'd like your comment here. Currently, StepRangeLen promotes its internal ref type to FP64, as done here: https://github.com/JuliaLang/julia/blob/2fb06a7c25fa2b770a8f6e8a45fec48c002268e4/base/twiceprecision.jl#L369

From the comment in that file itself:

# Necessary for creating nicely-behaved ranges like r = 0.1:0.1:0.3
# that return r[3] == 0.3.  Otherwise, we have roundoff error due to
#     0.1 + 2*0.1 = 0.30000000000000004
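The behavior that comment describes is easy to reproduce: plain Float64 accumulation drifts, while the TwicePrecision-backed range returns the exact endpoint:

```julia
# Naive accumulation in Float64 misses the endpoint due to roundoff:
0.1 + 2*0.1          # 0.30000000000000004

# The default range machinery compensates and hits the endpoint exactly:
(0.1:0.1:0.3)[3]     # 0.3
```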

So by default it creates FP64 types inside the StepRangeLen, like this:

julia> saveat = 0.1f0:0.1f0:10.0f0
0.1f0:0.1f0:10.0f0

julia> typeof(saveat)
StepRangeLen{Float32, Float64, Float64, Int64}

This causes issues on backends that do not fully support double precision (e.g. Apple and Intel GPUs).

The current PR explicitly constructs the StepRangeLen entirely from the types of the range arguments, so the whole range stays FP32 when the endpoints are FP32. The tests fail because they still carry FP64 types, which generate slightly different values within the ranges due to roundoff. What should we do in this case: update the tests, or remove our explicit cast and instead give warnings when saveat is passed as a range on backends with limited double-precision support?
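For reference, a minimal sketch of the kind of explicit construction described above (the literal calls here are illustrative, not the PR's actual code):

```julia
# Default colon syntax promotes the internal ref/step types to Float64:
r_default = 0.1f0:0.1f0:10.0f0
typeof(r_default)    # StepRangeLen{Float32, Float64, Float64, Int64}

# Constructing the StepRangeLen explicitly keeps every type parameter
# in Float32, so no Float64 value ever reaches the GPU backend:
r_f32 = StepRangeLen(0.1f0, 0.1f0, length(r_default))
typeof(r_f32)        # StepRangeLen{Float32, Float32, Float32, Int64}
```

The trade-off is exactly what the failing tests expose: without the Float64 TwicePrecision internals, elements of `r_f32` can differ from `r_default` by Float32 roundoff.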

ChrisRackauckas (Member)

Give nice errors when using saveat as ranges on backends with limited double-precision support.
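A minimal sketch of what such an error could look like (`check_saveat` and the `supports_float64` flag are hypothetical names for illustration, not an existing API):

```julia
# Hypothetical check: reject saveat ranges whose internal ref/step types
# are Float64 when the backend cannot execute double precision.
function check_saveat(saveat::StepRangeLen{T,R,S}, supports_float64::Bool) where {T,R,S}
    if !supports_float64 && (R === Float64 || S === Float64)
        throw(ArgumentError(
            "saveat has Float64 internals ($(typeof(saveat))), but this backend " *
            "does not support double precision. Construct the range explicitly " *
            "in Float32, e.g. StepRangeLen(0.1f0, 0.1f0, 100), or pass a Vector{Float32}."))
    end
    return saveat
end
```

A pure-Float32 range passes through unchanged, while the default colon-constructed range (with its Float64 internals) raises a descriptive ArgumentError instead of failing silently on the device.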
