Consistent and correct methods for changing the timestep on the fly. #26

Open · hdrake opened this issue Jul 31, 2020 · 8 comments
Labels: bug (Something isn't working)

@hdrake (Collaborator) commented Jul 31, 2020

Quoting @fonsp

I was looking at the new IO functions, I really like the feature of sharing your configuration!

However, I found that the new Economics struct contains baseline_emissions and extra_CO2, which are time series, so they depend on the Domain. This means that:

  1. Domain and Economics are not independent parts of ClimateModelParameters
  2. You can't modify the domain, in particular when using the default model parameters. Playing around with dt is a way to find out if the discretization affects the result, but that is no longer easy to do.

This also means that the MargoAPI can only use dt=5, even though I found that dt=20 gives representative results with better performance.

Some ideas:

Storing a time series:

  • If we store these as time series, then Domain should be an immutable struct, because changing it will either error, or lead to invisibly false results.
  • Store a time series with dt=1, and interpolate into different domains. This might be the easiest solution!
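For what it's worth, a minimal sketch of that interpolation idea, assuming Interpolations.jl is used (the package choice, grid, and stand-in values below are illustrative only, not part of the proposal):

using Interpolations

t_stored = 2020.0:1.0:2200.0                                # dense reference grid, dt = 1
q_stored = 10.0 .* exp.(-(t_stored .- 2020.0) ./ 100.0)     # stand-in for baseline_emissions stored at dt = 1

itp = LinearInterpolation(t_stored, q_stored)

t_domain = 2020.0:20.0:2200.0                               # whatever Domain the user chooses, e.g. dt = 20
q_domain = itp.(t_domain)                                   # emissions resampled onto that Domain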

Alternatives to storing a time series:

  • Store the parameters describing these series; for baseline_emissions, that would be the parameters of ramp_emissions.
    (I'm not sure what the plans are for extra_CO2.)
    This would limit what you can use for these parameters, though.

  • Store the function itself. So:

    mutable struct Economics
        ...
        baseline_emissions::Function
        extra_CO₂::Function
    end

The problem now is that JSON2 can't convert Functions. So instead, you could store the string:

mutable struct Economics
    ...
    baseline_emissions_code::String
    extra_CO₂_code::String
end

and use eval(Meta.parse(code)) to get a Julia function from the code. But this is obviously quite messy.
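For what it's worth, that string route would look roughly like this (the stored code string here is a made-up example):

code = "t -> 10.0 .* exp.(-(t .- 2020.0) ./ 100.0)"   # string as loaded from JSON
baseline_emissions = eval(Meta.parse(code))           # turn it back into a Function
baseline_emissions(2020.0:5.0:2200.0)                 # evaluate it on whatever Domain you like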

Let me know what you think!

For the API, I will just reconstruct these two timeseries from a different dt :)

@hdrake (Collaborator, Author) commented Jul 31, 2020

Quoting @fonsp again

Changed my mind:
I think that storing the ramp_emissions parameters is the best solution because it is the simplest.

People could still use a custom shape for the emissions curve by overriding the Julia methods, which is the same mechanism used to modify anything else in ClimateMARGO.jl. But maybe the choice depends on how common you think it will be to use baseline emissions that do not have the ramp shape?

@hdrake (Collaborator, Author) commented Aug 1, 2020

Following up on @fonsp's discussion, I think it will be fairly common to use baseline emissions that do not have the ramp shape (e.g. we do this in the Belaia et al. DICE+SG example).

Would it make more sense to just handle all input time series in a Forcing submodule that depends on the Domain submodule? Making this a separate Forcing submodule would make it easily extensible in case we want to add other kinds of forcings, like volcanoes, solar variations, etc.
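To make that concrete, a rough self-contained sketch of what such a Forcing submodule could look like (all of the names below are hypothetical, not existing ClimateMARGO.jl API):

module Forcing

abstract type AbstractForcing end

# Each forcing stores a function of time, so it can be evaluated on any Domain grid
# (and therefore any dt) rather than being tied to one fixed time series.
struct EmissionsForcing <: AbstractForcing
    func::Function
end

struct VolcanicForcing <: AbstractForcing
    func::Function
end

# Evaluate a forcing on the time grid supplied by the Domain.
evaluate(f::AbstractForcing, t::AbstractVector{<:Real}) = f.func(t)

end # module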

@fonsp (Member) commented Aug 7, 2020

Yes, that's a good idea! But the problem is:
If you import a model from someone else, then you can only run it with the original domain.

This restriction is fine, (and I can't think of an easy solution that still supports JSON exports), but:

  • the default model that comes with MARGO should not be loaded from JSON; instead it should be a function of a Domain (which defaults to 2020:5:2200).
  • we should prevent the situation where you change the domain but leave the Economics unchanged, and get false results without an error.

@fonsp (Member) commented Aug 7, 2020

But maybe it's okay to drop JSON support. In that case, baseline_emissions can just be a Function, and all our problems disappear :)

JSON is great to share models, but the restriction to values and arrays is limiting. I think that this will become a larger issue if we make ClimateMARGO more extensible.

Since the package will probably be used in an interactive environment, and never outside of Julia, you might as well share Julia code! You can just send someone the code to set up a model:

ClimateModelParameters(Physics(...), Economics(...))

But rather than sending snippets of code, you can send someone an entire notebook, which includes the code to set up the model parameters. I think that it will be common to share MARGO setups using notebooks, rather than JSON-blobs, because you can also include your findings and text. In fact, if you used a JSON-formatted model, then you would need to send 2 files, and make sure that the paths are correct.

@fonsp (Member) commented Aug 7, 2020

So my suggestion is to drop JSON support, and change the type of time-series parameters from Array{Float64,1} to Function. The package can include common forcing functions, like the ramp shape:

function ramp_emissions(q0::Float64, n::Float64, t1::Float64, t2::Float64)::Function
    return function(t)
        t0 = t[1]                  # first year of the domain
        Δt0 = t1 - t0              # duration of the ramp-up phase
        Δt1 = t2 - t1              # duration of the ramp-down phase
        q = zeros(size(t))
        increase_idx = (t .<= t1)
        decrease_idx = ((t .> t1) .& (t .<= t2))
        q[increase_idx] .= q0 * (1. .+ (n-1) .* (t[increase_idx] .- t0) / Δt0)   # ramp up from q0 to n*q0
        q[decrease_idx] .= n * q0 * (t2 .- t[decrease_idx]) / Δt1                # ramp down from n*q0 to zero
        q[t .> t2] .= 0.                                                         # zero emissions after t2
        return q
    end
end
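Usage would then be independent of the timestep, something like (the parameter values here are only illustrative):

q_baseline = ramp_emissions(7.5, 3.0, 2100.0, 2150.0)   # returns a Function of t

q_baseline(collect(2020.0:5.0:2200.0))    # the same curve evaluated on a dt = 5 grid...
q_baseline(collect(2020.0:20.0:2200.0))   # ...or on a dt = 20 grid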

@hdrake (Collaborator, Author) commented Aug 17, 2020, quoting @fonsp:

But maybe it's okay to drop JSON support. In that case, baseline_emissions can just be a Function, and all our problems disappear :)

JSON is great to share models, but the restriction to values and arrays is limiting. I think that this will become a larger issue if we make ClimateMARGO more extensible.

Since the package will probably be used in an interactive environment, and never outside of Julia, you might as well share Julia code! You can just send someone the code to set up a model:

ClimateModelParameters(Physics(...), Economics(...))

But rather than sending snippets of code, you can send someone an entire notebook, which includes the code to set up the model parameters. I think that it will be common to share MARGO setups using notebooks, rather than JSON-blobs, because you can also include your findings and text. In fact, if you used a JSON-formatted model, then you would need to send 2 files, and make sure that the paths are correct.

I understand the idea and it seems appealing, but how would this look in practice? Let's say I want to load in the default MARGO parameters, change the feedback parameter, and re-optimize.

Would I just call @run MARGO_defaults.jl in my REPL / notebook and that would run

m = ClimateModelParameters(Physics(...), Economics(...))

and then I can just modify

m.Physics.B -= 0.5
optimize_controls!(m)

It just seems like this could get messy in many use cases. What if I wanted to load 10 different MARGO configurations from different users and compare them, but their various notebooks have shared / conflicting variable assignments, for example?

@fonsp (Member) commented Aug 17, 2020

Default parameters

I think the defaults should be created using a function inside the ClimateMARGO module. You would use it like:

m = ClimateMARGO.default_parameters()
m.Physics.B -= 0.5
...

It's good that you mention modifying it - if the default params are a simple variable (ClimateMARGO.default_parameters), then modifying it would modify the original. So a function that creates it is best. This could also just be the 0-argument generator/constructor:

m = ClimateMARGO.ClimateModelParameters()
...

.jl to replace .json

About running a "parameter script": instead of the script assigning to m, it can just return the object. For example, if your script strange_parameters.jl is:

p = Physics(...)
p.B -= 0.5
e = Economics(...)
ClimateModelParameters(p, e)

then you can do in your REPL:

julia> m = include("strange_parameters.jl")

This works because the value of the last expression in any Julia block (including scripts) is what the block returns.

But writing a separate script (in this format) is not the most ergonomic way - you would rather define it in your notebook.

Sharing notebooks

If someone shares a notebook that defines a model, then you can of course just run that notebook. But if you want to extract only the model definition from a notebook, then I think a simple copy paste is the easiest. If their notebook is a Jupyter notebook, then this might be the only way.

But if their model is defined in a script or a Pluto notebook, then you can include their code and use it in your own code. The naming collision that you mentioned is actually something that I am working on; I will refine and release my solution in the next couple of weeks. Maybe I can write up a demo - this would also be a realistic use case for me to test it on.

Comparing notebooks

About comparing 10 different notebooks:

  • when you are comparing multiple models and some of them are written in Jupyter, you need to open those notebooks and copy their model setup code into a single comparison notebook/script. The key is rewriting a cell like
m = ClimateMARGO.default_parameters()
m.Physics.B -= 0.5
...

into

function strange_parameters()
	m = ClimateMARGO.default_parameters()
	m.Physics.B -= 0.5
	...
	return m
end
  • when you are comparing multiple models, and all of them are written as scripts or Pluto notebooks, then you can use a script/Pluto notebook to import all of them - this should be the easiest.

With the current JSON method, you open each notebook, and run the export function to generate the .jsons. But you need to set the same timestep for each model before exporting to JSON, and it cannot be changed afterwards.

Perhaps I overestimate the need to change the timestep after importing? Maybe allowing the default parameters to have variable timestep is good enough.

@hdrake (Collaborator, Author) commented Aug 25, 2020

I've started a huge overhaul of the code base in #45 (see the corresponding issue #44; please let me know what you think!) and am trying to include some of these ideas there so that we can swap out the .json-IO for a .jl-based IO.

I took your advice and set the type of the emissions functions as a Julia Function rather than as a fixed time series:

struct RampingEmissions <: EmissionsParams
    func::Function
end

in the restructured Economics struct:

mutable struct Economics
    emissions::EmissionsParams
    growth::GrowthParams
    discounting::DiscountingParams
    controlcosts::ControlCostParams
    damages::DamageParams
end

In principle, this means that we should now be able to change the timestep on the fly just like any other parameter since the diagnostics just call the emissions function:

emissions(q, M) = q .* (1. .- M)
function emissions(m::ClimateModel; M=false)
    return emissions(
        m.economics.emissions.func(t(m)),
        m.controls.deployed["M"] .* (1. .- .~future_mask(m) * ~M)
    )
end
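For example, a hypothetical sketch of the resulting workflow (the Domain constructor signature and the domain/emissions field names are assumptions about the restructured API in #45, not settled interfaces):

m = ClimateMARGO.default_parameters()           # defaults, e.g. dt = 5
m.domain = Domain(20.0, 2020.0, 2200.0)         # hypothetical: swap in a coarser dt = 20 grid
q = m.economics.emissions.func(t(m))            # the stored Function re-evaluated on the new grid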

@hdrake added the bug (Something isn't working) label on Aug 25, 2020
@hdrake added this to the v1.0 release milestone on Sep 16, 2020