
Dynamic loss function (changes over generations) #162

Open
Jgmedina95 opened this issue Dec 4, 2022 · 2 comments

@Jgmedina95

Hi Miles!

Most of this idea builds upon the one discussed here:
#92 (comment)
But it is different, so I decided to make a new issue.

I've been using the idea of 'custom_loss_functions' and it has worked great so far. It's really problem-specific, so I haven't made any pull requests, as I don't see it as a generalizable idea, but this one might be.

Is it possible to make a loss function with a variable hyperparameter that changes after X generations? Something like

```julia
return L2loss + alfa * custom_loss
```

so that alfa changes over generations (maybe decreasing or increasing), or would this have to be fixed?
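For concreteness, here is a rough sketch of what I mean. Everything here is a placeholder (the `alfa` schedule, the `custom_loss` term, and all the constants), not anything from SymbolicRegression.jl itself:

```julia
# Rough sketch only — `custom_loss`, `alfa`, and the constants are
# placeholders, not part of the package.
l2loss(pred, target) = (pred - target)^2          # standard squared error
custom_loss(pred, target) = abs(pred - target)    # problem-specific term (placeholder)

# Weight that decays as generations go by.
alfa(generation; alfa0 = 1.0, decay = 0.05) = alfa0 * exp(-decay * generation)

dynamic_loss(pred, target, generation) =
    l2loss(pred, target) + alfa(generation) * custom_loss(pred, target)
```

So early generations would weight the custom term heavily, and later ones would approach plain L2 loss.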

So far I've been saving the state and restarting it with the loss parameters changed to my needs, but I was wondering if this could be set up from the beginning.

@MilesCranmer
Owner

Very interesting idea, I could definitely see this being useful for regularizations!

One difficult thing about implementing this is that the loss function is recalculated only when expressions change. So some of the losses may be out-of-date, especially in the hall of fame.

Do you want the absolute loss to change, or just for the search to favor different things over time? If the latter, it will be much easier. You could modify this line, which is the loss used in tournaments, with your custom loss:

```julia
scores[i] = member.score * exp(adaptive_parsimony_scaling * frequency)
```

Then the search will favor expressions based on your time-varying loss, but the loss actually stored will stay fixed to, e.g., L2Loss.
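As a hypothetical illustration of that idea (the function name, the penalty term, and the default values below are made up for the example): keep the stored loss fixed, but add a time-varying term only to the score used inside tournaments.

```julia
# Hypothetical sketch — illustrative names and made-up defaults, not the
# actual SymbolicRegression.jl internals.
time_weight(generation; decay = 0.05) = exp(-decay * generation)

function tournament_score(member_score, custom_penalty, generation;
                          adaptive_parsimony_scaling = 20.0,
                          frequency = 0.1)
    # Same form as the existing tournament score...
    base = member_score * exp(adaptive_parsimony_scaling * frequency)
    # ...plus a penalty whose influence decays over generations.
    return base + time_weight(generation) * custom_penalty
end
```

The stored `member.score` never changes; only the selection pressure does.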

@Jgmedina95
Author

I got time to think about this:

"One difficult thing about implementing this is that the loss function is recalculated only when expressions change. So some of the losses may be out-of-date, especially in the hall of fame."

But I guess it's possible to reevaluate the population members with the new loss function, like

```julia
for member in population
    member.loss = EvaluateLoss(X, member, new_options)
end
```

and similarly with the hall of fame.

The idea could (?) be used to: 1) pause the search, 2) change the loss metric, 3) reevaluate the loss of each member in the saved_state, 4) reevaluate the scores, and 5) continue.
I've got issues with step 4. I thought that using built-in functions like

function loss_to_score(

would help me reevaluate all the scores, but when I try to replicate the score of a member I don't get the same value.
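To make steps 3–4 concrete, here is a very rough sketch. `Member`, `loss_to_score`, and the parsimony constant are all placeholders standing in for the real internals:

```julia
# Very rough sketch of steps 3–4 — `Member`, `loss_to_score`, and the
# parsimony constant are placeholders, not the actual internals.
mutable struct Member
    predictions::Vector{Float64}
    complexity::Int
    loss::Float64
    score::Float64
end

# Placeholder: score = loss plus a complexity penalty.
loss_to_score(loss, complexity; parsimony = 0.01) = loss + parsimony * complexity

function reevaluate!(population::Vector{Member}, y::Vector{Float64}, new_loss)
    for member in population
        member.loss = new_loss(member.predictions, y)                 # step 3
        member.score = loss_to_score(member.loss, member.complexity)  # step 4
    end
    return population
end
```

Swapping in a different `new_loss` and calling `reevaluate!` on both the population and the hall of fame would then be the "change the loss metric" step.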


If all this were possible, a dynamic loss could be done externally without changing the code too much, just readjusting the population as described, right?

And to answer the question: yes, I'm more interested in adjusting the search to favor different things :)
