
Google Floods Part 3: Adding Inundation Maps; COUNTRY=mozambique #1332

Open · wants to merge 62 commits into base: feature/google-floods/1317
Conversation

@lowriech (Collaborator) commented Aug 22, 2024

Review thread on api/app/googleflood.py (outdated, resolved)
@lowriech (Collaborator, Author)

@gislawill this is updated to call gauges, rather than rely on a previous call passing its attributes

@gislawill gislawill force-pushed the feature/google-floods-inundation branch from d358674 to bb77ff7 Compare September 2, 2024 23:50
@gislawill gislawill changed the base branch from feature/google-floods/1317 to feature/google-floods-forecast-chart/1317 September 2, 2024 23:51

github-actions bot commented Sep 2, 2024

Build succeeded and deployed at https://prism-1332.surge.sh
(hash 3222328 deployed at 2024-11-23T14:41:42)

@gislawill gislawill marked this pull request as ready for review October 7, 2024 02:57
@gislawill changed the title from "Adding Google Floods Inundation API" to "Adding Google Floods Inundation Map" Oct 7, 2024
A collaborator commented:

These two files are each about 4 MB — which is unfortunately how large the response from Google is at this endpoint.

@gislawill changed the title from "Google Floods Part 3: Adding Inundation Maps; COUNTRY=cambodia" to "Google Floods Part 3: Adding Inundation Maps; COUNTRY=mozambique" Oct 31, 2024
@gislawill (Collaborator)

Note: on 11/7/24, I received this error message intermittently but very frequently. Hoping the Google Floods team has resolved it; otherwise we will need to reach out to them.

Request failed at url post https://floodforecasting.googleapis.com/v1/floodStatus:searchLatestFloodStatusByArea?key={GOOGLE_FLOODS_API_KEY}: {'regionCode': 'ML'}
Response status code: 502
Response content: (abbreviated)

    <title>Error 502 (Server Error)!!1</title>
    <p>The server encountered a temporary error and could not complete your request.
    <p>Please try again in 30 seconds.  <ins>That's all we know.</ins>
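Until the root cause is fixed, transient 502s like this are usually absorbed with a retry-and-backoff wrapper around the request. A minimal sketch; the `with_retry` helper and its parameters are illustrative assumptions, not part of the PRISM codebase:

```python
import time
from typing import Callable

def with_retry(call: Callable[[], int], retries: int = 3, backoff: float = 2.0) -> int:
    """Retry `call` while it returns a 5xx status code, with exponential backoff.

    Hypothetical helper: `call` would wrap the actual POST to the
    searchLatestFloodStatusByArea endpoint and return its status code.
    """
    status = call()
    for attempt in range(retries):
        if status < 500:
            break
        # Google's 502 page suggests waiting before retrying
        time.sleep(backoff * (2 ** attempt))
        status = call()
    return status
```

This only papers over the problem; the caching discussed below addresses the request volume itself.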

@wadhwamatic (Member)

Looks good from my side, Will. Some surprising discrepancies in Cambodia between the NASA SERVIR flood extent and the Google model, but we've also known the SERVIR tool to overestimate.

@gislawill (Collaborator)

gislawill commented Nov 12, 2024

The 502 errors from Google Floods have continued and seem to be a result of us hitting the searchLatestFloodStatusByArea endpoint too frequently. I've just emailed the Google Floods team about this issue.

In the meantime, I've added Google Floods response caching, which works around the issue for now. I've set the cache timeout to only 10 minutes since we know this data can change quickly.

@ericboucher, I'm curious for your thoughts on this caching (a45de24). Specifically,

  • I'm not seeing any cache clearing so far. These new cached files could bloat the EC2 instance
  • Would it be helpful to include the cache timeout logic in the caching utilities for easier use elsewhere?

@@ -98,13 +98,13 @@ def cache_file(url: str, prefix: str, extension: str = "cache") -> FilePath:

 @timed
-def cache_geojson(geojson: GeoJSON, prefix: str) -> FilePath:
+def cache_geojson(geojson: GeoJSON, prefix: str, cache_key: str) -> FilePath:
A collaborator commented:

Please make the inputs explicit here, esp. the cache_key and how it should be used.

A collaborator replied:

Added some comments and updated the params to be more explicit.
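Based on the new signature, the cache_key presumably namespaces cached responses per request. A hypothetical sketch of how it might be documented and used; the cache directory, hashing scheme, and body are assumptions, not the PRISM implementation:

```python
import hashlib
import json
import tempfile
from pathlib import Path

# Assumed cache location for illustration only
CACHE_DIR = Path(tempfile.gettempdir()) / "prism-cache"

def cache_geojson(geojson: dict, prefix: str, cache_key: str) -> Path:
    """Write a GeoJSON payload to the cache and return its file path.

    Args:
        geojson: the parsed GeoJSON response to store.
        prefix: namespace for the cached file, e.g. "google-floods".
        cache_key: caller-supplied string identifying this request
            (e.g. a region code), so different requests do not collide.
    """
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    digest = hashlib.md5(cache_key.encode()).hexdigest()
    path = CACHE_DIR / f"{prefix}_{digest}.json"
    path.write_text(json.dumps(geojson))
    return path
```

Hashing the key keeps arbitrary caller-supplied strings safe to use in filenames.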

Two more review threads on api/app/googleflood.py (outdated, resolved)
@ericboucher (Collaborator)

@gislawill thanks for this. Yes I think it could be useful to add the cache timeout mechanism for other requests as well.

And for the cache clearing, I think it would be worth implementing but not sure if there is an easy way. We could do something simple like delete any file that's more than 30 days old or something like this. @wadhwamatic any thoughts on that point?

@wadhwamatic (Member)

> And for the cache clearing, I think it would be worth implementing but not sure if there is an easy way. We could do something simple like delete any file that's more than 30 days old or something like this. @wadhwamatic any thoughts on that point?

@ericboucher - yes, we should delete older files in this case and 30 days sounds like a good initial period to try

@gislawill (Collaborator)

@wadhwamatic and @ericboucher, we have many different options for clearing this cache. I think whichever option we go with, it should be easy to trigger the cache clearing automatically and manually. We'll also want to be able to easily adjust the "minimum age to clear" to allow for experimentation with this cache age and for clearing caches after a bug fix rollout.

Two good options would be a script or an endpoint:

  • Script
    • A Python script (could be Bash too) that loops through the cache directory and clears files based on age
    • The age would default to 30 days but could be provided as an environment variable
      • A provided age of 0 would clear all cached files
    • This could be triggered manually by SSH-ing into the container and running it
  • Endpoint
    • A FastAPI endpoint that loops through the cache directory and clears files based on age
    • The age would default to 30 days but could be provided in the request body
    • This could be triggered manually by any request client (e.g. curl)
      • Very easy, though not secure (but I can't think of any real risks associated with it)

Two options for automated triggers:
- A cron daemon in the Docker image, to keep it simple
- An ECS Scheduled Task, which is a slightly more complicated setup but easier to monitor and more resilient (retries, alerting)

I'm leaning towards a script that's triggered by an ECS Scheduled Task. Curious for your thoughts. Also curious if either of you would have a preference between adding this cache clearing in this PR vs a follow-up?
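The script option above can be sketched in a few lines. Everything here is an assumption for illustration (directory location, `CACHE_MAX_AGE_DAYS` env var name), not an implemented script; per the bullets, the age defaults to 30 days and an age of 0 clears all cached files:

```python
"""Sketch of an age-based cache-clearing script (hypothetical)."""
import os
import tempfile
import time
from pathlib import Path

def clear_cache(cache_dir: Path, max_age_days: float) -> list:
    """Delete files in cache_dir older than max_age_days; return what was removed.

    An age of 0 makes the cutoff "now", so every cached file is deleted.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in cache_dir.iterdir():
        if f.is_file() and f.stat().st_mtime <= cutoff:
            f.unlink()
            removed.append(f)
    return removed

if __name__ == "__main__":
    # Assumed cache location and env var name for illustration
    cache_dir = Path(tempfile.gettempdir()) / "prism-cache"
    if cache_dir.exists():
        max_age = float(os.environ.get("CACHE_MAX_AGE_DAYS", "30"))
        clear_cache(cache_dir, max_age)
```

The same function could back either trigger: a cron/ECS Scheduled Task invocation or a manual SSH run.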

@ericboucher (Collaborator)

@gislawill the endpoint option seems to be convenient and allows any of us to trigger it manually when necessary. We could "secure" it with a token or something simple, that should be enough to test things out.

I'd prefer to have this added in a follow-up PR as it would make it a bit easier to discuss implementation and triggers. If we go with the endpoint option, we could use a simple GitHub action to run daily and call this endpoint.
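The "secure it with a token" idea could be as simple as a shared secret checked before clearing. A sketch of what the endpoint's handler might do; the `CACHE_CLEAR_TOKEN` env var and function names are assumptions, and the actual FastAPI wiring is left out:

```python
import hmac
import os

def handle_clear_cache(token: str, max_age_days: float = 30.0) -> dict:
    """What a token-protected cache-clearing endpoint could do (hypothetical).

    Compares the caller's token against a shared secret in constant time,
    then would delegate to the age-based clearing logic.
    """
    expected = os.environ.get("CACHE_CLEAR_TOKEN", "")
    if not expected or not hmac.compare_digest(token, expected):
        return {"status": 403, "detail": "invalid token"}
    # ... delegate to the age-based file-clearing loop here ...
    return {"status": 200, "max_age_days": max_age_days}
```

A daily GitHub Action could then POST to the endpoint with the token, matching the trigger proposal above.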

@gislawill (Collaborator)

Sounds good to me @ericboucher. @wadhwamatic I've created issue #1371 for this work.

This PR should be ready for review. It's blocked on rollout by the copy + translations questions here: #1328 (comment)

4 participants