Utilize Bigquery Storage API #71
Beta was officially announced today: there is an opportunity to leverage the BigQuery Storage API for reading tables from BigQuery. In theory it should have lower latency than GCS dumps, and it can take advantage of predicate pushdown and column projection while also being Avro-based.

cloud.google.com/bigquery/docs/reference/storage/

Are there any plans to integrate the Storage API with this or another Spark DataFrame project?

---

This is really interesting, thanks for sharing. Yeah, it'll need a separate branch while it's in beta, but it's certainly worth looking into. Or better yet, utilising it via an option while using GCS dumps by default. I'll have a look over the coming weeks.
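For reference, the kind of read path discussed above can be sketched with the `google-cloud-bigquery-storage` Python client. This is a minimal, hedged illustration rather than anything from this project: the project/dataset/table names, selected fields, and row restriction are invented for the example, and it uses the v2 client API, which postdates the beta this thread refers to.

```python
# Sketch: reading a BigQuery table via the Storage API with column projection
# (selected_fields) and predicate pushdown (row_restriction), Avro-encoded.
# All identifiers below are illustrative assumptions, not from this project.

def table_path(project: str, dataset: str, table: str) -> str:
    """Build the fully-qualified table resource name the Storage API expects."""
    return f"projects/{project}/datasets/{dataset}/tables/{table}"

def read_with_storage_api(project: str, dataset: str, table: str):
    # Imported lazily so the path helper above works without the library installed.
    from google.cloud import bigquery_storage
    from google.cloud.bigquery_storage import types

    client = bigquery_storage.BigQueryReadClient()
    session = client.create_read_session(
        parent=f"projects/{project}",
        read_session=types.ReadSession(
            table=table_path(project, dataset, table),
            data_format=types.DataFormat.AVRO,  # Avro-based, as noted above
            read_options=types.ReadSession.TableReadOptions(
                selected_fields=["word", "word_count"],  # column projection
                row_restriction="word_count > 100",      # predicate pushdown
            ),
        ),
        max_stream_count=1,
    )
    # Each stream yields row blocks that a reader (e.g. Spark tasks) can
    # decode in parallel, avoiding the intermediate GCS dump entirely.
    return client.read_rows(session.streams[0].name)
```

The option-based design suggested in the reply would amount to choosing between this read path and the existing GCS-dump path behind a single flag, with GCS remaining the default.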