
Table expiration with write() operation #1189

Open

mikeperello-scopely opened this issue Feb 23, 2024 · 2 comments

Labels: enhancement (New feature or request)

Comments

@mikeperello-scopely commented Feb 23, 2024

Hi,

I am using the BigQuery connector to transfer data to a BQ table. I am wondering if there is an option to set a table expiration when using the dataframe.write() operation. I see that the connector supports other types of expiration, but not this one.
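For now, a possible workaround (a minimal PySpark sketch, assuming `df` is an existing Spark DataFrame; the project, dataset, table, and bucket names are hypothetical) is to write with the connector as usual and then set the expiration in a second step with the google-cloud-bigquery client library:

```python
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

# Write the DataFrame with the connector as usual. "temporaryGcsBucket"
# is only needed for the indirect write method.
(df.write.format("bigquery")
    .option("temporaryGcsBucket", "my-staging-bucket")  # hypothetical bucket
    .mode("append")
    .save("my_project.my_dataset.my_table"))            # hypothetical table

# The connector exposes no expiration option on write(), so set it
# separately with the BigQuery client library.
client = bigquery.Client()
table = client.get_table("my_project.my_dataset.my_table")
table.expires = datetime.now(timezone.utc) + timedelta(days=7)
client.update_table(table, ["expires"])
```

It would be much nicer to have this as a single write() option, which is what this issue asks for.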

Thank you!

@isha97 added the enhancement (New feature or request) label Mar 4, 2024
@nicodds commented Apr 18, 2024

This would be super-helpful for merge queries. When you need to update and/or insert rows in a table, the usual pattern is to write the new data to a temporary table and then run a MERGE query against it. Currently the connector supports neither running the MERGE query nor deleting the temporary table. If you could add an option to set an expiration timestamp on the table, you would only need to perform the MERGE outside of Spark, and the temporary table would clean itself up.
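For reference, a sketch of that pattern (table names and the MERGE condition are hypothetical; the staging table is assumed to have been written by the Spark connector), using the google-cloud-bigquery client to run the MERGE and do the manual cleanup that a write-time expiration option would make unnecessary:

```python
from google.cloud import bigquery

client = bigquery.Client()
staging = "my_project.my_dataset.orders_staging"  # written by Spark above
target = "my_project.my_dataset.orders"

# Run the MERGE outside of Spark...
client.query(
    f"""
    MERGE `{target}` T
    USING `{staging}` S
    ON T.order_id = S.order_id
    WHEN MATCHED THEN UPDATE SET T.amount = S.amount
    WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (S.order_id, S.amount)
    """
).result()

# ...and drop the staging table by hand -- the step that an expiration
# option on write() would make unnecessary.
client.delete_table(staging, not_found_ok=True)
```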

@MasterDDT

This would be helpful in general for creating temp tables via the Spark DataFrame APIs.
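To illustrate what is being requested, the option might look something like this (the option name "expirationTimeInHours" is invented here purely for illustration; no such option exists in the connector today):

```python
# Hypothetical API: an expiration set at write time would make the table
# self-cleaning, i.e. a temp table, with no follow-up client call needed.
(df.write.format("bigquery")
    .option("expirationTimeInHours", 24)  # invented option name
    .save("my_project.my_dataset.my_temp_table"))
```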
