Feature/image benchmarking #880
base: main
Conversation
…david_benchmark.py
Thanks, @qh681248. Some minor changes requested
@@ -556,7 +556,7 @@ contextmanager-decorators=contextlib.contextmanager
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=
cv2.*
Can you tell me why this has been added? Are we accessing non-existent members of cv2?
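For context, cv2 is a compiled extension module that pylint cannot introspect, so valid attribute access can raise E1101 (no-member) false positives. A minimal sketch of the kind of usage that typically triggers this setting; the filename is illustrative, not taken from the PR:

```python
import cv2

# pylint cannot statically discover the members of the compiled cv2 module,
# so calls like these may be flagged as E1101 (no-member) unless cv2.* is
# listed under generated-members in .pylintrc.
image = cv2.imread("original_image.png")  # hypothetical filename
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```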
The benchmarking process follows these steps:
1. Load an input image and downsample it using a specified factor.
2. Convert to grayscale and extract non-zero pixel locations and values.
What is meant by 'non-zero pixel locations and values'? Does this mean pixels that have a value above zero, i.e. are not completely black?
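For what it's worth, a minimal sketch of how step 2 could be read, assuming the extraction uses NumPy; the names and filename are illustrative, not taken from the benchmark:

```python
import cv2
import numpy as np

grey = cv2.cvtColor(cv2.imread("original_image.png"), cv2.COLOR_BGR2GRAY)

# "Non-zero" pixels are those with grayscale intensity strictly greater than
# zero, i.e. anything that is not pure black.
rows, cols = np.nonzero(grey)    # pixel locations
values = grey[rows, cols]        # corresponding intensities

# One (row, column, intensity) point per retained pixel.
pre_coreset_data = np.column_stack((rows, cols, values))
```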
The benchmarking process follows these steps:
1. Load an input image and downsample it using a specified factor.
2. Convert to grayscale and extract non-zero pixel locations and values.
3. Generate coresets with varying algorithms.
change 'varying' to 'various' or 'different'
Benchmark performance of different coreset algorithms on pixel data from an image.
The benchmarking process follows these steps:
1. Load an input image and downsample it using a specified factor.
Can you briefly describe why we downsample the image?
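Presumably the downsampling just shrinks the pixel dataset so the slower solvers finish in reasonable time; a sketch of the idea, with cv2.resize as an assumed implementation:

```python
import cv2

image = cv2.imread("original_image.png", cv2.IMREAD_GRAYSCALE)
downsampling_factor = 2

# Dividing each dimension by the factor reduces the number of pixels (and
# hence the size of the dataset handed to the coreset solvers) by roughly
# downsampling_factor ** 2.
height, width = image.shape
downsampled = cv2.resize(
    image,
    (width // downsampling_factor, height // downsampling_factor),
    interpolation=cv2.INTER_AREA,
)
```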
)
# Set up the original data object and coreset parameters
data = Data(pre_coreset_data) |
do we need to jax this? data = Data(jnp.asarray(pre_coreset_data))
# Set up the original data object and coreset parameters
data = Data(pre_coreset_data)
coreset_size = 8_000 // (downsampling_factor**2) |
Why is this the coreset size? Can you add detail in the docstring above to explain?
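If I read the intent correctly, the division keeps the coreset size roughly proportional to the number of pixels left after downsampling, since downsampling by a factor f shrinks the pixel count by about f squared. A quick check of the scaling (my reading, not from the PR):

```python
# Downsampling by f shrinks both image dimensions by f, so the pixel count
# drops by roughly f ** 2; scaling the coreset size by the same amount keeps
# the coreset-to-data ratio approximately constant.
for downsampling_factor in (1, 2, 4):
    coreset_size = 8_000 // (downsampling_factor**2)
    print(downsampling_factor, coreset_size)  # 1 -> 8000, 2 -> 2000, 4 -> 500
```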
# Initialize each coreset solver
key = random.PRNGKey(0)
solvers = initialise_solvers(Data(jnp.array(pre_coreset_data)), key) |
Why are we doing Data(jnp.array(pre_coreset_data)) again here? Surely this can just be solvers = initialise_solvers(data, key).
NB: I'm not sure which is correct for data: either Data(jnp.array(pre_coreset_data)) or Data(jnp.asarray(pre_coreset_data)).
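On the array versus asarray question, a small side-by-side sketch, independent of the coreax Data class:

```python
import jax.numpy as jnp
import numpy as np

pre_coreset_data = np.arange(6.0).reshape(3, 2)  # stand-in for the pixel data

a = jnp.array(pre_coreset_data)    # always copies the input onto the device
b = jnp.asarray(pre_coreset_data)  # same call with copy=False, so an input
                                   # that is already a JAX array is passed
                                   # through without an extra copy

# For a NumPy input the two are effectively interchangeable; the difference
# only matters when the input is already a JAX array.
```

Either way, reusing the already-constructed data object, as suggested above, avoids doing the conversion twice.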
PR Type
Description
This PR extends the original benchmark framework and adds a visual benchmark. The benchmarking process follows these steps:
1. Load an input image and downsample it using a specified factor.
2. Convert to grayscale and extract non-zero pixel locations and values.
3. Generate coresets with varying algorithms.
How Has This Been Tested?
Existing tests pass as expected.
New tests introduced with this change verify that...
Does this PR introduce a breaking change?
Checklist before requesting a review