[Triplet Margin Loss] Issue 1118 #1120

Open
cvnad1 wants to merge 61 commits into main

Conversation


cvnad1 commented Oct 26, 2024

@vroulet Hi Vincent, I've added code and tests for the triplet margin loss function (#1118). Kindly review the code and comment in case any changes are needed.


cvnad1 commented Oct 30, 2024

@vroulet May I know if there's anything that needs to be changed?

vroulet (Collaborator) left a comment:

Thank you @cvnad1 for doing this! Sorry for the delay. Here are some comments

anchor: The anchor embeddings. Shape: [batch_size, feature_dim].
positive: The positive embeddings. Shape: [batch_size, feature_dim].
negative: The negative embeddings. Shape: [batch_size, feature_dim].
margin: The margin value. Default: 1.0.
vroulet (Collaborator):

No need to put the default values since they are given in the signature.

by V. Balntas et al. Default: False.
reduction: Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. Default: 'mean'.

vroulet (Collaborator):

Add reference

margin: The margin value. Default: 1.0.
p: The norm degree for pairwise distance. Default: 2.
eps: Small epsilon value to avoid numerical issues. Default: 1e-6.
swap: Use the distance swap optimization from "Learning shallow
vroulet (Collaborator):

Use rst formatting for references (see e.g. the docstring of Adam)

swap: bool = False,
reduction: str = 'mean',
) -> chex.Array:
"""Triplet margin loss function.
vroulet (Collaborator):

Add an example (doctest)
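For instance, a doctest along these lines could be added to the docstring (a sketch only: the module path optax.losses, the final function name triplet_margin_loss, and the exact printed output are assumptions pending the final implementation):

    >>> import jax.numpy as jnp
    >>> import optax
    >>> # With positives identical to anchors, the positive distance is ~0
    >>> # and the negative distance (2.0) exceeds the default margin (1.0),
    >>> # so the loss is zero for each batch element.
    >>> anchors = jnp.ones((2, 4))
    >>> positives = jnp.ones((2, 4))
    >>> negatives = jnp.zeros((2, 4))
    >>> optax.losses.triplet_margin_loss(anchors, positives, negatives)
    Array([0., 0.], dtype=float32)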

@@ -53,5 +53,41 @@ def test_batched(self):
)


class TripletMarginLossTest(chex.TestCase):

def setUp(self):
vroulet (Collaborator):

Avoid using numerical values as expected returns. They may fail depending on the backend, for example.
You may consider simple test cases with a "handmade" function (see e.g. the lbfgs tests). You can check specific inputs (like zeros or ones).

You may also add a test for some specific behaviors (like using swap here).

Also, you should test this function under jit/vmap etc. (see the chex.all_variants utility in some other tests), as sketched below.
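For instance, such a variant test might look like this (a sketch only; the import path and the test case are illustrative, not the PR's actual code):

    import chex
    import jax.numpy as jnp
    from absl.testing import absltest

    from optax.losses import triplet_margin_loss  # assumed import path


    class TripletMarginLossVariantsTest(chex.TestCase):

      @chex.all_variants
      def test_zero_loss_when_positives_match_anchors(self):
        # With positives == anchors, the positive distance is ~0 and the
        # negative distance (2.0) exceeds the default margin (1.0), so the
        # loss should be zero for every batch element under every variant
        # (with and without jit, etc.).
        anchors = jnp.ones((2, 4))
        negatives = jnp.zeros((2, 4))
        loss_fn = self.variant(triplet_margin_loss)
        chex.assert_trees_all_close(
            loss_fn(anchors, anchors, negatives), jnp.zeros(2))


    if __name__ == '__main__':
      absltest.main()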

@Saanidhyavats

@vroulet we have worked on your suggestions and all the tests are passing. I think the code is ready to be merged.


cvnad1 commented Nov 7, 2024

@vroulet
Apologies for the late response, but for the past couple of days we have been trying to implement your suggestions and ran into some errors, most of which we have now rectified. In the latest commit we are facing some new errors that we are unable to comprehend. It would be a great help if you could review our latest commit and share your feedback. Again, sorry for all the commits we made with errors.

cvnad1 requested a review from vroulet on November 11, 2024 02:55
@Saanidhyavats

@vroulet we tried multiple things to solve the error in the pipeline; the tests for triplet_loss are passing locally. The errors we are getting here do not seem to come from the function we implemented. Can you guide us on this?


vroulet commented Nov 11, 2024

Hello @cvnad1, @Saanidhyavats,
Yes, a recent commit had broken the tests at head; it's been fixed, sorry for that. You can sync with head and rerun.


cvnad1 commented Nov 13, 2024

@vroulet We have modified the code based on your review. Could you please verify if everything's correct?

vroulet (Collaborator) left a comment:

Last round of comments; I'll take care of further formatting on my end.
Could you also squash your commits if possible?

anchors: chex.Array,
positives: chex.Array,
negatives: chex.Array,
axis: chex.Numeric = -1,
vroulet (Collaborator):

axis: int = -1

positives: chex.Array,
negatives: chex.Array,
axis: chex.Numeric = -1,
p: chex.Numeric = 2,
vroulet (Collaborator):

norm_degree rather than p.
Moreover we have

||x||_p = (sum_i |x_i|^p)**(1/p)

not

||x||_p = sqrt(sum_i x_i^p)

You may want to include the case ||x||_inf in a separate PR?
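Concretely, the distance computation would need to raise the absolute differences to the power norm_degree and take the matching root, rather than always applying a square root. A minimal sketch (the name norm_degree follows the suggestion above; keeping eps inside the root, as in the PR, is an assumption):

    import jax.numpy as jnp

    def pairwise_distance(x, y, norm_degree=2, eps=1e-6, axis=-1):
      # ||x - y||_p = (sum_i |x_i - y_i|**p)**(1/p); adding eps inside the
      # root keeps the gradient finite when x == y.
      powered = jnp.abs(x - y) ** norm_degree
      return (powered.sum(axis=axis) + eps) ** (1.0 / norm_degree)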

>>> Array([0.14142442, 0.14142442], dtype=float32)

Args:
anchors: An array of anchor embeddings, with shape [batch, feature_dim].
vroulet (Collaborator):

Add indentation appropriately, like this:

        anchors: An array of anchor embeddings, with shape [batch, feature_dim].
        positives: An array of positive embeddings
          (similar to anchors), with shape [batch, feature_dim].
        negatives: An array of negative embeddings
          (dissimilar to anchors), with shape [batch, feature_dim].
        axis: The axis along which to compute the distances
          (default is -1).
        p: The norm degree for distance calculation
          (default is 2 for Euclidean distance).
        margin: The minimum margin by which the positive distance
          should be smaller than the negative distance.
        eps: A small epsilon value to ensure numerical stability
          in the distance calculation.
        reduction: Specifies the reduction to apply to the
          output: 'none' | 'mean' | 'sum'.

If reduction is 'mean' or 'sum', returns a scalar.

References:
Learning shallow convolutional feature descriptors with triplet losses
vroulet (Collaborator):

Use the following formatting for references:

       References:
         V. Balntas et al, `Learning shallow convolutional feature descriptors with triplet losses
         <https://bmva-archive.org.uk/bmvc/2016/papers/paper119/abstract119.pdf>`_, 2016

by V. Balntas, E. Riba et al.
<https://bmva-archive.org.uk/bmvc/2016/papers/paper119/abstract119.pdf>
"""
chex.assert_type([anchors], float)
vroulet (Collaborator):

Remove the three chex.assert_type(...).

p: chex.Numeric = 2,
margin: chex.Numeric = 1.0,
eps: chex.Numeric = 1e-6,
reduction: str = 'none',
vroulet (Collaborator):

Remove the reduction option. No losses in optax reduce the losses after computation, so it is better to have this loss follow the same principle. The user can easily take care of the reduction after computing the losses.
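For example, the caller can apply any reduction directly on the returned per-example losses (a sketch, again assuming the function ends up exposed as optax.losses.triplet_margin_loss):

    import jax.numpy as jnp
    import optax

    anchors = jnp.ones((2, 4))
    positives = jnp.ones((2, 4))
    negatives = jnp.zeros((2, 4))

    per_example = optax.losses.triplet_margin_loss(anchors, positives, negatives)
    mean_loss = per_example.mean()  # what reduction='mean' would have computed
    sum_loss = per_example.sum()    # what reduction='sum' would have computed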

negative_distance = jnp.sqrt(jnp.power(anchors - negatives, p).sum(axis) + eps)
loss = jnp.maximum(positive_distance - negative_distance + margin, 0)
if reduction == 'mean':
vroulet (Collaborator):

As said above, remove the reduction options.

@Saanidhyavats

@vroulet we have worked on the suggested changes. It would be really helpful if you could squash the commits for us. Thank you!

cvnad1 requested a review from vroulet on December 13, 2024 05:34