FixMatch with MixUp #64

Open
Ryoo72 opened this issue May 24, 2021 · 0 comments
Ryoo72 commented May 24, 2021

Thanks for the great research. May I ask a question? The paper says, "One may replace strong augmentation in FixMatch with modality-agnostic augmentation strategies, such as MixUp", but I don't fully understand this part. I am curious about the specific way MixUp would be used in FixMatch.
For example, suppose there are two unlabeled images and FixMatch mixes them with a 0.4:0.6 ratio, then applies the consistency loss between the pseudo-label and the prediction on the mixed image. In that case, should the ideal target (built from the weakly augmented images) be a soft label like [..., 0.4, 0.6, ...]? Am I understanding this correctly?
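To make the question concrete, here is a rough PyTorch sketch of what I imagine (this is only my guess, not code from the paper; `model`, `u_weak`, `alpha`, and `threshold` are placeholder names, and the way I combine the confidence masks is just one possible choice):

```python
import torch
import torch.nn.functional as F

def mixup_consistency_loss(model, u_weak, alpha=0.75, threshold=0.95):
    """My guess at a FixMatch-style unlabeled loss where MixUp replaces strong augmentation."""
    # 1. Pseudo-labels from the weakly augmented images (no gradient), as in FixMatch.
    with torch.no_grad():
        probs = torch.softmax(model(u_weak), dim=-1)
        max_probs, hard_labels = probs.max(dim=-1)
        mask = (max_probs >= threshold).float()                 # FixMatch confidence mask
        targets = F.one_hot(hard_labels, probs.size(-1)).float()

    # 2. MixUp: pair each image with a shuffled partner and interpolate the
    #    inputs and their pseudo-label targets with the same lambda.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(u_weak.size(0))
    mixed_inputs = lam * u_weak + (1.0 - lam) * u_weak[perm]
    mixed_targets = lam * targets + (1.0 - lam) * targets[perm]
    mixed_mask = lam * mask + (1.0 - lam) * mask[perm]          # one possible way to mix the masks

    # 3. Consistency: the prediction on the mixed image should match the mixed
    #    soft target, e.g. [..., 0.4, 0.6, ...] for a 0.4:0.6 mix of two classes.
    log_probs = F.log_softmax(model(mixed_inputs), dim=-1)
    per_example = -(mixed_targets * log_probs).sum(dim=-1)
    return (per_example * mixed_mask).mean()
```

In particular, I mix the one-hot pseudo-labels with the same lambda as the images, which is where the [..., 0.4, 0.6, ...] target in my example would come from. Thanks for reading.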
