Can't get the right results #4
Comments
Do we need to normalize the CLIP embedding before inputting it to the model?
I downloaded several images from laion2B-en-aesthetic and used the CLIP model (ViT-L/14) to compute embeddings, which I then fed to the NSFW detector. However, the results differed from those shown on laion2B-en-aesthetic.
Yes, you need to normalize the input embeddings.
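The normalization referred to above is L2-normalization of the CLIP embedding before it is passed to the detector. A minimal sketch (the variable names `image_features` and `nsfw_model` are placeholders, not from this repo):

```python
import numpy as np

def l2_normalize(emb: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """L2-normalize an embedding (or batch of embeddings) along the last axis."""
    norm = np.linalg.norm(emb, axis=-1, keepdims=True)
    return emb / np.maximum(norm, eps)

# Hypothetical usage: `image_features` is the raw ViT-L/14 CLIP image embedding
# (shape [batch, 768]) and `nsfw_model` is the loaded detector:
#   features = l2_normalize(image_features.astype(np.float32))
#   scores = nsfw_model.predict(features)
```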
Yes, I tried to leverage the normalization from improved-aesthetic-predictor.
But the results still did not match those shown on laion2B-en-aesthetic. I checked the img_embs in the provided dataset; their data type is float16. I tried fp16 inference to get float16 embeddings, but the results were still wrong. Did I miss something? Thanks.
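One possible explanation for the fp16 mismatch is precision: a sketch of comparing against the stored embeddings by computing and normalizing in float32 first, then downcasting to float16 only at the end. This fp32-then-downcast ordering is an assumption for illustration, not something confirmed in this thread:

```python
import numpy as np

def prepare_embedding(raw_emb: np.ndarray) -> np.ndarray:
    """Normalize a CLIP embedding in float32, then downcast to float16.

    Assumption (not confirmed here): the img_emb column in the dataset was
    produced by normalizing at full precision before storing as float16, so
    normalizing directly in fp16 can accumulate rounding differences.
    """
    emb32 = raw_emb.astype(np.float32)
    emb32 /= np.linalg.norm(emb32, axis=-1, keepdims=True)
    return emb32.astype(np.float16)
```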
I constructed the CLIP model as follows:
@rom1504 @christophschuhmann Any ideas on my problem? Thanks.
Hi there,
When I used this model, I fed it safe images as inputs but got the opposite results. My code is almost identical to yours.
I also encountered several warnings during inference. My environment: autokeras==1.0.19 and tensorflow==2.9.1. Did I miss something?