
train_list.txt #19

Open
xz0305 opened this issue Sep 11, 2023 · 7 comments

Comments

xz0305 commented Sep 11, 2023

Is this file something we need to generate ourselves?

Carlyx (Collaborator) commented Sep 11, 2023

Yes, this file is generated during data preprocessing, which uses video-preprocessing. Alternatively, you can rewrite the dataloader to use mp4/png data directly.
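A minimal sketch of that dataloader rewrite, assuming a flat directory of mp4/png files (the class name is illustrative, and frame decoding, e.g. via cv2 or torchvision, is omitted):

```python
import glob
import os

class VideoFileDataset:
    """Index mp4/png files directly instead of reading train_list.txt.
    Sketch only: decoding frames into tensors is left out."""

    def __init__(self, root):
        # Collect all mp4 and png files in the directory, in sorted order
        self.paths = sorted(
            glob.glob(os.path.join(root, "*.mp4"))
            + glob.glob(os.path.join(root, "*.png"))
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # A real __getitem__ would decode and return frame tensors here
        return self.paths[idx]
```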

xz0305 (Author) commented Sep 11, 2023

Got it, thanks. Could you share your txt file? I'd like to see its format.

Carlyx (Collaborator) commented Sep 11, 2023

Hi, I no longer have the file, but it should look something like:

1.mp4
2.mp4
3.mp4

meaning the training set contains the three videos above.
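Given that format, a train_list.txt can be regenerated from a processed data directory with a short helper (the function name is illustrative, not from the repo):

```python
from pathlib import Path

def write_train_list(video_dir, out_path="train_list.txt", exts=(".mp4",)):
    """Write one video filename per line, matching the format sketched above."""
    names = sorted(p.name for p in Path(video_dir).iterdir()
                   if p.suffix in exts)
    Path(out_path).write_text("\n".join(names) + "\n")
    return names
```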

xz0305 (Author) commented Sep 11, 2023

Got it, thank you very much.

xz0305 (Author) commented Sep 11, 2023

In the loss function, why is the self-reconstruction pair used twice? This doesn't match what the paper describes:

        rec_loss = self.criterion_vgg(fake_selfpose, img_target).mean() * 2
        rec_loss += self.criterion_vgg(fake_selfexp, img_source).mean() * 2
        rec_loss += F.l1_loss(fake_selfpose, img_target)
        rec_loss += F.l1_loss(fake_selfexp, img_source)

Also, as I understand it, both fake_selfpose and fake_selfexp should be compared against img_source, so why is img_target used here?

xz0305 (Author) commented Sep 12, 2023

Hi, could you take a look at this?

Carlyx (Collaborator) commented Sep 12, 2023

Hi, thanks for pointing this out; it has been corrected. The goal of the self-reconstruction loss is to impose a self-generation constraint on both generators, so the ground truth should be consistent with the input. In our experiments we give the VGG term a larger weight and the L1 term a smaller one, and we found that small changes to these weights have little effect on the results.
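Following the correction discussed in this thread, both self-generated images are compared against the input img_source. A hedged sketch (criterion_vgg and the weights w_vgg=2.0, w_l1=1.0 stand in for the repo's actual perceptual loss and weighting, which are assumptions here):

```python
import torch
import torch.nn.functional as F

def self_reconstruction_loss(fake_selfpose, fake_selfexp, img_source,
                             criterion_vgg, w_vgg=2.0, w_l1=1.0):
    """Self-reconstruction loss after the fix: the ground truth for both
    self-generated images is the input img_source, not img_target."""
    loss = criterion_vgg(fake_selfpose, img_source).mean() * w_vgg
    loss = loss + criterion_vgg(fake_selfexp, img_source).mean() * w_vgg
    loss = loss + F.l1_loss(fake_selfpose, img_source) * w_l1
    loss = loss + F.l1_loss(fake_selfexp, img_source) * w_l1
    return loss
```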
