
Poor results or wrong usage of GLEAN on face images #3

ewrfcas opened this issue Jun 11, 2021 · 18 comments

ewrfcas commented Jun 11, 2021

Hi, I have tried GLEAN in mmediting for 64->1024 face SR, but the generated results are very poor.
My command is:

python restoration_demo.py configs/restorers/glean/glean_ffhq_16x.py workdirs/glean_ffhq_16x_20210527-61a3afad.pth tests/data/1009.png preds/1009.png --device 2

My input is a 64x64 face image [input image attached], and the output is [output image attached].

ewrfcas (Author) commented Jun 11, 2021

Here is the log: [log screenshot attached]

ckkelvinchan (Owner)

Hello, did you use the bicubic downsampling kernel?

In addition, it seems that restoration_inference did not normalize the images from [-1, 1] back to [0, 1]. As a workaround, you can normalize the output in your backbone by using output = (output + 1) / 2.0 and see how the results look. I will see how to modify restoration_inference later.
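A minimal sketch of that workaround, assuming you wrap the backbone yourself (the class and names below are illustrative, not the actual MMEditing implementation):

    # Illustrative only: wrap an existing backbone so its output is mapped from
    # [-1, 1] to [0, 1], as suggested above.
    import torch.nn as nn

    class RescaledBackbone(nn.Module):
        """Wraps a backbone and rescales its output from [-1, 1] to [0, 1]."""

        def __init__(self, backbone: nn.Module):
            super().__init__()
            self.backbone = backbone

        def forward(self, *args, **kwargs):
            output = self.backbone(*args, **kwargs)
            return (output + 1) / 2.0  # the suggested rescaling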

yzxing87 commented Jun 11, 2021

Same issue here. In my case, the maximum value of the output exceeds 1 and the minimum value is greater than 0.

ckkelvinchan (Owner)

> Hello, did you use the bicubic downsampling kernel?
>
> In addition, it seems that restoration_inference did not normalize the images from [-1, 1] back to [0, 1]. As a workaround, you can normalize the output in your backbone by using output = (output + 1) / 2.0 and see how the results look. I will see how to modify restoration_inference later.

Sorry, it seems this is not the problem after all; the code should already normalize the output back to [0, 1].

I just tried two images in CelebA-HQ and they work fine. Could you please try the images in CelebA-HQ? Please note that MATLAB imresize should be used for downsampling.
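For reference, the 64x64 LQ input should come from bicubic downsampling with antialiasing, as MATLAB imresize does. A minimal sketch assuming Pillow (whose convolution-based filters antialias on downscaling, so the result is close to, but not bit-exact with, MATLAB imresize):

    # Approximate MATLAB `imresize(img, 1/16, 'bicubic')` with Pillow.
    from PIL import Image

    hq = Image.open('00001.png')  # 1024x1024 HQ face image
    lq = hq.resize((hq.width // 16, hq.height // 16), resample=Image.BICUBIC)
    lq.save('00001_lq.png')       # 64x64 LQ input for GLEAN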

ckkelvinchan (Owner)

How about trying the images here? These two images work fine on my side.

yzxing87

> How about trying the images here? These two images work fine on my side.

Hi, I have tried your images with this command:

python demo/restoration_demo.py configs/restorers/glean/glean_ffhq_16x.py \
    pretrain/glean_ffhq_16x_20210527-61a3afad.pth \
    00001.png \
    results/00001.png \
    --device 1

But the output is still poor.

Is this due to incorrect usage on my part? Thank you.

ckkelvinchan (Owner)

I used this command:

python demo/restoration_demo.py configs/restorers/glean/glean_ffhq_16x.py https://download.openmmlab.com/mmediting/restorers/glean/glean_ffhq_16x_20210527-61a3afad.pth ./00001.png outputs/00001.png

Did you modify the code?

Here are my results: results.zip

ewrfcas (Author) commented Jun 11, 2021

Still fails with bicubic resizing and rescaling the data to [0, 1].
Here is the code in restoration_inference.py: [code screenshot attached]
Failed results.

ckkelvinchan (Owner)

> Still fails with bicubic resizing and rescaling the data to [0, 1].
> Here is the code in restoration_inference.py: [code screenshot attached]
> Failed results.

How about removing data['lq'] = (data['lq'] + 1) / 2? When this line is added, the input to the network is in [0, 1], which is incorrect.

I used the latest version of the MMEditing code without any modifications.

ewrfcas (Author) commented Jun 11, 2021

Should the output be converted back to [-1, 1]?

ckkelvinchan (Owner)

> Should the output be converted back to [-1, 1]?

The code handles the conversion itself, so there is no need to modify it; the original code will do.

ewrfcas (Author) commented Jun 11, 2021

> Still fails with bicubic resizing and rescaling the data to [0, 1].
> Here is the code in restoration_inference.py: [code screenshot attached]
> Failed results.
>
> How about removing data['lq'] = (data['lq'] + 1) / 2? When this line is added, the input to the network is in [0, 1], which is incorrect.
>
> I used the latest version of the MMEditing code without any modifications.

Sorry, I am confused about the input range. Which is the correct value range, [0, 1] or [-1, 1]?

ckkelvinchan (Owner)

> Still fails with bicubic resizing and rescaling the data to [0, 1].
> Here is the code in restoration_inference.py: [code screenshot attached]
> Failed results.
>
> How about removing data['lq'] = (data['lq'] + 1) / 2? When this line is added, the input to the network is in [0, 1], which is incorrect.
> I used the latest version of the MMEditing code without any modifications.
>
> Sorry, I am confused about the input range. Which is the correct value range, [0, 1] or [-1, 1]?

The inputs and outputs of GLEAN are in [-1, 1], but you do not need to modify the code, as the conversion is already implemented.
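To illustrate the convention (a sketch only, not the actual restoration_inference code; the model call is simplified):

    # Sketch of the value-range handling described above: read the LQ image in
    # [0, 1], map it to [-1, 1] for GLEAN, and map the output back to [0, 1].
    import torch

    def glean_forward(model, lq_01: torch.Tensor) -> torch.Tensor:
        """lq_01: LQ input in [0, 1], shape (N, 3, 64, 64)."""
        lq = lq_01 * 2 - 1                    # [0, 1] -> [-1, 1]
        with torch.no_grad():
            out = model(lq)                   # GLEAN works in [-1, 1]
        return ((out + 1) / 2).clamp(0, 1)    # [-1, 1] -> [0, 1]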

ewrfcas (Author) commented Jun 11, 2021

> Still fails with bicubic resizing and rescaling the data to [0, 1].
> Here is the code in restoration_inference.py: [code screenshot attached]
> Failed results.
>
> How about removing data['lq'] = (data['lq'] + 1) / 2? When this line is added, the input to the network is in [0, 1], which is incorrect.
> I used the latest version of the MMEditing code without any modifications.
>
> Sorry, I am confused about the input range. Which is the correct value range, [0, 1] or [-1, 1]?
>
> The inputs and outputs of GLEAN are in [-1, 1], but you do not need to modify the code, as the conversion is already implemented.

Thanks. I used the original code to test the image given above, but the result still fails. [output image attached]

ckkelvinchan (Owner)

I was able to reproduce your error just now. Please remove --device 1 for the moment; I will investigate why it happens.

yzxing87

Thanks, removing --device works! It seems that the data and the model are placed on different devices. If I set CUDA_VISIBLE_DEVICES at the very beginning, the demo also works fine when --device is set to other GPUs.
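For example, a minimal sketch of that approach (illustrative, not taken from the demo script): pin the process to a single GPU before CUDA is initialized, so the model and the data cannot end up on different devices.

    # Illustrative only: select GPU 1 for the whole process. This must run
    # before CUDA is first initialized (in practice, before importing torch).
    import os
    os.environ['CUDA_VISIBLE_DEVICES'] = '1'

    import torch
    print(torch.cuda.device_count())  # -> 1; the selected GPU is now cuda:0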

ewrfcas (Author) commented Jun 11, 2021

> I was able to reproduce your error just now. Please remove --device 1 for the moment; I will investigate why it happens.

Nice, it works. Maybe the model failed to load weights properly with --device?

ckkelvinchan (Owner) commented Jun 11, 2021

It may be, thank you for letting me know :) I will look into the problem~

I will keep this issue open for the moment in case others encounter this problem. Thanks again.
