
Thank you :) Wanted to show you my SRFormer_light model :) #10

Closed

Phhofm opened this issue Jun 13, 2023 · 5 comments
Labels
enhancement New feature or request

Comments

Phhofm commented Jun 13, 2023

Thank you for this network :)
I just wanted to show real quick that I have trained a 4x SRFormer_light model for anime super-resolution with real degradations (compression, noise, blur), and created visual outputs that can be compared to a SwinIR_light (= small) model I trained on the same dataset with the same config (same losses, gt_size, and batch size; both trained from scratch, with no pretrain). Here is one visual comparison (more outputs in the link below) of the input, SwinIR_light, and SRFormer_light:

[image: visual comparison of the input, SwinIR_light, and SRFormer_light outputs]

More visual outputs can be found here, and the trained SRFormer_light model file can be found here.
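For anyone wanting to try such a checkpoint, here is a minimal, hedged inference sketch in PyTorch. It assumes the `SRFormer` class from the official BasicSR-based repo (`basicsr/archs/srformer_arch.py`) and the commonly published light-model hyperparameters; none of these values are stated in this thread, so verify them against the training YAML that ships with the model file.

```python
# Minimal inference sketch (assumptions, not from this thread): running a
# trained 4x SRFormer_light checkpoint on one image. The import path and the
# "light" hyperparameters below follow the official BasicSR-based repo layout
# and the published light config -- verify against the model's training YAML.
import torch
from torchvision.io import read_image
from torchvision.utils import save_image

from basicsr.archs.srformer_arch import SRFormer  # assumed repo path

model = SRFormer(
    upscale=4, in_chans=3, img_size=64, window_size=16, img_range=1.0,
    depths=[6, 6, 6, 6], embed_dim=60, num_heads=[6, 6, 6, 6],
    mlp_ratio=2, upsampler='pixelshuffledirect',
)
state = torch.load('4xSRFormer_light.pth', map_location='cpu')
# BasicSR-style checkpoints usually nest weights under 'params'/'params_ema'.
model.load_state_dict(state.get('params_ema', state.get('params', state)))
model.eval()

lr = read_image('input.png').float().unsqueeze(0) / 255.0  # 1x3xHxW in [0,1]
with torch.no_grad():
    # depending on the arch version, H and W may need padding to a multiple
    # of window_size before the forward pass
    sr = model(lr).clamp(0, 1)
save_image(sr, 'output_x4.png')
```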

Phhofm changed the title from "Self-trained model" to "Thank you :) Wanted to show you my SRFormer_light model :)" on Jun 13, 2023
Z-YuPeng (Collaborator) commented

Great! The model you trained demonstrates the performance of SRFormer on anime SR. I want to express my sincere gratitude for your work.

Phhofm (Author) commented Jun 30, 2023

PS: I think SRFormer is impressive and its results are growing on me. I trained/finetuned another model (not lightweight, but an SRFormer base model), and I think the results are very good even when compared with HAT-L (a bigger and slower arch/model, trained with the same config). It was trained with OTF (on-the-fly) JPG compression and blur degradations; a rough sketch of such an OTF step follows the links below. Here are results compared with Real-ESRGAN (RRDBNet), HAT-S, and HAT-L:

https://imgsli.com/MTg5MDY3/0/3
https://imgsli.com/MTg5MDYy/0/3
https://imgsli.com/MTg5MDY1/0/3
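For context, an OTF pipeline of this kind degrades each HQ crop on the fly during training instead of preparing LQ images ahead of time. Below is a hedged sketch of one such step (blur, downscale, JPEG re-compression); the function name `degrade_otf` and the parameter ranges are illustrative assumptions, not this model's actual settings.

```python
# Hedged sketch of one OTF degradation step (blur -> 4x downscale -> JPEG
# re-compression) of the kind described above. The function name and the
# parameter ranges are illustrative assumptions, not this model's settings.
import io
import random

from PIL import Image, ImageFilter

def degrade_otf(hq: Image.Image, scale: int = 4) -> Image.Image:
    # random Gaussian blur on the HQ crop (the caller keeps the clean GT)
    if random.random() < 0.5:
        hq = hq.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.2, 2.0)))
    # bicubic downscale to the LQ resolution
    lq = hq.resize((hq.width // scale, hq.height // scale), Image.BICUBIC)
    # random-quality JPEG round trip to bake in compression artifacts
    buf = io.BytesIO()
    lq.save(buf, format='JPEG', quality=random.randint(40, 95))
    buf.seek(0)
    return Image.open(buf).convert('RGB')
```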

The model files (.pth, plus ONNX conversions in the onnx folder) with training info can be found here; a sketch of such an export follows the images below.

[images: seeufer, bibli, dearalice]
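On the ONNX conversions mentioned above: a typical export path is `torch.onnx.export` on the loaded PyTorch model. The sketch below is a hedged illustration (the example input size, axis names, and opset are assumptions), not the actual conversion recipe used for these files.

```python
# Hedged sketch of a .pth -> ONNX conversion via torch.onnx.export.
# `model` is a loaded SRFormer instance (see the earlier inference sketch);
# input size, axis names, and opset version are illustrative assumptions.
import torch

dummy = torch.randn(1, 3, 64, 64)  # example input for tracing
torch.onnx.export(
    model, dummy, '4xSRFormer.onnx',
    input_names=['input'], output_names=['output'],
    # allow variable image sizes; window-attention archs may still require
    # H and W to be multiples of the window size at runtime
    dynamic_axes={'input': {0: 'batch', 2: 'height', 3: 'width'},
                  'output': {0: 'batch', 2: 'height', 3: 'width'}},
    opset_version=17,
)
```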

Z-YuPeng (Collaborator) commented Jul 20, 2023

SRFormer has been accepted to ICCV 2023. We will enrich our repo soon and provide more demos, and we will add a link in our repo to the third-party model you trained so that people can learn more about SRFormer!

Z-YuPeng added the "enhancement" (New feature or request) label on Jul 20, 2023
Feynman1999 (Contributor) commented

> PS: I think SRFormer is impressive and its results are growing on me. […]

Excellent work! Can you share some info about your training dataset? That would be helpful to me!

terrainer commented Jul 29, 2023

> PS: I think SRFormer is impressive and its results are growing on me. […]
>
> Excellent work! Can you share some info about your training dataset? That would be helpful to me!

The dataset used for the anime pretrain is HFA2K_LUDVAE, available in the #dataset-releases channel of the Enhance Everything! Discord server.

The realistic dataset is Nomos8K, also in that Discord server.
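Since both HFA2K_LUDVAE and Nomos8K are HQ-image collections typically paired with OTF degradation during training, here is a hedged sketch of a minimal paired loader that reuses the `degrade_otf` sketch from earlier in the thread; the class name, crop size, and glob pattern are illustrative assumptions.

```python
# Hedged sketch: a minimal paired loader for an HQ-only dataset (such as
# Nomos8K), reusing the degrade_otf() sketch from earlier in the thread to
# create LQ inputs on the fly. Class name, crop size, and glob pattern are
# illustrative; images are assumed to be at least gt_size on each side.
import random
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms.functional import to_tensor

class OTFPairDataset(Dataset):
    def __init__(self, hq_dir: str, gt_size: int = 256, scale: int = 4):
        self.paths = sorted(Path(hq_dir).glob('*.png'))
        self.gt_size, self.scale = gt_size, scale

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        hq = Image.open(self.paths[i]).convert('RGB')
        # random gt_size x gt_size crop of the HQ image
        x = random.randint(0, hq.width - self.gt_size)
        y = random.randint(0, hq.height - self.gt_size)
        hq = hq.crop((x, y, x + self.gt_size, y + self.gt_size))
        lq = degrade_otf(hq, scale=self.scale)  # defined in the OTF sketch
        return to_tensor(lq), to_tensor(hq)
```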
