<!DOCTYPE html>
<html>
<head>
<meta content="en-us" http-equiv="Content-Language">
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<title>Chao Fan</title>
<meta charset="utf-8">
</head>
<body leftmargin="40" bgcolor="#FFFFFF" link="#444444" text="#444444">
<table border="0" id="table1" width="720">
<tbody>
<tr>
<td width="323">
<p align="center"><font face="Arial"><img border="0" src="2021091212575129-274x300.jpeg"></font></p>
</td>
<td>
<font face="Arial" size="5"><b> Chao Fan <span lang="zh-cn">樊超</span></b></font>
<p><font face="Arial" style="font-size: 11pt;"> Ph.D candidate in CS</font></p>
<p><font face="Arial" style="font-size: 11pt;"> SUSTech, Shenzhen, China</font></p>
<p><font face="Arial" style="font-size: 11pt;"> 12131100 at mail dot sustech dot edu dot cn</font></p>
</td>
</tr>
</tbody>
</table>
<table border="1" style="border-width: 0px;" width="820">
<tbody>
<tr>
<td style="border-style: none; border-width: medium;">
<p style="margin-top: 3px; margin-bottom: 3px;"><font face="Arial" style="font-size: 11pt;">
I work on Computer Vision and Deep Learning, and have published several papers on <b>Gait Recognition</b> at top venues including CVPR, ECCV, and T-PAMI (8 in total).
<br><br>
My past research spans <b>Gait Recognition + Self-Supervised Learning, Large Vision Models, and Generative Models</b>.
<br><br>
I am currently seeking <b>Internship</b> or <b>Visiting</b> opportunities as I conclude my Ph.D. and prepare for the next stage of my career.
<br><br>
The research directions I am most interested in center on <b>Human-Centered Vision Tasks</b>, particularly those involving <b>Representation Learning</b> and <b>Conditional Generative Models</b>.
<br><br>
I am also open to discussing other potential directions, so please feel free to get in touch!
</font></p>
</td>
</tr>
</tbody>
</table><br>
<p><b><font face="Arial" size="4">Selected Works</font></b> (* Equal Contribution)</p>
<p><span class="style8"><strong><font face="Arial"><a href="https://scholar.google.com/citations?user=lgDtKZcAAAAJ&hl=zh-CN"><font color="#808080">Full List (Google Scholar)</font></a><br></font></strong></span></p>
<table border="1" id="table2" style="border-width: 0px;" width="1154">
<tbody>
<tr>
<td style="border-style: none; border-width: medium;" valign="top" width="19"> </td>
<td bgcolor="#FFCC99" style="border-style: none; border-width: medium;" valign="top" width="14"> </td>
<td style="border-style: none; border-width: medium;" valign="top" width="1108">
<p style="margin-left: 10px; line-height: 150%; margin-top: 8px; margin-bottom: 8px;"><i><font face="Arial">Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark</font></i><font face="Arial"><i><font style="font-size: 12pt;"><br></font></i></font>
<font color="#000000" face="Arial" size="2"><b>Chao Fan</b>, Saihui Hou, Jilong Wang, Yongzhen Huang, and Shiqi Yu</font><b><font color="#000000" face="Arial" size="2"><br></font></b>
<font face="Arial" size="2"> <b>TPAMI2023</b>, Gait Recognition + Contrastive Learning</font><br>
<font face="Arial" size="2">
<a href="https://ieeexplore.ieee.org/document/10242019"><font color="#808080"><font color="#808080">Paper</font></font></a> <b><a href="https://github.com/ShiqiYu/OpenGait"><font color="#808080" face="Arial" size="2">code</font></a></b>
</td>
</tr>
<tr>
<td style="border-style: none; border-width: medium;" valign="top" width="19"> </td>
<td bgcolor="#FFCC99" style="border-style: none; border-width: medium;" valign="top" width="14"> </td>
<td style="border-style: none; border-width: medium;" valign="top" width="1108">
<p style="margin-left: 10px; line-height: 150%; margin-top: 8px; margin-bottom: 8px;"><i><font face="Arial">OpenGait: Revisiting Gait Recognition Toward Better Practicality</font></i><font face="Arial"><i><font style="font-size: 12pt;"><br></font></i></font>
<font color="#000000" face="Arial" size="2"><b>Chao Fan</b>, Junhao Liang, Chuanfu Shen, Saihui Hou, Yongzhen Huang, and Shiqi Yu</font><b><font color="#000000" face="Arial" size="2"><br></font></b>
<font face="Arial" size="2"> <b>CVPR 2023, Highlight</b>, a Comprehensive Benchmark Study for Gait Recognition</font><br>
<font face="Arial" size="2">
<a href="https://openaccess.thecvf.com/content/CVPR2023/papers/Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023_paper.pdf"><font color="#808080"><font color="#808080">Paper</font></font></a> <b><a href="https://github.com/ShiqiYu/OpenGait"><font color="#808080" face="Arial" size="2">code</font></a></b>
</td>
</tr>
<tr>
<td style="border-style: none; border-width: medium;" valign="top" width="19"> </td>
<td bgcolor="#FFCC99" style="border-style: none; border-width: medium;" valign="top" width="14"> </td>
<td style="border-style: none; border-width: medium;" valign="top" width="1108">
<p style="margin-left: 10px; line-height: 150%; margin-top: 8px; margin-bottom: 8px;"><i><font face="Arial">BigGait: Learning Gait Representation You Want by Large Vision Models</font></i><font face="Arial"><i><font style="font-size: 12pt;"><br></font></i></font>
<font color="#000000" face="Arial" size="2">Dingqiang Ye*, <b>Chao Fan*</b>, Jingzhe Ma, Xiaoming Liu, and Shiqi Yu</font><b><font color="#000000" face="Arial" size="2"><br></font></b>
<font face="Arial" size="2"> <b>CVPR 2024</b>, Gait Recognition + Large Vision Models</font><br>
<font face="Arial" size="2">
<a href="https://arxiv.org/pdf/2402.19122.pdf"><font color="#808080"><font color="#808080">Paper</font></font></a> <b><a href="https://github.com/ShiqiYu/OpenGait"><font color="#808080" face="Arial" size="2">code</font></a></b>
</td>
</tr>
<tr>
<td style="border-style: none; border-width: medium;" valign="top" width="19"> </td>
<td bgcolor="#FFCC99" style="border-style: none; border-width: medium;" valign="top" width="14"> </td>
<td style="border-style: none; border-width: medium;" valign="top" width="1108">
<p style="margin-left: 10px; line-height: 150%; margin-top: 8px; margin-bottom: 8px;"><i><font face="Arial">GaitEditer: Attribute Editing for Gait Representation Learning</font></i><font face="Arial"><i><font style="font-size: 12pt;"><br></font></i></font>
<font color="#000000" face="Arial" size="2">Jingzhe Ma*, Dingqiang Ye*, <b>Chao Fan</b>*, and Shiqi Yu</font><b><font color="#000000" face="Arial" size="2"><br></font></b>
<font face="Arial" size="2"> <b>Arxiv 2023 (Submitted to T-PAMI)</b>, Gait Recognition + GAN Inversion</font><br>
<font face="Arial" size="2">
<a href="https://arxiv.org/pdf/2303.05076.pdf"><font color="#808080"><font color="#808080">Paper</font></font></a> <b><a href="https://github.com/ShiqiYu/OpenGait"><font color="#808080" face="Arial" size="2">code</font></a></b>
</td>
</tr>
</tbody>
</table>
<p><b><font face="Arial" size="4"><br>
Activities</font></b></p>
<ul>
<li>
<p style="line-height: 150%; margin-left: 15px; margin-top: 0pt; margin-bottom: 0pt;"><font face="Arial" size="2"><b>Reviewer</b>: CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, and T-PAMI </font></p>
</li>
</ul>
<ul>
<li>
<p style="line-height: 150%; margin-left: 15px; margin-top: 0pt; margin-bottom: 0pt;"><font face="Arial" size="2"><b>Remote Visit</b> <a href="http://cvlab.cse.msu.edu">CV Lab</a> at Michigan State University (Sep. 2023 - Mar. 2024)</font>: Work on <a href="https://arxiv.org/pdf/2402.19122.pdf">BigGait</a> under the Supervision of Prof. <a href="https://scholar.google.com/citations?user=Bii0w1oAAAAJ&hl=en">Xiaoming Liu</a> </p>
</li>
</ul>
<ul>
<li>
<p style="line-height: 150%; margin-left: 15px; margin-top: 0pt; margin-bottom: 0pt;"><font face="Arial" size="2"> <b>Invited Talk: </b> Progress in Gait Recognition (Mar. 2024): <a href="https://event.baai.ac.cn/activities/768">Video (in Chinese) </a>, <a href="https://github.com/ChaoFan996/ChaoFan996.github.io/blob/main/240315-Progress%20in%20Gait%20Recognition.pdf">Slides (in English) </a> </font></p>
</li>
</ul>
<p></p>
<p><b><font face="Arial" size="4"><br>
Awards</font></b></p>
<ul>
<li>
<p style="line-height: 150%; margin-left: 15px; margin-top: 0pt; margin-bottom: 0pt;"><font face="Arial" size="2"><b>China Undergraduate Mathematical Contest in Modeling (CUMCM 2016)</b>: First Prize</font></p>
</li>
</ul>
<ul>
<li>
<p style="line-height: 150%; margin-left: 15px; margin-top: 0pt; margin-bottom: 0pt;"><font face="Arial" size="2"><b>China National Scholarship for Doctoral Students (2023)</b>: First Prize</font></p>
</li>
</ul>
<p></p>
<a href="https://www.easycounter.com/"> <img src="https://www.easycounter.com/counter.php?chaofan996" border="0" alt="Web Counter"></a><br><a href="https://www.easycounter.com/">Website Hit Counters</a>
</body>
</html>