# cv.yaml
name:
first: Chaitanya
last: Ahuja
# phone:
email: mail@chahuja.com
pdf: /cv.pdf
src: https://github.com/chahuja/cv
url: "chahuja.com"
social:
github: chahuja
twitter: chahuja
google_scholar: CX8zqPoAAAAJ
# LaTeX formatting.
style: banking # casual, classic, oldstyle, or banking
color: blue # blue, orange, green, red, purple, grey and black
color2: '0.25,0.25,0.25' # Make the font under the name a darker grey.
# (tag, section type, title)
order:
- [about, NONE]
- [news, News]
# - [industry, Experience]
# - [research, Research Experience]
- [book_publications, Book Chapters]
# - [preprints_publications, Preprints]
- [selected_publications, Selected Publications]
- [resources_publications, Resources]
- [education, Education]
- [talks, Academic Talks]
- [advising, Student Mentorship]
- [teaching, Teaching Experience]
- [service, Professional Activities and Service]
#- [honors, Honors \& Awards]
# - [projects, Projects]
# - [coursework, CMU Graduate Coursework]
# - [skills, Skills]
# - [NEWPAGE, NEWPAGE]
# - [all_publications, All Publications]
# - [activities, Activities]
# primary - pdf
# success - webpage
# info - code
# secondary - gray
# dark - abstract
# danger - red
# warning - yellow
news:
- date: Feb 2023
info: 'Survey paper on Co-Speech Gestures accepted in the STAR track at Eurographics 2023. <a href="https://arxiv.org/abs/2301.05339" target="_blank"><button type="button" class="btn btn-primary">pdf</button></a>'
- date: May 2022
info: 'Excited to join Meta AI as a Research Scientist'
- date: April 2022
info: 'Defended my PhD dissertation on <b>Communication Beyond Words: Grounding Visual Body Motion with Language</b> <a href="https://chahuja.com/files/chaitanya_ahuja_phd_thesis.pdf" target="_blank"><button type="button" class="btn btn-primary">pdf</button></a>'
- date: April 2022
info: 'Humbled to be a Highlighted Reviewer at ICLR 2022'
- date: March 2022
info: 'Paper on Low-Resource Adaptation of Spatio-Temporal Crossmodal Generative Models accepted at CVPR 2022'
- date: May 2021
info: 'We are organizing the <b>First Workshop on Crossmodal Social Animation</b> at <a href="http://iccv2021.thecvf.com/">ICCV2021</a>. Consider submitting your work. <a href="http://sites.google.com/view/xs-anim" target="_blank"><button type="button" class="btn btn-success">webpage</button></a>'
- date: December 2020
info: 'Successfully proposed my thesis titled <b>Communication Beyond Words: Grounding Visual Body Motion with Language</b> <a href="https://drive.google.com/open?id=1lrk5J4vJjBirAyZMOK6pbozGkB4DKebO&authuser=cahuja%40andrew.cmu.edu&usp=drive_fs" target="_blank"><button type="button" class="btn btn-primary">pdf</button></a>'
- date: September 2020
info: "Paper on Co-Speech Gesture Generation from Language accepted at Findings of EMNLP 2020"
- date: September 2020
info: "Paper on Impact of Personality on Non-verbal Behaviours accepted at IVA 2020"
- date: August 2020
info: 'PATS (Pose-Audio-Transcripts-Style) Dataset released. <a href="http://chahuja.com/pats" target="_blank"><button type="button" class="btn btn-success">webpage</button></a>'
- date: August 2020
info: 'Code for Style Transfer for Co-Speech Gesture Animation released. <a href="https://github.com/chahuja/mix-stage" target="_blank"><button type="button" class="btn btn-info">code</button></a>'
- date: July 2020
info: "Paper on Style Transfer for Co-Speech Gesture Animation accepted at ECCV 2020"
- date: August 2019
info: 'Paper on Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations accepted at ICMI 2019. <a href="https://arxiv.org/pdf/1910.02181.pdf" target="_blank"><button type="button" class="btn btn-primary">pdf</button></a>'
- date: August 2019
info: "Honourable mention at the LTI SRS Symposium for my talk on Natural Language Grounded Pose Forecasting"
- date: July 2019
info: 'Paper on Natural Language Grounded Pose Forecasting accepted at 3DV 2019 <a href="https://arxiv.org/pdf/1907.01108.pdf" target="_blank"><button type="button" class="btn btn-primary">pdf</button></a> <a href="http://chahuja.com/language2pose" target="_blank"><button type="button" class="btn btn-success">webpage</button></a>'
- date: March 2018
info: "Excited to work at Facebook Reality Labs in Summer'18"
- date: January 2018
info: 'Paper on Lattice Recurrent Units accepted at AAAI 2018 <a href="https://arxiv.org/abs/1710.02254" target="_blank"><button type="button" class="btn btn-primary">pdf</button></a> <a href="http://chahuja.com/lru" target="_blank"><button type="button" class="btn btn-success">webpage</button></a>'
- date: October 2017
info: 'Our survey on Multimodal Machine Learning is online <a href="https://arxiv.org/pdf/1705.09406.pdf" target="_blank"><button type="button" class="btn btn-primary">pdf</button></a>'
#about: 'I am a PhD candidate at the Language Technologies Institute at <b>Carnegie Mellon University</b>. I am advised by [Dr. Louis-Philippe Morency (LP)](https://www.cs.cmu.edu/~morency/) in the Multicomp Lab and we work on anything multimodal. Lately, my research efforts have been directed towards <b>grounding body gestures</b> in Speech, and Language. As an undergraduate researcher at <b>Indian Institute of Technology(IIT), Kanpur</b>, I worked with [Dr. Rajesh Hegde](http://home.iitk.ac.in/~rhegde/) on <b>Spatial Audio</b> and <b>Speaker Diarization</b>, and [Dr. Vinay Namboodiri](https://www.cse.iitk.ac.in/users/vinaypn/) on <b>Video Summarization</b>.'
#about: 'I am a final year PhD candidate at the Language Technologies Institute at <b>Carnegie Mellon University</b>. I am advised by [Dr. Louis-Philippe Morency (LP)](https://www.cs.cmu.edu/~morency/) in the Multicomp Lab. My research focuses on endowing agents and remote avatars with realistic <b>Virtual Presence</b> and <b>Social Intelligence</b> by means of <b>Multimodal Generative Modeling</b>. These directions have the potential of making a meaningful impact on remote communication, collaborations, education and mental health for human-human and human-robot interaction, especially now when a lot of social and work spaces are gradually moving online. <br> <br> In the past, I have interned at Facebook Reality Labs on generation of nonverbal behaviours for a communicating avatar. As an undergraduate researcher at <b>Indian Institute of Technology(IIT), Kanpur</b>, I worked with [Dr. Rajesh Hegde](http://home.iitk.ac.in/~rhegde/) on <b>Spatial Audio</b> and <b>Speaker Diarization</b>, and [Dr. Vinay Namboodiri](https://www.cse.iitk.ac.in/users/vinaypn/) on <b>Video Summarization</b>'
about: 'I am a Research Scientist at Meta AI working on Human-Centric Multimodal Machine Learning and Generative Modeling. Prior to that, I completed my PhD at the Language Technologies Institute at <b>Carnegie Mellon University</b>, where I was advised by [Dr. Louis-Philippe Morency (LP)](https://www.cs.cmu.edu/~morency/) in the [Multicomp Lab](http://multicomp.cs.cmu.edu/). My research focused on endowing agents and remote avatars with <b>Social Intelligence</b> by means of <b>Multimodal Learning</b>. One of the use cases where we extensively apply these technologies is <b>Computer Animation</b>. These directions have the potential of making a meaningful impact on remote communication, collaboration, education and mental health for human-human and human-robot interaction, especially now that many social and work spaces are gradually moving online. <br> <br> In the past, I have also interned at Facebook Reality Labs on generation of nonverbal behaviours for a communicating avatar. As an undergraduate researcher at the <b>Indian Institute of Technology (IIT), Kanpur</b>, I worked with [Dr. Rajesh Hegde](http://home.iitk.ac.in/~rhegde/) on <b>Spatial Audio</b> and <b>Speaker Diarization</b>, and [Dr. Vinay Namboodiri](https://www.cse.iitk.ac.in/users/vinaypn/) on <b>Video Summarization</b>.'
selected_publications:
name: "C. Ahuja"
file: selected.bib
book_publications:
name: "C. Ahuja"
file: book.bib
preprints_publications:
name: "C. Ahuja"
file: preprints.bib
resources_publications:
name: "C. Ahuja"
file: resources.bib
education:
- school: Carnegie Mellon University
location: Pittsburgh, PA
degree: Ph.D. in Language Technologies
dates: 2015 -- 2022
overallGPA: 4.02/4.00
details:
- "Thesis: <a href='https://chahuja.com/files/chaitanya_ahuja_phd_thesis.pdf'>Communication Beyond Words: Grounding Visual Body Motion with Language</a>"
- "Advisor: <a href='https://www.cs.cmu.edu/~morency/'>Louis-Philippe Morency</a>"
- school: Indian Institute of Technology
location: Kanpur, India
degree: B.Tech. in Electrical Engineering
dates: 2011 -- 2015
overallGPA: 9.5/10
details:
- "Advisors: <a href='http://home.iitk.ac.in/~rhegde/'>Rajesh Hegde</a>, <a href='https://vinaypn.github.io/'>Vinay P. Namboodiri</a>"
# talks:
# - title: EMNLP 2020
# link: https://slideslive.com/38940175/no-gestures-left-behind-learning-relationships-between-spoken-language-and-freeform-gestures
# kind: other
# filename: emnlp2020_talk.png
# - title: ECCV 2020
# link: https://www.youtube.com/embed/L7ZGHmMJLCc
# kind: youtube
talks:
- title: "Communication Beyond Words: Grounding Visual Body Motion with Spoken Language"
location: KTH Stockholm, Online
year: April 2021
- title: Learning Relationships between Spoken Language and Freeform Gestures
location: EMNLP 2020 Workshop on NLP Beyond Text, Online
year: November 2020
url: "https://slideslive.com/38940175/no-gestures-left-behind-learning-relationships-between-spoken-language-and-freeform-gestures"
- title: Style Transfer for Co-speech Gesture Generation
year: September 2020
url: "https://www.youtube.com/embed/L7ZGHmMJLCc"
- title: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations
location: ACM International Conference on Multimodal Interaction, Suzhou, China
year: October 2019
- title: Natural Language Grounded Pose Forecasting
location: LTI Student Research Symposium, Pittsburgh PA
year: August 2019
teaching:
- name: <a href="https://structuredprediction11763.github.io/structuredprediction.github.io/">Structured Prediction for Language and Other Discrete Data</a>
semester: Spring 2018
short: CMU 11-763
position: Head TA
- name: Multimodal Machine Learning
semester: Spring 2017
short: CMU 11-777
position: Head TA
service:
- details: "Co-organizer: ICCV 2021 First Workshop on Crossmodal Social Animation"
year: 2021
url: https://sites.google.com/view/xs-anim
- details: "Co-organizer: Multimodal Machine Learning Reading Group, CMU"
year: Spring 2020
- details: "Conference Program Committee: NeurIPS, SIGGRAPH, ICLR, ACL, EMNLP, ACM Multimedia, ICMI"
- details: "Workshop Program Committee: NeurIPS workshop on Multimodal Machine Learning, ACL Workshop on Multimodal Language, NAACL-HLT Student Research Workshop, ICMI GENEA Workshop"
- details: "Grant Reviewer: Army Research Office (ARO)"
- details: CMU Graduate Applicant Support Program Volunteer
year: 2020
- details: CMU AI Undergraduate Research Mentor
year: 2020-21
- details: CMU Graduate Student Association Representative for Language Technologies Institute
year: 2017
advising:
- name: Dong Won Lee
details: "(CMU BS → CMU MS in Machine Learning): Self-supervised generative models."
url: https://www.linkedin.com/in/don-dong-won-lee-ab964b172/
- name: Shradha Sehgal
details: "(IIIT Hyderabad B.Tech.): Evaluation of generative models."
url: https://web.iiit.ac.in/~shradha.sehgal/
- name: Arvin Wu
details: "(CMU BS): Social intelligence benchmarking."
- name: Nikitha Murikinati
details: "(CMU BS): Study of relationships between co-speech gestures and prosody."
url: https://www.linkedin.com/in/nikithamurikinati/
- name: Sharath Rao
details: "(CMU MS → PlayStation): Back-channel prediction in dyadic conversations."
url: https://www.linkedin.com/in/sharathrao1/
- name: Qingtao Hu
details: "(CMU MS → Amazon): Unsupervised disentanglement of style and content in images."
- name: Anirudha Rayasam
details: "(CMU MS → Google): Language grounded pose forecasting."