---
layout: default
tags: about
---
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script type="text/javascript">
  // Toggle the expanded bio text: #readMore is the "read more" link, #more is the hidden content.
  function readMore() {
    $('#readMore').hide();
    $('#more').show();
  }
  function readLess() {
    $('#readMore').show();
    $('#more').hide();
  }
</script>
<img src="images/profile/adi_ny_crop.png" alt="Aditya Agarwal" width="240" style="float: right; padding: 20px; border-radius: 50%;" />
<div class="bio" style="text-align:justify">
<p>
I am a first-year EECS PhD student at MIT, advised by Professors <a href="https://people.csail.mit.edu/lpk/" class="uline-special">Leslie Kaelbling</a> and <a href="https://people.csail.mit.edu/tlp/" class="uline-special">Tomas Lozano-Perez</a>. I am a member of the <a href="https://lis.csail.mit.edu/" class="uline-special">Learning and Intelligent Systems (LIS)</a> Group, part of the broader <a href="https://www.csail.mit.edu/" class="uline-special">CSAIL</a> <a href="https://ei.csail.mit.edu/" class="uline-special">Embodied Intelligence (EI)</a> Group.
I am broadly interested in problems at the intersection of robotic perception and task and motion planning (TAMP). Previously, I was a visiting researcher at <a href="https://montrealrobotics.ca/" class="uline-special">REAL</a> at the <a href="https://www.umontreal.ca/en/" class="uline-special">Université de Montréal</a>, affiliated with <a href="https://mila.quebec/en/" class="uline-special">Mila</a>, where I was advised by Prof. <a href="https://liampaull.ca/" class="uline-special">Liam Paull</a>.
<!-- I am a research intern at the <a href="https://montrealrobotics.ca/" class="uline-special">Robotics and Embodied AI Lab (REAL)</a> at <a href="" class="uline-special">Université de Montréal</a> and <a href="https://mila.quebec/en/" class="uline-special">Mila</a>. -->
<!-- I work on representation learning for robotics systems, supervised by professors <a href="https://liampaull.ca/" class="uline-special">Liam Paull</a> (UdeM) and <a href="http://www.cs.toronto.edu/~florian/" class="uline-special">Florian Shkurti</a> (UofT). -->
<!-- I will be joining the <a href="https://lis.csail.mit.edu/" class="uline-special">Learning and Intelligent Systems (LIS) Group</a> as a PhD student in EECS at <a href="https://www.csail.mit.edu/" class="uline-special">MIT CSAIL</a> this fall. -->
<!-- I will work at the intersection of perception and task & motion planing for robotics systems supervised by professors <a href="https://people.csail.mit.edu/lpk/" class="uline-special">Leslie Pack Kaelbling</a> and <a href="https://people.csail.mit.edu/tlp/" class="uline-special">Tomas Lozano-Perez</a>, with the overarching goal of building general-purpose and autonomous robots that can seamlessly integrate with humans. -->
</p>
<p>
I completed my Master's by Research at IIIT Hyderabad, where I was advised by Professors <a href="https://scholar.google.com/citations?user=U9dH-DoAAAAJ&hl=en" class="uline-special">C V Jawahar</a> and <a href="https://vinaypn.github.io/" class="uline-special">Vinay Namboodiri</a> in the computer vision (<a href="https://cvit.iiit.ac.in/" class="uline-special">CVIT</a>) lab, and by Professors <a href="https://scholar.google.co.in/citations?user=QDuPGHwAAAAJ&hl=en" class="uline-special">Madhava Krishna</a> and <a href="https://cs.brown.edu/people/ssrinath/" class="uline-special">Srinath Sridhar</a> (Brown University) in the robotics (<a href="https://robotics.iiit.ac.in/" class="uline-special">RRC</a>) lab.
<!-- I completed my MS by research from <a href="https://www.iiit.ac.in/" class="uline-special">IIIT Hyderabad</a> supervised by professors <a href="https://scholar.google.com/citations?user=U9dH-DoAAAAJ&hl=en" class="uline-special">C V Jawahar</a> and <a href="https://vinaypn.github.io/" class="uline-special">Vinay Namboodiri</a> in the computer vision (<a href="https://cvit.iiit.ac.in/" class="uline-special">CVIT</a>) lab and by Prof. <a href="https://scholar.google.co.in/citations?user=QDuPGHwAAAAJ&hl=en" class="uline-special">Madhava Krishna</a> in the robotics (<a href="https://robotics.iiit.ac.in/" class="uline-special">RRC</a>) lab. -->
<!-- I am an MS by research student at IIIT Hyderabad. I am advised by <a href="https://scholar.google.com/citations?user=U9dH-DoAAAAJ&hl=en" class="uline-special">Prof. C V Jawahar</a> and <a href="https://vinaypn.github.io/" class="uline-special">Prof. Vinay Namboodiri</a> at the computer vision (<a href="https://cvit.iiit.ac.in/" class="uline-special">CVIT</a>) Lab, and by <a href="https://scholar.google.co.in/citations?user=QDuPGHwAAAAJ&hl=en" class="uline-special">Prof. Madhava Krishna</a> at the robotics (<a href="https://robotics.iiit.ac.in/" class="uline-special">RRC</a>) Lab of IIIT Hyderabad. -->
My work spanned the broad areas of talking-face generation, video understanding and generation, and robotic perception & manipulation.
<!-- My work spanned the areas of 3D shape completion, video understanding, implicit representations, robotic manipulation, and talking-face generation. -->
</p>
<!-- <p>
My research interests lie broadly at the intersection of <strong>computer vision</strong> and <strong>robotics</strong>.
My long-term goal is to design task-driven representations of the 3D world that can enable embodied agents to perceive and interact with the 3D environment intelligently, and perform various robotic tasks efficiently.
</p> -->
<br>
<p>
Apart from academia, I spent time as a <a href="https://www.youtube.com/watch?v=-ZEXU20tkFw" class="uline-special">Software Engineer</a> at Microsoft India, working in the Bing (People Also Ask feature) and Azure (<a href="" class="uline-special">Education</a> and Healthcare initiatives) organizations. My work was covered in the press.
<!-- Previously, I was a <a href="https://www.youtube.com/watch?v=-ZEXU20tkFw" class="uline-special">Software Engineer at Microsoft</a> India in the People Also Ask (PAA) team. I worked on techniques in deep learning and NLP to show a block of related questions and answers for a user query on Bing's search page. -->
</p>
<br>
<p>
I completed my Bachelor's in Computer Science at <a href="https://pes.edu/" class="uline-special">PES University</a> (formerly PESIT), Bangalore, where I worked on sound event detection and localization. I also spent a considerable amount of time interning at
the <a href="https://www.ucalgary.ca/" class="uline-special">University of Calgary</a> (through <a href="https://www.mitacs.ca/en/programs/globalink/globalink-research-internship" class="uline-special">MITACS</a>) with <a href="https://contacts.ucalgary.ca/info/enel/profiles/168-42833" class="uline-special">Prof. Mike Smith</a> and at <a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-india/" class="uline-special">Microsoft Research India</a>.
<!-- I completed my Bachelors from <a href="https://pes.edu/" class="uline-special">PES University</a> (formerly PESIT) Bangalore in Computer Science. At PESIT, I worked in the areas of sound event detection and localization. -->
<!-- I also spent a summer as a MITACS research intern at the <a href="https://www.ucalgary.ca/" class="uline-special">University of Calgary</a> on localizing an audio noise nuisance called the Ranchlands Hum, supervised by Prof. <a href="http://people.ucalgary.ca/~smithmr/MRSmith_DepartmentalWeb/indexMain.htm" class="uline-special">Mike Smith</a>, -->
<!-- and a year as an intern at <a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-india/" class="uline-special">Microsoft Research India</a>, working in the areas of blended learning and AI in healthcare. -->
<!-- I spent a wonderful summer in Canada as a MITACS intern at the <a href="https://www.ucalgary.ca/" class="uline-special">University of Calgary</a> on characterizing and localizing an audio noise nuisance, and was advised by <a href="http://people.ucalgary.ca/~smithmr/MRSmith_DepartmentalWeb/indexMain.htm" class="uline-special">Prof. Mike Smith</a>. -->
<!-- on characterizing and localizing an audio nuisance dubbed the Ranchlands Hum. -->
<!-- I also spent a year as a research intern at <a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-india/" class="uline-special">Microsoft Research Labs</a>, working in the areas of blended learning and AI in healthcare. -->
</p>
<p><strong>Research Interests: </strong><span style="color:gray">My research interests lie broadly at the intersection of computer vision and robotics. My goal is to integrate representation learning with task & motion planning to achieve general-purpose robot autonomy.</span></p>
<!-- <p><strong>Interest in Robotics: </strong><span style="color:gray">I focus on improving the perception, grasping, and navigation, capabilities of robotics systems, aiming to develop agents that can intelligently perceive and reason about the environment and perform tasks efficiently. Specifically, I work in <a href="https://bipashasen.github.io/scarp/" class="uline-special"><span style="color:red">3D scene completion</span></a>, implicit neural representations for 3D shapes, and exploring synergies between different motion primitives for enabling complex robotics behavior. I am also interested in grounding natural language instructions with the robot's underlying representation of the 3D world to allow behaviors such as language-guided <a href="https://arxiv.org/pdf/2205.04090.pdf" class="uline-special"><span style="color:red">tabletop manipulation</span></a>, voice-based semantic navigation, and human-robotic interaction.</span></p> -->
<!-- In robotics, I focus on improving the perception, grasping, and navigation capabilities of robotics systems with the eventual aim of developing agents that can perceive and reason about the environment intelligently and perform tasks efficiently. Specifically, I work in the areas of 3D scene completion, implicit neural representations for 3D shapes, and exploring synergies between different motion primitives for enabling complex robotics behavior.
I am also interested in grounding natural language instructions with the robot's underlying representation of the 3D world to enable behaviors such as language-guided tabletop manipulation, voice-based semantic navigation, and human-robotic interaction. -->
<!-- <p><strong>Interest in Computer Vision: </strong><span style="color:gray"> My research in computer vision has focused on generative modeling, <a href="/INRV/" class="uline-special"><span style="color:red">video representation and understanding</span></a>, and <a href="http://cvit.iiit.ac.in/research/projects/cvit-projects/faceoff" class="uline-special"><span style="color:red">face-swapping</span></a>. My recent work was toward designing an alternate representation space for videos where videos are encoded as implicit functions. In face-swapping, I introduced a new line of research towards video face-swapping that tackles pressing challenges in the moviemaking industry. I am also profoundly concerned about the potential of deepfakes misuse. My recent work is toward deepfake detection and attribution with the eventual aim of releasing an ethical framework for training and releasing generative models in the wild.</span></p> -->
<!-- In CV, my research has been focused towards generative modeling, video representation and understanding, and face-swapping. My recent work was toward designing an alternate representation space for videos where videos are encoded as implicit functions. -->
<!-- Conditioning videos as INRs offers several possiblities such as video inpainting, superresolution, completion, denoising without explicitly enforcing these capabilities. -->
<!-- In face-swapping, I introduced a new line of research towards video face-swapping, that tackles pressing challenges in the moviemaking industry. I am also deeply concerned about the potential of deepfakes misue. My very recent work is toward deepfake detection and attribution with the eventual aim of releasing an ethical framework for training and releasing generative models in the wild. -->
<!-- <p><strong>Research Interests: <span style="color:gray">My interests lie in designing improved representations of the 3D and 2D worlds that are most suitable for enabling diverse tasks and applications. I envision a scenario where robots perceive and interact with the environment seamlessly to perform tasks such as navigation and manipulation.
In robotics, I am working on 3D scene completion and denoising NeRFs to improve robotics perception and grasping; and on exploring synergies between different motion primitives toward robotic planning and manipulation.
I am also interested in grounding natural language instructions with the robot's underlying representation of the 3D world to enable behaviors such as voice-based semantic navigation, language-guided robotic tabletop manipulation, and human-robotic interaction.
On the vision side, I am interested in generative modeling, video understanding, and face manipulation. Existing video generation works rely on learning temporally coherent trajectories in the learned space of image-based generators which is highly restrictive.
I am working on learning an efficient representation space for videos that can enable several video-based generative tasks to augment the experiences offered by content streaming platforms.
I am also working on an ethical framework for releasing generative models. </span></strong></p> -->
<!-- <p><strong>Contact: </strong>I am always happy to discuss and collaborate on research ideas. Email me at <a href="mailto:skymanaditya1@gmail.com" class="uline">skymanaditya1@gmail.com</a></p> -->
<br/>
<div class="container">
<div class="row" style="text-align:center;">
<div class="col">
<a href="https://pes.edu/"><img src="images/affiliations/resized/pes.jpg" style="max-height:60px;width:60%"></a>
</div>
<div class="col" style="text-align:center;">
<a href="https://www.iiit.ac.in/"><img src="images/affiliations/resized/iiith.png" style="max-height:60px;width:60%"></a>
</div>
<div class="col" style="text-align:center;">
<a href="https://www.mit.edu/"><img src="images/affiliations/mit_square.png" style="max-height:60px;width:60%"></a>
</div>
<div class="col" style="text-align:center;">
<a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-india/"><img src="images/affiliations/resized/microsoft.png" style="width:60%;max-height:60px"></a>
</div>
<div class="col">
<a href="https://www.ucalgary.ca/"><img src="images/affiliations/resized/uofc.png" style="width:60%;max-height:60px"></a>
</div>
<div class="col">
<a href="https://www.vmware.com/in.html/"><img src="images/affiliations/vmware.jpeg" style="width:60%;max-height:60px"></a>
</div>
<div class="col" style="text-align:center;">
<a href="https://www.microsoft.com/en-in"><img src="images/affiliations/resized/microsoft.png" style="width:60%;max-height:60px"></a>
</div>
<div class="col" style="text-align:center;">
<a href="https://www.umontreal.ca/en/"><img src="images/affiliations/umontreal_logo.jpeg" style="width:60%;max-height:60px"></a>
</div>
</div>
<div class="row" style="text-align:center;">
<div class="col">
<div style="padding:10px"><h6>2013 - 2017</h6></div>
</div>
<div class="col">
<div style="padding:10px"><h6>2021 - 2023</h6></div>
</div>
<div class="col">
<div style="padding:10px"><h6>2023 - Current</h6></div>
</div>
<div class="col">
<div style="padding:10px"><h6>Spring <br>'15 & '16</h6></div>
</div>
<div class="col">
<div style="padding:10px"><h6>Summer '16</h6></div>
</div>
<div class="col">
<div style="padding:10px"><h6>Fall '17</h6></div>
</div>
<div class="col">
<div style="padding:10px"><h6>2018 - 2021</h6></div>
</div>
<div class="col">
<div style="padding:10px"><h6>Summer '23</h6></div>
</div>
</div>
</div>
<hr/>
<br/>
<div id="research">
<h2>
<a name="research">Publications</a>
</h2>
<br/>
<table width="100%" align="center" valign="middle" cellspacing="0" cellpadding="0" style="border-collapse: collapse;">
<tbody>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/concept_graphs_arxiv2023/banner.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning
</h5>
<p class="authors" style="display:inline">Qiao Gu*, Ali Kuwajerwala*, Sacha Morin*, Krishna Murthy Jatavallabhula*, Bipasha Sen, <p class="self" style="display:inline">Aditya Agarwal</p>, <p class="authors" style="display:inline">Kirsty Ellis, Celso Miguel de Melo, Corban Rivera, William Paul, Rama Chellapa, Chuang Gan, Joshua B. Tenenbaum, Antonio Torralba, Florian Shkurti, Liam Paull</p>
</p>
<p>
<!-- <i><span style="color:gray"></span>Under Review</i>, <a href="https://www.icra2023.org/" class="uline-special"><span style="color:red">ICRA 2023</span></a> -->
<a href="images/projects/concept_graphs_arxiv2023/2024_icra_concept_graphs.pdf" class="uline-special"><span style="color:red">Under Review 2023</span></a>
</p>
<p>
<!-- <a href="https://arxiv.org/pdf/2301.07213.pdf">Paper / </a> -->
<a href="images/projects/concept_graphs_arxiv2023/2024_icra_concept_graphs.pdf">Paper / </a>
<a href="https://concept-graphs.github.io/">Project Page / </a>
<!-- <a href="https://bipashasen.github.io/scarp/">Project Page / </a> -->
<!-- <a href="https://www.youtube.com/watch?v=o2PuRVZ3jJA">Short Video / </a> -->
<a href="">Code (Coming Soon) / </a>
<a href="">Video (Coming Soon)</a>
<!-- <a href="images/projects/scarp_icra2023/poster.pdf">Poster / </a> -->
<!-- <a href="https://youtu.be/lyEc991wyTI">Long Video</a> -->
</p>
<!-- <p style="color:red">More details coming soon!</p> -->
<p>
We propose ConceptGraphs, an open-vocabulary, graph-structured representation for 3D scenes, built by leveraging 2D foundation
models and fusing their outputs into 3D through multi-view association. The resulting representations generalize to novel semantic classes
without the need to collect large 3D datasets or finetune models. We demonstrate the utility of this representation through downstream robotic planning tasks.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/edmp_arxiv2023/banner.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
EDMP: Ensemble-of-costs-guided Diffusion for Motion Planning
</h5>
<p class="authors" style="display:inline">Kallol Saha*, Vishal Mandadi*, Jayaram Reddy*, Ajit Srikanth, <p class="self" style="display:inline">Aditya Agarwal</p>, <p class="authors" style="display:inline">Bipasha Sen, Arun Singh, Madhava Krishna</p>
</p>
<p>
<!-- <i><span style="color:gray"></span>Under Review</i>, <a href="https://www.icra2023.org/" class="uline-special"><span style="color:red">ICRA 2023</span></a> -->
<a href="https://arxiv.org/pdf/2309.11414.pdf" class="uline-special"><span style="color:red">Under Review 2023</span></a>
</p>
<p>
<!-- <a href="https://arxiv.org/pdf/2301.07213.pdf">Paper / </a> -->
<a href="https://arxiv.org/pdf/2309.11414.pdf">Paper / </a>
<a href="https://ensemble-of-costs-diffusion.github.io/">Project Page / </a>
<!-- <a href="https://bipashasen.github.io/scarp/">Project Page / </a> -->
<!-- <a href="https://www.youtube.com/watch?v=o2PuRVZ3jJA">Short Video / </a> -->
<a href="">Code (Coming Soon) / </a>
<a href="https://youtu.be/F2UI0UNsdjo">Video</a>
<!-- <a href="images/projects/scarp_icra2023/poster.pdf">Poster / </a> -->
<!-- <a href="https://youtu.be/lyEc991wyTI">Long Video</a> -->
</p>
<!-- <p style="color:red">More details coming soon!</p> -->
<p>
We propose EDMP, a motion planner that combines the strengths of classical motion planning (remarkable adaptability to new scenes) and deep-learning-based motion planning (a learned prior over diverse valid trajectories).
Our diffusion-based network is trained on a set of diverse, kinematically valid trajectories. For any new scene at inference time,
we compute scene-specific costs such as a "collision cost" and use them to guide the generation of valid trajectories that satisfy the scene-specific constraints.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/hypnerf_arxiv2023/project_picture.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork
</h5>
<p class="authors" style="display:inline">Bipasha Sen*, Gaurav Singh*, </p><p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Madhava Krishna, Srinath Sridhar</p>
</p>
<p>
<!-- <i><span style="color:gray"></span>Under Review</i>, <a href="https://www.icra2023.org/" class="uline-special"><span style="color:red">ICRA 2023</span></a> -->
<a href="https://arxiv.org/pdf/2306.06093.pdf" class="uline-special"><span style="color:red">NeurIPS 2023</span></a>
</p>
<p>
<!-- <a href="https://arxiv.org/pdf/2301.07213.pdf">Paper / </a> -->
<a href="images/projects/hypnerf_arxiv2023/HyP-NeRF.pdf">Paper / </a>
<!-- <a href="https://bipashasen.github.io/scarp/">Project Page / </a> -->
<!-- <a href="https://www.youtube.com/watch?v=o2PuRVZ3jJA">Short Video / </a> -->
<a href="https://github.com/skymanaditya1/HyP-NeRF">Code (Coming Soon) / </a>
<a href="https://www.youtube.com/watch?v=40JOIWJAvGs">Video</a>
<!-- <a href="images/projects/scarp_icra2023/poster.pdf">Poster / </a> -->
<!-- <a href="https://youtu.be/lyEc991wyTI">Long Video</a> -->
</p>
<!-- <p style="color:red">More details coming soon!</p> -->
<p>
We propose HyP-NeRF, a latent conditioning method for learning generalizable, category-level NeRF priors using hypernetworks.
The hypernetwork estimates both the NeRF weights and the multi-resolution hash encodings, resulting in significant quality gains.
To further improve quality, we incorporate a denoise-and-finetune strategy that denoises images rendered from the NeRF estimated by the hypernetwork and finetunes the NeRF on them while retaining multi-view consistency.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/synergistic_case2023/teaser.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
Disentangling Planning and Control for Non-prehensile Tabletop Manipulation
</h5>
<p class="authors" style="display:inline">Vishal Reddy Mandadi, ..., </p><p class="self" style="display:inline">Aditya Agarwal</p><p class="authors" style="display:inline">, ..., Madhava Krishna</p>
</p>
<p>
<!-- <i><span style="color:gray"></span>Under Review</i>, <a href="https://www.icra2023.org/" class="uline-special"><span style="color:red">ICRA 2023</span></a> -->
<a href="https://case2023.org/" class="uline-special"><span style="color:red">CASE 2023</span></a>
</p>
<p>
<!-- <a href="https://arxiv.org/pdf/2301.07213.pdf">Paper / </a> -->
<a href="">Paper (Coming Soon)/ </a>
<!-- <a href="https://bipashasen.github.io/scarp/">Project Page / </a> -->
<a href="">Video (Coming Soon)</a>
<!-- <a href="https://github.com/vanhalen42/SCARP">Code / </a> -->
<!-- <a href="images/projects/scarp_icra2023/poster.pdf">Poster / </a> -->
<!-- <a href="https://youtu.be/lyEc991wyTI">Long Video</a> -->
<!-- <p style="color:red">More details coming soon!</p> -->
</p>
<p>
We propose a framework that disentangles planning and control for tabletop manipulation in unknown scenes,
using a pushing-by-striking method (without tactile feedback) that explicitly models object dynamics.
Our method consists of two components: an A* planner for path planning and a low-level RL controller that models the object dynamics.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/scarp_icra2023/banner.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
SCARP: 3D Shape Completion in ARbitrary Poses for Improved Grasping
</h5>
<p class="authors" style="display:inline">Bipasha Sen*, </p><p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Gaurav Singh*, Brojeshwar B., Srinath Sridhar, Madhava Krishna</p>
</p>
<p>
<!-- <i><span style="color:gray"></span>Under Review</i>, <a href="https://www.icra2023.org/" class="uline-special"><span style="color:red">ICRA 2023</span></a> -->
<a href="https://www.icra2023.org/" class="uline-special"><span style="color:red">ICRA 2023</span></a>
</p>
<p>
<!-- <a href="https://arxiv.org/pdf/2301.07213.pdf">Paper / </a> -->
<a href="images/projects/scarp_icra2023/ICRA_2429_published.pdf">Paper / </a>
<a href="https://bipashasen.github.io/scarp/">Project Page / </a>
<a href="https://www.youtube.com/watch?v=o2PuRVZ3jJA">Short Video / </a>
<a href="https://github.com/vanhalen42/SCARP">Code / </a>
<a href="images/projects/scarp_icra2023/poster.pdf">Poster / </a>
<a href="https://youtu.be/lyEc991wyTI">Long Video</a>
</p>
<p>
We propose a mechanism for completing partial 3D shapes in arbitrary poses by learning a disentangled feature representation of pose and shape.
We learn rotationally equivariant pose features and geometric shape features by training with a multi-task objective.
SCARP improves shape completion performance by 45% and grasp proposals by 71.2% over existing baselines.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/faceoff_wacv2023/teaser_images/faceoff1.png" style="max-height: 200px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
FaceOff: A Video-to-Video Face Swapping System
</h5>
<p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Bipasha Sen*, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar</p>
</p>
<p>
<a href="https://wacv2023.thecvf.com/home" class="uline-special"><span style="color:red">WACV 2023</span></a>
<!-- IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023 -->
</p>
<p>
<a href="https://openaccess.thecvf.com/content/WACV2023/papers/Agarwal_FaceOff_A_Video-to-Video_Face_Swapping_System_WACV_2023_paper.pdf">Paper / </a>
<a href="http://cvit.iiit.ac.in/research/projects/cvit-projects/faceoff">Project Page / </a>
<a href="https://www.youtube.com/watch?v=aNhs-mqMOcE">Video / </a>
<a href="images/projects/faceoff_wacv2023/615-wacv-post.pdf">Poster / </a>
<a href="https://github.com/skymanaditya1/FaceOff">Code / </a>
<a href="images/projects/faceoff_wacv2023/0615-supp.pdf">Supplementary</a>
</p>
<p>
We propose a novel direction of video-to-video (V2V) face-swapping that tackles a pressing challenge in the moviemaking industry:
swapping an actor's face and expressions onto the face of their body double.
Existing face-swapping methods preserve only the identity of the source face without swapping the expressions.
FaceOff swaps both the source's facial expressions and identity onto the target's background and pose.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/moocs-lrt_wacv2023/teaser_images/lipreading2.png" style="max-height: 200px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
Towards MOOCs for Lipreading: Using Synthetic Talking Heads to Train Humans in Lipreading at Scale
</h5>
<!-- <p class="authors">
<b>Aditya Agarwal*</b>, Bipasha Sen*, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar
</p> -->
<p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Bipasha Sen*, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar</p>
</p>
<p>
<a href="https://wacv2023.thecvf.com/home" class="uline-special"><span style="color:red">WACV 2023</span></a>
<!-- IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023 -->
</p>
<p>
<a href="https://openaccess.thecvf.com/content/WACV2023/papers/Agarwal_Towards_MOOCs_for_Lipreading_Using_Synthetic_Talking_Heads_To_Train_WACV_2023_paper.pdf">Paper / </a>
<a href="http://cvit.iiit.ac.in/research/projects/cvit-projects/mooc-lip">Project Page / </a>
<a href="https://youtu.be/tc2Pt6dyjjo">Video / </a>
<a href="images/projects/moocs-lrt_wacv2023/720-wacv-post.pdf">Poster / </a>
<!-- <a href="https://github.com/skymanaditya1/FaceOff">Code / </a> -->
<a href="images/projects/moocs-lrt_wacv2023/0720-supp.pdf">Supplementary</a>
</p>
<p>
Hard-of-hearing people rely on lipreading, i.e., reading the speaker's mouth movements, to understand spoken content.
In this work, we developed computer vision techniques and built upon existing AI models, such as text-to-speech (TTS) and talking-face generation,
to generate synthetic lipreading training content in any language.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/inr-v_tmlr2022/teaser.png" style="max-height: 200px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
INR-V: A Continuous Representation Space for Video-based Generative Tasks
</h5>
<p class="authors" style="display:inline">Bipasha Sen*, </p><p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Vinay P. Namboodiri, C.V. Jawahar</p>
</p>
<p>
<a href="https://www.jmlr.org/tmlr/" class="uline-special"><span style="color:red">TMLR 2022</span></a>
<!-- Transactions on Machine Learning Research, TMLR 2022 -->
</p>
<p>
<a href="https://openreview.net/pdf?id=aIoEkwc2oB">Paper / </a>
<a href="https://openreview.net/forum?id=aIoEkwc2oB">OpenReview / </a>
<!-- <a href="http://cvit.iiit.ac.in/research/projects/cvit-projects/inr-v">Project Page / </a> -->
<a href="/INRV/">Project Page / </a>
<a href="https://www.youtube.com/watch?v=ViIwnu5vcck">Video / </a>
<a href="https://github.com/bipashasen/INRV">Code</a>
</p>
<p>
Inspired by the recent works on parameterizing 3D shapes and scenes as Implicit Neural Representations (INRs),
we encode videos as INRs. We train a hypernetwork to learn a prior over these INR functions and propose two techniques,
i) Progressive Training and ii) Video-CLIP Regularization to stabilize hypernetwork training.
INR-V shows remarkable performance on several video-generative tasks on many benchmark datasets.
</p>
<!-- <p>
*Denotes equal contribution
</p> -->
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/ocrtoc_icra2022/tabletop1.png" style="max-height: 200px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
Approaches and Challenges in Robotic Perception for Table-top Rearrangement and Planning
</h5>
<p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Bipasha Sen*, Shankara Narayanan V*, Vishal Reddy Mandadi*, Brojeshwar Bhowmick, K Madhava Krishna</p>
</p>
<p>
<a href="https://rpal.cse.usf.edu/rgmc_icra2022/" class="uline-special"><span style="color:red">3<sup>rd</sup> in ICRA 2022 Open Cloud Table Organization Challenge</span></a>
<!-- 3<sup>rd</sup> in ICRA 2022 Open Cloud Table Organization Challenge -->
</p>
<p>
<a href="https://arxiv.org/pdf/2205.04090.pdf">Paper / </a>
<a href="https://rpal.cse.usf.edu/rgmc_icra2022/">Competition / </a>
<a href="https://youtu.be/5VSP49OB0ZI">Video / </a>
<a href="images/projects/ocrtoc_icra2022/WeeklyPresentations.pdf">Slides / </a>
<a href="https://github.com/skymanaditya1/ocrtoc_iiith/tree/jan30_sub_6dof">Code / </a>
<a href="https://blogs.iiit.ac.in/rrc/">News1 / </a>
<a href="https://www.iiit.ac.in/files/media/Sakshi-RRC.jpeg">News2</a>
</p>
<p>
In this challenge, we proposed an end-to-end pipeline in ROS incorporating the perception and planning stacks to manipulate objects from their
initial configuration to a desired target configuration in a tabletop scene using a two-finger manipulator.
The pipeline involves the following steps: (1) 3D scene registration, (2) object pose estimation, (3) grasp generation,
(4) task planning, and (5) motion planning.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/als_bmvc2021/banner_images/image_overlay_white.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
Personalized One-Shot Lipreading for an ALS Patient
</h5>
<p class="authors" style="display:inline">Bipasha Sen*, </p><p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar</p>
</p>
<p>
<a href="https://www.bmvc2021-virtualconference.com/" class="uline-special"><span style="color:red">BMVC 2021</span></a>
<!-- British Machine Vision Conference, BMVC 2021 -->
</p>
<p>
<a href="https://www.bmvc2021-virtualconference.com/assets/papers/1468.pdf">Paper / </a>
<a href="https://www.youtube.com/watch?v=_famGVaem-8">Video</a>
</p>
<p>
We tackled the challenge of lipreading a medical patient in a one-shot setting.
Training existing lipreading models posed two primary issues: i) lipreading datasets feature only speakers without disabilities,
and ii) they lack medical vocabulary. We devised a variational-encoder-based domain adaptation technique to adapt models
trained on large amounts of synthetic data, enabling lipreading from one-shot real examples.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/reed_slt2021/architecture.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
REED: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models
</h5>
<p class="authors" style="display:inline">Bipasha Sen*, </p><p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Mirishkar Sai Ganesh, Anil Kumar Vuppala</p>
</p>
<p>
<a href="http://2021.ieeeslt.org/" class="uline-special"><span style="color:red">SLT 2021</span></a>
<!-- IEEE Spoken Language Technology Workshop, SLT 2021 -->
</p>
<p>
<a href="images/projects/reed_slt2021/reed_paper.pdf">Paper / </a>
<a href="images/projects/reed_slt2021/reed_presentation.pdf">Slides / </a>
<a href="images/projects/reed_slt2021/MASR_Synapse.pdf">MLADS Paper</a>
</p>
<p>
We tackled the problem of building a multilingual acoustic model in a low-resource setting.
We proposed a mechanism to bootstrap and validate the compatibility of multiple languages using CNNs operating directly on raw speech signals.
Our method improves training and inference times by 4X and 7.4X, respectively, with WERs comparable to RNN-based baseline systems.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/actionrecognition_isvc2020/architecture.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
An Approach Towards Action Recognition using Part Based Hierarchical Fusion
</h5>
<p class="self" style="display:inline">Aditya Agarwal*</p><p class="authors" style="display:inline">, Bipasha Sen*</p>
</p>
<p>
<a href="https://www.isvc.net/" class="uline-special"><span style="color:red">ISVC 2020</span></a>
<!-- International Symposium on Visual Computing, ISVC 2020 -->
</p>
<p>
<a href="images/projects/actionrecognition_isvc2020/humanactionrecognition_partbasedhierarchicalfusion.pdf">Paper / </a>
<a href="images/projects/actionrecognition_isvc2020/slides_190.pdf">Slides / </a>
<a href="images/projects/actionrecognition_isvc2020/PBAR_Synapse.pdf">MLADS Paper</a>
</p>
<p>
The human body can be represented as an articulation of rigid and hinged joints, which can be combined to form the parts of the body.
In this work, we think of human actions as a collective action of these parts.
We propose a Hierarchical BiLSTM network to model the spatio-temporal dependencies of the motion by fusing the pose-based joint trajectories
in a part-based hierarchical fashion.
</p>
</td>
</tr>
<tr>
<td width="30%">
<div class="one" style="text-align:center;">
<img src="images/projects/icacci2016/architecture.png" style="max-height: 400px;">
</div>
</td>
<td valign="top" width="70%">
<h5>
Minimally Supervised Sound Event Detection using a Neural Network
</h5>
<p class="self" style="display:inline">Aditya Agarwal</p><p class="authors" style="display:inline">, Syed Munawwar Quadri, Savitha Murthy, Dinkar Sitaram</p>
</p>
<p>
<a href="https://ieeexplore.ieee.org/xpl/conhome/7592392/proceeding" class="uline-special"><span style="color:red">ICACCI 2016</span></a>
<!-- International Conference on Advances in Computing, Communications and Informatics, ICACCI 2016 -->
</p>
<p>
<a href="images/projects/icacci2016/icacci2016_paper.pdf">Paper / </a>
<a href="images/projects/icacci2016/icacci_poster2016.pdf">Poster / </a>
<a href="https://github.com/skymanaditya1/Minimally-Supervised-Sound-Event-Detection">Code</a>
</p>
<p>
We solve the task of polyphonic sound event detection by training on a minimally annotated dataset of single sounds.
Single sounds represented as MFCC features are used to train a neural network.
Polyphonic sounds are preprocessed using PCA and NMF, and source-separated sounds are inferred using the learned network.
Our system achieves reasonable accuracy of source separation and detection with minimal data.
</p>
</td>
</tr>
</tbody>
</table>
</div>
<br/>
<hr/>
<!-- <div id="talks">
<h2>News & Announcements</h2>
<table width="100%" align="center" valign="middle" cellspacing="0" cellpadding="0" style="border-collapse: collapse;">
<tbody>
<tr>
<td width="20%">
<h5>
January, 2023
</h5>
</td>
<td valign="middle" width="80%">
<h5>
I will be attending the Google Research Week 2023 in Bangalore from 29<sup>th</sup> January to 31<sup>st</sup> January.
</h5>
</td>
</tr>
<tr>
<td width="20%">
<h5>
December, 2022
</h5>
</td>
<td valign="middle" width="80%">
<h5>
I will be serving as a Reviewer at the Neural Fields Workshop at ICLR 2023 (NF2023).
</h5>
</td>
</tr>
<tr>
<td width="20%">
<h5>
November, 2022
</h5>
</td>
<td valign="middle" width="80%">
<h5>
I served as a Reviewer at ICRA 2023.
</h5>
</td>
</tr>
<tr>
<td width="20%">
<h5>
September, 2020
</h5>
</td>
<td valign="middle" width="80%">
<h5>
Gave a tutorial on Geometric Deep Learning and Graph Convolutional Networks (GCN)
</h5>
<p class="authors">
Reading Group: Vision for Mobility and Safety
<br>
<a href="reports/GCN-Tutorial-Wed-meeting.pdf">Slides</a>
</p>
</td>
</tr>
</tbody>
</table>
</div> -->
<div id="news">
<h2>News & Announcements</h2>
<br />
<ul>
<!-- <li>
<p>
[Nov '23] Serving as reviewer for ICRA 2024 and CVPR 2024.
</p>
-->
<li>
<p>
[Nov '23] Awarded <a href="https://nips.cc/" class="uline">NeurIPS 2023</a> Scholar Award (~$1700).
</p>
</li>
<li>
<p>
[Nov '23] Co-presented 2 papers at <a href="https://sites.google.com/view/corl2023-prl/schedule?authuser=0" class="uline">PRL</a>, <a href="https://openreview.net/group?id=robot-learning.org/CoRL/2023/Workshop/TGR" class="uline">TGR</a>, and <a href="https://openreview.net/group?id=robot-learning.org/CoRL/2023/Workshop/LangRob" class="uline">LangRob</a> workshops at <a href="https://www.corl2023.org/" class="uline">CoRL2023</a> in Atlanta, Georgia.
</p>
</li>
<li>
<p>
[Sep '23] <a href="https://arxiv.org/pdf/2306.06093.pdf" class="uline">HyP-NeRF</a> accepted at NeurIPS 2023. See you in New Orleans, Louisiana.
</p>
</li>
<li>
<p>
[Sep '23] Joined the <a href="https://lis.csail.mit.edu/people/" class="uline">Massachusetts Institute of Technology</a> as a PhD student in EECS.
</p>
</li>
<!-- <li>
<p>
[July '23] Served as a reviewer for <a href="https://s2023.siggraph.org/" class="uline">SIGGRAPH 2023</a>.
</p>
</li>
-->
<li>
<p>
[May '23] I'll be starting as a research intern at <a href="https://mila.quebec/en/" class="uline">Mila - Quebec Artificial Intelligence Institute</a>, Montreal with professors <a href="https://liampaull.ca/" class="uline">Liam Paull</a> and <a href="http://www.cs.toronto.edu/~florian/" class="uline">Florian Shkurti</a>. I will work on learning representations for 3D robotic manipulation.
</p>
</li>
<li>
<p>
[Apr '23] I'll be joining MIT CSAIL as a PhD student this Fall. I will be a part of the <a href="https://www.csail.mit.edu/research/learning-and-intelligent-systems" class="uline">Learning and Intelligent Systems (LIS)</a> Group with professors <a href="https://www.csail.mit.edu/person/leslie-kaelbling" class="uline">Leslie Pack Kaelbling</a> and <a href="https://people.csail.mit.edu/tlp/" class="uline">Tomas Lozano-Perez</a>.
</p>
</li>
<!-- <li>
<p>
[Apr '23] Serving as a reviewer for <a href="https://ieee-iros.org/" class="uline">IROS 2023</a>.
</p>
</li> -->
<li>
<p>
[Apr '23] Full page abstract on <a href="http://icmpc.org/icmpcprograms/ICMPC17_2023program.pdf" class="uline">"Uncovering Biases Against Indian Artists"</a> accepted at <a href="https://www.icmpc.org/" class="uline">ICMPC17-APSCOM7</a> for a spoken presentation. Awarded a Travel Grant of ¥30,000 to attend the conference in Tokyo, Japan.
</p>
</li>
<li>
<p>
[Mar '23] Awarded a generous travel grant of $2250.00 by <a href="https://www.icra2023.org/" class="uline">ICRA 2023</a> IEEE RAS Travel Grant Committee to attend the premier robotics conference in London, UK from 29<sup>th</sup> May to 2<sup>nd</sup> Jun.
</p>
</li>
<li>
<p>
[Mar '23] Invited for a talk at Columbia University - slide deck <a href="https://iiitaphyd-my.sharepoint.com/:p:/g/personal/aditya_ag_research_iiit_ac_in/EfulzHAgBLlImxOxLqOZEJYBqOtnMrXz9pqUfL8plYbfVg?e=5cFHkz" class="uline">here</a>. The talk was organized as part of my graduate visit days to Brown, Columbia, and MIT.
</p>
</li>
<li>
<p>
[Jan '23] 5 works on <a href="images/projects/inr-v_tmlr2022/rnd_poster.pdf" class="uline">Implicit Video Parameterization</a>, <a href="images/projects/faceoff_wacv2023/rnd_poster.pdf" class="uline">V2V Face-Swapping</a>, <a href="images/projects/moocs-lrt_wacv2023/rnd_poster.pdf" class="uline">MOOCs for Lipreading</a>, <a href="images/projects/scarp_icra2023/rnd_poster.pdf" class="uline">3D Shape Completion</a>, and <a href="images/projects/synergistic_iros2023/rnd_poster.pdf" class="uline">Synergistic Tabletop Manipulation</a> presented at <a href="https://rndshowcase.iiit.ac.in/" class="uline">IIIT Hyderabad's RnD showcase</a>.
</p>
</li>
<li>
<p>
[Jan '23] 1 paper accepted at <a href="https://www.icra2023.org/" class="uline">ICRA 2023</a> on <a href="https://bipashasen.github.io/scarp/" class="uline">3D Shape Completion in Arbitrary Poses</a>. Featured as the <a href="https://t.co/sT467zT9wv" class="uline">"Publication of the Week"</a> in "Weekly Robotics".
</p>
<li>
<p>
[Jan '23] Attending <a href="https://sites.google.com/view/researchweek2023/home" class="uline">Google Research Week</a> in Bangalore from 29<sup>th</sup> Jan to 31<sup>st</sup> Jan.
</p>
</li>
<!-- <li>
<p>
[Dec '22] Serving as a reviewer at <a href="https://blog.iclr.cc/2022/12/21/announcing-the-accepted-workshops-at-iclr-2023/" class="uline">Neural Fields Workshop</a> at ICLR 2023 (NF2023).
</p>
<li> -->
<!-- <p>
[Nov '22] Served as a reviewer at <a href="https://www.icra2023.org/" class="uline">ICRA 2023</a> .
</p> -->
<li>
<p>
[Oct '22] Journal paper on <a href="https://openreview.net/forum?id=aIoEkwc2oB" class="uline">representation space for video-based generative tasks</a> accepted at <a href="https://www.jmlr.org/tmlr/" class="uline">TMLR 2022</a>.
</p>
<li>
<p>
[Aug '22] Two papers on
<a href="https://openaccess.thecvf.com/content/WACV2023/papers/Agarwal_FaceOff_A_Video-to-Video_Face_Swapping_System_WACV_2023_paper.pdf" class="uline">video face swapping</a>
and <a href="https://openaccess.thecvf.com/content/WACV2023/papers/Agarwal_Towards_MOOCs_for_Lipreading_Using_Synthetic_Talking_Heads_To_Train_WACV_2023_paper.pdf" class="uline">talking-face generation</a>
accepted at <a href="https://wacv2023.thecvf.com/home" class="uline">WACV 2023</a> round 1 (acceptance rate 21.6%).
</p>
<li>
<p>
[May '22] We were in the News (<a href="https://www.iiit.ac.in/files/media/Sakshi-RRC.jpeg" class="uline">news1</a>, <a href="https://blogs.iiit.ac.in/monthly_news/iiiths-robotics-research-centre-proves-mettle-with-two-prestigious-wins/" class="uline">news2</a>)
for winning 3rd place at the <a href="https://www.icra2022.org/program/competitions" class="uline">ICRA 2022 international robotics competition</a>
on tabletop rearrangement and planning. Awarded a grant of $1000.00.
</p>
</li>
<li>
<p>
[Oct '21] 1 paper accepted at <a href="https://www.bmvc2021-virtualconference.com/" class="uline">BMVC</a>
on <a href="https://www.bmvc2021-virtualconference.com/conference/papers/paper_1468.html" class="uline">lipreading in a one-shot setting using domain adaptation</a>.
</p>
<li>
<p>
[Mar '21] I will be joining <a href="https://www.iiit.ac.in/" class="uline">IIIT Hyderabad</a> as an MS by Research student.
</p>
<li>
<p>
[Nov '20] 1 paper accepted at <a href="http://2021.ieeeslt.org/" class="uline">SLT</a> on building <a href="images/projects/reed_slt2021/reed_paper.pdf" class="uline">multilingual acoustic model for low-resource languages</a>.
</p>
<li>
<p>
[Sep '17] Completed my Bachelor's degree from <a href="https://pes.edu/" class="uline">PES University</a> in Computer Science. Received Academic Distinction Award for exceptional academic performance.
</p>
<li>
<p>
[Jan '16] I will be interning at the <a href="https://www.ucalgary.ca/" class="uline">University of Calgary</a> in Summer 2016 fully-funded through the <a href="https://www.mitacs.ca/en/programs/globalink/globalink-research-internship" class="uline">MITACS Globalink Research Award</a>.
</p>
</ul>
</div>
<hr/>
<div id="news">
<h2>Academic Services</h2>
<ul>
<li>
<p>
Reviewer for <a href="https://2024.ieee-icra.org/" class="uline">ICRA 2024</a>, <a href="https://cvpr.thecvf.com/" class="uline">CVPR 2024</a>.
</p>
</li>
<li>
<p>
Reviewer for <a href="https://s2023.siggraph.org/" class="uline">SIGGRAPH 2023</a>, <a href="https://ieee-iros.org/" class="uline">IROS 2023</a>, <a href="https://iclr.cc/virtual/2023/events/workshop" class="uline">ICLR 2023 workshops</a>, <a href="https://www.icra2023.org/" class="uline">ICRA 2023</a>.
</p>
</li>
<li>
<p>
[Aug '22] Coordinator for the <a href="https://cvit.iiit.ac.in/summerschool2022/" class="uline">6th CVIT Summer School</a> on AI.
</p>
</li>
<li>
<p>
[Aug '22] Gave a <a href="https://youtu.be/QtSH7dv1CwA?t=5353" class="uline">talk</a> on the challenges in tabletop rearrangement and planning at CVIT Summer School 2022.
</p>
</li>
<li>
<p>
[Feb '22] I will be leading month-long tutorial sessions in machine learning for faculty across universities in India as part of the CSEDU-ML program, conducted jointly by IIIT-H, IIT-H, and IIT-D.
</p>
</li>
<li>
<p>
[Aug '21] Coordinator for the <a href="https://cvit.iiit.ac.in/summerschool2021/" class="uline">5th CVIT Summer School</a>
on AI and conducted tutorial sessions on self-supervised learning and multimodal learning.
</p>
</li>
</ul>
</div>
<hr />
<div id="news">
<h2>Professional Achievements</h2>
<ul>
<li>
<p>
[Aug '17] Winners of the <a href="https://blogs.vmware.com/opensource/2017/08/25/open-source-global-borathon/" class="uline">VMware Global Relay Opensource Borathon</a> among all participating teams at <a href="https://www.vmware.com/" class="uline">VMware</a>.
</p>
</li>
<li>
<p>
[Mar '20] I was selected as one of two individuals out of 6,000 employees at Microsoft India to be featured in a video for the company's campus hiring program, available on <a href="https://www.youtube.com/watch?v=-ZEXU20tkFw" class="uline">YouTube</a>.
</p>
</li>
<li>
<p>
[Mar '18] My work helped scale the <a href="https://communitytraining.microsoft.com/" class="uline">Microsoft Community Training</a> platform to its first 100K users. The work was covered by several media outlets
(<a href="https://news.microsoft.com/en-in/microsoft-project-sangam-swacch-bharat-mission/", class="uline">[1]</a>,
<a href="https://www.digitalcreed.in/microsofts-project-sangam-accelerates-indias-swachh-bharat-mission/", class="uline">[2]</a>,
<a href="https://egov.eletsonline.com/2019/03/microsoft-project-gives-major-boost-to-swachh-bharat-mission/", class="uline">[3]</a>,
<a href="https://swachhindia.ndtv.com/government-launches-a-mobile-app-and-website-to-train-citizens-on-sanitation-waste-management-32353/", class="uline">[4]</a>).
I was awarded the "Delight your Customer" Award by Microsoft for my outstanding work.
</p>
</li>
<li>
<p>
[Feb '17] <a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-india/" class="uline">Microsoft Research India's</a> flagship project <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2012/10/MassivelyEmpoweredClassroomsTechReportv1.pdf" class="uline">Massively Empowered Classroom</a>, which I helped build, was deployed by the <a href="http://web.mie.ac.mu/" class="uline">Mauritius Institute of Education</a>.
It was inaugurated by the Managing Director of MSR India and the Minister of Tertiary Education, Mauritius, and was covered by the press
(<a href="https://education.govmu.org/Documents/archived/2017/SPEECH%20MIE%20FINAL%2008%20FEB%202017.pdf" class="uline">[1]</a>,
<a href="https://ict.io/en/launch-of-the-virtual-campus-of-the-mauritius-institute-of-education/" class="uline">[2]</a>,
<a href="https://defimedia.info/mauritius-institute-education-mo-klas-build-network-digital-learning-resources" class="uline">[3]</a>).
</p>
</li>
</ul>
</div>
<div hidden="hidden">
<script type="text/javascript" id="clustrmaps" src="//clustrmaps.com/map_v2.js?d=HlPcyrzc7e8yFyrAN5zVRB4Q6oQOU5EwTtuv9SXJE7Y&cl=ffffff&w=a"></script>
</div>
<hr>
<p align="right">
<small>Forked and modified from <a href="https://virajprabhu.github.io/">Viraj Prabhu's</a> adaptation of the <a href="https://github.com/johno/pixyll">Pixyll</a> theme</small></p>