<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Yuchen Wu</title>
<meta name="author" content="Yuchen Wu" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="stylesheet" type="text/css" href="stylesheet.css" />
<link rel="icon" href="data/picture.jpeg" />
</head>
<body>
<table style="width: 100%; max-width: 850px; border: 0px; border-spacing: 0px; border-collapse: separate; margin-right: auto; margin-left: auto">
<tbody>
<tr style="padding: 0px">
<td style="padding: 0px">
<table style="width: 100%; border: 0px; border-spacing: 0px; border-collapse: separate; margin-right: auto; margin-left: auto">
<tbody>
<tr style="padding: 0px">
<td style="padding: 2.5%; width: 63%; vertical-align: middle">
<p style="text-align: center">
<name>Yuchen Wu</name>
</p>
<p>
I am a MASc student in Robotics at the
<a href="https://www.utoronto.ca/">University of Toronto</a>, supervised by <a href="http://asrl.utias.utoronto.ca/~tdb/">Prof. Tim Barfoot</a>. I am part of the <a href="http://asrl.utias.utoronto.ca/">Autonomous Space Robotics Laboratory (ASRL)</a> and the <a href="https://robotics.utoronto.ca/">UofT Robotics Institute</a>.
</p>
<p>I received my BASc degree in <a href="https://engsci.utoronto.ca/">Engineering Science (Robotics)</a> at the <a href="https://www.utoronto.ca/">University of Toronto</a>. During my undergraduate studies, I worked with <a href="http://www.cs.toronto.edu/~florian/">Prof. Florian Shkurti</a> at the <a href="https://rvl.cs.toronto.edu/">Robot Vision & Learning</a> lab on imitation and reinforcement learning.</p>
<p style="text-align: center"><a href="mailto:[email protected]">Email</a>  /  <a href="data/cv/cv.pdf">CV</a>  /  <a href="https://www.linkedin.com/in/yuchen-wu-9b6199253">LinkedIn</a>  /  <a href="https://scholar.google.com/citations?user=Niv8kqsAAAAJ&hl=en">Google Scholar</a>  /  <a href="https://github.com/cheneyuwu/">GitHub</a></p>
</td>
<td style="padding: 2.5%; width: 40%; max-width: 40%">
<a href="data/picture.jpeg"><img style="width: 100%; max-width: 100%; border-radius: 50%" alt="profile photo" src="data/picture.jpeg" class="hoverZoomLink" /></a>
</td>
</tr>
</tbody>
</table>
<table style="width: 100%; border: 0px; border-spacing: 0px; border-collapse: separate; margin-right: auto; margin-left: auto">
<tbody>
<tr>
<td style="padding: 20px; width: 100%; vertical-align: middle">
<heading>Research</heading>
<p>I'm interested in mobile robot state estimation. My research currently focuses on lidar & radar mapping and localization.</p>
</td>
</tr>
</tbody>
</table>
<table style="width: 100%; border: 0px; border-spacing: 0px; border-collapse: separate; margin-right: auto; margin-left: auto">
<tbody>
<tr>
<td style="padding: 20px; width: 35%; vertical-align: middle">
<!-- <iframe width="100%" src="https://www.youtube.com/embed/okS7pF6xX7A" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> -->
</td>
<td width="65%" valign="middle">
<a href="https://arxiv.org/abs/2211.02047">
<papertitle> Along Similar Lines: Local Obstacle Avoidance for Long-term Autonomous Path Following </papertitle>
</a>
<br />
<a href="linkedin.com/in/jordy-sehn-457b99190">Jordy Sehn</a>, <strong>Yuchen Wu</strong>, <a href="http://asrl.utias.utoronto.ca/~tdb/">Timothy D. Barfoot</a>
<br />
Submitted to <em>International Conference on Robotics and Automation (ICRA)</em>, 2023
<br />
<a href="https://arxiv.org/pdf/2211.02047.pdf">paper</a> / <a href="https://github.com/utiasASRL/vtr3">code</a> / <a href="data/bib/sehn_icra23.bib">bibtex</a>
<p style="margin-top: 5pt">We develop a local path planner specific to path-following tasks, which allows a lidar variant of VT&R3 to reliably avoid obstacles during path repeating. This planner is demonstrated using VT&R3 but generalizes to any path-following applications.</p>
</td>
</tr>
<tr>
<td style="padding: 20px; width: 35%; vertical-align: middle">
<!-- <iframe width="100%" src="https://www.youtube.com/embed/okS7pF6xX7A" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> -->
</td>
<td width="65%" valign="middle">
<a href="https://ieeexplore.ieee.org/document/9968059">
<papertitle> Picking Up Speed: Continuous-Time Lidar-Only Odometry using Doppler Velocity Measurements </papertitle>
</a>
<br />
<strong>Yuchen Wu</strong>, <a href="https://scholar.google.ca/citations?user=uoH44gEAAAAJ&hl=en">David J. Yoon</a>, <a href="http://asrl.utias.utoronto.ca/~keenan/">Keenan Burnett</a>, Soeren Kammel, Yi Chen, Heethesh Vhavle, <a href="http://asrl.utias.utoronto.ca/~tdb/">Timothy D. Barfoot</a>
<br />
<em>IEEE Robotics and Automation Letters (RA-L)</em>, 2023
<!-- <br /> -->
<!-- <em>IEEE International Conference on Robotics and Automation (ICRA)</em>, 2023 -->
<br />
<a href="https://arxiv.org/pdf/2209.03304.pdf">paper</a> / <a href="https://github.com/utiasASRL/steam_icp">code</a> / <a href="data/bib/wu_icra23.bib">bibtex</a>
<p style="margin-top: 5pt">We present the first continuous-time lidar-only odometry algorithm using these Doppler velocity measurements from an FMCW lidar to aid odometry in geometrically degenerate environments.</p>
</td>
</tr>
<tr>
<td style="padding: 20px; width: 35%; vertical-align: middle">
<iframe width="100%" src="https://www.youtube.com/embed/okS7pF6xX7A" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</td>
<td width="65%" valign="middle">
<a href="https://ieeexplore.ieee.org/abstract/document/9835037">
<papertitle> Are We Ready for Radar to Replace Lidar in All-Weather Mapping and Localization? </papertitle>
</a>
<br />
<a href="http://asrl.utias.utoronto.ca/~keenan/">Keenan Burnett*</a>, <strong>Yuchen Wu*</strong>, <a href="https://scholar.google.ca/citations?user=uoH44gEAAAAJ&hl=en">David J. Yoon</a>, <a href="https://www.dynsyslab.org/prof-angela-schoellig/">Angela P. Schoellig</a>,
<a href="http://asrl.utias.utoronto.ca/~tdb/">Timothy D. Barfoot</a>
<br />
<em>IEEE Robotics and Automation Letters (RA-L)</em>, 2022
<!-- <br /> -->
<!-- <em>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</em>, 2022 -->
<br />
<a href="https://arxiv.org/pdf/2203.10174.pdf">paper</a> / <a href="https://www.youtube.com/watch?v=okS7pF6xX7A&list=PLC0E5EB919968E507">video</a> / <a href="https://github.com/utiasASRL/vtr3">code</a> / <a href="data/bib/burnett_iros22.bib">bibtex</a>
<p style="margin-top: 5pt">We present an extensive comparison between three topometric localization systems: radar-only, lidar-only, and a cross-modal radar-to-lidar system across varying seasonal and weather conditions using the Boreas dataset.</p>
</td>
</tr>
<tr>
<td style="padding: 20px; width: 35%; vertical-align: middle">
<iframe width="100%" src="https://www.youtube.com/embed/Cay6rSzeo1E" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</td>
<td width="65%" valign="middle">
<a href="https://www.boreas.utias.utoronto.ca/#/">
<papertitle>Boreas: A Multi-Season Autonomous Driving Dataset </papertitle>
</a>
<br />
<a href="http://asrl.utias.utoronto.ca/~keenan/">Keenan Burnett</a>, <a href="https://scholar.google.ca/citations?user=uoH44gEAAAAJ&hl=en">David J. Yoon</a>, <strong>Yuchen Wu</strong>, Andrew Zou Li, Haowei Zhang, Shichen Lu, Jingxing Qian, Wei-Kang Tseng, Andrew Lambert, Keith Y.K. Leung, <a href="https://www.dynsyslab.org/prof-angela-schoellig/">Angela P. Schoellig</a>,
<a href="http://asrl.utias.utoronto.ca/~tdb/">Timothy D. Barfoot</a>
<br />
Accepted to the <em>International Journal of Robotics Research (IJRR)</em>
<br />
<a href="https://www.boreas.utias.utoronto.ca/#/">website</a> / <a href="https://arxiv.org/pdf/2203.10168.pdf">paper</a> / <a href="https://www.youtube.com/watch?v=Cay6rSzeo1E">video</a> / <a href="https://github.com/utiasASRL/pyboreas">code</a> /
<a href="data/bib/burnett_ijrr.bib">bibtex</a>
<p style="margin-top: 5pt">The Boreas dataset was collected by driving a repeated route over the course of 1 year resulting in stark seasonal variations. In total, Boreas contains over 350km of driving data including several sequences with adverse weather conditions such as rain and heavy snow.</p>
</td>
</tr>
<tr>
<td style="padding: 20px; width: 35%; vertical-align: middle">
<iframe width="100%" src="https://www.youtube.com/embed/KkG6TQOVXak" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</td>
<td width="65%" valign="middle">
<a href="https://3dv2021.surrey.ac.uk/demos/">
<papertitle> Visual Teach & Repeat using Deep Learned Features </papertitle>
</a>
<br />
<a href="http://asrl.utias.utoronto.ca/~keenan/">Mona Gridseth</a>, <strong>Yuchen Wu</strong>,
<a href="http://asrl.utias.utoronto.ca/~tdb/">Timothy D. Barfoot</a>
<br />
Demo at <em>International Conference on 3D Vision (3DV)</em>, 2021
<br />
<a href="https://www.youtube.com/watch?v=okS7pF6xX7A&list=PLC0E5EB919968E507">video</a> /
<a href="https://github.com/utiasASRL/vtr3">code</a>
<p style="margin-top: 5pt">We provide a demo of Visual Teach and Repeat 3 for autonomous path following on a mobile robot, which uses deep learned features to tackle localization across challenging appearance change. Corresponding paper on deep learned features: <a href="https://arxiv.org/abs/2109.04041">link</a>.</p>
</td>
</tr>
<tr>
<td style="padding: 20px; width: 35%; vertical-align: middle">
<iframe width="100%" src="https://www.youtube.com/embed/E5Tg9juP8ck" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</td>
<td width="65%" valign="middle">
<a href="https://github.com/utiasASRL/vtr3">
<papertitle>Visual Teach & Repeat 3</papertitle>
</a>
<br />
<strong>Yuchen Wu</strong>, <a href="https://www.linkedin.com/in/ben-congram">Ben Congram</a>,
<a href="https://www.linkedin.com/in/zi-cong-daniel-guo">Daniel Guo</a>
<br />
Open Source Project
<br />
<a href="https://utiasasrl.github.io/vtr3/">website</a> / <a href="https://youtu.be/E5Tg9juP8ck">video</a> /
<a href="https://github.com/utiasASRL/vtr3">code</a>
<p style="margin-top: 5pt">VT&R3 is a C++ implementation of the Teach and Repeat navigation framework developed at <a href="http://asrl.utias.utoronto.ca/">ASRL</a>. It allows user to teach a robot a large (kilometer-scale) network of paths where the robot navigate freely via accurate (centimeter-level) path following, using a lidar/radar/camera as the primary sensor (no GPS).</p>
</td>
</tr>
<tr>
<td style="padding: 20px; width: 35%; vertical-align: middle">
<iframe width="100%" src="https://www.youtube.com/embed/rH56GpbTTnw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</td>
<td width="65%" valign="middle">
<a href="https://ieeexplore.ieee.org/document/9561333">
<papertitle> Shaping Rewards for Reinforcement Learning with Imperfect Demonstrations using Generative Models </papertitle>
</a>
<br />
<strong>Yuchen Wu</strong>,
<a href="https://mila.quebec/en/person/melissa-mozifian/">Melissa Mozifian</a>,
<a href="http://www.cs.toronto.edu/~florian/">Florian Shkurti</a>
<br />
<em>International Conference on Robotics and Automation (ICRA)</em>, 2021
<br />
<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9561333">paper</a> /
<a href="data/bib/wu_icra21.bib">bibtex</a>
<p style="margin-top: 5pt">We propose a method that combines reinforcement and imitation learning by shaping the reward function with a state-and-action-dependent potential that is trained from demonstration data, using a generative model.</p>
</td>
</tr>
</tbody>
</table>
<table style="width: 100%; border: 0px; border-spacing: 0px; border-collapse: separate; margin-right: auto; margin-left: auto">
<tbody>
<tr>
<td style="padding: 10px; width: 100%; vertical-align: middle">
<heading>Theses</heading>
</td>
</tr>
<tr>
<td style="padding-left: 20px; padding-right: 20px; padding-top: 5px; padding-bottom: 5px; width: 100%; vertical-align: middle">
MASc Thesis:
<a href="data/thesis/masc.pdf">
<papertitle> VT&R3: Generalizing the Teach and Repeat Navigation Framework </papertitle>
</a>
<br />
</td>
</tr>
<tr>
<td style="padding-left: 20px; padding-right: 20px; padding-top: 5px; padding-bottom: 5px; width: 100%; vertical-align: middle">
BASc Thesis:
<a href="data/thesis/basc.pdf">
<papertitle> Combining Reinforcement Learning and Imitation Learning through Reward Shaping for Continuous Control </papertitle>
</a>
<br />
</td>
</tr>
</tbody>
</table>
<table style="width: 100%; border: 0px; border-spacing: 0px; border-collapse: separate; margin-right: auto; margin-left: auto">
<tbody>
<tr>
<td style="padding: 0px">
<p style="text-align: right; font-size: small">
<a href="http://jonbarron.info">Website Template</a>
</p>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
</tbody>
</table>
</body>
</html>