<!doctype html>
<html lang="en">
<head>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-159856695-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-159856695-1');
</script>
<meta charset="utf-8">
<meta name="keywords" content="utaustin,course,artificial intelligence,robotics,robot learning">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="css/style.css">
<!-- JavaScript -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
<title>CS391R: Robot Learning</title>
<link rel="icon" href="resources/favicon.ico" />
<link rel="shortcut icon" href="resources/favicon.ico" />
</head>
<body>
<nav class="navbar navbar-expand-md navbar-dark fixed-top">
<a class="navbar-brand" href="index.html">CS391R - Fall 2020</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarsExampleDefault" aria-controls="navbarsExampleDefault" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarsExampleDefault">
<ul class="navbar-nav mr-auto">
<li class="nav-item active">
<a class="nav-link" href="logistics.html">Logistics</a>
</li>
<li class="nav-item active">
<a class="nav-link" href="syllabus.html">Syllabus</a>
</li>
<li class="nav-item active">
<a class="nav-link" href="project.html">Course Project</a>
</li>
</ul>
</div>
</nav>
<div class="content">
<div class="bg-no-highlight">
<div class="container">
<div class="row">
<div class="col-md-11">
<h2>Course Project</h2>
<h5> </h5>
<h5>This course has concluded. You can view the list of student projects from our past offerings <a href="https://www.cs.utexas.edu/~yukez/cs391r_reports/">here</a>.</h5>
</div>
</div>
<div class="section">
<div class="row">
<div class="col-md-12">
The primary objective of the course project is to give you in-depth, hands-on experience applying AI-based techniques to practical robot learning problems. A successful project topic should involve at least one, and ideally both, of two critical components: a <strong>perception</strong> component, i.e., processing raw sensory data, and a <strong>decision making</strong> component, i.e., controlling robot actions. For example:
<ul>
<li>Learning vision-based robot manipulation with deep reinforcement learning methods;</li>
<li>Self-supervised representation learning of visual and tactile data;</li>
<li>Model-based object pose estimation for 6-DoF grasping from RGB-D images.</li>
</ul>
Potential projects can have the following flavors:
<ul>
<li><strong>Improve an existing approach.</strong> You can select a paper you are interested in, reimplement it, and improve it with what you learned in the course.</li>
<li><strong>Apply an algorithm to a new problem.</strong> You will need to understand the strengths and weaknesses of an existing algorithm from research work, reimplement it, and apply it to a new problem.</li>
<li><strong>Stress test existing approaches.</strong> This kind of project involves a thorough comparison of several existing approaches to a robot learning problem.</li>
<li><strong>Design your own approach.</strong> In these kinds of projects, you come up with an entirely new approach to a specific problem. Even the problem may be something that has not been considered before.</li>
<li><strong>Mix and Match approaches.</strong> For these projects, you typically combine approaches that have been developed separately to address a larger and more complex problem.</li>
<li><strong>Join a research project.</strong> You can join an existing Robot Learning project with UT faculty and researchers. You are expected to articulate your own contributions in your project reports (more detail below).</li>
</ul>
<p>You may work individually or pair up with one teammate on the project, and grades will be calibrated by team size. Projects of a larger scope are expected for teams of two. Your project may be related to a project in another class as long as the instructors of both classes consent; however, you must clearly indicate in the project proposal, milestone, and final reports the exact portion of the project that is being counted for this course. In this case, you must prepare separate reports for each course, and also submit your final report for the other course.</p>
</div>
</div>
</div>
</div>
</div>
<div class="bg-highlight" style="padding-bottom: 30px;">
<div class="container">
<div class="section"></div>
<div class="section">
<div class="row">
<div class="col-md-12">
<h4>Grading Policy</h4>
The course project is worth 40% of the total grade. The following shows the breakdown:
<ul>
<li>Project Proposal (5%). Due Thu Sept 17.</li>
<li>Project Milestone (5%). Due Thu Oct 15.</li>
<li>Final Report (25%). Due Fri Dec 11.</li>
<li>Spotlight Talk (5%). Week 15.</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="bg-no-highlight">
<div class="container">
<div class="section"></div>
<div class="section">
<div class="row">
<div class="col-md-12">
<h4>Project Inspirations and Resources</h4>
<p>For project ideas, you can look at recent <strong>robotics publications</strong> from top-tier conferences, as well as the other resources below.</p>
<ul>
<li><a href="https://roboticsconference.org/">RSS</a>: Robotics: Science and Systems</li>
<li><a href="https://www.icra2019.org/">ICRA</a>: IEEE International Conference on Robotics and Automation</li>
<li><a href="https://www.iros2019.org/">IROS</a>: IEEE/RSJ International Conference on Intelligent Robots and Systems</li>
<li><a href="https://www.robot-learning.org/">CORL</a>: Conference on Robot Learning</li>
<li><a href="https://iclr.cc/Conferences/2019/Schedule">ICLR</a>: International Conference on Learning Representations</li>
<li><a href="https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019">NeurIPS</a>: Neural Information Processing Systems</li>
<li><a href="http://proceedings.mlr.press/v97/">ICML</a>: International Conference on Machine Learning</li>
<li><a href="https://www.cs.utexas.edu/~yukez/publications/">Publications</a> from the UT Robot Perception and Learning Lab</li>
</ul>
<p>You may also look at popular <strong>simulated environments</strong> and <strong>robotics datasets</strong> as listed below.</p>
<p><strong>Simulated Environments</strong></p>
<ul>
<li><a href="https://robosuite.ai">robosuite</a>: MuJoCo-based toolkit and benchmark of learning algorithms for robot manipulation</li>
<li><a href="https://github.com/StanfordVL/robovat">RoboVat</a>: Tabletop manipulation environments in Bullet Physics</li>
<li><a href="https://gym.openai.com/">OpenAI Gym</a>: MuJoCo-based environments for continuous control and robotics</li>
<li><a href="https://ai2thor.allenai.org/">AI2-THOR</a>: open-source interactive environments for embodied AI</li>
<li><a href="https://sites.google.com/view/rlbench">RLBench</a>: robot learning benchmark and learning environment built around V-REP</li>
<li><a href="http://carla.org/">CARLA</a>: self-driving car simulator in Unreal Engine 4</li>
<li><a href="https://microsoft.github.io/AirSim/">AirSim</a>: simulator for autonomous vehicles built on Unreal Engine / Unity</li>
<li><a href="http://svl.stanford.edu/gibson2/">Interactive Gibson</a>: interactive environment for learning robot manipulation and navigation</li>
<li><a href="https://aihabitat.org/">AI Habitat</a>: simulation platform for research in embodied artificial intelligence</li>
</ul>
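<p>Most of these simulators expose a Gym-style <code>reset</code>/<code>step</code> interface. The following is a minimal sketch of that interaction loop, assuming the classic Gym API; the environment id and episode length are placeholders, not a recommendation for your project:</p>
<pre><code>import gym

# Placeholder environment id; substitute whichever task/simulator you actually use.
env = gym.make("CartPole-v1")

obs = env.reset()
for t in range(200):
    action = env.action_space.sample()          # random actions as a stand-in for a learned policy
    obs, reward, done, info = env.step(action)  # classic (pre-0.26) Gym step signature
    if done:
        obs = env.reset()
env.close()
</code></pre>
<p>Consult each simulator's documentation for its exact installation steps and observation/action spaces.</p>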
<p><strong>Robotics Datasets</strong></p>
<ul>
<li><a href="https://berkeleyautomation.github.io/dex-net/">Dex-Net</a>: 3D synthetic object model dataset for object grasping</li>
<li><a href="http://roboturk.stanford.edu/">RoboTurk</a>: crowdsourced human demonstrations in simulation and real world</li>
<li><a href="https://www.robonet.wiki/">RoboNet</a>: video dataset for large-scale multi-robot learning</li>
<li><a href="https://rse-lab.cs.washington.edu/projects/posecnn/">YCB-Video</a>: RGB-D video dataset for model-based 6D pose estimation and tracking</li>
<li><a href="https://www.nuscenes.org/">nuScenes</a>: large-scale multimodal dataset for autonomous driving</li>
</ul>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="bg-highlight">
<div class="container">
<div class="section"></div>
<div class="section">
<div class="row">
<div class="col-md-12">
<h4>Project Proposal</h4>
<p>The project proposal should be one paragraph (300-400 words). It should describe:</p>
<ul>
<li>(20%) What is the problem that you will be investigating? Why is it interesting?</li>
<li>(20%) What reading will you examine to provide context and background?</li>
<li>(20%) What data will you use? If you are collecting new data, how will you do it?</li>
<li>(20%) What method or algorithm are you proposing? If there are existing implementations, will you use them and how? How do you plan to improve or modify such implementations? You don't have to have an exact answer at this point, but you should have a general sense of how you will approach the problem you are working on.</li>
<li>(20%) How will you evaluate your results? Qualitatively, what kind of results do you expect (e.g., plots or figures)? Quantitatively, what kind of analysis will you use to evaluate and/or compare your results (e.g., what performance metrics or statistical tests)?</li>
</ul>
<p><strong>Submission:</strong> Please submit your proposal as a PDF on <a href="https://canvas.utexas.edu/">Canvas</a>. <strong>Only one person on your team should submit.</strong></p>
</div>
</div>
</div>
</div>
</div>
<div class="bg-no-highlight">
<div class="container">
<div class="section"></div>
<div class="section">
<div class="row">
<div class="col-md-12">
<h4>Project Milestone</h4>
<p>Your project milestone report should be 2-3 pages, using the <a href="http://rss2019.informatik.uni-freiburg.de/docs/RSS19_LaTeX_template.zip">RSS template</a> in LaTeX. The following is a suggested structure for your report:</p>
<ul>
<li><strong>Title, Author(s)</strong></li>
<li><strong>Introduction:</strong> Introduce your problem and your overall plan for approaching it</li>
<li><strong>Problem Statement:</strong> Describe your problem precisely, specifying the dataset to be used, the expected results, and the evaluation</li>
<li><strong>Literature Review:</strong> Describe important related work and its relevance to your project</li>
<li><strong>Technical Approach:</strong> Describe the methods you intend to apply to solve the given problem</li>
<li><strong>Intermediate/Preliminary Results:</strong> State and evaluate your results up to the milestone</li>
</ul>
<p><strong>Submission:</strong> Please submit your milestone as a PDF on <a href="https://canvas.utexas.edu/">Canvas</a>. <strong>Only one person on your team should submit.</strong></p>
</div>
</div>
</div>
</div>
</div>
<div class="bg-highlight">
<div class="container">
<div class="section"></div>
<div class="section">
<div class="row">
<div class="col-md-12">
<h4>Final Report</h4>
<p>Your final write-up is required to be 6-8 pages (8 pages max) using the <a href="http://rss2019.informatik.uni-freiburg.de/docs/RSS19_LaTeX_template.zip">RSS template</a>, structured like a paper from a robotics conference. Please use this template so we can fairly judge all student projects without worrying about altered font sizes, margins, etc. After the class, we will post all the final reports online so that you can read about each other's work. If you do not want your write-up to be <strong>posted online</strong>, please let us know when submitting. The following is a suggested structure for your report, as well as the rubric that we will follow when evaluating reports. You don't have to organize your report using these sections in this order, but they are likely a good starting point for most projects.</p>
<ul>
<li><strong>Title, Author(s)</strong></li>
<li><strong>Abstract:</strong> Briefly describe your problem, approach, and key results. It should be no more than 300 words.</li>
<li><strong>Introduction (10%):</strong> Describe the problem you are working on and why it's important, and give an overview of your results</li>
<li><strong>Related Work (10%):</strong> Discuss published work that relates to your project. How is your approach similar or different from others?</li>
<li><strong>Data (10%):</strong> Describe the data or simulation environment you are working with for your project. What type is it? Where did it come from? How much data are you working with? How many simulation runs did you work with? Did you have to do any preprocessing, filtering, or other special treatment to use this data in your project?</li>
<li><strong>Methods (30%):</strong> Discuss your approach for solving the problems that you set up in the introduction. Why is your approach the right thing to do? Did you consider alternative approaches? You should demonstrate that you have applied ideas and skills built up during the semester to tackling your problem of choice. It may be helpful to include figures, diagrams, or tables to describe your method or compare it with other methods.</li>
<li><strong>Experiments (30%):</strong> Discuss the experiments that you performed to demonstrate that your approach solves the problem. The exact experiments will vary depending on the project, but you might compare with previously published methods, perform an ablation study to determine the impact of various components of your system, experiment with different hyperparameters or architectural choices, use visualization techniques to gain insight into how your model works, discuss common failure modes of your model, etc. You should include graphs, tables, or other figures to illustrate your experimental results.</li>
<li><strong>Conclusion (5%):</strong> Summarize your key results: what have you learned? Suggest ideas for future extensions or new applications of your ideas.</li>
<li><strong>Writing / Formatting (5%):</strong> Is your paper clearly written and nicely formatted?</li>
<li><strong>Supplementary Material</strong>, not counted toward your 6-8 page limit and submitted as a separate file. Your supplementary material might include:
<ul>
<li>Source code (if your project proposed an algorithm, or code that is relevant and important for your project).</li>
<li>Cool videos, interactive visualizations, demos, etc.</li>
</ul>
Examples of things not to put in your supplementary material:
<ul>
<li>The entire PyTorch/TensorFlow GitHub source code.</li>
<li>Any code that is larger than 10 MB.</li>
<li>Model checkpoints.</li>
<li>A computer virus.</li>
</ul>
</li>
</ul>
<p><strong>Submission:</strong> You will submit your final report as a PDF and your supplementary material as a separate PDF or ZIP file. We will provide detailed submission instructions as the deadline nears.</p>
<p><strong>Additional Submission Requirements:</strong> We will also ask you to do the following when you submit your project report:</p>
<ul>
<li><strong>Your report PDF should list <em>all</em> authors who have contributed enough to your work to warrant a co-authorship position.</strong> This includes people not enrolled in CS391R, such as faculty/advisors who sponsored your work with funding or data, and significant mentors (e.g., PhD students or postdocs who coded with you, collected data with you, or helped draft your model on a whiteboard). All authors should be listed directly underneath the title on your PDF. Include a footnote on the first page indicating which authors are not enrolled in CS391R. All co-authors should have their institutional/organizational affiliation specified below the title.
If you have non-CS391R contributors, you will be asked to describe the following:
<ul>
<li><strong>Specify the involvement of non-CS391R contributors</strong> (discussion, writing code, writing paper, etc). For an example, please see the author contributions for <a href="https://www.nature.com/nature/journal/v529/n7587/full/nature16961.html#author-information">AlphaGo (Nature 2016)</a>.</li>
<li><strong>Specify whether the project has been submitted to a peer-reviewed conference or journal.</strong> Include the full name and acronym of the conference (if applicable), for example: Neural Information Processing Systems (NeurIPS). This only applies if you have already submitted your paper/manuscript and it is under review as of the report deadline.</li>
</ul>
</li>
<li><strong>Any code that was used as a base for projects must be referenced and cited in the body of the paper.</strong> This includes assignment code, fine-tuning example code, and open-source or GitHub implementations. You can use a footnote or a full reference/bibliography entry.</li>
<li><strong>If you are using this project for multiple classes, submit the other class PDF as well.</strong> Remember, you may not submit the same final report PDF for multiple classes.</li>
</ul>
<p>In summary: include all contributing authors in your PDF, provide detailed information on non-CS391R co-authors, tell us if you have submitted the work to a conference, cite any code you used, and submit your report for the other class if this is a dual-class project.</p>
</div>
</div>
</div>
</div>
</div>
<div class="bg-no-highlight">
<div class="container">
<div class="section"></div>
<div class="section">
<div class="row">
<div class="col-md-12">
<h4>Spotlight Talk</h4>
<p>You will have an opportunity to present your work to the instructor and other students in the last week of class. This resembles the spotlight talks at large AI conferences, such as CVPR, NeurIPS, RSS, and CoRL. See an example spotlight video from <a href="https://www.youtube.com/watch?v=y77FKikIIyE">RSS 2018</a>. Each team is required to submit an MP4 video of the slides (1280x720 resolution preferred). This will enable us to load all talks onto the same laptop without any configuration or format issues, while allowing presenters to use whatever graphics or video tools they choose to generate the presentation. Presentations should be limited to <strong>4 minutes and 55 seconds</strong>, with the next speaker's video starting automatically at the 5-minute mark. If your video is longer than 4:55, it will be truncated. Please see the <a href="http://cvpr2018.thecvf.com/submission/presenter_instructions#spotloght_sessions">CVPR Presenter Instructions</a> page for more details about converting presentation slides to videos. We will send out information about uploading the presentation slides and videos as the deadline nears. For each project team, the spotlight talk can be presented by one or more members. The spotlight is worth <strong>5%</strong> of the total grade, and will be graded using the same criteria as the in-class paper presentations.</p>
</div>
</div>
</div>
</div>
</div>
</div>
<footer class="navbar navbar-expand-md navbar-dark" style="color: white;">
© 2020 UT-Austin CS391R
</footer>
</body>
</html>