<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Efstratios Gavves</title>
<link>https://egavves.github.io/</link>
<atom:link href="https://egavves.github.io/index.xml" rel="self" type="application/rss+xml" />
<description>Efstratios Gavves</description>
<generator>Wowchemy (https://wowchemy.com)</generator>
<language>en-us</language>
<lastBuildDate>Sat, 01 Jun 2030 13:00:00 +0000</lastBuildDate>
<image>
<url>https://egavves.github.io/media/icon_huac07b9ebb8ffd2b1d1617642febbe690_115450_512x512_fill_lanczos_center_3.png</url>
<title>Efstratios Gavves</title>
<link>https://egavves.github.io/</link>
</image>
<item>
<title>ERC Starting Grant</title>
<link>https://egavves.github.io/project/eva/</link>
<pubDate>Wed, 01 Sep 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/eva/</guid>
<description><p>Visual artificial intelligence automatically interprets what happens in visual data like videos. Today’s research grapples with queries like: <em>&ldquo;Is this person playing basketball?&rdquo;</em>; <em>&ldquo;Find the location of the brain stroke&rdquo;</em>; or <em>&ldquo;Track the glacier fractures in satellite footage&rdquo;</em>. All these queries are about visual observations that have already taken place. Today’s algorithms focus on explaining past visual observations. Naturally, not all queries are about the past: <em>&ldquo;Will this person draw something in or out of their pocket?&rdquo;</em>; <em>&ldquo;Where will the tumour be in 5 seconds given breathing patterns and moving organs?&rdquo;</em>; or, <em>&ldquo;How will the glacier fracture given the current motion and melting patterns?&rdquo;</em>. For these queries and all others, the next generation of visual algorithms must expect what happens next given past visual observations. Visual artificial intelligence must also be able to prevent events before the fact, rather than only explain them after it. I propose an ambitious 5-year project to design algorithms that learn to expect the possible futures from visual sequences.</p>
<p>The main challenge for expecting possible futures is having visual algorithms that learn temporality in visual sequences. Today’s algorithms cannot do this convincingly. First, they are time-deterministic and ignore uncertainty, which is part of any expected future. I propose time-stochastic visual algorithms. Second, today’s algorithms are time-extrinsic and treat time as an external input or output variable. I propose time-intrinsic visual algorithms that integrate time within their latent representations. Third, visual algorithms must account for innumerable spatiotemporal dynamics, despite their own finite nature. I propose time-geometric visual algorithms that constrain temporal latent spaces to known geometries.</p>
<p>EVA addresses fundamental research issues in the automatic interpretation of future visual sequences. Its results will serve as a basis for ground-breaking technological advances in practical vision applications.</p>
</description>
</item>
<item>
<title>NWO VIDI</title>
<link>https://egavves.github.io/project/timing/</link>
<pubDate>Wed, 01 Sep 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/timing/</guid>
<description><p>From Facebook’s 3.5 billion live streams to the complex MRI sequences and satellite footage monitoring glaciers, video recognition is becoming increasingly relevant. Ultimately it will enable artificial intelligence to understand what is happening, where, and when in videos. Encouraged by the breakthrough of deep representation learning in static image recognition, today&rsquo;s video recognition algorithms emphasize static representations. In effect, they are time invariant. Ignoring time like this suffices in simple short videos, but in tomorrow’s applications recognizing time is imperative: it determines whether a suspect draws something in or out of their pocket, where a tumour will move in the MRI or at which rate glaciers melt in satellite footage. For all these cases and more, video algorithms must be time equivariant, that is, yield representations that change proportionally to the temporal change in the input. As we move to video understanding where temporality is critical, time equivariant algorithms are a must.</p>
<p>This is a 5-year research program that studies, develops and evaluates time equivariant video algorithms. To achieve this, we will approach video algorithms from two angles: time geometry, and time supervision. Geometry helps with accounting for innumerable patterns without blowing up the representation complexity. Time supervision helps with learning time equivariance, without relying on strong manual supervision. A temporal decathlon competition will be introduced to the community to evaluate, disseminate and utilize the temporal behaviour of video algorithms. The decathlon will serve as a proxy for designing better video algorithms more efficiently. It will also open up video algorithms to other disciplines, where researchers have videos and know their temporal properties but do not have a common reference point. All research will be published in the top relevant conferences and journals. The major innovation of the proposed research is understanding and exploiting time in video algorithms.</p>
</description>
</item>
<item>
<title>QUVA</title>
<link>https://egavves.github.io/project/quva/</link>
<pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/quva/</guid>
<description><p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.</p>
<p>Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.</p>
<p>Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.</p>
<p>Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.</p>
<p>Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.</p>
</description>
</item>
<item>
<title>POP-AART</title>
<link>https://egavves.github.io/project/popaart/</link>
<pubDate>Wed, 01 Sep 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/popaart/</guid>
<description><p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.</p>
<p>Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.</p>
<p>Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.</p>
<p>Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.</p>
<p>Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.</p>
</description>
</item>
<item>
<title>IvI grant</title>
<link>https://egavves.github.io/project/nnds/</link>
<pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/nnds/</guid>
<description><p>Artificial Intelligence has repeatedly been breaking records in core machine learning and computer vision tasks, including object recognition and action classification, with ever-increasing neural network depth, mainly due to the success of deep neural networks. Despite their success, the inherent complexity of deep neural networks makes it difficult to understand in depth how their complex capabilities arise from the simple dynamics of the artificial neurons. As a consequence, deep networks are often associated with a lack of explainability of predictions, instability, or even a lack of transparency when it comes to improving neural network building blocks.</p>
<p>In this project, deep neural networks (DNN) will be studied in the context of complex adaptive systems analysis. The goal is to gain insights into the structural and functional properties of the neural network computational graph resulting from the learning process. Techniques that will be employed include dynamical systems theory and iterative maps (chaotic attractors; Lyapunov exponent), information theory (Shannon entropy, mutual information, multivariate measures such as synergistic information), and network theory. Overarching questions include: How do these (multilayer) networks self-organize to solve a particular task? How is information represented in these systems? Is there a set of fundamental properties underlying the structure and dynamics of deep neural networks?</p>
</description>
</item>
<item>
<title>NWO LIFT</title>
<link>https://egavves.github.io/project/flora/</link>
<pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/flora/</guid>
<description><p>In today’s deep learning era in computer vision and machine learning, computers rival humans in what were once considered to be human-only tasks, with impressive results in object and scene recognition, semantic segmentation and action classification. In this setting, automotive companies, including BMW, Toyota and Tesla to name just three examples, have taken up the challenge to use deep learning technologies to design the next generation of vehicles, namely fully autonomous, self-driving cars. Current vehicles already enjoy conditional automation (level 3 out of 5), where the vehicle can drive autonomously for as long as no unexpected situations happen. However, the ultimate goal is to arrive at vehicles that drive fully autonomously (level 5 out of 5), where cars navigate without human intervention, avoiding collisions while driving comfortably and being socially aware of their surroundings. Going from conditional to full automation, however, is disproportionately hard, as the recent fatal accidents by the experimental Tesla and Uber autonomous vehicles have shown. Significant academic and industrial research efforts are necessary to reach autonomous vehicles with full automation levels that can predict and avoid collisions, while driving comfortably.</p>
<p>For research in collision avoidance, predicting future trajectories of traffic participants is of the utmost importance. Unfortunately, current collision checkers operate under the assumption of a static world where objects do not move. Especially important are the most vulnerable traffic participants, like pedestrians, bicyclists and motorcyclists, who, unlike cars, are highly manoeuvrable and cannot be modelled by standard techniques. Predicting the future trajectories of vulnerable traffic participants is crucial, as they run the highest risk of getting injured in accidents. In Germany alone 2.5 million traffic accidents occurred in 2016, including 65,000 serious injuries and 3,000 fatalities, while in the Netherlands there were 613 fatalities in traffic accidents in 2017, 407 involving cyclists and motorcyclists. This proposal places special emphasis on predicting future trajectories of vulnerable traffic participants.</p>
</description>
</item>
<item>
<title>TKI-NKI</title>
<link>https://egavves.github.io/project/histo-ai/</link>
<pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/histo-ai/</guid>
<description><p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.</p>
<p>Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.</p>
<p>Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.</p>
<p>Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.</p>
<p>Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.</p>
</description>
</item>
<item>
<title>TKI-AMC</title>
<link>https://egavves.github.io/project/airborne/</link>
<pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/project/airborne/</guid>
<description><p>Stroke is the leading cause of disability in adults and every year 40,000 patients suffer a stroke in the Netherlands. Large vessel occlusions (LVO) disproportionately contribute to stroke-related dependence and death. After publication of the Dutch MR CLEAN trial in 2015, endovascular thrombectomy (EVT) has been described as one of the biggest breakthroughs in modern medicine. EVT for ischemic stroke patients with LVO arriving within 6 hours increases the odds of achieving a good outcome nearly 2.5-fold. Imaging-based treatment selection promises equal effectiveness of EVT for LVO stroke in patients arriving later or for wake-up strokes, and requires information on the volume of salvageable and infarcted tissue.</p>
<p>Assessment of salvageable and infarcted tissue with CT perfusion (CTP) is possible but difficult. Most primary stroke centres (PSCs) cannot perform and interpret CTP, necessitating transfer from PSCs to comprehensive stroke centres (CSCs). Conversely, non-contrast CT (NCCT) and CT angiography (CTA) are currently available in all PSCs.</p>
<p>The aim of AIRBORNE is to develop AI-based tooling for estimating infarct volume on NCCT and CTA scans using deep learning neural networks, making CTP unnecessary. Early treatment selection will speed up transfers and decrease onset-to-treatment times, improving the outcome of treated patients.
This project contributes to the topsector Life Sciences &amp; Health goals by applying AI algorithms in cloud-based decision tools to improve outcomes and healthcare efficacy while gaining insight into data-efficient AI.</p>
</description>
</item>
<item>
<title>Example Talk</title>
<link>https://egavves.github.io/talk/example-talk/</link>
<pubDate>Sat, 01 Jun 2030 13:00:00 +0000</pubDate>
<guid>https://egavves.github.io/talk/example-talk/</guid>
<description><div class="alert alert-note">
<div>
Click on the <strong>Slides</strong> button above to view the built-in slides feature.
</div>
</div>
<p>Slides can be added in a few ways:</p>
<ul>
<li><strong>Create</strong> slides using Wowchemy&rsquo;s <a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener"><em>Slides</em></a> feature and link them using the <code>slides</code> parameter in the front matter of the talk file</li>
<li><strong>Upload</strong> an existing slide deck to <code>static/</code> and link it using the <code>url_slides</code> parameter in the front matter of the talk file (see the front matter example below)</li>
<li><strong>Embed</strong> your slides (e.g. Google Slides) or presentation video on this page using <a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">shortcodes</a>.</li>
</ul>
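<p>As a minimal sketch, the front matter of a talk file (for instance a hypothetical <code>content/talk/example-talk/index.md</code>) could reference slides through the <code>slides</code> and <code>url_slides</code> parameters from the list above; the values here are illustrative only:</p>
<pre><code>---
title: Example Talk
date: 2030-06-01T13:00:00Z
# Option 1: link a deck created with Wowchemy's Slides feature
# (the value is the slug of a deck under content/slides/)
slides: example
# Option 2: link an existing deck uploaded to static/uploads/
url_slides: uploads/example-slides.pdf
---
</code></pre>
<p>Typically only one of the two parameters is needed; whichever is set makes the <strong>Slides</strong> button on the talk page point to that deck.</p>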
<p>Further event details, including <a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">page elements</a> such as image galleries, can be added to the body of this page.</p>
</description>
</item>
<item>
<title>BISCUIT: Causal Representation Learning from Binary Interactions</title>
<link>https://egavves.github.io/publication/lippe-2023-biscuit/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/lippe-2023-biscuit/</guid>
<description></description>
</item>
<item>
<title>Graph Switching Dynamical Systems</title>
<link>https://egavves.github.io/publication/liu-2023-grass/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/liu-2023-grass/</guid>
<description></description>
</item>
<item>
<title>Latent Field Discovery In Interacting Dynamical Systems With Neural Fields</title>
<link>https://egavves.github.io/publication/kofinas-2023-aether/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/kofinas-2023-aether/</guid>
<description></description>
</item>
<item>
<title>Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN</title>
<link>https://egavves.github.io/publication/devries-2023-perfunet/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/devries-2023-perfunet/</guid>
<description></description>
</item>
<item>
<title>Modulated Neural ODEs</title>
<link>https://egavves.github.io/publication/auzina-2023-monode/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/auzina-2023-monode/</guid>
<description></description>
</item>
<item>
<title>Neural Modulation Fields for Conditional Cone Beam Neural Tomography</title>
<link>https://egavves.github.io/publication/papa-2023-neuralct/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/papa-2023-neuralct/</guid>
<description></description>
</item>
<item>
<title>Noise2Aliasing: Unsupervised Deep Learning for View Aliasing and Noise Reduction in 4DCBCT</title>
<link>https://egavves.github.io/publication/papa-2023-noise-2-aliasing/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/papa-2023-noise-2-aliasing/</guid>
<description></description>
</item>
<item>
<title>PC-Reg: A pyramidal prediction–correction approach for large deformation image registration</title>
<link>https://egavves.github.io/publication/yin-2023-pcreg/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/yin-2023-pcreg/</guid>
<description></description>
</item>
<item>
<title>Spatio-temporal physics-informed learning: A novel approach to CT perfusion analysis in acute ischemic stroke</title>
<link>https://egavves.github.io/publication/devries-2023-sppinn/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/devries-2023-sppinn/</guid>
<description></description>
</item>
<item>
<title>Time does tell: Self-supervised time-tuning of dense image representations</title>
<link>https://egavves.github.io/publication/salehi-2023-time/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/salehi-2023-time/</guid>
<description></description>
</item>
<item>
<title>Towards open-vocabulary video instance segmentation</title>
<link>https://egavves.github.io/publication/wang-2023-towards/</link>
<pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/wang-2023-towards/</guid>
<description></description>
</item>
<item>
<title>CITRIS: Causal Identifiability from Temporal Intervened Sequences</title>
<link>https://egavves.github.io/publication/lippe-2022-citris/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/lippe-2022-citris/</guid>
<description></description>
</item>
<item>
<title>Continual Learning of Dynamical Systems With Competitive Multi-Head Reservoirs</title>
<link>https://egavves.github.io/publication/bereska-2022-continualdynamics/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/bereska-2022-continualdynamics/</guid>
<description></description>
</item>
<item>
<title>Deep Learning-Based Thrombus Localization and Segmentation in Patients with Posterior Circulation Stroke</title>
<link>https://egavves.github.io/publication/zoetmulder-2022-thrombus/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/zoetmulder-2022-thrombus/</guid>
<description></description>
</item>
<item>
<title>DeepSMILE: Self-supervised heterogeneity-aware multiple instance learning for DNA damage response defect classification directly from H&amp;E whole-slide images</title>
<link>https://egavves.github.io/publication/schirris-2022-deepsmile/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/schirris-2022-deepsmile/</guid>
<description></description>
</item>
<item>
<title>Dynamic Prototype Convolution Network for Few-Shot Semantic Segmentation</title>
<link>https://egavves.github.io/publication/liu-2022-prototypes/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/liu-2022-prototypes/</guid>
<description></description>
</item>
<item>
<title>Efficient Neural Causal Discovery without Acyclicity Constraints</title>
<link>https://egavves.github.io/publication/lippe-2022-enco/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/lippe-2022-enco/</guid>
<description></description>
</item>
<item>
<title>NFormer: Robust Person Re-Identification With Neighbor Transformer</title>
<link>https://egavves.github.io/publication/wang-2022-nformer/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/wang-2022-nformer/</guid>
<description></description>
</item>
<item>
<title>Stability Regularization for Discrete Representation Learning</title>
<link>https://egavves.github.io/publication/pervez-2022-stabreg/</link>
<pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/pervez-2022-stabreg/</guid>
<description></description>
</item>
<item>
<title>Automated final lesion segmentation in posterior circulation acute ischemic stroke using deep learning</title>
<link>https://egavves.github.io/publication/zoetmulder-2021-automated/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/zoetmulder-2021-automated/</guid>
<description></description>
</item>
<item>
<title>Batch Bayesian Optimization on Permutations using Acquisition Weighted Kernels</title>
<link>https://egavves.github.io/publication/oh-2021-batch/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oh-2021-batch/</guid>
<description></description>
</item>
<item>
<title>DeepSMILE: Self-supervised heterogeneity-aware multiple instance learning for DNA damage response defect classification directly from H&amp;E whole-slide images</title>
<link>https://egavves.github.io/publication/schirris-2021-deepsmile/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/schirris-2021-deepsmile/</guid>
<description></description>
</item>
<item>
<title>Efficient Neural Causal Discovery without Acyclicity Constraints</title>
<link>https://egavves.github.io/publication/lippe-2021-efficient/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/lippe-2021-efficient/</guid>
<description></description>
</item>
<item>
<title>Federated mixture of experts</title>
<link>https://egavves.github.io/publication/reisser-2021-federated/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/reisser-2021-federated/</guid>
<description></description>
</item>
<item>
<title>Mixed Variable Bayesian Optimization with Frequency Modulated Kernels</title>
<link>https://egavves.github.io/publication/oh-2021-mixed/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oh-2021-mixed/</guid>
<description></description>
</item>
<item>
<title>Model decay in long-term tracking</title>
<link>https://egavves.github.io/publication/gavves-2021-model/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gavves-2021-model/</guid>
<description></description>
</item>
<item>
<title>Multiple-instance learning for assessing prognosis of ductal carcinoma in situ</title>
<link>https://egavves.github.io/publication/dal-2021-multiple/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/dal-2021-multiple/</guid>
<description></description>
</item>
<item>
<title>Neural Feature Matching in Implicit 3D Representations</title>
<link>https://egavves.github.io/publication/chen-2021-neural/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/chen-2021-neural/</guid>
<description></description>
</item>
<item>
<title>Quasibinary Classifier for Images with Zero and Multiple Labels</title>
<link>https://egavves.github.io/publication/liao-2021-quasibinary/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/liao-2021-quasibinary/</guid>
<description></description>
</item>
<item>
<title>Rotation Equivariant Siamese Networks for Tracking</title>
<link>https://egavves.github.io/publication/gupta-2021-rotation/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gupta-2021-rotation/</guid>
<description></description>
</item>
<item>
<title>Roto-translated Local Coordinate Frames For Interacting Dynamical Systems</title>
<link>https://egavves.github.io/publication/kofinas-2021-roto/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/kofinas-2021-roto/</guid>
<description></description>
</item>
<item>
<title>Self-selective context for interaction recognition</title>
<link>https://egavves.github.io/publication/kilickaya-2021-self/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/kilickaya-2021-self/</guid>
<description></description>
</item>
<item>
<title>Sparse-Shot Learning for Extremely Many Localisations</title>
<link>https://egavves.github.io/publication/panteli-2021-sparse/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/panteli-2021-sparse/</guid>
<description></description>
</item>
<item>
<title>Spectral Smoothing Unveils Phase Transitions in Hierarchical Variational Autoencoders</title>
<link>https://egavves.github.io/publication/pervez-2021-spectral/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/pervez-2021-spectral/</guid>
<description></description>
</item>
<item>
<title>Tackling occlusion in Siamese tracking with structured dropouts</title>
<link>https://egavves.github.io/publication/gupta-2021-tackling/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gupta-2021-tackling/</guid>
<description></description>
</item>
<item>
<title>Transformers for Ischemic Stroke Infarct Core Segmentation from Spatio-temporal CT Perfusion Scans</title>
<link>https://egavves.github.io/publication/de-2021-transformers/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/de-2021-transformers/</guid>
<description></description>
</item>
<item>
<title>Unsharp Mask Guided Filtering</title>
<link>https://egavves.github.io/publication/shi-2021-unsharp/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/shi-2021-unsharp/</guid>
<description></description>
</item>
<item>
<title>Welcome to Wowchemy, the website builder for Hugo</title>
<link>https://egavves.github.io/post/getting-started/</link>
<pubDate>Sun, 13 Dec 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/post/getting-started/</guid>
<description><h2 id="overview">Overview</h2>
<ol>
<li>The Wowchemy website builder for Hugo, along with its starter templates, is designed for professional creators, educators, and teams/organizations - although it can be used to create any kind of site</li>
<li>The template can be modified and customised to suit your needs. It&rsquo;s a good platform for anyone looking to take control of their data and online identity whilst having the convenience to start off with a <strong>no-code solution (write in Markdown and customize with YAML parameters)</strong> and having <strong>flexibility to later add even deeper personalization with HTML and CSS</strong></li>
<li>You can work with all your favourite tools and apps with hundreds of plugins and integrations to speed up your workflows, interact with your readers, and much more</li>
</ol>
<figure id="figure-the-template-is-mobile-first-with-a-responsive-design-to-ensure-that-your-site-looks-stunning-on-every-device">
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/wowchemy/wowchemy-hugo-modules/master/academic.png" alt="The template is mobile first with a responsive design to ensure that your site looks stunning on every device." loading="lazy" data-zoomable /></div>
</div><figcaption>
The template is mobile first with a responsive design to ensure that your site looks stunning on every device.
</figcaption></figure>
<h2 id="get-started">Get Started</h2>
<ul>
<li>👉 <a href="https://wowchemy.com/templates/" target="_blank" rel="noopener"><strong>Create a new site</strong></a></li>
<li>📚 <a href="https://wowchemy.com/docs/" target="_blank" rel="noopener"><strong>Personalize your site</strong></a></li>
<li>💬 <a href="https://discord.gg/z8wNYzb" target="_blank" rel="noopener">Chat with the <strong>Wowchemy community</strong></a> or <a href="https://discourse.gohugo.io" target="_blank" rel="noopener"><strong>Hugo community</strong></a></li>
<li>🐦 Twitter: <a href="https://twitter.com/wowchemy" target="_blank" rel="noopener">@wowchemy</a> <a href="https://twitter.com/GeorgeCushen" target="_blank" rel="noopener">@GeorgeCushen</a> <a href="https://twitter.com/search?q=%23MadeWithWowchemy&amp;src=typed_query" target="_blank" rel="noopener">#MadeWithWowchemy</a></li>
<li>💡 <a href="https://github.com/wowchemy/wowchemy-hugo-modules/issues" target="_blank" rel="noopener">Request a <strong>feature</strong> or report a <strong>bug</strong> for <em>Wowchemy</em></a></li>
<li>⬆️ <strong>Updating Wowchemy?</strong> View the <a href="https://wowchemy.com/docs/hugo-tutorials/update/" target="_blank" rel="noopener">Update Tutorial</a> and <a href="https://wowchemy.com/updates/" target="_blank" rel="noopener">Release Notes</a></li>
</ul>
<h2 id="crowd-funded-open-source-software">Crowd-funded open-source software</h2>
<p>To help us develop this template and software sustainably under the MIT license, we ask all individuals and businesses that use it to help support its ongoing maintenance and development via sponsorship.</p>
<h3 id="-click-here-to-become-a-sponsor-and-help-support-wowchemys-future-httpswowchemycomplans"><a href="https://wowchemy.com/plans/" target="_blank" rel="noopener">❤️ Click here to become a sponsor and help support Wowchemy&rsquo;s future ❤️</a></h3>
<p>As a token of appreciation for sponsoring, you can <strong>unlock <a href="https://wowchemy.com/plans/" target="_blank" rel="noopener">these</a> awesome rewards and extra features 🦄✨</strong></p>
<h2 id="ecosystem">Ecosystem</h2>
<ul>
<li><strong><a href="https://github.com/wowchemy/hugo-academic-cli" target="_blank" rel="noopener">Hugo Academic CLI</a>:</strong> Automatically import publications from BibTeX</li>
</ul>
<h2 id="inspiration">Inspiration</h2>
<p><a href="https://academic-demo.netlify.com/" target="_blank" rel="noopener">Check out the latest <strong>demo</strong></a> of what you&rsquo;ll get in less than 10 minutes, or <a href="https://wowchemy.com/user-stories/" target="_blank" rel="noopener">view the <strong>showcase</strong></a> of personal, project, and business sites.</p>
<h2 id="features">Features</h2>
<ul>
<li><strong>Page builder</strong> - Create <em>anything</em> with <a href="https://wowchemy.com/docs/page-builder/" target="_blank" rel="noopener"><strong>widgets</strong></a> and <a href="https://wowchemy.com/docs/content/writing-markdown-latex/" target="_blank" rel="noopener"><strong>elements</strong></a></li>
<li><strong>Edit any type of content</strong> - Blog posts, publications, talks, slides, projects, and more!</li>
<li><strong>Create content</strong> in <a href="https://wowchemy.com/docs/content/writing-markdown-latex/" target="_blank" rel="noopener"><strong>Markdown</strong></a>, <a href="https://wowchemy.com/docs/import/jupyter/" target="_blank" rel="noopener"><strong>Jupyter</strong></a>, or <a href="https://wowchemy.com/docs/install-locally/" target="_blank" rel="noopener"><strong>RStudio</strong></a></li>
<li><strong>Plugin System</strong> - Fully customizable <a href="https://wowchemy.com/docs/customization/" target="_blank" rel="noopener"><strong>color</strong> and <strong>font themes</strong></a></li>
<li><strong>Display Code and Math</strong> - Code highlighting and <a href="https://en.wikibooks.org/wiki/LaTeX/Mathematics" target="_blank" rel="noopener">LaTeX math</a> supported</li>
<li><strong>Integrations</strong> - <a href="https://analytics.google.com" target="_blank" rel="noopener">Google Analytics</a>, <a href="https://disqus.com" target="_blank" rel="noopener">Disqus commenting</a>, Maps, Contact Forms, and more!</li>
<li><strong>Beautiful Site</strong> - Simple and refreshing one page design</li>
<li><strong>Industry-Leading SEO</strong> - Help get your website found on search engines and social media</li>
<li><strong>Media Galleries</strong> - Display your images and videos with captions in a customizable gallery</li>
<li><strong>Mobile Friendly</strong> - Look amazing on every screen with a mobile friendly version of your site</li>
<li><strong>Multi-language</strong> - 34+ language packs including English, 中文, and Português</li>
<li><strong>Multi-user</strong> - Each author gets their own profile page</li>
<li><strong>Privacy Pack</strong> - Assists with GDPR</li>
<li><strong>Stand Out</strong> - Bring your site to life with animation, parallax backgrounds, and scroll effects</li>
<li><strong>One-Click Deployment</strong> - No servers. No databases. Only files.</li>
</ul>
<h2 id="themes">Themes</h2>
<p>Wowchemy and its templates come with <strong>automatic day (light) and night (dark) mode</strong> built-in. Alternatively, visitors can choose their preferred mode - click the moon icon in the top right of the <a href="https://academic-demo.netlify.com/" target="_blank" rel="noopener">Demo</a> to see it in action! Day/night mode can also be disabled by the site admin in <code>params.toml</code>.</p>
<p><a href="https://wowchemy.com/docs/customization" target="_blank" rel="noopener">Choose a stunning <strong>theme</strong> and <strong>font</strong></a> for your site. Themes are fully customizable.</p>
<h2 id="license">License</h2>
<p>Copyright 2016-present <a href="https://georgecushen.com" target="_blank" rel="noopener">George Cushen</a>.</p>
<p>Released under the <a href="https://github.com/wowchemy/wowchemy-hugo-modules/blob/master/LICENSE.md" target="_blank" rel="noopener">MIT</a> license.</p>
</description>
</item>
<item>
<title>Automatic triage of 12-lead ECGs using deep convolutional neural networks</title>
<link>https://egavves.github.io/publication/vandeleur-2020-automatic/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/vandeleur-2020-automatic/</guid>
<description></description>
</item>
<item>
<title>Categorical normalizing flows via continuous transformations</title>
<link>https://egavves.github.io/publication/lippe-2020-categorical/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/lippe-2020-categorical/</guid>
<description></description>
</item>
<item>
<title>Low Bias Low Variance Gradient Estimates for Hierarchical Boolean Stochastic Networks</title>
<link>https://egavves.github.io/publication/pervez-2020-low/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/pervez-2020-low/</guid>
<description></description>
</item>
<item>
<title>Pic: Permutation invariant convolution for recognizing long-range activities</title>
<link>https://egavves.github.io/publication/hussein-2020-pic/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/hussein-2020-pic/</guid>
<description></description>
</item>
<item>
<title>Pointmixup: Augmentation for point clouds</title>
<link>https://egavves.github.io/publication/chen-2020-pointmixup/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/chen-2020-pointmixup/</guid>
<description></description>
</item>
<item>
<title>Siamese tracking of cell behaviour patterns</title>
<link>https://egavves.github.io/publication/panteli-2020-siamese/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/panteli-2020-siamese/</guid>
<description></description>
</item>
<item>
<title>Variance Reduction in Hierarchical Variational Autoencoders</title>
<link>https://egavves.github.io/publication/pervez-2020-variance/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/pervez-2020-variance/</guid>
<description></description>
</item>
<item>
<title>3d neighborhood convolution: Learning depth-aware features for rgb-d and rgb semantic segmentation</title>
<link>https://egavves.github.io/publication/chen-20193-d/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/chen-20193-d/</guid>
<description></description>
</item>
<item>
<title>Combinatorial bayesian optimization using the graph cartesian product</title>
<link>https://egavves.github.io/publication/oh-2019-combinatorial/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oh-2019-combinatorial/</guid>
<description></description>
</item>
<item>
<title>I bet you are wrong: Gambling adversarial networks for structured semantic segmentation</title>
<link>https://egavves.github.io/publication/samson-2019-bet/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/samson-2019-bet/</guid>
<description></description>
</item>
<item>
<title>Increasing Expressivity of a Hyperspherical VAE</title>
<link>https://egavves.github.io/publication/davidson-2019-increasing/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/davidson-2019-increasing/</guid>
<description></description>
</item>
<item>
<title>Initialized Equilibrium Propagation for Backprop-Free Training</title>
<link>https://egavves.github.io/publication/oconnor-2019-initialized/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oconnor-2019-initialized/</guid>
<description></description>
</item>
<item>
<title>SafeCritic: Collision-aware trajectory prediction</title>
<link>https://egavves.github.io/publication/vanderheiden-2019-safecritic/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/vanderheiden-2019-safecritic/</guid>
<description></description>
</item>
<item>
<title>Spherical regression: Learning viewpoints, surface normals and 3d rotations on n-spheres</title>
<link>https://egavves.github.io/publication/liao-2019-spherical/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/liao-2019-spherical/</guid>
<description></description>
</item>
<item>
<title>The seventh visual object tracking vot2019 challenge results</title>
<link>https://egavves.github.io/publication/kristan-2019-seventh/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/kristan-2019-seventh/</guid>
<description></description>
</item>
<item>
<title>Timeception for complex action recognition</title>
<link>https://egavves.github.io/publication/hussein-2019-timeception/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/hussein-2019-timeception/</guid>
<description></description>
</item>
<item>
<title>Training a spiking neural network with equilibrium propagation</title>
<link>https://egavves.github.io/publication/oconnor-2019-training/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oconnor-2019-training/</guid>
<description></description>
</item>
<item>
<title>Videograph: Recognizing minutes-long human activities in videos</title>
<link>https://egavves.github.io/publication/hussein-2019-videograph/</link>
<pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/hussein-2019-videograph/</guid>
<description></description>
</item>
<item>
<title>BOCK: Bayesian Optimization with Cylindrical Kernels</title>
<link>https://egavves.github.io/publication/oh-2018-bock/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oh-2018-bock/</guid>
<description></description>
</item>
<item>
<title>Dynamic Adaptation on Non-Stationary Visual Domains</title>
<link>https://egavves.github.io/publication/shkodrani-2018-dynamic/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/shkodrani-2018-dynamic/</guid>
<description></description>
</item>
<item>
<title>Improving word embedding compositionality using lexicographic definitions</title>
<link>https://egavves.github.io/publication/scheepers-2018-improving/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/scheepers-2018-improving/</guid>
<description></description>
</item>
<item>
<title>Long-term tracking in the wild: A benchmark</title>
<link>https://egavves.github.io/publication/valmadre-2018-long/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/valmadre-2018-long/</guid>
<description></description>
</item>
<item>
<title>Relaxed quantization for discretized neural networks</title>
<link>https://egavves.github.io/publication/louizos-2018-relaxed/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/louizos-2018-relaxed/</guid>
<description></description>
</item>
<item>
<title>Searching and Matching Texture-free 3D Shapes in Images</title>
<link>https://egavves.github.io/publication/liao-2018-searching/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/liao-2018-searching/</guid>
<description></description>
</item>
<item>
<title>The sixth visual object tracking vot2018 challenge results</title>
<link>https://egavves.github.io/publication/kristan-2018-sixth/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/kristan-2018-sixth/</guid>
<description></description>
</item>
<item>
<title>Training a network of spiking neurons with equilibrium propagation</title>
<link>https://egavves.github.io/publication/oconnor-2018-training/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oconnor-2018-training/</guid>
<description></description>
</item>
<item>
<title>Video Time: Properties, Encoders and Evaluation</title>
<link>https://egavves.github.io/publication/ghodrati-2018-video/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/ghodrati-2018-video/</guid>
<description></description>
</item>
<item>
<title>Videolstm convolves, attends and flows for action recognition</title>
<link>https://egavves.github.io/publication/li-2018-videolstm/</link>
<pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/li-2018-videolstm/</guid>
<description></description>
</item>
<item>
<title>Action recognition with dynamic image networks</title>
<link>https://egavves.github.io/publication/bilen-2017-action/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/bilen-2017-action/</guid>
<description></description>
</item>
<item>
<title>Reflectance and natural illumination from single-material specular objects using deep learning</title>
<link>https://egavves.github.io/publication/georgoulis-2017-reflectance/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/georgoulis-2017-reflectance/</guid>
<description></description>
</item>
<item>
<title>Self-Supervised Video Representation Learning With Odd-One-Out Networks</title>
<link>https://egavves.github.io/publication/fernando-2017-self/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/fernando-2017-self/</guid>
<description></description>
</item>
<item>
<title>Temporally Efficient Deep Learning with Spikes</title>
<link>https://egavves.github.io/publication/oconnor-2017-temporally/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/oconnor-2017-temporally/</guid>
<description></description>
</item>
<item>
<title>Tracking by natural language specification</title>
<link>https://egavves.github.io/publication/li-2017-tracking/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/li-2017-tracking/</guid>
<description></description>
</item>
<item>
<title>Tracking for half an hour</title>
<link>https://egavves.github.io/publication/tao-2017-tracking/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/tao-2017-tracking/</guid>
<description></description>
</item>
<item>
<title>Unified embedding and metric learning for zero-exemplar event detection</title>
<link>https://egavves.github.io/publication/hussein-2017-unified/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/hussein-2017-unified/</guid>
<description></description>
</item>
<item>
<title>University of Amsterdam and Renmin University at TRECVID 2017: Searching Video, Detecting Events and Describing Video.</title>
<link>https://egavves.github.io/publication/snoek-2017-university/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/snoek-2017-university/</guid>
<description></description>
</item>
<item>
<title>Automatic comment generation using a neural translation model</title>
<link>https://egavves.github.io/publication/haije-2016-automatic/</link>
<pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/haije-2016-automatic/</guid>
<description></description>
</item>
<item>
<title>Deep reflectance maps</title>
<link>https://egavves.github.io/publication/rematas-2016-deep/</link>
<pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/rematas-2016-deep/</guid>
<description></description>
</item>
<item>
<title>Dynamic image networks for action recognition</title>
<link>https://egavves.github.io/publication/bilen-2016-dynamic/</link>
<pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/bilen-2016-dynamic/</guid>
<description></description>
</item>
<item>
<title>Online action detection</title>
<link>https://egavves.github.io/publication/de-2016-online/</link>
<pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/de-2016-online/</guid>
<description></description>
</item>
<item>
<title>Rank pooling for action recognition</title>
<link>https://egavves.github.io/publication/fernando-2016-rank/</link>
<pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/fernando-2016-rank/</guid>
<description></description>
</item>
<item>
<title>Siamese instance search for tracking</title>
<link>https://egavves.github.io/publication/tao-2016-siamese/</link>
<pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/tao-2016-siamese/</guid>
<description></description>
</item>
<item>
<title>Active transfer learning with zero-shot priors: Reusing past datasets for future tasks</title>
<link>https://egavves.github.io/publication/gavves-2015-active/</link>
<pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gavves-2015-active/</guid>
<description></description>
</item>
<item>
<title>Guiding the long-short term memory model for image caption generation</title>
<link>https://egavves.github.io/publication/jia-2015-guiding/</link>
<pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/jia-2015-guiding/</guid>
<description></description>
</item>
<item>
<title>Learning to rank based on subsequences</title>
<link>https://egavves.github.io/publication/fernando-2015-learning/</link>
<pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/fernando-2015-learning/</guid>
<description></description>
</item>
<item>
<title>Local alignments for fine-grained categorization</title>
<link>https://egavves.github.io/publication/gavves-2015-local/</link>
<pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gavves-2015-local/</guid>
<description></description>
</item>
<item>
<title>Modeling video evolution for action recognition</title>
<link>https://egavves.github.io/publication/fernando-2015-modeling/</link>
<pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/fernando-2015-modeling/</guid>
<description></description>
</item>
<item>
<title>Attributes make sense on segmented objects</title>
<link>https://egavves.github.io/publication/li-2014-attributes/</link>
<pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/li-2014-attributes/</guid>
<description></description>
</item>
<item>
<title>Conceptlets: Selective semantics for classifying video events</title>
<link>https://egavves.github.io/publication/mazloom-2014-conceptlets/</link>
<pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/mazloom-2014-conceptlets/</guid>
<description></description>
</item>
<item>
<title>Costa: Co-occurrence statistics for zero-shot classification</title>
<link>https://egavves.github.io/publication/mensink-2014-costa/</link>
<pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/mensink-2014-costa/</guid>
<description></description>
</item>
<item>
<title>Locality in generic instance search from one example</title>
<link>https://egavves.github.io/publication/tao-2014-locality/</link>
<pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/tao-2014-locality/</guid>
<description></description>
</item>
<item>
<title>Codemaps - segment, classify and search objects locally</title>
<link>https://egavves.github.io/publication/li-2013-codemaps/</link>
<pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/li-2013-codemaps/</guid>
<description></description>
</item>
<item>
<title>Fine-grained categorization by alignments</title>
<link>https://egavves.github.io/publication/gavves-2013-fine/</link>
<pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gavves-2013-fine/</guid>
<description></description>
</item>
<item>
<title>Searching informative concept banks for video event detection</title>
<link>https://egavves.github.io/publication/mazloom-2013-searching/</link>
<pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/mazloom-2013-searching/</guid>
<description></description>
</item>
<item>
<title>Convex reduction of high-dimensional kernels for visual classification</title>
<link>https://egavves.github.io/publication/gavves-2012-convex/</link>
<pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gavves-2012-convex/</guid>
<description></description>
</item>
<item>
<title>Visual synonyms for landmark image retrieval</title>
<link>https://egavves.github.io/publication/gavves-2012-visual/</link>
<pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gavves-2012-visual/</guid>
<description></description>
</item>
<item>
<title>Personalizing automated image annotation using cross-entropy</title>
<link>https://egavves.github.io/publication/li-2011-personalizing/</link>
<pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/li-2011-personalizing/</guid>
<description></description>
</item>
<item>
<title>Landmark image retrieval using visual synonyms</title>
<link>https://egavves.github.io/publication/gavves-2010-landmark/</link>
<pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/gavves-2010-landmark/</guid>
<description></description>
</item>
<item>
<title>The MediaMill TRECVID 2010 semantic video search engine</title>
<link>https://egavves.github.io/publication/snoek-2010-mediamill/</link>
<pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
<guid>https://egavves.github.io/publication/snoek-2010-mediamill/</guid>
<description></description>
</item>