index.json
[{"authors":null,"categories":null,"content":"Naoshi Kaneko (金子 直史) is an assistant professor in Department of Integrated Information Technology, Aoyama Gakuin University. His research interests include computer vision, pattern recognition, non-verbal behavior generation and machine learning.\nkaneko \u0026lt;at\u0026gt; it aoyama ac jp\nLinks Google Scholar researchmap Aoyama Computer Vision Lab ","date":1656633600,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":1661335005,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"","publishdate":"0001-01-01T00:00:00Z","relpermalink":"","section":"authors","summary":"Naoshi Kaneko (金子 直史) is an assistant professor in Department of Integrated Information Technology, Aoyama Gakuin University. His research interests include computer vision, pattern recognition, non-verbal","tags":null,"title":"Naoshi Kaneko","type":"authors"},{"authors":[],"categories":null,"content":" Click on the Slides button above to view the built-in slides feature. Slides can be added in a few ways:\nCreate slides using Wowchemy’s Slides feature and link using slides parameter in the front matter of the talk file Upload an existing slide deck to static/ and link using url_slides parameter in the front matter of the talk file Embed your slides (e.g. Google Slides) or presentation video on this page using shortcodes. Further event details, including page elements such as image galleries, can be added to the body of this page.\n","date":1906549200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1906549200,"objectID":"a8edef490afe42206247b6ac05657af0","permalink":"https://aoikaneko.github.io/talk/example-talk/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/talk/example-talk/","section":"event","summary":"An example talk using Wowchemy's Markdown slides feature.","tags":[],"title":"Example Talk","type":"event"},{"authors":["Eiichi Asakawa","Naoshi Kaneko","Dai Hasegawa","Shinichi Shirakawa"],"categories":[],"content":"","date":1656633600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334982,"objectID":"9718c938235b2d8233d8ee0d682c9da5","permalink":"https://aoikaneko.github.io/publication/asakawa-2022-evaluation/","publishdate":"2022-08-24T09:56:22.333069Z","relpermalink":"/publication/asakawa-2022-evaluation/","section":"publication","summary":"[Impact Factor: 9.657]","tags":[],"title":"Evaluation of Text-to-Gesture Generation Model Using Convolutional Neural Network","type":"publication"},{"authors":["Hiroki Kojima","Naoshi Kaneko","Seiya Ito","Kazuhiko Sumi"],"categories":[],"content":"","date":1640995200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334995,"objectID":"ab5c154c22d6bcbe7b434a07f8160950","permalink":"https://aoikaneko.github.io/publication/kojima-2022-multimodal/","publishdate":"2022-08-24T09:56:34.877698Z","relpermalink":"/publication/kojima-2022-multimodal/","section":"publication","summary":"","tags":[],"title":"Multimodal Pseudo-Labeling under Various Shooting Conditions: Case Study on RGB and IR Images","type":"publication"},{"authors":["Seiya Ito","Byeongjun Ju","Naoshi Kaneko","Kazuhiko 
Sumi"],"categories":[],"content":"","date":1640995200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334989,"objectID":"30daa95afb3403c2d946d8ef8d50f1a5","permalink":"https://aoikaneko.github.io/publication/ito-2022-viewpointindependent/","publishdate":"2022-08-24T09:56:28.683269Z","relpermalink":"/publication/ito-2022-viewpointindependent/","section":"publication","summary":"","tags":[],"title":"Viewpoint-Independent Single-View 3D Object Reconstruction Using Reinforcement Learning","type":"publication"},{"authors":["(In Japanese) 長崎 好輝","林 昌希","金子 直史","青木 義満"],"categories":[],"content":"","date":1640995200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334999,"objectID":"e1b953672a3ab77f0c58a703bba57a0e","permalink":"https://aoikaneko.github.io/publication/nagasaki-2022-temporal/","publishdate":"2022-08-24T09:56:38.662699Z","relpermalink":"/publication/nagasaki-2022-temporal/","section":"publication","summary":"","tags":[],"title":"動画内の音と映像によるイベント推定タスクにおける時間方向クロスモーダルアテンションの導入 (Temporal Cross-Modal Attention for Audio-Visual Event Localization)","type":"publication"},{"authors":["Taras Kucherenko","Dai Hasegawa","Naoshi Kaneko","Gustav Eje Henter","Hedvig Kjellström"],"categories":[],"content":"","date":1612137600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334997,"objectID":"d6bf322a27f15a10bca5ddbfc3202892","permalink":"https://aoikaneko.github.io/publication/kucherenko-2021-moving/","publishdate":"2022-08-24T09:56:36.72307Z","relpermalink":"/publication/kucherenko-2021-moving/","section":"publication","summary":"[Impact Factor: 3.353]","tags":[],"title":"Moving Fast and Slow: Analysis of Representations and Post-Processing in Speech-Driven Automatic Gesture Generation","type":"publication"},{"authors":["Takafumi Nagi","Naoshi Kaneko","Seiya Ito","Kazuhiko Sumi"],"categories":[],"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334999,"objectID":"20f239a857f26a710093a791d48c144a","permalink":"https://aoikaneko.github.io/publication/nagi-2021-automatic/","publishdate":"2022-08-24T09:56:39.249271Z","relpermalink":"/publication/nagi-2021-automatic/","section":"publication","summary":"","tags":[],"title":"Automatic Dataset Collection for Speech-Driven Gesture Generation","type":"publication"},{"authors":["Taku Fujitomi","Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334983,"objectID":"db660e6afd42fc7acfc44a35a8a84828","permalink":"https://aoikaneko.github.io/publication/fujitomi-2021-bidirectional/","publishdate":"2022-08-24T09:56:22.936651Z","relpermalink":"/publication/fujitomi-2021-bidirectional/","section":"publication","summary":"","tags":[],"title":"Bi-Directional Recurrent MVSNet for High-Resolution Multi-View Stereo","type":"publication"},{"authors":["Gakuto Maruyama","Naoshi Kaneko","Seiya Ito","Kazuhiko Sumi"],"categories":[],"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334997,"objectID":"50fa378eac3a9b89c0568793dbdd7a56","permalink":"https://aoikaneko.github.io/publication/maruyama-2021-reducing/","publishdate":"2022-08-24T09:56:37.353814Z","relpermalink":"/publication/maruyama-2021-reducing/","section":"publication","summary":"","tags":[],"title":"Reducing Depth Ambiguity in 3D Human Pose and Body Shape Estimation","type":"publication"},{"authors":["Seiya Ito","Naoshi Kaneko","Kazuhiko 
Sumi"],"categories":[],"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334988,"objectID":"d91a3552c3b0ff9dca4146c2c76118ba","permalink":"https://aoikaneko.github.io/publication/ito-2021-seeing/","publishdate":"2022-08-24T09:56:28.117565Z","relpermalink":"/publication/ito-2021-seeing/","section":"publication","summary":"","tags":[],"title":"Seeing Farther than Supervision: Self-Supervised Depth Completion in Challenging Environments","type":"publication"},{"authors":["Yoshiki Nagasaki","Masaki Hayashi","Naoshi Kaneko","Yoshimitsu Aoki"],"categories":[],"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334998,"objectID":"2b76255496bd642f7603e8c898fcf79c","permalink":"https://aoikaneko.github.io/publication/nagasaki-2021-temporal/","publishdate":"2022-08-24T09:56:38.028172Z","relpermalink":"/publication/nagasaki-2021-temporal/","section":"publication","summary":"","tags":[],"title":"Temporal Cross-Modal Attention for Audio-Visual Event Localization","type":"publication"},{"authors":["Hiroki Kojima","Naoshi Kaneko","Seiya Ito","Kazuhiko Sumi"],"categories":[],"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334994,"objectID":"737593aa2bf3378bb05cb47c18c0137d","permalink":"https://aoikaneko.github.io/publication/kojima-2021-you/","publishdate":"2022-08-24T09:56:34.24606Z","relpermalink":"/publication/kojima-2021-you/","section":"publication","summary":"","tags":[],"title":"You Don't Drink a Cupboard: Improving Egocentric Action Recognition with Co-Occurrence of Verbs and Nouns","type":"publication"},{"authors":["Naoshi Kaneko","吳恩達"],"categories":["Demo","教程"],"content":"Overview The Wowchemy website builder for Hugo, along with its starter templates, is designed for professional creators, educators, and teams/organizations - although it can be used to create any kind of site The template can be modified and customised to suit your needs. It’s a good platform for anyone looking to take control of their data and online identity whilst having the convenience to start off with a no-code solution (write in Markdown and customize with YAML parameters) and having flexibility to later add even deeper personalization with HTML and CSS You can work with all your favourite tools and apps with hundreds of plugins and integrations to speed up your workflows, interact with your readers, and much more Get Started 👉 Create a new site 📚 Personalize your site 💬 Chat with the Wowchemy community or Hugo community 🐦 Twitter: @wowchemy @GeorgeCushen #MadeWithWowchemy 💡 Request a feature or report a bug for Wowchemy ⬆️ Updating Wowchemy? View the Update Tutorial and Release Notes Crowd-funded open-source software To help us develop this template and software sustainably under the MIT license, we ask all individuals and businesses that use it to help support its ongoing maintenance and development via sponsorship.\n❤️ Click here to become a sponsor and help support Wowchemy’s future ❤️ As a token of appreciation for sponsoring, you can unlock these awesome rewards and extra features 🦄✨\nEcosystem Hugo Academic CLI: Automatically import publications from BibTeX Inspiration Check out the latest demo of what you’ll get in less than 10 minutes, or view the showcase of personal, project, and business sites.\nFeatures Page builder - Create anything with widgets and elements Edit any type of content - Blog posts, publications, talks, slides, projects, and more! 
Create content in Markdown, Jupyter, or RStudio Plugin System - Fully customizable color and font themes Display Code and Math - Code highlighting and LaTeX math supported Integrations - Google Analytics, Disqus commenting, Maps, Contact Forms, and more! Beautiful Site - Simple and refreshing one page design Industry-Leading SEO - Help get your website found on search engines and social media Media Galleries - Display your images and videos with captions in a customizable gallery Mobile Friendly - Look amazing on every screen with a mobile friendly version of your site Multi-language - 34+ language packs including English, 中文, and Português Multi-user - Each author gets their own profile page Privacy Pack - Assists with GDPR Stand Out - Bring your site to life with animation, parallax backgrounds, and scroll effects One-Click Deployment - No servers. No databases. Only files. Themes Wowchemy and its templates come with automatic day (light) and night (dark) mode built-in. Alternatively, visitors can choose their preferred mode - click the moon icon in the top right of the Demo to see it in action! Day/night mode can also be disabled by the site admin in params.toml.\nChoose a stunning theme and font for your site. Themes are fully customizable.\nLicense Copyright 2016-present George Cushen.\nReleased under the MIT license.\n","date":1607817600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1607817600,"objectID":"279b9966ca9cf3121ce924dca452bb1c","permalink":"https://aoikaneko.github.io/post/getting-started/","publishdate":"2020-12-13T00:00:00Z","relpermalink":"/post/getting-started/","section":"post","summary":"Welcome 👋 We know that first impressions are important, so we've populated your new site with some initial content to help you get familiar with everything in no time.","tags":["Academic","开源"],"title":"Welcome to Wowchemy, the website builder for Hugo","type":"post"},{"authors":["Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1601510400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334987,"objectID":"3cb1e9da2949e0b509bc39413f411639","permalink":"https://aoikaneko.github.io/publication/ito-2020-latent/","publishdate":"2022-08-24T09:56:26.95097Z","relpermalink":"/publication/ito-2020-latent/","section":"publication","summary":"[Impact Factor: 3.275]","tags":[],"title":"Latent 3D Volume for Joint Depth Estimation and Semantic Segmentation from a Single Image","type":"publication"},{"authors":["Yusuke Nakasato","Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335000,"objectID":"ce4597f1bd2d6ccfbab86e19fa381e43","permalink":"https://aoikaneko.github.io/publication/nakasato-2020-complex/","publishdate":"2022-08-24T09:56:39.876575Z","relpermalink":"/publication/nakasato-2020-complex/","section":"publication","summary":"","tags":[],"title":"Complex Nonlinear and Grid-Sampling Harmonic Convolution for Rotation Equivariant Image Recognition","type":"publication"},{"authors":["Naoshi Kaneko","Mei Oyama","Masaki Hayashi","Seiya Ito","Kazuhiko 
Sumi"],"categories":[],"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334994,"objectID":"c26b5c0f8afdd26df129cd68c99e4d0e","permalink":"https://aoikaneko.github.io/publication/kaneko-2020-feature/","publishdate":"2022-08-24T09:56:33.696477Z","relpermalink":"/publication/kaneko-2020-feature/","section":"publication","summary":"","tags":[],"title":"Feature Bridging Networks for 3D Human Body Shape Estimation from a Single Depth Map","type":"publication"},{"authors":["Junji Takahashi","Kawabe Masato","Seiya Ito","Naoshi Kaneko","Wataro Takahashi","Toshiki Sakamoto","Akihiro Shibata","Yong Yu"],"categories":[],"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335003,"objectID":"5a672d9d3e4549333acfa07678730fcc","permalink":"https://aoikaneko.github.io/publication/takahashi-2020-imageretrieval/","publishdate":"2022-08-24T09:56:42.548295Z","relpermalink":"/publication/takahashi-2020-imageretrieval/","section":"publication","summary":"","tags":[],"title":"Image-Retrieval Method Using Gradient Dilation Images for Cloud-Based Positioning System with 3D Wireframe Map","type":"publication"},{"authors":["Ryo Tamura","Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335004,"objectID":"b08787412d74881262a271853c7ec78d","permalink":"https://aoikaneko.github.io/publication/tamura-2020-detailed/","publishdate":"2022-08-24T09:56:43.788562Z","relpermalink":"/publication/tamura-2020-detailed/","section":"publication","summary":"","tags":[],"title":"Towards Detailed 3D Modeling: Mesh Super-Resolution via Deformation","type":"publication"},{"authors":["(In Japanese) 伊東 聖矢","金子 直史","鷲見 和彦"],"categories":[],"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334988,"objectID":"1390556e5a1d42a91671dcbb1d0103de","permalink":"https://aoikaneko.github.io/publication/ito-2020-self/","publishdate":"2022-08-24T09:56:27.526575Z","relpermalink":"/publication/ito-2020-self/","section":"publication","summary":"","tags":[],"title":"自己教師あり学習を用いた多眼ステレオ (Self-Supervised Learning for Multi-View Stereo)","type":"publication"},{"authors":[],"categories":[],"content":"Create slides in Markdown with Wowchemy Wowchemy | Documentation\nFeatures Efficiently write slides in Markdown 3-in-1: Create, Present, and Publish your slides Supports speaker notes Mobile friendly slides Controls Next: Right Arrow or Space Previous: Left Arrow Start: Home Finish: End Overview: Esc Speaker notes: S Fullscreen: F Zoom: Alt + Click PDF Export Code Highlighting Inline code: variable\nCode block:\nporridge = \u0026#34;blueberry\u0026#34; if porridge == \u0026#34;blueberry\u0026#34;: print(\u0026#34;Eating...\u0026#34;) Math In-line math: $x + y = z$\nBlock math:\n$$ f\\left( x \\right) = ;\\frac{{2\\left( {x + 4} \\right)\\left( {x - 4} \\right)}}{{\\left( {x + 4} \\right)\\left( {x + 1} \\right)}} $$\nFragments Make content appear incrementally\n{{% fragment %}} One {{% /fragment %}} {{% fragment %}} **Two** {{% /fragment %}} {{% fragment %}} Three {{% /fragment %}} Press Space to play!\nOne Two Three A fragment can accept two optional parameters:\nclass: use a custom style (requires definition in custom CSS) weight: sets the order in which a fragment appears Speaker Notes Add speaker notes to your presentation\n{{% speaker_note %}} - Only the speaker can read these notes - Press `S` key to view {{% 
/speaker_note %}} Press the S key to view the speaker notes!\nOnly the speaker can read these notes Press S key to view Themes black: Black background, white text, blue links (default) white: White background, black text, blue links league: Gray background, white text, blue links beige: Beige background, dark text, brown links sky: Blue background, thin dark text, blue links night: Black background, thick white text, orange links serif: Cappuccino background, gray text, brown links simple: White background, black text, blue links solarized: Cream-colored background, dark green text, blue links Custom Slide Customize the slide style and background\n{{\u0026lt; slide background-image=\u0026#34;/media/boards.jpg\u0026#34; \u0026gt;}} {{\u0026lt; slide background-color=\u0026#34;#0000FF\u0026#34; \u0026gt;}} {{\u0026lt; slide class=\u0026#34;my-style\u0026#34; \u0026gt;}} Custom CSS Example Let’s make headers navy colored.\nCreate assets/css/reveal_custom.css with:\n.reveal section h1, .reveal section h2, .reveal section h3 { color: navy; } Questions? Ask\nDocumentation\n","date":1549324800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1549324800,"objectID":"0e6de1a61aa83269ff13324f3167c1a9","permalink":"https://aoikaneko.github.io/slides/example/","publishdate":"2019-02-05T00:00:00Z","relpermalink":"/slides/example/","section":"slides","summary":"An introduction to using Wowchemy's Slides feature.","tags":[],"title":"Slides","type":"slides"},{"authors":["Taras Kucherenko","Dai Hasegawa","Gustav Eje Henter","Naoshi Kaneko","Hedvig Kjellström"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334996,"objectID":"2924bf043f94582909a5dace9e9f9502","permalink":"https://aoikaneko.github.io/publication/kucherenko-2019-analyzing/","publishdate":"2022-08-24T09:56:35.493273Z","relpermalink":"/publication/kucherenko-2019-analyzing/","section":"publication","summary":"","tags":[],"title":"Analyzing Input and Output Representations for Speech-Driven Gesture Generation","type":"publication"},{"authors":["(In Japanese) 金子 直史","竹内 健太","長谷川 大","白川 真一","佐久田 博司","鷲見 和彦"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334993,"objectID":"1073d57010ab9011b968b5fc9d55de04","permalink":"https://aoikaneko.github.io/publication/kaneko-2019-bi/","publishdate":"2022-08-24T09:56:32.486498Z","relpermalink":"/publication/kaneko-2019-bi/","section":"publication","summary":"","tags":[],"title":"Bi-Directional LSTM Network を用いた発話に伴うジェスチャの自動生成手法 (Speech-to-Gesture Generation Using Bi-Directional LSTM Network)","type":"publication"},{"authors":["Kazunari Takagi","Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335002,"objectID":"4c3648d87654aec0486f5eb7402e2581","permalink":"https://aoikaneko.github.io/publication/takagi-2019-boosting/","publishdate":"2022-08-24T09:56:41.76694Z","relpermalink":"/publication/takagi-2019-boosting/","section":"publication","summary":"","tags":[],"title":"Boosting Monocular Depth Estimation with Channel Attention and Mutual Learning","type":"publication"},{"authors":["Natsuki Hase","Seiya Ito","Naoshi Kaneko","Kazuhiko 
Sumi"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334984,"objectID":"0c812e2c0e0051edd5a61bd914809feb","permalink":"https://aoikaneko.github.io/publication/hase-2019-data/","publishdate":"2022-08-24T09:56:24.046194Z","relpermalink":"/publication/hase-2019-data/","section":"publication","summary":"","tags":[],"title":"Data Augmentation for Intra-Class Imbalance with Generative Adversarial Network","type":"publication"},{"authors":["Naoshi Kaneko","Yoshiaki Akazawa","Kazuhiko Sumi"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334993,"objectID":"c3bf80d9afcb01662c6c6d2696c28f43","permalink":"https://aoikaneko.github.io/publication/kaneko-2019-deep/","publishdate":"2022-08-24T09:56:33.089398Z","relpermalink":"/publication/kaneko-2019-deep/","section":"publication","summary":"","tags":[],"title":"Deep Monocular Depth Estimation in Partially-Known Environments","type":"publication"},{"authors":["Nobuhiro Suga","Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335001,"objectID":"9965d4b06434b29e274b052fa35dd219","permalink":"https://aoikaneko.github.io/publication/suga-2019-multiple/","publishdate":"2022-08-24T09:56:41.188479Z","relpermalink":"/publication/suga-2019-multiple/","section":"publication","summary":"","tags":[],"title":"Multiple Human Tracking with Dual Cost Graphs","type":"publication"},{"authors":["Taras Kucherenko","Dai Hasegawa","Naoshi Kaneko","Gustav Eje Henter","Hedvig Kjellström"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334996,"objectID":"b3c2b37118fba79b0908aad2821dea81","permalink":"https://aoikaneko.github.io/publication/kucherenko-2019-importance/","publishdate":"2022-08-24T09:56:36.102068Z","relpermalink":"/publication/kucherenko-2019-importance/","section":"publication","summary":"","tags":[],"title":"On the Importance of Representations for Speech-Driven Gesture Generation","type":"publication"},{"authors":["Masato Fukuzaki","Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334983,"objectID":"6b956503c8731525087922d8e568b50a","permalink":"https://aoikaneko.github.io/publication/fukuzaki-2019-robot/","publishdate":"2022-08-24T09:56:23.494498Z","relpermalink":"/publication/fukuzaki-2019-robot/","section":"publication","summary":"","tags":[],"title":"Robot Grasp Planning with Integration Map of Graspability and Object Occupancy","type":"publication"},{"authors":["(In Japanese) 釜田 祐哉","伊東 聖矢","金子 直史","鷲見 和彦"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334989,"objectID":"9466e6b9ac22b63c3e4a2ad0eca917b3","permalink":"https://aoikaneko.github.io/publication/kamata-2019-automatic/","publishdate":"2022-08-24T09:56:29.29943Z","relpermalink":"/publication/kamata-2019-automatic/","section":"publication","summary":"","tags":[],"title":"食品チラシ画像を用いたレシピ推薦システム (Automatic Recipe Recommendation from Food Flyers)","type":"publication"},{"authors":["(In Japanese) 金子 直史","伊東 聖矢","鷲見 
和彦"],"categories":[],"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334992,"objectID":"a7917e043c7fcb09f153e230bb5c12a3","permalink":"https://aoikaneko.github.io/publication/kaneko-2018-clothes/","publishdate":"2022-08-24T09:56:31.815541Z","relpermalink":"/publication/kaneko-2018-clothes/","section":"publication","summary":"","tags":[],"title":"ClothesAwarePoseNet: 衣服の領域分割を考慮した人物姿勢推定法 (ClothesAwarePoseNet: Two-Stream Convolutional Networks for Clothing-Aware Human Pose Estimation)","type":"publication"},{"authors":["Seiya Ito","Naoshi Kaneko","Yuma Shinohara","Kazuhiko Sumi"],"categories":[],"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334986,"objectID":"507e8d3c00233b71ac1fbd68edd0bd2b","permalink":"https://aoikaneko.github.io/publication/ito-2018-deep/","publishdate":"2022-08-24T09:56:25.71521Z","relpermalink":"/publication/ito-2018-deep/","section":"publication","summary":"","tags":[],"title":"Deep Modular Network Architecture for Depth Estimation from Single Indoor Images","type":"publication"},{"authors":["Dai Hasegawa","Naoshi Kaneko","Shinichi Shirakawa","Hiroshi Sakuta","Kazuhiko Sumi"],"categories":[],"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334985,"objectID":"43363902d82a006bd6d882ad4d925ff8","permalink":"https://aoikaneko.github.io/publication/hasegawa-2018-evaluation/","publishdate":"2022-08-24T09:56:24.595221Z","relpermalink":"/publication/hasegawa-2018-evaluation/","section":"publication","summary":"","tags":[],"title":"Evaluation of Speech-to-Gesture Generation Using Bi-Directional LSTM Network","type":"publication"},{"authors":["Seiya Ito","Naoshi Kaneko","Junji Takahashi","Kazuhiko Sumi"],"categories":[],"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334986,"objectID":"cfdc7129dc35915ceb6026ddf049a5b2","permalink":"https://aoikaneko.github.io/publication/ito-2018-global/","publishdate":"2022-08-24T09:56:26.297098Z","relpermalink":"/publication/ito-2018-global/","section":"publication","summary":"","tags":[],"title":"Global Localization from a Single Image in Known Indoor Environments","type":"publication"},{"authors":["Kaho Yamada","Seiya Ito","Naoshi Kaneko","Kazuhiko Sumi"],"categories":[],"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335005,"objectID":"760c1d05b90d6aabe520505c5875bcd4","permalink":"https://aoikaneko.github.io/publication/yamada-2018-human/","publishdate":"2022-08-24T09:56:44.578025Z","relpermalink":"/publication/yamada-2018-human/","section":"publication","summary":"","tags":[],"title":"Human Action Recognition via Body Part Region Segmented Dense Trajectories","type":"publication"},{"authors":["Seiya Ito","Naoshi Kaneko","Takeshi Yoshida","Kazuhiko Sumi"],"categories":[],"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334985,"objectID":"91e0451d686b8b5446b09e0e5b8f5ef8","permalink":"https://aoikaneko.github.io/publication/ito-2017-detection/","publishdate":"2022-08-24T09:56:25.14367Z","relpermalink":"/publication/ito-2017-detection/","section":"publication","summary":"","tags":[],"title":"Detection of Defective Regions in 3D Reconstruction to Support Image Acquisition","type":"publication"},{"authors":["Naoshi Kaneko","Takeshi Yoshida","Kazuhiko 
Sumi"],"categories":[],"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334991,"objectID":"6a11928002b8d2c5efeae8c1ce792527","permalink":"https://aoikaneko.github.io/publication/kaneko-2017-fast/","publishdate":"2022-08-24T09:56:31.18621Z","relpermalink":"/publication/kaneko-2017-fast/","section":"publication","summary":"","tags":[],"title":"Fast Obstacle Detection for Monocular Autonomous Mobile Robots","type":"publication"},{"authors":["Kenta Takeuchi","Dai Hasegawa","Shinichi Shirakawa","Naoshi Kaneko","Hiroshi Sakuta","Kazuhiko Sumi"],"categories":[],"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335003,"objectID":"c1fbd6c72ab0614aca8504983c058ea1","permalink":"https://aoikaneko.github.io/publication/takeuchi-2017-speechtogesture/","publishdate":"2022-08-24T09:56:43.201975Z","relpermalink":"/publication/takeuchi-2017-speechtogesture/","section":"publication","summary":"","tags":[],"title":"Speech-to-Gesture Generation: A Challenge in Deep Learning Approach with Bi-Directional LSTM","type":"publication"},{"authors":["Mei Oyama","Naoshi Kaneko","Masaki Hayashi","Kazuhiko Sumi","Takeshi Yoshida"],"categories":[],"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661335001,"objectID":"6a8b2f6ba399f8b8b03fd9b45509538e","permalink":"https://aoikaneko.github.io/publication/oyama-2017-twostage/","publishdate":"2022-08-24T09:56:40.530342Z","relpermalink":"/publication/oyama-2017-twostage/","section":"publication","summary":"","tags":[],"title":"Two-Stage Model Fitting Approach for Human Body Shape Estimation from a Single Depth Image","type":"publication"},{"authors":null,"categories":null,"content":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.\nNullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.\nCras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. 
Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.\nSuspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.\nAliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.\n","date":1461715200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1461715200,"objectID":"e8f8d235e8e7f2efd912bfe865363fc3","permalink":"https://aoikaneko.github.io/project/example/","publishdate":"2016-04-27T00:00:00Z","relpermalink":"/project/example/","section":"project","summary":"An example of using the in-built project page.","tags":["Deep Learning"],"title":"Example Project","type":"project"},{"authors":["Naoshi Kaneko","Junji Takahashi","Takeshi Yoshida"],"categories":[],"content":"","date":1451606400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334991,"objectID":"4373d8d934befd6e9f0a51b545db95e2","permalink":"https://aoikaneko.github.io/publication/kaneko-2016-logical/","publishdate":"2022-08-24T09:56:30.580278Z","relpermalink":"/publication/kaneko-2016-logical/","section":"publication","summary":"","tags":[],"title":"Logical Conjunction Based 3D Line Segments Matching between Observed Line Segments and Pre-Constructed 3D Wire Frame Model","type":"publication"},{"authors":["Naoshi Kaneko","Tomohiko Saito","Kazuhiko Sumi"],"categories":[],"content":"","date":1356998400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1661334990,"objectID":"0768581842eefd34767cca589088245d","permalink":"https://aoikaneko.github.io/publication/kaneko-2013-realtime/","publishdate":"2022-08-24T09:56:29.982273Z","relpermalink":"/publication/kaneko-2013-realtime/","section":"publication","summary":"","tags":[],"title":"Real-Time Virtual Dress Fitting System Using Gaming Sensor and 3D Textile Simulation","type":"publication"}]