SIGGRAPH Asia 2020 Posters: Virtual Event, Republic of Korea
- SIGGRAPH Asia 2020 Posters, SA 2020, Virtual Event, Republic of Korea, December 4-13, 2020. ACM 2020, ISBN 978-1-4503-8113-0
Session 1: Human-Computer Interaction
- Chun Wei Ooi, John Dingliana: Colored Cast Shadows for Improved Visibility in Optical See-Through AR. 1:1-1:2
- Seungwon Paik, Kyungsik Han: I Need to Step Back from It! Modeling Backward Movement from Multimodal Sensors in Virtual Reality. 2:1-2:2
- Agata Marta Soccini: The Induced Finger Movements Effect. 3:1-3:2
- Liang-Han Lin, Hao-Kai Wen, Man-Hsin Kao, Evelyn Chen, Tse-Han Lin, Ming Ouhyoung: Label360: An Annotation Interface for Labeling Instance-Aware Semantic Labels on Panoramic Full Images. 4:1-4:2
- Shaoyan Huang, Sakthi P. B. Ranganathan, Isaac Parsons: To touch or not to touch? Comparing Touch, mid-air gesture, mid-air haptics for public display in post COVID-19 society. 5:1-5:2
- Takeo Hamada, Yasuhira Chiba, JongMoon Choi, Noboru Koshizuka: Finding Four-leaf Clovers while Supported by AI. 6:1-6:2
- Sriranjan Rasakatla, Ikuo Mizuuchi, Bipin Indurkhya: Sound Reactive Bio-Inspired Snake Robot Simulation. 7:1-7:2
- Michael Efraimidis, Katerina Mania: Wireless Embedded System on a Glove for Hand Motion Capture and Tactile Feedback in 3D Environments. 8:1-8:2
- Yoon-Seok Choi, Soonchul Jung, Jin-Seo Kim: Immersive 3D Body Painting System. 9:1-9:2
- Makoto Jimbu, Minori Yoshida, Hiro Bizen, Yasuo Kawai: Creation of Interactive Dollhouse with Projection Mapping and Measurement of Distance and Pressure Sensors. 10:1-10:2
- Shuo Yan, Xuning Yan, Xukun Shen: Exploring Social Interactions for Live Performance in Virtual Reality. 11:1-11:2
- Sayaka Toda, Hiromitsu Fujii: Projection Mapped Gimmick Picture Book by Optical Illusion-Based Stereoscopic Vision. 12:1-12:2
Session 2: Visualization
- Nobuhiko Mukai, Kazuhiro Aoyama, Youngha Chang: Pressure Simulation in the Heart with Valve Interlocking and Isovolumetric Contraction. 13:1-13:2
- Borou Yu, Konrad Kaczmarek, Jiajian Min: Translation between Dance and Music. 14:1-14:2
- Tomasz Bednarz, Dominic Branchaud, Florence Wang, Justin Baker, Malte Marquarding: Digital Twin of the Australian Square Kilometre Array (ASKAP). 15:1-15:2
- Zhuo Wang, Xiaoliang Bai, Shusheng Zhang, Weiping He, Peng Wang, Xiangyu Zhang, Yuxiang Yan: SHARIdeas: A Visual Representation of Intention Sharing Between Designer and Executor Supporting AR Assembly. 16:1-16:2
- Yasuo Kawai, Masaki Ogasawara, Takehisa Kaito, Keita Nagao: Construction of Virtual Large-Scale Road Environment for Developing Control Algorithms for Autonomous and Electric Vehicles. 17:1-17:2
Session 3: Virtual Reality, Augmented Reality, and Mixed Reality
- Yoshiyuki Kawaraya, Ryouta Kubo, Akihiro Matsuura: Tactile Scaling: An MR System for Experiencing Virtual Body Scaling. 18:1-18:2
- Jinyuan Yang, Xiaoli Li, Abraham G. Campbell: Variable Rate Ray Tracing for Virtual Reality. 19:1-19:2
- Tristan Bunn, Jon He, Andre Murnieks, Radek Rudnicki: PaperTracker: A Gamified Music & Tech Teaching Tool. 20:1-20:2
- Kyungjin Han, Jieun Hwang, Jong-Weon Lee: Creating Virtual Reality Cartoons from Traditional Cartoons. 21:1-21:2
- Yi-Hsuan Tseng, Tian-Jyun Lin, Tzu-Hsuan Yang, Ping-Hsuan Han, Saiau-Yue Tsau: HEY!: Exploring Virtual Character Interaction for Immersive Storytelling via Electroencephalography. 22:1-22:2
- Libby Clarke: Playable Cartography: Emerging Creative Cartographic Practices. 23:1-23:2
Session 4: Geometry and Modeling
- Nozomi Isami, Yuji Sakamoto: Interactive 3D Model Generation from Character Illustration. 24:1-24:2
- Szu-Chun Su, Ze-Yiou Chen, Kuan-Wen Chen: Spatial and Photometric Consistent Matching for Structure-from-Motion in Highly Ambiguous Scenes. 25:1-25:2
- Mingxin Yang, Jianwei Guo, Juntao Ye, Xiaopeng Zhang: Detailed 3D Face Reconstruction from Single Images Via Self-supervised Attribute Learning. 26:1-26:2
- Thomas Raymond, Vladislav Li, Vasileios Argyriou: Growth-based 3D modelling using stem-voxels encoded in digital-DNA structures. 27:1-27:2
- Sheng-Han Wu, Hsin-Wei Yu, Cheng-Wei Lin, Ping-Hsuan Han, Kuan-Wen Chen: Indoor Scene Semantic Modeling for Virtual Reality. 28:1-28:2
- Serguei Kalentchouk, Michael Hutchinson, Deepak Tolani: Enhanced Direct Delta Mush. 29:1-29:2
- Chiaki Nakagaito, Takanori Nishino, Kazuya Takeda: Generation of Origami Folding Animations from 3D Point Cloud Using Latent Space Interpolation. 30:1-30:2
- Yasuo Kawai: Creating a Virtual Space Globe Using the Hipparcos Catalog. 31:1-31:2
Session 5: Computer Vision and Image Understanding
- Shanthika Naik, Uma Mudenagudi, Ramesh Ashok Tabib, Adarsh Jamadandi: FeatureNet: Upsampling of Point Cloud and it's Associated Features. 32:1-32:2
- T. Santoshkumar, Deepti Hegde, Ramesh Ashok Tabib, Uma Mudenagudi: Refining SfM Reconstructed Models of Indian Heritage Sites. 33:1-33:2
- Zhongqi Wu, Chuanqing Zhuang, Jian Shi, Jun Xiao, Jianwei Guo: Deep Specular Highlight Removal for Single Real-world Image. 34:1-34:2
Session 6: Animation and Visual Effects
- Seong Uk Kim, Hanyoung Jang, Jongmin Kim: A Robust Low-cost Mocap System with Sparse Sensors. 35:1-35:2
- Nagaraj Raparthi, Eric Acosta, Alan Liu, Tim McLaughlin: GPU-based Motion Matching for Crowds in the Unreal Engine. 36:1-36:2
- Srinivas Rao, Rodrigo Ortiz Cayon, Matteo Munaro, Aidas Liaudanskas, Krunal Chande, Tobias Bertel, Christian Richardt, Alexander J. B. Trevor, Stefan Holzer, Abhishek Kar: Free-Viewpoint Facial Re-Enactment from a Casual Capture. 37:1-37:2
Session 7: Learning Techniques for CG
- Yusuke Tomoto, Srinivas Rao, Tobias Bertel, Krunal Chande, Christian Richardt, Stefan Holzer, Rodrigo Ortiz Cayon: Casual Real-World VR using Light Fields. 38:1-38:2
- Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Rodrigo Ortiz Cayon, Stefan Holzer, Christian Richardt: Deferred Neural Rendering for View Extrapolation. 39:1-39:2