Zhenliang Zhang | BIGAI
Virtual Reality
[TOG/SiggraphAsia 2023] Commonsense Knowledge-Driven Joint Reasoning Approach for Object Retrieval in Virtual Reality
We propose a commonsense knowledge-driven joint reasoning approach for object retrieval, where human grasping gestures and context are modeled using an And-Or graph (AOG). (A toy sketch of the AOG idea follows this entry.)
Haiyan Jiang, Dongdong Weng, Xiaonuo Dongye, Le Luo, Zhenliang Zhang
PDF · Cite · Project · Video · Web
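As a rough illustration of the entry above, here is a minimal And-Or graph in Python, assuming the standard best-parse scoring (product over AND children, max over OR children). The node names and scores are invented placeholders, not the paper's model.

```python
# Toy And-Or graph (AOG) for jointly scoring grasp gestures and scene
# context; names and numbers are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class AOGNode:
    """A node in an And-Or graph.

    AND nodes decompose a concept into parts that must all hold;
    OR nodes choose among alternative interpretations.
    """
    name: str
    kind: str                       # "and" | "or" | "terminal"
    children: list = field(default_factory=list)
    score: float = 0.0              # terminal evidence, e.g. gesture likelihood


def parse_score(node: AOGNode) -> float:
    """Best parse score: product over AND children, max over OR children."""
    if node.kind == "terminal":
        return node.score
    child_scores = [parse_score(c) for c in node.children]
    if node.kind == "and":
        total = 1.0
        for s in child_scores:
            total *= s
        return total
    return max(child_scores)        # "or": pick the best alternative


# Example: retrieve the object whose joint (gesture, context) parse scores highest.
cup = AOGNode("cup", "and", children=[
    AOGNode("pinch_grasp", "terminal", score=0.8),
    AOGNode("desk_scene", "terminal", score=0.9),
])
book = AOGNode("book", "and", children=[
    AOGNode("flat_grasp", "terminal", score=0.3),
    AOGNode("desk_scene", "terminal", score=0.9),
])
root = AOGNode("target_object", "or", children=[cup, book])
print(round(parse_score(root), 2))  # 0.72 -> the "cup" branch wins
```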
[UIST 2023] Neighbor-Environment Observer: An Intelligent Agent for Immersive Working Companionship
In this paper, we propose a joint observation strategy for artificial agents to support users across virtual and physical environments. (A toy sketch of such joint observation follows this entry.)
Zhe Sun, Qixuan Liang, Meng Wang, Zhenliang Zhang
PDF · Cite · Project · Web · News
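The sketch below illustrates the joint observation idea from the entry above: the agent fuses what it sees in the virtual scene with physical-world sensor readings into one state. All names and values here are illustrative assumptions, not the paper's implementation.

```python
# Toy joint observation for a companion agent spanning VR and the real room.
from dataclasses import dataclass


@dataclass
class JointObservation:
    virtual: dict    # e.g. which virtual app the user is focused on
    physical: dict   # e.g. events near the user's real desk


def observe(virtual_state: dict, physical_state: dict) -> JointObservation:
    """Combine both channels so the agent can react to events in either world."""
    return JointObservation(virtual=virtual_state, physical=physical_state)


obs = observe(
    virtual_state={"focused_app": "editor", "user_busy": True},
    physical_state={"person_nearby": True, "noise_level": 0.7},
)
# The companion agent can then, say, surface a notification in VR
# when a colleague approaches in the physical room.
if obs.physical["person_nearby"] and obs.virtual["user_busy"]:
    print("notify user in VR: someone is approaching")
```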
[VR 2020] Exploring the differences of visual discomfort caused by long-term immersion between virtual environments and physical environments
To investigate the effects of visual discomfort caused by long-term immersion in virtual environments (VEs), we conducted a comparative …
Jie Guo, Dongdong Weng, Hui Fang, Zhenliang Zhang, Jiaming Ping, Yue Liu, Yongtian Wang
Cite
[VR 2020] Extracting and transferring hierarchical knowledge to robots using virtual reality
We study the knowledge transfer problem by training a clothes-folding task in the virtual world using an Oculus headset and …
Zhenliang Zhang, Jie Guo, Dongdong Weng, Yue Liu, Yongtian Wang
Cite · Project
[ISMAR 2019] Mixed reality office system based on Maslow’s hierarchy of needs: Towards the long-term immersion in virtual environments
In a mixed reality (MR) environment that combines physical objects with virtual environments, users’ feelings are …
Jie Guo, Dongdong Weng, Zhenliang Zhang, Haiyan Jiang, Yue Liu, Yongtian Wang, Henry Been-Lirn Duh
Cite · Project
[ICRA 2019] High-fidelity grasping in virtual reality using a glove-based system
This paper presents a design that jointly provides hand pose sensing, hand localization, and haptic feedback to facilitate real-time …
Hangxin Liu, Zhenliang Zhang, Xu Xie, Yixin Zhu, Yue Liu, Yongtian Wang, Song-Chun Zhu
Cite · Code · Project
[TURC 2019] VRGym: A virtual testbed for physical and interactive AI
We propose VRGym, a virtual reality testbed for realistic human-robot interaction. Unlike existing toolkits and virtual reality environments, VRGym emphasizes building and training both physical and interactive agents for robotics, machine learning, and cognitive science. (A toy agent-environment loop in this spirit is sketched after this entry.)
Xu Xie, Hangxin Liu, Zhenliang Zhang, Yuxing Qiu, Feng Gao, Siyuan Qi, Yixin Zhu, Song-Chun Zhu
Cite · Project
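To make the testbed idea above concrete, here is a toy agent-environment loop of the kind such a system enables. The `VREnv` class and its methods are hypothetical placeholders for illustration, not VRGym's actual API.

```python
# Toy stand-in for a VR testbed scene: an agent moves a 1-D hand
# position toward a target. Everything here is illustrative only.
class VREnv:
    def __init__(self, target: float = 1.0):
        self.target = target
        self.pos = 0.0

    def reset(self) -> float:
        """Reset the scene and return the initial observation."""
        self.pos = 0.0
        return self.pos

    def step(self, action: float) -> tuple[float, float, bool]:
        """Apply an action; return (observation, reward, done)."""
        self.pos += action
        reward = -abs(self.target - self.pos)        # closer to target = better
        done = abs(self.target - self.pos) < 0.05
        return self.pos, reward, done


env = VREnv()
obs = env.reset()
for _ in range(20):
    action = 0.1 if obs < env.target else -0.1       # trivial proportional policy
    obs, reward, done = env.step(action)
    if done:
        break
print(f"final pos: {obs:.2f}")
```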
[VR 2019] Evaluation of Maslow’s hierarchy of needs on long-term use of HMDs - A case study of office environment
Long-term exposure to VR will become more and more important, but what we need for long-term immersion to meet users’ fundamental needs …
Jie Guo, Dongdong Weng, Zhenliang Zhang, Yue Liu, Yongtian Wang
Cite
[SID 2019] Subjective and objective evaluation of visual fatigue caused by continuous and discontinuous use of HMDs
During continuous use of displays, a short rest can relax users’ eyes and relieve visual fatigue. As one of the most important …
Jie Guo, Dongdong Weng, Zhenliang Zhang, Yue Liu, Henry B.L. Duh, Yongtian Wang
Cite
[SID 2019] Vision-tangible interactive display method for mixed and virtual reality: Toward the human-centered editable reality
Building a human-centered editable world can be fully realized in a virtual environment. Both mixed reality (MR) and virtual reality …
Zhenliang Zhang, Yue Li, Jie Guo, Dongdong Weng, Yue Liu, Yongtian Wang
Cite