Hi5 Lab - 2020 Summer Seminar Series

In this time of social distancing and safer-at-home policies, it is easy to lose contact and camaraderie with one another. For this reason, the Hi5 Lab is hosting our first virtual summer seminar series. It is geared toward keeping people who are interested in AR/VR and related topics in contact, engaged, and apprised of the latest research, science, and engineering.

We have scheduled an excellent group of the AR/VR community's best early-career investigators to present a wide cross-section of the newest, most exciting, and most groundbreaking research. Their presentations will be live-streamed to the Hi5 YouTube Channel, and recordings will be archived there for you to watch in the future!

Be sure to keep an eye on this page for updates! New speakers and seminar details are being added daily.

Presentations will be available via live-stream and playback on our YouTube channel.


June 12, 2020

Matias Volonte

PhD Candidate, Clemson University

Matias Volonte is a PhD candidate in Human Centered Computing at Clemson University, South Carolina. His doctoral research investigates the effects that the appearance and affective behavior of virtual humans have on users' visual attention and overall behavior.

Matias holds a master’s degree in Digital Production Arts, with a focus on visual effects, from Clemson University, South Carolina, and a bachelor’s degree in Audiovisual Communication from Universidad Blas Pascal, Argentina. Prior to starting his doctorate, Matias worked in the film industry as an artist and technical developer for feature films and television commercials.


Effects of Virtual Human in Dyadic and Crowd Settings on Emotion, Visual Attention and Task Performance in Interactive Simulations

June 12, 2020 at 3:00PM CDT

Anthropomorphic virtual characters are digital entities that mimic human behavior and appearance. Virtual humans have the potential to revolutionize human-computer interaction, since they could serve as interfaces for social or collaborative scenarios. Current technology provides the means to create virtual humans whose animation and appearance fidelity make them almost indistinguishable from real humans. Understanding the emotional impact on users during interaction with digital humans is essential, particularly in training systems, since emotion influences learning outcomes. This presentation describes the results of studies focused on understanding the impact that the animation, rendering fidelity, and affective behaviors of virtual characters have on users’ emotions and visual attention in dyadic and crowd setups.


June 19, 2020

Mohammed Safayet Arefin

PhD Student, Mississippi State University

Mohammed Safayet Arefin is a Ph.D. student in the Department of Computer Science and Engineering at Mississippi State University. He is advised by Dr. J. Edward Swan II as part of the Spatial Perception and Augmented Reality Lab (SPAAR Lab). His Ph.D. research involves the areas of augmented reality (AR), applied perception, and human-computer interaction. His doctoral research focuses on improving the clarity of out-of-focus graphical content in AR so that it more closely resembles the behavior of the human visual system. Another aim of his research is to explore the impact of context switching and focal distance switching on human performance in AR systems. Arefin completed his M.S. in Computer Science at Mississippi State University under the supervision of Dr. Ed Swan. He received his bachelor’s degree in Computer Science and Engineering from Chittagong University of Engineering & Technology, Bangladesh.


Effects of AR Display Context Switching and Focal Distance Switching on Human Performance: Replication on an AR Haploscope

June 19, 2020 at 3:00PM CDT

In augmented reality (AR) environments, information is often distributed between real and virtual contexts and often appears at different distances from the user. In addition, when using AR displays such as Google Glass, the Microsoft HoloLens, or the Magic Leap One, interacting with AR content requires the observer’s eyes to focus at the optical depth of the display. Users therefore must rapidly transition between fixating on the graphical content presented through the AR display and fixating on the real-world content. Additionally, to integrate information, users need to shift their eye focus from the focal distance of the virtual content to the focal distance of the real-world content and vice versa. Consequently, two primary AR interface design issues arise: context switching and focal distance switching. This presentation will discuss the impact of context switching and focal distance switching on user performance and eye fatigue in AR systems. It will also describe a successfully replicated experiment on a custom-built AR haploscope.


June 26, 2020

Brooke Krajancich

PhD Candidate, Stanford University

Brooke Krajancich is a PhD candidate in the Electrical Engineering Department at Stanford University. She is advised by Professor Gordon Wetzstein as a part of the Stanford Computational Imaging Lab and the inaugural class of Knight-Hennessy Scholars. Her research focuses on developing computational techniques that leverage the co-design of optical elements, image processing algorithms and intimate knowledge of the human visual system for improving current-generation virtual and augmented reality displays. Brooke moved to California for graduate school after receiving her Bachelor’s (with first class honors) in electrical engineering and mathematics at the University of Western Australia.


Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display

June 26, 2020 at 3:00PM CDT

Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems typically additively overlay physical and digital content with beam combiners, resulting in virtual objects that appear semi-transparent and unrealistic. This presentation will discuss Factored Occlusion, a new, recently published approach for obtaining occlusion in OST-AR. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode to display digital content. As such, we enable occlusion-capable OST-AR with only a single spatial light modulator.
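For intuition about why multiplicative blocking matters, the sketch below contrasts conventional additive compositing with per-pixel occlusion compositing. It is only a toy image-space model with a hypothetical composite function and alpha map; it does not reproduce the talk's actual factorization of a single DMD pattern and LED illumination.

<code python>
import numpy as np

def composite(scene, virtual, alpha):
    """Toy per-pixel compositing model (illustrative assumption, not the paper's algorithm).

    scene   : HxWx3 array of incoming real-world light, values in [0, 1]
    virtual : HxWx3 array of rendered digital content,  values in [0, 1]
    alpha   : HxW   array giving the desired opacity of the virtual content
    """
    a = alpha[..., None]
    # Conventional beam combiner: light is only added, so virtual objects
    # can never appear darker than the scene behind them (semi-transparent look).
    additive = np.clip(scene + a * virtual, 0.0, 1.0)
    # Occlusion-capable compositing: real light is first attenuated
    # (blocked multiplicatively) per pixel, then digital content is added.
    occluded = np.clip((1.0 - a) * scene + a * virtual, 0.0, 1.0)
    return additive, occluded
</code>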


July 3, 2020

Alex Peer

PhD Candidate, University of Wisconsin - Madison

Alex Peer is a Computer Sciences PhD candidate in the Virtual Environments group of the Wisconsin Institute for Discovery at the University of Wisconsin - Madison. He earned his Bachelor's in Computer Science at Eastern Michigan University. During his M.S. at Michigan State University, he studied how robots might use spoken language to negotiate task errors with human partners, advised by Dr. Joyce Chai. Under the supervision of Dr. Kevin Ponto, his doctoral work has explored perception in virtual and augmented reality. Broadly, his research interests lie where humans and computers intersect: in helping the two communicate, collaborate, and better augment each other's capabilities.


Distance Misperception in VR and AR, and the Possibility of Individual Calibration

July 3, 2020 at 3:00PM CDT

When viewing environments through VR and AR displays, people sometimes perceive things to be closer or farther than intended: they experience a distance misperception. Decades of research have identified several influencing factors, but no definite cause or solution. My work has studied this effect across different devices, measures, and manipulation techniques; in probing from many angles, we've found growing evidence that individual differences between viewers may have a strong influence.


July 10, 2020

Jerald Thomas

PhD Candidate, University of Minnesota

Jerald Thomas is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities Campus. His doctoral research explores novel methods and uses for redirected walking, much of it inspired by robotics. Jerald received his B.S. in Electrical and Computer Engineering at the University of Minnesota Duluth under the advisement of Dr. Stan Burns and his M.S. in Computer Science at the University of Southern California under the advisement of Dr. Evan Suma Rosenberg. His research interests include Virtual and Augmented Reality, Redirected Walking, Human-Computer Interaction, and Collaborative Virtual Environments.


User Redirection and Alignment for Virtual Reality Experiences in Arbitrary Physical Spaces

July 10, 2020 at 3:00PM CDT

One of the most formidable challenges virtual reality researchers currently face is how to let users navigate virtual environments effectively. Natural locomotion, or walking, has been shown to have several benefits compared to other navigation techniques, but it is restricted by the size and layout of the physical environment. Redirected walking is a technique that enables natural locomotion within a virtual environment larger than the available physical space by introducing unnoticeable discrepancies between the user's physical and virtual movements. However, it relies on physical environments that are convex (typically rectangular), free of obstacles, and static. These requirements are restrictive and not representative of real-world environments. Additionally, redirected walking does not allow for user interactions with the physical environment, which have been shown to improve the user’s experience. These limitations ultimately mean that redirected-walking-based experiences cannot take full advantage of most real-world physical environments. In this talk I will introduce my dissertation research, in which I use techniques from the field of robotics to address these limitations.
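As a point of reference for how redirection introduces those unnoticeable discrepancies, the sketch below applies a basic rotation gain, one of the standard redirected-walking techniques. The function name and gain value are illustrative assumptions, not details from the talk.

<code python>
import math

def redirected_yaw(physical_yaw_delta, gain=1.1):
    """Scale the user's physical head rotation before applying it to the virtual camera.

    A gain slightly above 1.0 makes the virtual world rotate a bit more than the
    user's head does.  Kept below the perceptual detection threshold, the mismatch
    goes unnoticed, yet over many turns it steers the user's physical path away
    from walls and obstacles.  The value 1.1 is illustrative only.
    """
    return physical_yaw_delta * gain

# Example: a 90 degree physical turn is rendered as a 99 degree virtual turn.
physical_turn = math.radians(90.0)
print(math.degrees(redirected_yaw(physical_turn)))  # -> 99.0
</code>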


July 17, 2020

Haley Adams

PhD Student, Vanderbilt University

Through her research, Haley Adams quantifies perceptual differences between virtual, augmented, and real environments to better understand how those differences affect the way people interact with immersive technology. In her other work as a graduate researcher at Vanderbilt University's Learning in Virtual Environments (LiVE) Lab, she has designed medical visualizations and investigated the effects of cognitive development on perception and action in VR. Outside of research, she is passionate about pedagogy and has previously directed coding camps for middle and high school girls.


A Strange View: Using Visual Perception to Improve XR

July 17, 2020 at 3:00PM CDT

Immersive technology (XR) supplements or completely replaces our perception of the world around us. These virtual experiences often feel as strange as they do compelling, due to limits in current technology as well as the influence of our perceptual system. Therefore, building immersive technology without an understanding of how each design decision affects the perception of end users is a reckless pursuit. Fortunately, many researchers use this knowledge of human perception to their advantage. By doing so, they are able to both evaluate current technology and develop new, more efficient algorithms for the next generation of XR displays. I am one of those researchers.

In today's talk, I will discuss how researchers use visual perception to evaluate immersive technology, specifically head-mounted displays (HMDs) like the Oculus Rift and Microsoft HoloLens, and I will review some common techniques as well as their trade-offs. Then, I will illustrate this process with a couple of examples from my own research with optical see-through augmented reality displays.


July 24, 2020

Mahdi Azmandian

Software Engineer, Sony - PlayStation Magic Lab

Mahdi Azmandian is a Software Engineer at the PlayStation Magic Lab. He earned his PhD in Computer Science from the University of Southern California. Working under the supervision of Dr. Evan Suma Rosenberg at the ICT Mixed Reality Lab, he focused his dissertation on developing a framework for Redirected Walking for Virtual Reality applications. During graduate school, the theme of his research was the “engineering of illusions”: pouring a great deal of finesse and craftsmanship into executing the perfect magic trick. He now brings this experience to the Magic Lab, where he explores novel technologies through the lens of PlayStation, dreaming up new experiences that push the boundaries of play.


Illusions for Good: Leveraging Human Perception to Solve Challenges in Virtual Reality

July 24, 2020 at 3:00PM CDT

Perceptual illusions have been studied in a variety of domains, offering deep insight into how sensory information is fused and interpreted in the brain. With the advent of virtual reality, these illusions have found a new home, one where the goal is not just to fool the senses but to address fundamental challenges. This talk will cover three instances of such illusions: Redirected Walking, Haptic Retargeting, and Dynamic Spectating. With redirected walking, we'll see how we can explore massive virtual environments on foot within a small physical space. With haptic retargeting, we'll see how we can build a castle of cubes with our hands using just a single physical cube. And with dynamic spectating, we'll see how a game like Fortnite can be watched immersively, following the action alongside players and seamlessly traversing the arena. This collection of work highlights the paradigm of tapping into the peculiarities of perception to overcome existing technical limitations.


July 31, 2020

Hunter Finney

BS in Computer Science, University of Mississippi
PhD Student, University of Utah

Hunter Finney is a PhD student in Human-Centered Computing at the University of Utah in Salt Lake City. He recently finished his Bachelor's in Computer Science at the University of Mississippi, where he worked in the High Fidelity Virtual Environments (Hi5) Lab for several years. The majority of his work focuses on human visual perception in virtual environments. Hunter will join the Visual Perception and Spatial Cognition Lab at Utah this fall.


A Perfect Union: An example of how VR elegantly contributes to perception–versus–action hypothesis debate using the Ebbinghaus Illusion

July 31, 2020 at 3:00PM CDT

The two-stream hypothesis of vision seemed to have clear support from influential studies such as Aglioti, DeSouza, and Goodale (1995), which used grasping to measure the effect of the Ebbinghaus Illusion. Under further inspection over the years, however, the Ebbinghaus Illusion seemed to flip the script and no longer support the perception–versus–action hypothesis, thanks to a few key discoveries that allow virtual reality to showcase its strengths. In this talk, I aim to share the work I've done to help shed more light on the Ebbinghaus Illusion. To do this, we will learn about the two streams of vision and how the history of the Ebbinghaus Illusion has evolved over the years. I hope for this talk to be educational for newcomers, insightful for the initiated, and inspiring for the scholar.


August 7, 2020

Nate Phillips

PhD Candidate, Mississippi State University

Nate Phillips is a PhD candidate in the Department of Computer Science and Engineering at Mississippi State University, where he works in the Spatial Perception and Augmented Reality (SPAAR) Lab. His research primarily focuses on perception in augmented reality (AR), ranging across issues such as depth cues in AR, stereo camera/AR integration, and x-ray vision. He holds a B.S. in both Electrical Engineering and Computer Science from Christian Brothers University in Memphis, TN, and an M.S. in Computer Science from Mississippi State University. Outside of research, he enjoys fencing and reading, and takes an active interest in how we educate ourselves in science and technology.


Augmented Reality X-Ray Vision for Accurate Perceptual Situation Awareness at Action-Space Distances

August 7, 2020 at 3:00PM CDT

X-ray vision (or the ability to see through walls) has long been of interest to the augmented reality (AR) research community. This functionality represents a clear AR use case, potentially granting operators egocentric depth information for occluded content. One venue where this technology would be particularly helpful is SWAT team room-entry operations. Typically in these operations, team members must enter a room with little or no information. This can lead to tense, dangerous situations where officers often must make quick decisions about the use of force, leading to unnecessary risk, injury, and sometimes even death. In order to reduce these risks, we propose an x-ray vision system for SWAT officers that uses robot-mounted cameras and sensors to provide room data and displays the resulting data through a “window metaphor” to increase information saliency. Evaluating and testing this system will provide novel data on x-ray vision accuracy and may also suggest an approach for increasing safety in SWAT room-entry operations.
