The Pitch Your Lab session will take place on Wednesday, Oct. 17th, between 16:00 and 17:30 in Hall 2 - Plenary.
I am forming a new lab at the University of Queensland whose primary research focus is making XR interfaces more empathetic by measuring, interpreting, and sharing physiological and neurological signals. UQ is one of the top 50 universities in the world and has a large, immensely beautiful campus that offers a superb student experience. We are part of a larger co-innovation group that includes interdisciplinary researchers in social robotics, interaction design, and cyber security, which offers great opportunities for inter- and multi-disciplinary research. The lab has a set of VR and AR displays and a range of high-quality physiological and neurological sensors. We also have access to a large experimental space with full-body motion capture facilities.
The Perception & Interaction Lab is located at the University of Genoa, Italy, in the Dept. of Informatics, Bioengineering, Robotics, and Systems Engineering (DIBRIS). The aim of our research is to develop new paradigms that allow us to achieve ecological human-computer interaction in virtual, augmented, and mixed reality environments. We assess the undesired effects (such as visual fatigue and perceptual discomfort) of new visual technologies (e.g. 3D displays, mobile devices, virtual and augmented reality headsets) on users, and we evaluate the usability of such technologies in various fields of application. In particular, we study the relationships between the spatio-temporal geometrical structure of VR/AR/MR and human visual perception.
The Graphics and Virtual Reality group led by Henry Fuchs has been around since 1978 at UNC Chapel Hill, under various names, the most recent being the Telepresence group, Office of the Future, part of the BeingThere Centre, and others. Even our current name needs to be interpreted broadly to include such current topics as low-latency rendering, fast head tracking, immersive medical learning experiences, telepresence, and augmented and virtual reality. Our group has been interested in such a variety of topics over the years that we've had many different names for our projects and web pages. Some of our historical research projects are Office of the Future, Gang of Five, Ultrasound, Pixel-Planes and PixelFlow, 3D scanning of deformable objects, and accommodation-supporting and light-field near-eye displays. Our current research includes 3D scene acquisition & reconstruction, 3D tracking, fast rendering hardware and algorithms, autostereoscopic 3D displays, head-mounted and other near-eye displays, telepresence, and their applications to medicine. We have three papers being presented at ISMAR 2018, all three accepted for publication in TVCG, presenting research on auto-focus near-eye displays for both real and virtual imagery, volumetric near-eye displays, and egocentric reconstruction for telepresence applications.
How close will the relationship between people and computers become in the future? We want to extend the ways people can be present and act in the computerized society of the future. To that end, we study technologies that compute and wisely support interaction between people and the real world. With AR, VR, and augmented-human technologies, we turn our "vision", or dreams, into reality.
TWNKLS is the European market leader in creating, developing, and implementing AR applications. As a pioneer, we have more than 8 years' experience developing AR for Industry, Commerce, Security, and Health. Our AR applications have won several awards and have more than 3.9 million downloads worldwide. In addition to existing software, we develop our own AR computer vision technology, which enables us to build a suitable solution for every challenge. We use a team of experts, strong partnerships, and a proven approach to ensure success for our clients. Our own R&D department ensures that we can deliver not only a tailor-made solution, but also one that is future-proof. The most beautiful visionary videos circulate on YouTube, but we prefer to focus on what is really possible. For example: how did Otolift use Augmented Reality to reduce its entire process from sale to installation from 63 days to 48 hours? How did Innovam train 450 of its Lamborghini mechanics in 2 weeks with an AR blended-learning approach? And what drove IKEA to fully embrace AR with the IKEA Place app, the world's most successful non-game AR application? Whether you are orienting yourself in the world of augmented reality or you need to keep a complex app system up and running, you can rely on TWNKLS to get the job done right. Equally, if you want to make a difference while working on state-of-the-art AR projects, then TWNKLS is the home for you.
Join Daqri's team of Computer Vision and Augmented Reality experts to work on cutting-edge technology and have a chance to make a large impact at one of the most exciting startups. Our team is composed of former academics with strong industry experience. We value high-quality, robust, and fast code. We put testing at the heart of our technologies, and we're excited about new algorithmic solutions.
Vienna, the capital of Austria, is consistently voted the best city in the world to live in (Mercer 2010-2018 Quality of Living Survey). Vienna is known for its coffee culture, low crime rates, music, and historical architecture. Living in the heart of Europe allows easy access to most major European cities within a short train ride or a short flight.
SREAL (pronounced Surreal) is a research lab at the University of Central Florida which is in sunny Orlando, Florida (USA). The SREAL team includes faculty researchers, software developers, graduate and undergraduate researchers, artists, and collaborators across campus and around the world.
The core faculty members include Prof. Greg Welch, Prof. Charlie Hughes, and Prof. Gerd Bruder. Alumni and former advisees are now engaged in their own research all over the world, in both academia and industry. While we work hard and strive for excellence, the lab has a very collegial spirit, reinforced by both workplace practices and external social events.
The lab space consists of over 7,000 square feet of experimental and office space, and includes a wide array of VR/AR equipment, as well as unique infrastructure such as the Human-Surrogate Interaction Space (HuSIS). The HuSIS includes a central room-sized space for human subject experiments and is outfitted with projectors, tracking equipment (body and eye), cameras, and physiological measurement devices that support measuring human responses (physiological and behavioral) during controlled experiments. The lab also includes transportable HuSIS capabilities that support experiments "in the wild."
SREAL is part of several larger UCF entities, most notably the Institute for Simulation & Training (IST), which houses it, and the Department of Computer Science, which is the home department of most of SREAL's students.
VISUS @University of Stuttgart is one of the largest visualization research labs in Europe. Our vision at VISUS is to create and investigate novel visual interfaces and interaction paradigms for the future. Historically grounded in application-driven visualization research, VISUS is increasingly focusing on virtual and augmented reality research. A core part of the VISUS building, for instance, is the 88-million-pixel, 6 x 2.25 meter power wall that supports 2D and 3D visualizations and various forms of immersive interaction. A new VR/AR lab has just been established, which will be led by the newly hired faculty member Michael Sedlmair. Many exciting topics are ahead: immersive analytics, perceptual foundations of visualization and user interaction in VR/AR, and many collaborative topics such as VR/AR visualization in architecture and the simulation of complex systems.
VISUS is currently home to 30 passionate researchers funded by a variety of sources, such as Collaborative Research Centers, excellence clusters, BMBF, and the VW Stiftung. VISUS is also an Intel Parallel Computing Center, and is closely coupled with other related labs at the University of Stuttgart, such as VIS and SimTech. The University of Stuttgart is one of the nine leading technical universities in Germany (TU9), among the oldest of its kind, and is consistently ranked among the world's best universities.
We currently have two open PhD positions in VR/AR and would love to meet with interested candidates at ISMAR.
Voxar Labs carries out research, development, and innovation projects in three major areas, namely visualization, tracking, and natural interaction, all converging on one overarching technology field: augmented reality. These projects are carried out in collaboration with academic and research institutions, government agencies, and industry partners, in Brazil and overseas. Voxar Labs is a research group with the mission of developing people by augmenting experiences. The values representing the core priorities of the Voxar Labs culture are creativity, cooperation, reliability, responsibility, flexibility, and enjoyment. The lab develops and transfers technology related to visualization, tracking, and natural interaction techniques, focusing on augmented reality in multi-disciplinary application domains. The laboratory has several ongoing projects, including international cooperations, projects with industry, and research and academic projects. Voxar Labs is part of the Informatics Center of the Federal University of Pernambuco, located in Recife, Pernambuco, Brazil.
We hope to provide a welcoming space for people from a wide breadth of areas pertaining to the human condition, such as technology, design, art, and psychology. When in doubt, contact us! We're always looking for innovative thinkers! The Doctor of Philosophy (PhD) in Human Interface Technology is a multidisciplinary degree designed to allow students from a variety of backgrounds to undertake research in Human Interface Technology. The research generally focuses on technology falling somewhere along the Reality-Virtuality Continuum. The PhD typically takes three years, and the student is able to focus entirely on research during that time. The Master of Human Interface Technology (MHIT) degree is a one-year program that takes students through the theory, process, and techniques for producing creative technical solutions.
Visual computing covers a range of disciplines within computer science, such as computer graphics, image processing, visualisation, computer vision, and virtual or augmented reality. Our research in the VisualComputing Otago group develops and uses techniques across all of these and more, but most of our work falls into four main themes: using visual input to make models of the world; understanding images and the models we build from them using tools from machine learning; visualising data in the context of the world around us; and allowing people to interact with data and information in new ways.
The Spatial Perception And Augmented Reality (SPAAR) Lab studies the perception and technology required to give virtual objects definite spatial locations. Projects involve perception, measurement, calibration, and display technologies. Methods are interdisciplinary, including computer science, empirical methods, psychology, cognitive science, optics, and engineering. The lab's culture involves teamwork and the pursuit of beauty through high-quality scholarship and intellectual merit.
Do you know how much time a videographer spends on routine tasks arising from just one day of video shooting? For a single-day wedding with 8 hours of raw video, it takes 4-5 working days for raw-material sorting, 4 working days for music selection, and X working days to build the first edit. People also need many short, target-specific videos for each audience segment from the same raw material.
What Do We Do at NAÏVE: We create software that shortens post-production time from 1-2 weeks to several hours, leaving more time for creativity.
NAÏVE Technology
NAÏVE Evolutionary Goal: To establish a new industry standard in video production by streamlining routine processes and saving time for creative work.
NAÏVE is searching for:
The Columbia University Computer Graphics and User Interfaces Lab performs research in AR, VR, and 3D user interfaces to improve how people interact with and through computers. We develop novel interaction and visualization techniques, implement prototype systems in which to test those techniques, and design and run user studies to evaluate their effectiveness. We use a wide range of displays and devices, from worn, to held, to stationary, exploring how people work and play, individually and in teams, indoors and outdoors. Our lab developed the first mobile AR system using a see-through display with satellite tracking and has pioneered applications of AR to fields as diverse as maintenance, tourism, and journalism. Much of our current research addresses collaboration between people, whether co-located or remote, engaged in skilled tasks. In addition to designing for end users, we also craft software infrastructure for AR and VR programmers, to make it easier for them to create systems that are both powerful and understandable.
We are an academic research laboratory split between the School of Information Technology and Mathematical Sciences at the University of South Australia, and the Auckland Bioengineering Institute at the University of Auckland.
Directed by Prof. Mark Billinghurst, the lab's staff and students investigate Empathic Computing: computer systems that recognise and share emotions and help people better understand one another.
The Empathic Computing Laboratory is working to make Empathic Computing mainstream and to investigate how emerging technologies such as Augmented Reality (AR), Virtual Reality (VR), wearable systems, and physiological sensing can be used to enhance face-to-face and remote collaboration.
Our main research themes include:
Research Objectives:
The research objective of CAMP is to study and model medical procedures and to introduce advanced computer-integrated solutions that improve their quality, efficiency, and safety. We aim at improvements in medical technology for diagnostic and therapeutic procedures, requiring partnership between:
The chair forms such research triangles and actively participates in existing ones. It aims at providing creative physicians with the technology and partnerships that allow them to introduce new diagnostic, therapeutic, and surgical techniques that take full advantage of advanced computer technology.
Research Portfolio:
Being interested in applied science, the chair aims at keeping an intelligent balance between incremental, radical, and fundamental research in the following fields:
Education:
Biomedical engineering is growing extremely fast and needs a new generation of engineers who possess the necessary multi-disciplinary know-how. The chair aims at bringing new elements into the curriculum of computer science students.
We invite laboratories from all areas of xR (Augmented, Virtual, and Mixed Reality) and 3DUI (3D User Interfaces) to apply to present in the "Pitch Your Lab" session at the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) in Munich. The presentation may describe the laboratory's research area, members, equipment, current and past research projects, industrial and academic collaborations, and open positions. Applications must be submitted through a web application form, with optional one-page lab introduction material. Accepted presentations will require at least one presenter to attend the conference and orally present an introduction to the lab in the "Pitch Your Lab" session. Presentations and lab introduction materials will not be included in the proceedings, but will be made accessible through the conference website.
Inquiries: pitchyourlab@ismar18.org