We are excited to announce that GI 2015 will feature a total of nine invited speakers and three talks by award winners.
Physics-based Animation Sound: Progress and Challenges
Decades of advances in computer graphics have made it possible to convincingly animate a wide range of physical phenomena, such as fracturing solids and splashing water. Unfortunately, our visual simulations are essentially "silent movies" with sound added as an afterthought. In this talk, I will describe recent progress on physics-based sound synthesis algorithms that can help simulate rich multi-sensory experiences where graphics, motion, and sound are synchronized and highly engaging. I will describe work on specific sound phenomena and highlight the important roles played by precomputation techniques and reduced-order models for vibration, radiation, and collision processing.
Bio: Doug L. James has been a Full Professor of Computer Science at Stanford University since June 2015, and was previously an Associate Professor of Computer Science at Cornell University from 2006 to 2015. He holds three degrees in applied mathematics, including a Ph.D. in 2001 from the University of British Columbia. In 2002 he joined the School of Computer Science at Carnegie Mellon University as an Assistant Professor, before moving to Cornell in 2006. His research interests include computer graphics, computer sound, physically based animation, and reduced-order physics models. Doug is a recipient of a National Science Foundation CAREER award, and a fellow of both the Alfred P. Sloan Foundation and the Guggenheim Foundation. He recently received a Technical Achievement Award from The Academy of Motion Picture Arts and Sciences for "Wavelet Turbulence," and the Katayanagi Emerging Leadership Prize from Carnegie Mellon University and Tokyo University of Technology. He is the Technical Papers Program Chair of ACM SIGGRAPH 2015.
The New Face of the User Interface
For most researchers outside of the field, Human-Computer Interaction (HCI) is the study and evaluation of interactive systems and techniques. While this is an important part of our discipline, nowadays HCI is as much about inventing and building the underlying technologies as it is about studying their use. In this talk I will demonstrate why it is an exciting time to be a computer science researcher in this discipline. Ultimately, we focus on inventing disruptive interactive technologies that are grounded in real-world problems and have direct impact on users.
The Interactive 3D Technologies (I3D) Group has been born out of this new approach to HCI research. I will demonstrate examples of projects within the group, which are all motivated by pushing the boundaries of how people can interact with computers, but doing this through technical innovation. These projects move computing beyond the mouse and keyboard into the physical world around us.
I will highlight two concrete research themes that the group is tackling. The first concerns making computers more aware of their physical environment, such that the real world can be leveraged in the digital domain. The second focuses on increasing the computer's physical understanding of the user, such that gestures, gaze, expressions, and other aspects of human-human communication can be brought into the interface. This includes technical challenges such as developing new depth sensing technologies, real-time 3D reconstruction, detection, and tracking algorithms, and interactive 3D recognition techniques, as well as applications in augmented reality, robotics, 3D scanning and fabrication, performance capture, and so-called "natural" user interfaces.
HCI has traditionally been collaborative and multidisciplinary in nature, incorporating disciplines across engineering and the social sciences. I3D also values this type of multidisciplinary collaborative research, but across sub-disciplines of computer science, in particular computer vision, computer graphics, machine learning, optics, and hardware engineering. This talk will highlight the importance of working at this intersection point within computer science, show its impact on HCI, and, conversely, show how our research has begun to impact these sub-disciplines directly.
Bio: Professor Shahram Izadi is a Principal Researcher and Research Manager within Microsoft Research Redmond. He leads the Interactive 3D Technologies (I3D) group. He describes his work as: mashing together exotic sensing and display hardware with signal processing, vision and graphics algorithms to create new interactive systems, which enable users to experience computing in magical ways. His group has had many notable projects and publications to date: HoloLens; KinectFusion; RetroDepth; MotionKeyboard; Digits; Augmented Projectors; KinEtre; Vermeer; HoloDesk; Mouse 2.0; SurfacePhysics; SecondLight; and ThinSight. He was involved in a number of products, including the Microsoft HoloLens, Touch Mouse, Sensor-in-Pixel, Kinect One, and the Kinect for Windows SDK. Shahram has been at Microsoft Research since 2005 and prior to that spent time at Xerox PARC. He received a TR35 award in 2009 and was nominated as one of the Microsoft Next in 2012.
Liquid Animation Revisited: The Tri-Mesh Strikes Back
Triangle meshes are a conceptually simple, well-studied, and ubiquitous geometric representation in computer graphics. Despite this fact, liquid animation has traditionally relied almost exclusively on implicit representations, such as particle clouds or level set methods. While this choice simplifies the handling of topological operations like the merging and splitting of liquid volumes, it also sacrifices the many crucial properties and tools that have made triangle meshes so successful across computer graphics. Over the last five years or so, I and a number of other researchers have begun to reconsider this choice, raising two key questions. First, how can the limitations of triangle meshes be overcome for use in liquid animation? Secondly, and perhaps more excitingly, what can be achieved with this “new” primitive at our disposal?
In this talk I will provide an overview of this research direction, drawing on several of my recent projects for illustration. I will begin by describing how my collaborators and I adapted triangle mesh-based surface tracking to robustly support challenging multimaterial topological changes for the first time, enabling its use in animating multi-liquid flows. I will then discuss two further projects that use triangle meshes not just as a passive geometric representation for the liquid surface, but as the fundamental representation for the physical dynamics of the fluid. The first is a method to capture the buckling, folding, and stretching of thin sheets of viscous liquid, such as molasses, chocolate sauce, or cake batter; the second is a method to simulate the fascinating deformations of soap bubbles and films as they rearrange, pinch apart, and wobble. I will describe how adopting triangle meshes allowed us to develop radically different approaches to animate these liquid phenomena with substantially reduced computational cost.
Bio: Christopher Batty is an Assistant Professor in the Cheriton School of Computer Science at the University of Waterloo, where he directs the Computational Motion Group. His research is primarily focused on the development of novel physical simulation techniques for applications in computer graphics and computational physics, with an emphasis on the diverse behaviors of fluids. Techniques he introduced have been adopted by the visual effects industry, and incorporated into commercial software such as Side Effects' Houdini and Maya's Bifrost. He has also collaborated with and consulted for Exocortex Technologies, the makers of Clara.io and other visual effects software. Christopher received his PhD from the University of British Columbia in 2010, and was a Banting Postdoctoral Fellow at Columbia University through 2013. Prior to his graduate work, he developed physics-based animation software at former Canadian visual effects studio Frantic Films, where he contributed to water and smoke effects on films like "Scooby-Doo 2", "Cursed", and "Superman Returns".
Design Tools for Fabrication
Recent advances in digital fabrication technologies have lowered one barrier to creating physical artifacts; machines like 3D printers and CNC routers are more affordable than ever. However, in order for digital fabrication to become more accessible to a broad range of people (designers, hobbyists, etc.), we need software tools that help users create useful, working designs without requiring a large amount of expertise in mechanical engineering or machining. In my talk, I’ll describe some work that contributes to this goal, focusing on the problem of creating functional mechanical objects. In addition, I’ll also discuss some opportunities for leveraging ideas and techniques from fine art, crafting, and industrial design to build better fabrication tools.
Bio: Wilmot (Wil) Li is a senior research scientist in the Creative Technologies Lab within Adobe Research. He earned his Ph.D. in computer science at the University of Washington, where he was a member of the Graphics and Imaging Laboratory (GRAIL) from 2000-2007. Wil has worked on a variety of topics at the intersection of computer graphics and HCI, including interactive visualization techniques for complex 3D objects, 3D shape analysis, adaptive document layout, high-level video editing tools, and design tools for fabrication. He was born and raised in Toronto, Canada.
How playing games will make you a better human being
The negative stereotypes about the effects of playing computer or video games are a rich source of material for mass media; we hear less often about the positive aspects of digital game play. In her talk, Regan Mandryk will address prevalent negative stereotypes, debunking common myths on how playing digital games makes you: 1) fat and lazy, 2) stupid, 3) unable to focus, and 4) socially isolated. Drawing from her own research and the research of other academics who study digital games, Dr. Mandryk will leave you itching to go play games so that you can become a smarter, fitter, better-focused, and more social individual.
Bio: Regan Mandryk is an Associate Professor of Computer Science at the University of Saskatchewan. She pioneered the area of physiological evaluation for computer games in her Ph.D. research at Simon Fraser University with support from Electronic Arts. She continues to investigate novel ways of understanding players and player experience in partnership with multiple industrial collaborators, but also develops and evaluates persuasive games, exergames, games for special populations including children with neurodevelopmental disorders, games that foster interpersonal relationships, and ubiquitous games that merge the real world with the game world. She has been the invited keynote speaker at two international game conferences, led the Games theme in the GRAND NCE, was the papers chair for the inaugural CHI PLAY conference, and is leading the new games subcommittee for SIGCHI.
Reflections on my 15+ years exploring personalized user interfaces
There is no such thing as an average user. Users bring their own individual needs, desires, and skills to their everyday use of interactive technologies. Yet many of today’s technologies, from desktop applications to mobile devices and apps, are still designed for some mythical average user. It seems intuitive that interfaces should be designed with adaptation in mind so that they would better accommodate individual differences among users. Yet, what seems intuitive is not necessarily straightforward.
I will highlight selected examples of my group's research in the area of personalized user interfaces, reaching briefly back to my own PhD work and moving through to recent explorations. I will touch on various approaches to adaptation, what we've learned about the strengths and limitations of those approaches, and where promising future opportunities lie.
Bio: Joanna McGrenere is a Professor and the Associate Head of Graduate Affairs in the Department of Computer Science at the University of British Columbia (UBC). She received a PhD from the University of Toronto in 2002, an MSc from UBC in 1996, and a BSc from Western University in 1993, all in Computer Science. Her broad research area is Human Computer Interaction (HCI), with a specialization in interface personalization, universal usability, assistive technology, and computer supported cooperative work. She often serves on program committees for the top conferences in HCI, including serving as the Papers Co-Chair for CHI in 2015. She is a member of the editorial board for ACM Transactions on Accessible Computing. Joanna won a Microsoft Research Software Engineering Innovation Foundation (SEIF) award in 2013, a Killam award for Excellence in Mentoring (2012), an Outstanding Young Computer Science Research Award from the Canadian Association of Computer Science (2011), was appointed as a Peter Wall Institute for Advanced Studies Early Career Scholar (2010), and was the first recipient of the Anita Borg Early Career Scholar Award (2004). Joanna is also chairing the steering committee for the relatively new HCI@UBC initiative: an interdisciplinary meeting for scholars working in the area of Human-Computer Interaction at UBC.
The Evolution of CAD: New Tools for our 3D-Printed Future
For the past several decades, engineering and architectural CAD has evolved in parallel with, but separate from, mainstream computer graphics research. Modern techniques in geometry processing, shape modeling, and 3D interaction have not made their way into professional CAD tools. However, the rise of digital fabrication as a viable manufacturing process has renewed interest in computer graphics research in the CAD world. 3D printing lifts constraints on form imposed by CNC machining and injection molding that have long tied the hands of CAD practitioners. In this new world, it is now plausible to use triangle meshes or implicit surfaces as the basis for CAD systems that are more intuitive, more expressive, and simplify the use of 3D capture in design workflows. My group focuses on these areas, and I will present some recent results. To demonstrate the potential of novel CAD tools, I will describe recent work in which I and my collaborators travelled to Uganda to test a system for 3D-printing leg prosthetics for children. Modern geometry processing techniques were critical to this project, allowing us to integrate 3D scans and CAD parts in an easy-to-use prosthetic design tool.
Bio: Ryan Schmidt is a Research Scientist and head of the Design & Fabrication Group at Autodesk Research in Toronto, Canada. Ryan's research focuses on interactive 3D design systems, with the goal of making them more expressive and efficient. This work involves aspects of geometry processing, shape modeling, and 2D/3D interaction techniques. He is the creator of several novel 3D design tools, including Meshmixer, which was acquired by Autodesk in 2011. At Autodesk he has evolved Meshmixer into one of the standard tools for 3D printing. In his current work he is exploring the new design workflows made possible by the fusion of 3D scanning, direct mesh modeling, and advanced digital fabrication. Ryan received his BSc and MSc at the University of Calgary, and his PhD at the University of Toronto.
The Breadth/Depth Dichotomy: a Force for Mediocrity in Commercial User Interface Technologies
Those seeking to build commercial software (or “content”) face a dilemma: given the wide range of platforms and the even wider range of sensing and output capabilities, they are forced to choose between designing once for a theoretical “lowest-common-denominator” platform (breadth) or significantly redesigning their software for each hardware capability (depth). Simultaneously, new platforms are starved for content, and usually wither and die due to a lack of software that takes advantage of their unique capabilities.
In this talk, I will describe this dichotomy, and our experiences at MERL and Microsoft working with developers to overcome it. I'll also present a discussion of a wide range of innovative UI technologies that have failed to address it, and that quickly disappeared as a result. I'll discuss the research community's attempts at a solution, including our own work at U of T on the "Symphony of Devices". Finally, I will discuss the “call to action”: finding a way out of this platform-killing conundrum.
Bio: Daniel Wigdor is an assistant professor of computer science and co-director of the Dynamic Graphics Project at the University of Toronto. His research is in the area of human-computer interaction, with major focuses on the architecture of highly performant UIs, on development methods for ubiquitous computing, and on post-WIMP interaction methods. Before joining the faculty at U of T in 2011, Daniel was a researcher at Microsoft Research, the user experience architect of the Microsoft Surface Table, and a company-wide expert in user interfaces for new technologies. Simultaneously, he served as an affiliate assistant professor in both the Department of Computer Science & Engineering and the Information School at the University of Washington. Prior to 2008, he was a fellow at the Initiative in Innovative Computing at Harvard University, and conducted research as part of the DiamondSpace project at Mitsubishi Electric Research Labs. He is co-founder of Iota Wireless, a startup dedicated to the commercialization of his research in mobile-phone gestural interaction, and of Tactual Labs, a startup dedicated to the commercialization of his research in high-performance, low-latency user input. For his research, he has been awarded an Ontario Early Researcher Award (2014) and the Alfred P. Sloan Foundation’s Research Fellowship (2015), as well as best paper awards or honorable mentions at CHI 2015, CHI 2014, Graphics Interface 2013, CHI 2011, and UIST 2004. Two of his projects were selected as the People’s Choice Best Talks at CHI 2014.
Teaching Computers to See Using Big 3D Data
On your one-minute walk from the coffee machine to your desk each morning, you pass by dozens of scenes – a kitchen, an elevator, your office – and you effortlessly recognize them and perceive their 3D structure. But this one-minute scene-understanding problem has been an open challenge in computer vision for decades. Recently, researchers have come to realize that large amounts of image data have been key to several major breakthroughs in image recognition, as exemplified by face detection and deep feature learning. However, while an image is a 2D array, the world is 3D, and 3D reasoning cannot be bypassed during scene understanding.
In this talk, I will advocate the use of big 3D data in all major steps of scene understanding. I will share my experience on how to use big 3D data for bottom-up object detection, top-down context reasoning, 3D feature learning and shape representation. As examples, I will present three of our recent works to demonstrate the power of big 3D data: Sliding Shapes -- a 3D object detector trained from a large amount of depth maps rendered from CAD models, PanoContext -- a data-driven non-parametric context model for panoramic scene parsing, and 3D ShapeNets -- a Convolutional Deep Belief Network learned from CAD models on the Internet. Finally, I will discuss several remaining open challenges for big 3D data.
Bio: Jianxiong Xiao is an Assistant Professor in the Department of Computer Science at Princeton University and the director of the Princeton Vision Group. He received his Ph.D. from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at Massachusetts Institute of Technology (MIT). His research interests are in computer vision, with a focus on data-driven scene understanding. He has been motivated by the goal of building computer systems that automatically understand visual scenes, both inferring the semantics (e.g. SUN Database) and extracting 3D structure (e.g. Big Museum). His work has received the Best Student Paper Award at the European Conference on Computer Vision (ECCV) in 2012 and Google Research Best Papers Award for 2012, and has appeared in popular press in the United States. Jianxiong was awarded the Google U.S./Canada Fellowship in Computer Vision in 2012, MIT CSW Best Research Award in 2011, and two Google Research Awards in 2014 and in 2015. More information can be found at: http://vision.princeton.edu.
These talks are in addition to those given by winners of the 2015 CHCCS Achievement Award and the 2014 Bill Buxton and Alain Fournier Ph.D. Dissertation awards.