I attended the 16th annual Culture and Computer Science conference in Berlin, held at Schloss Köpenick, a beautiful art museum setting along the river. This small conference focuses on the cultural and ethical implications of computational trends for the public. This year’s theme, Hybrid Systems, mainly explored Virtual, Augmented, and Mixed Reality technologies and research. Having fallen severely out of touch with the current state of affairs in that sector, my main goal was to become better acquainted and, possibly, inspired. Below I highlight some of the presentations that caught my attention.
Definitions
To put the terminology in perspective, here are the four types of reality on the Mixed Reality Continuum.
Real environment: where we find ourselves unaided by any virtual augmentation.
Augmented reality: Mostly real but with a virtual layer on top. For example, overlaying physical objects and environments with descriptions or supplementary information is an augmented reality application.
Augmented virtuality: Mostly virtual, but with some physical overlay. In a virtual conference room, for example, one application might involve disproportionately sized heads of the actual participants levitating over the table (or wedged onto stick figure bodies), cropped from a live video transmission. Were the participants instead virtual avatars of their physical counterparts, this would be considered a fully virtual environment.
Virtual environment: A fully virtual surrounding without any “window” to the real environment.
Presentations
Data Analytics for enriching public spaces. The key idea involves re-cultivating historical and cultural awareness in public spaces rich in (not easily accessible) historical context. The Cyber Parks project embodies many of the ideas discussed in this presentation. Ethnoally, a mobile application developed for collecting multimodal field notes and conducting audio-visual research, is another tool that facilitates public space augmentation.
The presenter focused on an application for expanding Fado awareness in Lisboa, Portugal. Fado is a rich Portuguese music tradition, full of references to physical locations in Lisboa that are difficult to trace; in fact, many such references are documented nowhere other than in the songs themselves. The Augmented Reality (and data analytics) based project would digitize this information and make it accessible by means of geolocation and mobile devices. For example, a user with a mobile or other AR-capable device at a plaza in Lisboa could access the available references to that location in different Fado songs, as in the sketch below. I can see how this type of application could be adapted to a variety of historical domains.
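As a rough sketch of how such a geolocated lookup might work (the data, names, and coordinates below are hypothetical, not drawn from the project), one could index song references by coordinates and filter them by distance from the user’s position:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class SongReference:
    song: str
    excerpt: str
    lat: float
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def references_near(refs, lat, lon, radius_m=100):
    """Return every song reference within radius_m of the user's position."""
    return [r for r in refs if haversine_m(lat, lon, r.lat, r.lon) <= radius_m]

# Hypothetical entry: a verse referencing Alfama, a Lisbon neighborhood.
refs = [SongReference("Alfama", "…", 38.7119, -9.1296)]
print(references_near(refs, 38.7121, -9.1300))  # user standing nearby
```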
Time Travel via Augmented Reality. The Look Again project is an example of using AR to layer multiple historical periods onto a physical environment. The project has so far focused on viewing the Old Corner Bookstore in Boston at different points in history by means of a mobile AR application. See the Historic Boston page for greater detail and examples. The approach could be adapted to a myriad of public landmarks that have evolved significantly over time.
Enhanced audio. I have never been an audiophile in any sense of the word, so my assessment doesn’t do this technology the justice it otherwise deserves.
One enhanced audio application involved concert halls and the ability to emulate a live concert setting with the aid of VR and Ambisonic Audio equipment. Effectively, the listener can experience the concert from one of many strategic vantage points in the concert hall, including among the orchestra. As the listener rotates, the audio direction changes correspondingly, giving the impression of live acoustics.
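To illustrate the direction-tracking idea in general terms (this is a generic sketch of first-order ambisonics, not the presenters’ implementation): rotating a B-format sound field about the vertical axis amounts to a 2D rotation of the X and Y channels, leaving W and Z untouched:

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, yaw_rad):
    """Rotate a first-order ambisonic (B-format) sound field about the
    vertical axis. W (omnidirectional) and Z (vertical) are unaffected;
    X and Y transform like a plane rotation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return w, c * x - s * y, s * x + c * y, z

# Head tracking: if the listener turns their head by `yaw`, rotate the
# sound field by -yaw so that sources stay fixed in the virtual hall.
w, x, y, z = (np.zeros(512) for _ in range(4))  # placeholder audio frames
listener_yaw = np.pi / 6
w, x, y, z = rotate_bformat_yaw(w, x, y, z, -listener_yaw)
```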
Another application seeks to bring the recorded storytelling element of cultural spaces to life by means of similarly enhanced acoustics. See binci - Binaural tools for the Creative Industries for greater detail. Similar to an audio tour, this application lets the listener feel part of a live audio conversation in a more interactive manner. I tested a few demo use cases, listening to stories about cultural spaces in 3D audio. The audio perspective changed depending on how I tilted my head.
VR as an archaeological tool. I experienced a VR demonstration of a recreated historical environment, using the Pnyx, the public assembly ground of Athens in the 5th-4th centuries BCE, as a case study. The public meetings, debates, and voting held in this space largely shaped Athenian democracy.
How does the project relate to archaeological research? By simulating this environment visually and acoustically, a researcher can better gauge how effectively participants followed a speech, and how the orator handled a large crowd under the physical constraints of the environment. Two-dimensional reconstructions do not quite capture this element to the same extent. These types of virtual reconstructions can raise questions not apparent to researchers through traditional means.
Demosthenes, the orator in the simulated environment, spoke German, which sparked some debate in the audience with respect to “linguistic finesse”. After all, ancient Greek arguably possessed certain properties that contributed to assembly communication in particular ways. How much this parameter contributes to the overall experience, I find uncertain.
With my German at an extremely low level, I mostly roamed among the Pnyx assembly, armed with the VR head-mounted display, earphones, and a movement control interface. This must have been my first ever VR experience, but I adapted without much difficulty or any “cyber sickness”. I varied parameters such as the audience and orator mood and volume, as well as the height of the orator platform. The acoustics varied with the direction I faced and my distance from the respective audio sources.
There were many parameters I didn’t experiment with, including some disabled due to computational constraints: weather conditions, time of day, wind, texture quality, object inter-distances, and assembly size.
Technological constraints remain a limiting factor. The variables were notably scaled down, since an enormous amount of live computation takes place among the different objects. The simulation leveraged the Unity game engine, very common in similar VR applications and capable of much more in traditional game settings not bound by live constraints. The technical setup of this presentation consisted of two powerful (game-purposed) and intricately synchronized laptops: one dedicated to the computation (the auralization, as termed in the related publication), and one to synthesize the acoustic input with the visual simulation.
Novel use of a Chatbot. A group of Coding Da Vinci hackathon participants developed a chatbot as an interactive medium for a cultural heritage dataset. In particular, the project leveraged a dataset of ~11,000 Jewish children persecuted under the Nazi regime, provided by the International Tracing Service (ITS). The chatbot transformed the dataset into an interactive experience by means of a tour through Berlin, centered on landmarks and memories specific to the children’s lives. (The dataset was complemented by others to fill in details needed for a location-based narrative.) The Marbles of Remembrance project page provides further detail.
One could consider this an Augmented Reality application in the sense that it presents historical information through a richer interaction medium. Additionally, I thought the idea of an interactive, customizable tour by means of a chatbot was a novel way to experiment with a classical presentation medium. Not to be confused with a guided museum audio tour, a reactive medium, the application takes on a more proactive role: it not only communicates in a more conversational, human fashion, but also provides specific route and geolocation information to navigate the user around Berlin.
How specifically does this function? Once activated, the chat application guides the user, if desired, to the next relevant destination (one holding information pertaining to a particular child) via friendly dialog. It offers multiple options and the liberty to explore or skip destinations, resulting in a tour customizable in both sequence and depth.
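A minimal sketch of what such a dialog loop might look like (the stops and wording here are hypothetical, not the project’s actual script or data):

```python
from dataclasses import dataclass

@dataclass
class Stop:
    name: str
    story: str        # narrative about the child tied to this landmark
    directions: str   # route hint for reaching the stop

# Hypothetical tour data; the real project draws on the ITS dataset.
TOUR = [
    Stop("Hackescher Markt", "…", "Take the S-Bahn to Hackescher Markt."),
    Stop("Grosse Hamburger Strasse", "…", "Walk two blocks north."),
]

def run_tour(tour):
    """Offer each stop in turn; the user may visit, skip, or quit,
    making the tour customizable in sequence and depth."""
    for stop in tour:
        answer = input(f"Shall we head to {stop.name}? (yes/skip/quit) ").strip().lower()
        if answer == "quit":
            break
        if answer == "skip":
            continue
        print(stop.directions)
        print(stop.story)

run_tour(TOUR)
```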
Stories told from multiple perspectives. Inspired by the Rashomon effect, this research deals with the behavioral analysis of conflicting perspectives, as applied to interdisciplinary teamwork. Stories can be told from multiple perspectives, as evident in the Akira Kurosawa film Rashomon or the more recent Hollywood animated film Hoodwinked, used as a case study in this presentation. Other films immediately come to mind, such as The Usual Suspects or one of my personal favorites, Basic.
The causal relationships in these conflicting perspectives resemble Bayesian inference in one sense, or reinforcement learning in another: one must navigate an unpredictable environment with partial information and no behavioral model.
The research develops a visual model of conflicting storytelling by means of a causal graph that incorporates observables (differentiating between matching, conflicting, and partially overlapping perspectives) as well as possible explanations. The graph provides more information and flexibility than I explore here. It can then be analyzed by probabilistic means to obtain likely explanations for certain events and to draw higher-level inferential conclusions (e.g., what motivates each individual, who really did what, etc.). Often this type of analysis reveals a series of gray areas and exposes common prejudices.
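As a toy illustration of the kind of probabilistic reasoning involved (the numbers and setup are invented, not taken from the research): given two witnesses of differing reliability who give conflicting accounts of an event H, Bayes’ rule yields a posterior belief in H:

```python
# Two witnesses give conflicting accounts of an event H. Each reports
# truthfully with some reliability; assuming conditionally independent
# reports, Bayes' rule gives the posterior belief in H.
prior = 0.5               # prior belief that H happened
rel_a, rel_b = 0.9, 0.6   # reliability of witnesses A and B

# Witness A says H happened; witness B says it did not.
p_obs_given_h     = rel_a * (1 - rel_b)   # A truthful, B mistaken
p_obs_given_not_h = (1 - rel_a) * rel_b   # A mistaken, B truthful

posterior = (p_obs_given_h * prior) / (
    p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
)
print(f"P(H | conflicting accounts) = {posterior:.2f}")  # ≈ 0.86
```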
In the more practical application to interdisciplinary teamwork, the research aims to apply this sort of modelling as a method to improve team interaction. Each team member often brings a unique perspective on, and motivation behind, a problem, giving rise to interpersonal conflict if not adequately handled. Building a probabilistic model around the team’s causal relationships can significantly help with such conflicts by externalizing, visualizing, and opening to discussion behavioral patterns not traditionally apparent.
Collaborative Sketching in a Virtual Environment. The ability to collaboratively sketch 3D models in a virtual setting is an idea I had entertained but never before seen in action. Much prior work exists in this domain that I will not cover here; I will focus instead on the VENTUS project, the work presented.
VENTUS combines the commonly used Unity 3D engine with a CAD modelling kernel to provide a distributed, collaborative sketching and modelling environment. While VENTUS doesn’t render user avatars, for spatial simplicity (and probably computational constraints), it does render hand movements, evoking a sense of progression behind the sketches. Objects can be scaled, duplicated, deleted, rotated, transformed, and otherwise manipulated in accordance with the traditional options available in modelling toolkits.
A user performs the sketch in mid-air, with strokes visible in real time to all participants. Other features are in development or under consideration. The time machine feature, for example, enables users not only to replay a sketch in progress, but also to spawn off branches as a means of experimenting with parallel models; a minimal sketch of such a branching history follows below. A mini-map enables users to localize each other, this being an open virtual environment after all. Speech and pointing also facilitate communication between participants.
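One plausible way to structure such a branching history (this is hypothetical, not VENTUS code) is a tree of strokes, where each node can spawn parallel branches and replay walks every branch:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One recorded stroke in the sketch history. Children represent
    branches spawned off to experiment with parallel models."""
    stroke: str                              # stand-in for real stroke geometry
    children: list = field(default_factory=list)

    def branch(self, stroke):
        child = Node(stroke)
        self.children.append(child)
        return child

def replay(node, path=()):
    """Depth-first replay, printing each complete branch of the history."""
    path = path + (node.stroke,)
    if not node.children:
        print(" -> ".join(path))
    for child in node.children:
        replay(child, path)

root = Node("base outline")
wing_a = root.branch("wing, variant A")  # first experiment
root.branch("wing, variant B")           # parallel branch from the same point
wing_a.branch("strut detail")
replay(root)
```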
A question relevant to any 2D or 3D freehand sketching naturally arises: how does one deal with the loss of hand motor control in the absence of a supporting physical surface? And to what extent does the resulting loss of sketching precision affect the overall experience? This point was covered in the presentation slides but, unfortunately, not referenced in the publication available to me, so I cannot provide a link. It was noted, though, through some studies, that drawing in mid-air incurs somewhere between a 25-30% loss of precision compared to drawing on a physical surface, while the presence of a virtual surface reduces this loss by roughly a third. Again, I don’t have access to the proper references and quote these details from memory.
Overall, I found this application a reassuring addition to the family of collaborative design. I am optimistic about collaborative VR design expanding into the domains of electronics, art projects, mechanical models, and even programming.
Sources referenced
- Culture and Computer Science conference
- Mixed Reality Continuum
- Cyber Parks
- Ethnoally
- Fado
- Look Again
- Historic Boston
- Google Open Heritage Project
- binci - Binaural tools for the Creative Industries
- Ambisonic Audio
- Demosthenes
- Coding Da Vinci
- Marbles of Remembrance
- International Tracing Service
- Rashomon effect
- Bayesian Inference
- VENTUS project
Questions, comments? Connect.