During AWE2014, an “Open and Interoperable AR Workshop” will take place on May 27th. As you know, we are very interested in AR standards on this blog: they are a way to spread the technology. So we met Christine Perey, who leads the workshop and the AR Standards Portal, to learn more!
Hello Christine, you are leading projects and a grassroots community that focus on open and interoperable Augmented Reality. What does it mean for AR to be “open”?
It’s very hip and easy to say that something in technology is or should be “open,” but your question raises an excellent point. It is not easy to define “open” precisely, and even more difficult to build something open. I’ll begin with the definition published in an essay on the topic, “Education for Openness,” by Michael Peters in the Encyclopedia of Educational Philosophy and Theory.
Peters defines openness as “a kind of transparency which is the opposite of secrecy and most often this transparency is seen in terms of access to information especially within organizations, institutions or societies.” When we say that a person is “open,” most people understand that we mean that the person’s mind is open to new experiences. “Open data” refers to a movement that advocates for making data available to society in digital formats (which can be proprietary or standard formats) for commercial and non-commercial purposes at no cost.
An “open standard” has several potential meanings (see the Wikipedia article). An open standard may be one that’s developed through an open and consensus-based process by a group of people who have entered into an agreement to contribute to a specification without requiring royalties on the intellectual property they provide. It can also mean that the standard is developed outside of an institutional framework of any type. An open standard may also refer to a standard that can be implemented by anyone without payment to its publisher. Many people don’t make (or know that there exists) a clear distinction between a specification that is published by a standards development organization (SDO) and a “de facto” (or market-driven) standard. A de facto standard refers to the most popular approach or interface as measured by the number of implementations. It may carry a royalty or be given away by those who proposed or wrote it.
When a specific technology suite or an individual component is said to be “open,” it’s not clear whether the term applies to the business model (no charge for use) or to access (there are no technical barriers to using it).
I define “Open AR” as a set of values for producing and publishing AR experiences based on transparency that simplify the user’s access to the content in the real world. The transparency will most likely be based on standards that define how units of information are published, stored in association with other data, processed, delivered, rendered and, finally, presented.
When AR is open, the content of the “augmentations” for the AR experience, and the interactions offered to the user when the real-world trigger is recognized, will be “accessible from” (readable by) a variety of client applications. The user won’t be restricted to the viewing application chosen by the experience’s publisher. The AR experience will be part of a large, contextually-sensitive system for capturing, publishing, querying and “consuming” digital information.
Think of the principles and values of the World Wide Web today. When a publisher (even an individual) makes content ready for viewing “via the Web,” the application that opens it isn’t known or defined. The content can be accessed through a Web browser or a Web application. Many mobile applications for native iOS or Android also access Web-based content. The fact that it is accessible doesn’t necessarily mean that it is provided without cost. The business models, the privacy and security technologies, and communications protocols that surround the content viewing experience are completely separate from the fact that people are using agreed upon, transparent principles for signaling between the user and the source of the content.
The last two steps in the AR workflow (delivery of AR experiences based on a trigger’s detection and AR presentation at the time when the real world trigger is recognized) are very different from the steps in the delivery and presentation of Web-based content. We are in a phase during which many people are experimenting, innovating with content delivery and presentation approaches in closed and proprietary systems. These systems are much easier to experiment within because there are fewer unknowns, fewer variables that are not in the content publisher’s control.
How will AR technologies and experiences be different when your goals have been achieved?
It’s exciting to think of the possibilities but difficult to predict the variety of innovation we will soon see for AR experience delivery. There are thousands of people in hundreds of companies who are working on this in some small or big way today. We are still in the beginning. Think of it as the “CERN era” of the World Wide Web. Only the experts can build and use AR authoring tools, publish experiences and manage the publishing environments. But information technology change is accelerating. The speed with which new systems are introduced and adopted, and almost as often abandoned, is unprecedented.
Within a few years, conventions for AR experience delivery and presentation will emerge and soon thereafter, there will be open interfaces or translation services that will make these conventions easy to include in any digital information system. The proprietary tools and delivery systems will continue to be used in specific use cases with very demanding requirements but closed, proprietary AR viewing systems will seem restrictive for many general information purposes.
We can’t get there in one jump. There not only needs to be development but also education of users, and that education about AR experiences will happen in stages. In the first stage, when users encounter an AR-rich object (a book) or a venue (shopping center, school, sports stadium, etc.), there will be a symbol on it. People are beginning to recognize the shape of a smartphone when they see it.
For the rest of the physical world where the density of triggers and content is lower, and eventually everywhere, the user will be notified of AR-enabled experiences in proximity by a discovery service widget integrated into all AR-enabled user agents (software clients on AR-ready hardware).
There will be physical, interactive “languages” (most likely involving gestures) that just catch on for different use cases. Most people who want AR in their information systems won’t need to have anything explained to them to control and interact with the digital content in their physical world because it is intuitive. It will most likely have a lot in common with how we interact with the physical world.
What’s the state of the art today?
We are unable to generalize across all the types of AR and all the technology components that AR can use. Today we have some building blocks for open and interoperable AR (see this page of existing standards) and others under development (see this page of emerging standards for use in AR systems).
It all depends on where you are looking, how far back you want to go and what you consider to be most important. If you are building mobile geospatial AR experiences, then you are probably using principles first introduced in the Reference Model for Context-Aware Mobile Services published by Floch et al. in 2001. While not a standard, parts of it have been used in other standards. We can apply the taxonomy of visualization techniques published by Chi et al. in 2000 to describe digital data for AR. Parts of the Geography Markup Language of the OGC have been used in the draft ARML 2.0 Candidate Standard.
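As an illustration of that GML reuse, here is a minimal ARML 2.0 document sketched from the OGC candidate standard. The coordinates, IDs and model URL are placeholders of my own; the element names follow the draft specification, where a Feature’s geospatial anchor is expressed with GML geometry and its virtual assets reference an interchange format such as COLLADA:

```xml
<arml xmlns="http://www.opengis.net/arml/2.0"
      xmlns:gml="http://www.opengis.net/gml/3.2"
      xmlns:xlink="http://www.w3.org/1999/xlink">
  <ARElements>
    <!-- A Feature ties a real-world anchor to its virtual assets -->
    <Feature id="eiffel-tower">
      <name>Eiffel Tower</name>
      <anchors>
        <!-- Geometry anchors reuse GML: here, a WGS84 point -->
        <Geometry id="tower-location">
          <gml:Point gml:id="tower-point">
            <gml:pos>48.8583 2.2945</gml:pos>
          </gml:Point>
          <assets>
            <!-- Placeholder URL for the 3D model shown at the anchor -->
            <Model>
              <href xlink:href="https://example.com/models/tower.dae"/>
            </Model>
          </assets>
        </Geometry>
      </anchors>
    </Feature>
  </ARElements>
</arml>
```

Note the .dae asset: referencing 3D models in an interchange format like COLLADA is what lets the same published experience be rendered by different AR browsers.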
In the OSI stack for communications systems, nearly all the work happening for standards that could impact AR is at the application layer (Layer 7). Most people developing AR tools and experiences begin with the assumption that the lower layers of the stack (as provided for other information systems) are valid and can be used “as is.” In other words, we have yet to clearly identify unique AR-specific requirements for which we cannot implement standards being developed for communication between Machine-to-Machine or Internet of Things systems.
Many of the components for geospatially-referenced experiences and data formatting are defined by standards published by the Open Geospatial Consortium. To have open systems for visual recognition, there will need to be agreements on how features are encoded. This is the purpose of the Compact Descriptors for Video Search activity in the MPEG. A standard for 3D model interchange, COLLADA, is available from the Khronos Group. Other standards for 3D AR are being developed in the Web3D Consortium and the OGC.
If you focus on the AR experience delivery and presentation parts, however, it’s less clear that we will meet all the requirements for Open AR by simply reusing other standards. For AR delivery, some work has been done, including the publication of a MobAR Enabler by the Open Mobile Alliance.
In the presentation and rendering parts, most systems with GPUs accelerate AR experience presentation using Khronos Group standards such as WebGL. The Khronos Group is working on three new specifications that will be directly relevant for AR on mobile platforms: OpenVX for vision processing, StreamInput for sensor fusion and KCam, an API for camera control.
In addition, we must communicate to the vendors of technology silos that producing open and interoperable AR content and experiences is in the best interest of them, their developers and their users.
We are beginning to define what it means for AR technologies to be interoperable. In 2013 a group of experts formed a task force to focus on interoperability. We defined interoperability as the condition in which an authoring environment provided by one vendor (e.g., Unity, metaio, Layar, Wikitude), when used by developers, produces experience content that can be stored in a publishing system provided by another vendor (e.g., SAP, Oracle, Amazon, AR-code), and these experiences are triggered by the AR-enabled user agent (hardware and software) of the user’s preference (say, an AR browser published by Layar, Wikitude or metaio).
Metaio, Layar and Wikitude worked together towards this goal, and at the Mobile World Congress in Barcelona in February 2014 they demonstrated a proof-of-concept level of interoperability. Since then they have worked to put the architecture into their production systems, and at AWE 2014 they will release the commercial versions of their software with support for geospatial AR experience interoperability. This is a really important milestone for all those who are interested in Open AR.
In addition, there are some open source projects that can be used to produce AR experiences. One of those, the Augmented Web library published by BuildAR, makes it possible to trigger and interact with an AR experience in Chrome. There are also some open source AR browsers such as MixARe and ARgon and an Open Source AR experience server from LightRod.org.
AR binds many technologies together (imaging, localization, vision control, rendering, web, etc.). Should AR standards be a “mashup” of many standards?
There are many different technology groups that contribute to AR experiences so there can’t be one “AR standard.” There are many standards available today which can be used for making AR experiences. We mentioned some above and maintain a catalog of those on this page.
Many more standards will be adapted to make AR part of existing information management and visualization systems. Some will become known as standards that are developed primarily to address specific AR requirements. One of those is the MPEG’s Augmented Reality Application Format. Another is the OGC’s ARML 2.0 Candidate Standard.
It will be very valuable for us to have a Mixed and Augmented Reality Reference Model. This is currently under development within the ISO/IEC JTC 1 MAR RM Joint Adhoc Group.
I’m not in favor of a universally understood or standard “AR is here” logo on every building, every person, every commercial object. That is just not going to happen, for many legal and cosmetic reasons.
You are organizing an “Open and Interoperable AR Workshop” during AWE2014. Could you summarize the goals of this event?
Our workshop on Open and Interoperable AR at AWE2014 is focused on bringing information about open and interoperable AR to the attention of developers. Attendees will:
- Get up to speed on open source projects and standards that are available today for use in interoperable AR deployments and tools,
- Discover how to access and contribute to new projects and standards under development, and
- See, first-hand, the latest examples of open and interoperable AR implementations.
I look forward to seeing you there on May 26!
Grégory MAUBON is digital coordinator at HCS Pharma, a biotech startup focused on in vitro R&D, specializing in high-content (HCA) and high-throughput (HCS) cellular imaging screening. HCS Pharma markets products based on BIOMIMESYS® technology and develops its own 3D cell models in its proprietary BIOMIMESYS® extracellular matrix. He manages the company’s IT operations and drives the digital practices tied to its needs. He works on data management as CDO (Chief Data Officer) and leads the company’s Artificial Intelligence R&D program.
He has also been an independent Augmented Reality consultant since 2008, when he created the website www.augmented-reality.fr, and in 2010 he co-founded RA’pro, an association for the promotion of augmented reality. His experience in the field was built by supporting numerous companies, across all sectors, in the effective implementation of augmented reality and in the definition of objectives and success criteria.