Concept
DIDYMOS-XR will research and develop robust and scalable methods for 3D scene reconstruction from heterogeneous cameras and sensor data (e.g., lidar), integrating data captured at different times and under different environmental conditions and creating accurate maps of the scene.
In order to turn digital representations into actual digital twins, they need to be extended to (i) cover all relevant parts of their environment (possibly covering a large area of the real world), (ii) be kept in sync with changes in the real world, and (iii) represent the semantics of objects in order to link sensor data to functionalities. While it is feasible to use high-end equipment for the initial capture, continuous updates need to be obtained in a cost-effective way, using the numerous but heterogeneous sensors already deployed. Automated methods based on artificial intelligence are needed in order to fuse information from these sensors, analyse the semantics of the scene and make the appropriate changes to the digital twin.
The capture of scenes at scale, as well as the use of cameras and other sensor data to synchronise the digital representation, bears the risk of capturing personal and sensitive data. Hence, the technologies resulting from DIDYMOS-XR must be ethical and privacy-aware by design. The semantics of elements in the scene will be automatically annotated using deep learning approaches trained with weakly or self-supervised methods, able to scale and adapt to a large set of relevant objects.
Strategic Objectives
/
01
Research and develop scalable and high-fidelity methods to capture and map large indoor and outdoor spaces.
In cases where dynamic elements of digital twins are connected to sensors, these connections currently have to be largely hand-crafted. The vision of DIDYMOS-XR is to enable advanced, more realistic and more dynamic extended reality (XR) applications, enabled by artificial intelligence (AI).
/
02
Research and develop synchronisation and update methods for digital twins, making use of already deployed sensors and heterogeneous data.
DIDYMOS-XR will research and develop methods to analyse data from stationary (e.g., fixed cameras, traffic sensors) and moving sensors (e.g., on-vehicle cameras, lidar) to align data with the digital twin, determine changes and update the digital twin. These methods will build on outputs from localisation (SO3) and semantic scene understanding (SO1). The partners will research methods for self-supervised learning of the dynamics of scene elements as well as their dependency on sensor inputs.
/
03
Research and develop scalable and accurate positioning and adaptive rendering for XR applications.
In order to enable high-fidelity XR experiences, DIDYMOS-XR will research and develop approaches for large-scale consistent localisation under varying lighting conditions, dynamic scene objects and erratic camera motions, enabled through on-the-fly photometric and geometric calibration and multimodal sensor integration.
/
04
Ensure that the developed technologies are ethical, privacy-aware and safe by design through continuous impact assessment and stakeholder consultations.
DIDYMOS-XR will analyse the ethics and privacy issues raised by the technologies developed in the project, from conception through prototype development to validation, around capturing and processing sensor data for updating digital twins and high-accuracy localisation of users. This will ensure that appropriate safeguards are in place, guaranteeing users' rights as well as their acceptance of these technologies. In addition to continuous socio-economic impact assessment, technological development will be conducted in line with a human-centred approach by relying on co-design and social scientific research on end-user needs.
/
05
Validate the technologies in five real-world XR applications in the domains of smart cities and industrial production.
The partners will involve representatives of stakeholders from the target domains in the design of the technologies from the start. They will validate the technologies in XR applications for citizen involvement in urban planning, tourism, and smart mobility in cities/villages, and collaboration with autonomous mobile robots in industrial production environments.
Terminology
Digital twin: A virtual representation of an object or system, ranging from a machine to a city. A digital twin could encompass the geometry, the internal mechanisms, or both.
High-fidelity capture: A digital representation describing a real-world object in high detail with regard to geometry and/or inner workings, allowing greater accuracy of simulation.
Sensor data fusion: The merging of data from multiple sensors, with the aim of reducing the uncertainty of a task such as robotic navigation.
Scene understanding: Computational approaches to interpreting the context of a situation, enabling better automated decision-making.
SLAM: ‘Simultaneous localisation and mapping’, methods used in robots or autonomous systems to simultaneously chart an unknown environment and determine the system's position within that environment.
Image-based localisation: The determination of a system's location and orientation based on visual information (e.g., camera input), using visual analysis methods.
Scalable rendering: Technical approaches to creating imagery according to the capabilities of devices and their displays.
Lidar: ‘Light detection and ranging’, a remote sensing method that uses laser light to measure distances to objects in the environment, creating detailed 3D maps for various applications.
Localisation: The establishment of a device's position (see SLAM), vital to robotic applications such as self-driving cars.
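To make the sensor data fusion entry above concrete, the sketch below fuses two noisy measurements of the same distance by inverse-variance weighting, a standard building block of fusion methods; all sensor names and numbers are illustrative, not project data.

```python
def fuse(m1, var1, m2, var2):
    """Fuse two measurements of the same quantity by inverse-variance weighting.

    Returns the fused estimate and its variance; the fused variance is
    never larger than the smaller of the two input variances, which is
    why combining sensors reduces uncertainty.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Illustrative example: a camera-based depth estimate (10.2 m, variance 0.25)
# fused with a lidar return (10.0 m, variance 0.01). The result lies close
# to the more precise lidar measurement.
estimate, variance = fuse(10.2, 0.25, 10.0, 0.01)
```

The weighting pulls the result towards the lower-variance sensor, which is the basic mechanism behind Kalman-style fusion used in robotic navigation.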
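The lidar entry above rests on a simple time-of-flight relation: the distance to an object is half the round-trip travel time of the laser pulse multiplied by the speed of light. A toy illustration (real lidar adds beam steering, multiple returns and noise handling on top of this):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_tof(round_trip_seconds):
    """Distance to the reflecting object.

    The pulse travels to the object and back, so the one-way distance
    is half of speed-of-light times round-trip time.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after ~66.7 nanoseconds corresponds to roughly 10 metres.
d = distance_from_tof(66.7e-9)
```

Repeating this measurement across millions of laser pulses per second, each fired in a known direction, is what yields the detailed 3D point clouds used for mapping.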
Consortium
Coordinator
Joanneum Research is an Austrian research and technology development company based in Graz, and the coordinator of DIDYMOS-XR. Read more on their website, and learn about the rest of our expert consortium below…
Participants
Project Outcomes
The project is divided into six Work Packages (WPs). The objectives of project management include:
- Lead the project to technical, organisational and financial success
- Coordinate the technical and scientific work throughout the project lifetime
- Set up communication and project management structures
- Ensure the overall project quality
- Perform management of collected and created data
- Deliver the periodic activity and management reports as required by the Grant Agreement