
MIXED REALITY MODELS

Mixed Reality Holographic Models for the Interactive and Synergetic Exploration of Space Structures in Architectural Design and Education

Juan José Castellón González

Anukriti Mishra

Guangyu Xu

Jose Daniel Velazco Garcia

Nikolaos V. Tsekos

The purpose of this research is to explore the academic potential of Mixed Reality (MR) models based on computer-generated holograms for the interactive and immersive exploration of spatial structures in architectural design and education by multiple users. This work is driven by the potential impact of concurrently sharing and interacting with MR scenes via holographic interfaces. With an interfaceable, scalable, and modular computational architecture, this platform is a powerful resource that allows multiple users, across multiple formats (digital and physical) and multiple physical locations, to design as well as assess structural and physical properties. The underlying didactic hypothesis is that holography and MR offer an unparalleled sense of immersion while enabling rapid virtual prototyping and project-driven synergy in combination with physical models. We present results on Augmented/Mixed Reality as well as Virtual Reality (VR) using head-mounted devices (HMDs) such as the HoloLens and Vive and hand-held devices (HHDs) such as iOS iPads and iPhones and Android tablets, all viewing and interacting with the same architectural holographic scene at the same time.

THE CONCEPT


Fig. 1. (a) Example of multi-user collaboration: the instructor stacks two blocks while multiple users at different locations, using different devices (HMDs, tablets, or smartphones), experience those changes on the fly in their respective MR environments. Physical changes are tracked using stereo cameras and appropriate spatial registration algorithms. (b) Illustration of the deployed framework set-up customized for architectural immersion, underscoring the central control cyber-entity (server) and the two-way communication with the users.

In this first prototype, the primary design directive was to enable true real-time collaboration between users; as in the example shown in Fig. 1(a), a user can alter the scene, and these changes are shared with the other users. Successful bidirectional communication requires mimicking the actions that a user takes while collaborating in a real physical environment and communicating those same actions to all the other collaborating users. Moreover, we considered certain practical challenges. First, different users may have access to different devices: some to high-end head-mounted displays, others to tablets, or just a smartphone or laptop, and these run different operating systems (iOS, Android, Windows, etc.). To accommodate that, we decided to develop a system that is independent of OS or platform. Another design directive was the ability to run appropriate software to model physical-world forces, interactions, and dynamics. We also wanted practitioners to be able to modify the computational algorithms or add new modeling modules. This created yet another challenge: such high-quality software cannot run on a tablet or an HMD.

Aiming to develop a useful tool for practicing professionals and academicians as well as students, and to address those design directives and challenges, we decided to implement a computational network linking all the involved entities based on the open-source framework by Velazco et al. [2]. This communication fabric offered major flexibility. (A) All demanding computations were performed on a server (a PC with high computational power and memory), as illustrated in Fig. 1(b); the devices were used only to perform rendering and as visualization interfaces. (B) Individual users were linked to the server rather than to each other, facilitating a highly efficient protocol; in addition, cybersecurity can be easily incorporated. (C) The framework allows for easier scaling across multiple platforms: users can collaborate using immersive or non-immersive devices such as a desktop/laptop or augmented reality (AR) devices like smartphones, tablets, and head-mounted display (HMD) devices.
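As an illustration of this client-server pattern, the minimal sketch below encodes every user action as a small serializable message that is sent only to the central server, which then broadcasts the resulting scene update to all connected devices. The types and field names are illustrative assumptions, not the actual API of the framework in [2].

```csharp
// Minimal sketch (hypothetical types, not the framework's actual API) of the
// client-to-server messaging pattern: actions go to the server only, and the
// server pushes scene updates back to every connected user.
using System;

[Serializable]
public struct SceneAction
{
    public string userId;      // who performed the action
    public string actionType;  // e.g. "MoveBlock", "RemoveBar", "Rescale"
    public string targetId;    // identifier of the affected scene element
    public float[] payload;    // action-specific data (position, scale factor, ...)
}

// Hypothetical client-side connection: devices never talk to each other directly.
public interface IServerConnection
{
    void Send(SceneAction action);            // client -> server
    event Action<SceneAction> OnSceneUpdate;  // server -> all clients
}

public class CollaborationClient
{
    private readonly IServerConnection connection;

    public CollaborationClient(IServerConnection connection)
    {
        this.connection = connection;
        // Demanding computations (physics, structural checks) run on the server;
        // the client only updates its local rendering when an update arrives.
        connection.OnSceneUpdate += update => ApplyToLocalScene(update);
    }

    public void RemoveBar(string userId, string barId)
    {
        connection.Send(new SceneAction
        {
            userId = userId,
            actionType = "RemoveBar",
            targetId = barId,
            payload = Array.Empty<float>()
        });
    }

    private void ApplyToLocalScene(SceneAction update) { /* update local rendering */ }
}
```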

THE IMPLEMENTATION

The presented prototype was developed on the Unity 3D engine [3], version 2021.3.1f1, in conjunction with Visual Studio 2019 for both AR and non-AR devices. Unity uses C# as its programming language and offers various built-in packages to implement AR/VR/MR functionalities across multiple devices. The ARFoundation package [4] facilitates the use of a device's augmented reality features by wrapping the APIs of ARKit [5] (the AR library for iOS provided by Apple) and ARCore [6] (the AR library for Android provided by Google). The server was implemented using the Photon Engine [7]. Experimental studies were performed using an iPhone (Xs, iOS version 15.5, 64 GB) as the AR platform device and an Apple desktop (Mac mini, macOS version 13.3.1, 8 GB) as the non-AR device. The application is compatible with iOS and Android for AR-enabled smartphones and tablets, and with both Windows and macOS for non-AR desktops and laptops. The server for this prototype was set up in the cloud.
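For orientation, the sketch below shows how a Unity client might connect to the Photon cloud and join a shared session using Photon PUN 2. The room name and player limit are illustrative choices, not the prototype's actual configuration.

```csharp
// Minimal sketch, assuming Photon PUN 2: connecting a Unity client (phone,
// tablet, or desktop) to the cloud server and joining a shared room in which
// scene changes are relayed to all users.
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

public class CollaborationLauncher : MonoBehaviourPunCallbacks
{
    void Start()
    {
        // Connect to the Photon cloud using the project's PhotonServerSettings
        // asset (App ID, region, etc.).
        PhotonNetwork.ConnectUsingSettings();
    }

    public override void OnConnectedToMaster()
    {
        // All collaborating users join the same named room.
        PhotonNetwork.JoinOrCreateRoom(
            "SpaceStructureSession",              // hypothetical room name
            new RoomOptions { MaxPlayers = 10 },  // illustrative limit
            TypedLobby.Default);
    }

    public override void OnJoinedRoom()
    {
        Debug.Log($"Joined room with {PhotonNetwork.CurrentRoom.PlayerCount} user(s).");
    }
}
```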

Fig. 2: Spatial immersion with the holographic tetrahedron as seen on an iPhone. (a) Captures at three positions as the user physically moves closer to the structure. (b) The user rescales the holographic structure relative to its original, anchored position.

Studies were performed to assess (a) the potential of computer-generated holograms for appreciating space and 3D structures, (b) multi-user collaboration, and (c) multi-platform compatibility. To test the multi-platform support, we ran the application on one AR device (iPhone XS) and one non-AR device (macOS desktop). Figures 2 and 3 illustrate spatial immersion and provide an appreciation of a 3D structure (a computer-generated hologram of a tetrahedron) relative to the surrounding space, as seen by a user on an iPhone (the images are screenshots captured with the iPhone itself). As part of the immersion in real space, the holographic structure was placed by the user on top of a conference table.
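One common way to anchor a hologram on a real surface with Unity's ARFoundation is a plane raycast from a screen touch; the minimal sketch below illustrates this approach under that assumption and is not necessarily the prototype's exact placement code.

```csharp
// Minimal sketch, assuming Unity ARFoundation: anchoring the holographic
// structure on a detected real-world surface (e.g., a conference table) by
// raycasting from the user's screen touch onto tracked planes.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class HologramPlacer : MonoBehaviour
{
    [SerializeField] private ARRaycastManager raycastManager; // on the AR Session Origin
    [SerializeField] private GameObject tetrahedronPrefab;    // the holographic structure

    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Cast a ray from the touch position onto detected planes.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            // Place the hologram at the first hit pose (e.g., the table top).
            Pose pose = hits[0].pose;
            Instantiate(tetrahedronPrefab, pose.position, pose.rotation);
        }
    }
}
```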

 

The three captures in Fig. 2(a) correspond to three positions as the user physically moves closer to the structure; as a reference, note the relative size of the chair and window shutters in the background. The panel in Fig. 2(b) illustrates one of the features incorporated into the application: the user can change the size of the holographic structure, which is performed by scaling it relative to its original, anchored position.
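A minimal sketch of such in-place rescaling is shown below; the scale limits and the way the factor is supplied are illustrative assumptions, not the prototype's actual resizing code.

```csharp
// Minimal sketch: rescaling the placed hologram about its anchored position.
using UnityEngine;

public class HologramRescaler : MonoBehaviour
{
    private Vector3 originalScale;

    void Start()
    {
        // Remember the scale the structure had when it was first placed.
        originalScale = transform.localScale;
    }

    // Called with a factor such as 0.5 (half size) or 2.0 (double size),
    // e.g., from a pinch gesture or a UI slider.
    public void SetScaleFactor(float factor)
    {
        factor = Mathf.Clamp(factor, 0.1f, 10f);       // keep the model usable
        transform.localScale = originalScale * factor;  // scale about the anchor
    }
}
```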


Fig. 3: Illustration of changing rotational perspectives of the tetrahedron with the iPhone. (a) The user physically rotates around the fixed object while pointing the iPhone toward it. (b) The user stands at a fixed position and rotates the object.

Another illustration of the spatial immersion enabled by holography can be appreciated in Fig. 3(a), which shows iPhone captures as the user physically walks around the tetrahedron. Equally impactful to perception was moving around and viewing the object from above or below when it was not placed on a table but suspended at an arbitrary point in the air. Fig. 3(b) shows rotations of the object by a user standing at a fixed position, as one would with a traditional non-AR 2D display. Holographic scenes on AR-enabled devices make it easier to take in the full 360-degree view of the virtual object, whereas a standard display screen is not as effective.
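A minimal sketch of the fixed-position rotation in Fig. 3(b) follows, assuming a simple one-finger horizontal drag; the gesture mapping and rotation speed are illustrative, not necessarily the prototype's exact controls.

```csharp
// Minimal sketch: rotating the hologram while the user stays in place,
// driven by a one-finger horizontal drag.
using UnityEngine;

public class HologramRotator : MonoBehaviour
{
    [SerializeField] private float degreesPerPixel = 0.25f;

    void Update()
    {
        if (Input.touchCount != 1) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Moved) return;

        // Dragging left/right spins the structure about the world up-axis,
        // giving the fixed-position user the full 360-degree view.
        transform.Rotate(Vector3.up, -touch.deltaPosition.x * degreesPerPixel, Space.World);
    }
}
```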

 

To test the multi-user collaborative environment, we connected two users on different types of devices running the application to visualize the same space structure, a tetrahedron. User A connects using an AR-enabled device (iPhone), and User B connects using a non-AR device (desktop). The users start with a tetrahedron composed of bars and nodes at equilibrium. We color-coded each bar to make it easier to identify the bars and nodes. The users then remove randomly selected bars, one at a time, and the process is repeated until the tetrahedron collapses under its self-weight. Since both users see the same virtual structure, information regarding each bar removal is relayed to both users, and their virtual structure models are updated. Figure 4 shows a step-by-step comparison of the users' views after each bar removal; after each removal, we can see the status of the structure for both users. For this experiment, the only force acting on the space structure is gravity.

Fig. 4: Step-by-step comparison of the users' views after each bar removal. User A on an AR device (iPhone) and User B on a non-AR device (Mac desktop). Sequence of removed bars: "Black" by User A, "Orange" by User B, and "Blue" by User A.
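The sketch below illustrates how such a bar removal could be relayed to every user and how gravity-only physics could then act on the remaining bars, assuming Photon PUN 2; the bar identifiers, method names, and component layout are illustrative assumptions rather than the prototype's actual code.

```csharp
// Minimal sketch, assuming Photon PUN 2: relaying a bar removal to all users
// and letting gravity (the only simulated force) act on the remaining structure.
using Photon.Pun;
using UnityEngine;

public class TetrahedronStructure : MonoBehaviourPun
{
    // Called locally when this user selects a bar to remove.
    public void RequestRemoveBar(string barName)
    {
        // Buffered RPC so every connected user (including late joiners)
        // applies the same removal to their copy of the structure.
        photonView.RPC(nameof(RemoveBar), RpcTarget.AllBuffered, barName);
    }

    [PunRPC]
    private void RemoveBar(string barName)
    {
        Transform bar = transform.Find(barName);
        if (bar == null) return;
        Destroy(bar.gameObject);

        // With the bar gone, let gravity act on the remaining bars and nodes.
        foreach (Rigidbody rb in GetComponentsInChildren<Rigidbody>())
        {
            rb.isKinematic = false;
            rb.useGravity = true;
        }
    }
}
```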

 

It was observed that the time delay in communication between users was negligible: a bar removal by one user appeared on the other user's structure virtually instantaneously. As the communication is bi-directional, both users took turns removing bars, which further underscores the collaborative nature of the application.


While the prototype of the system described herein demonstrates the potential of computer-generated holography in architectural education, we have not yet incorporated a comprehensive range of physical properties into the simulated structure (material properties and forces other than gravity). Incorporation of such capabilities is a work in progress, and we feel that their absence does not limit the merit of this proof-of-concept prototype system. Notably, performing such demanding computations on servers, and not on hand-held devices (tablets and smartphones) or tetherless head-mounted devices (like the HoloLens), is a necessary feature enabled by the framework used [2].
