Multicamera Real-Time 3D Modeling

Multicamera Real-Time 3D Modeling for Telepresence and Remote Collaboration:
Teleimmersion is critical for the next generation of live, interactive 3DTV applications broadcast in real time. It refers to the capability of embedding individuals from different geographical locations into a common virtual world. In such scenarios, giving users a genuine impression of 3D telepresence, together with interaction capabilities, is essential. Several existing technologies already provide 3D experiences of real-world scenes, rendered in 3D and sometimes with free-viewpoint capabilities. Live 3D teleimmersion with interaction across several sites, however, remains a difficult goal to achieve. This is largely due to the difficulty of building and transmitting models that carry sufficient information for such applications. Several aspects must be considered: not only visual or transmission aspects, but also the fact that such models must feed 3D physical simulations, which are essential for interaction. In this study, we address these difficulties and present a comprehensive framework that allows distant people to share a single collaborative and interactive environment with full physical presence.

Interest in virtual immersive and collaborative environments is growing across a wide range of application domains, including interactive 3DTV broadcasting, video gaming, social networking, 3D teleconferencing, collaborative manipulation of CAD models for architectural and industrial processes, remote learning and training, and other collaborative tasks such as civil infrastructure or crisis management. Such environments rely heavily on their ability to create a virtualized version of the scene of interest, for instance 3D representations of the people involved. The majority of existing systems rely on 2D representations obtained from mono-camera systems.
While these 2D representations give a largely faithful picture of the user, they do not allow for natural interactions, such as occlusion-consistent visualisation, which require three-dimensional descriptions. Other systems better suited to 3D virtual environments, such as massively multiplayer games like Second Life, use avatars instead. Avatars, however, carry only a fraction of the information about their users, and although real-time motion capture can improve such models and animate them, avatars do not yet provide sufficiently realistic representations for teleimmersive applications.
To enhance the sense of presence and realism, models that combine photometric and geometric information should be explored. They provide more realistic representations that capture user appearance, actions, and sometimes even facial expressions. Multicamera systems are frequently studied for building such 3D human models. In addition to appearance, they can provide a hierarchy of geometric representations, from 2D to full 3D: 2D-plus-depth representations, multiple-view representations, and full 3D geometry, all grounded in photometric information. 2D-plus-depth representations are viewpoint dependent; while they allow 3D visualisation and, to some extent, free-viewpoint visualisation, this viewpoint dependence limits them. Furthermore, they are not intended for interactions, which often require complete shape information rather than partial and discrete representations. Multiple-view representations, i.e. views taken from several perspectives at the same time, overcome many of the constraints of 2D-plus-depth representations. In particular, combined with view interpolation techniques, they improve the free-viewpoint capabilities of the system; however, the quality of the interpolated view degrades rapidly when novel viewpoints far from the original ones are considered. As with 2D-plus-depth representations, only limited interactions can be expected. Full 3D geometry descriptions, on the other hand, carry more information and therefore allow unrestricted free viewpoints and interactions. Teleimmersion has already been demonstrated with them.
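The viewpoint dependence of 2D-plus-depth representations can be made concrete with a minimal sketch: under a standard pinhole camera model, a depth image unprojects only to the 3D points visible from that one camera, so surfaces occluded from that viewpoint are simply absent. The function and intrinsic parameter names below are illustrative, not from the system described here.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a 2D-plus-depth image into camera-space 3D points
    using the standard pinhole model (illustrative sketch only).
    Pixels with zero depth carry no geometry and are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # back-project along the pixel ray
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # only points seen from this viewpoint survive

# Toy 2x2 depth map: one pixel has no depth, so the reconstructed
# partial shape has only three points, all on camera-facing surfaces.
d = np.array([[1.0, 1.0],
              [0.0, 2.0]])
pts = depth_to_points(d, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The missing point illustrates why such partial, discrete representations cannot feed interactions that need the whole shape: everything behind or outside the captured view is unknown.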
In real-time systems, existing 3D human representations frequently suffer from restrictions such as incomplete or coarse geometric models, low-resolution texturing, and low frame rates. This is often caused by the complexity of the 3D reconstruction method, for instance stereovision or visual hull approaches, and by the number of cameras involved.
This article describes Grimage, a complete real-time 3D modelling system, expanding on earlier conference publications. Using the EPVH modelling algorithm, which computes a 3D mesh of the observed scene from segmented silhouettes, the system reliably produces an accurate shape model of moving users and objects at real-time frame rates. Texturing this mesh with the photometric data contained in the captured silhouettes gives it visual presence in the 3D world. The mesh also enables physical presence: it is fed into a physics engine that computes the collisions and reaction forces to be applied to virtual objects, allowing the user to act mechanically on them.

Another contribution is the implementation of the pipeline on a parallel architecture that is both flexible and adaptable. For this we rely on FlowVR, a middleware designed specifically for distributed interactive applications. The application is organised as a hierarchy of components, whose leaves are the computation tasks. This component hierarchy provides a high level of modularity, which greatly eases system maintenance and upgrades. During a preprocessing phase, the actual degree of parallelism and the mapping of tasks onto the nodes of the target architecture can be derived from simple data such as the list of available cameras. The runtime environment transparently handles all data transfers between tasks, whether or not they run on the same node. Embedding the EPVH algorithm in this parallel framework makes interactive execution speeds achievable without sacrificing accuracy.
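The geometric criterion behind silhouette-based reconstruction can be sketched as follows. EPVH itself computes an exact polyhedral visual hull from silhouette contours; the snippet below shows only the simpler, discretised version of the same idea (a voxel-style carve), with hypothetical camera projection callables standing in for real calibrated cameras.

```python
import numpy as np

def carve_visual_hull(grid_pts, cameras, silhouettes):
    """Discretised visual-hull criterion (not EPVH itself): keep the
    candidate 3D points whose projection falls inside the silhouette
    of every calibrated camera. `cameras` are callables mapping
    (N, 3) points to (N, 2) pixel coordinates."""
    inside = np.ones(len(grid_pts), dtype=bool)
    for project, sil in zip(cameras, silhouettes):
        uv = project(grid_pts)
        h, w = sil.shape
        u = np.clip(uv[:, 0].astype(int), 0, w - 1)
        v = np.clip(uv[:, 1].astype(int), 0, h - 1)
        on_sil = (sil[v, u]
                  & (uv[:, 0] >= 0) & (uv[:, 0] < w)
                  & (uv[:, 1] >= 0) & (uv[:, 1] < h))
        inside &= on_sil  # a point must be silhouette-consistent in ALL views
    return grid_pts[inside]

# Toy setup: a 4x4x4 grid of candidate points and two orthographic
# cameras (top view projecting (x, y), front view projecting (x, z)),
# each seeing a 2x2 square silhouette.
xs = np.arange(4)
grid = np.array([(x, y, z) for x in xs for y in xs for z in xs], float)
sil = np.zeros((4, 4), dtype=bool)
sil[1:3, 1:3] = True
cams = [lambda p: p[:, [0, 1]], lambda p: p[:, [0, 2]]]
hull = carve_visual_hull(grid, cams, [sil, sil])
```

With two views the carve keeps the central 2x2x2 block of points; the resulting shape (a mesh, in EPVH's case) is what gets textured for visual presence and handed to the physics engine for collision computation.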
The framework allowed us to conduct a number of experiments using one or two modelling platforms.