
1. Faculty of Art and Media, Jinzhong College of Information, Taigu 030800, China (Yang_Zhou92@outlook.com)



Keywords: Web, MySQL, VRML, External Authoring Interface (EAI), Interior design

1. Introduction

The wide application of Web technology enables users to meet their respective business requirements at low cost, without limitations of time and space [1], and the sharing of interior design work can be realized over the network. Interior design systems based on Internet technology are already in use. Under the guidance of the smart-city concept, future urban development places higher demands on urban management and planning, and detailed indoor 3D modeling technology is a necessary condition and an important technical component of modern urban management. Two key problems arise in indoor modeling: rapid 3D modeling and the accuracy of the spatial model [2]. However, the functions and structures of modern cities tend to be diversified and complicated, and the data sources and spatial accuracy of existing underlying databases cannot meet the requirements of fast, realistic, and accurate modeling.

At present, in terms of protection paths, Zhang et al. combined technology empowerment, governance empowerment, and legal empowerment to jointly promote the proper protection of data on Internet enterprise platforms [3]. Zhu proposed a novel interior design framework based on virtual reality technology, whose workflow consists of three steps: hard decoration design, soft decoration design, and visual design [4]. Hrovatin et al. studied the development of a sensor network aimed at detecting human falls, naming this sensor network Smart Floor [5]. Wei et al. used a high-definition camera as a data acquisition device to rapidly construct a detailed model of indoor 3D real-world scenes with guaranteed shooting continuity, distance to the target, and a certain degree of image overlap, and developed a platform for an indoor 3D real-scene display system [6]. Duo et al. proposed an improved k-means text clustering algorithm in which the iterative class centers consist of topic feature vectors, which avoids the influence of noise [7].

The purpose of this study is to make innovative improvements on the basis of the B/S architecture, to propose a Web-based interactive VR scene collaborative display system for interior design, and to propose a panoramic camera stitching scheme. First, the B/S architecture is used to share and exchange collaborative instructions between the client and the server, and the XML data transfer format together with an Ajax polling mechanism is used to share and transfer the 3D model data over the network. Starting from virtual reality and network database technology, a Web-based VR display system is developed using the virtual reality modeling language VRML, Java, MySQL, and ASP. The application server is implemented in Java, and the client consists of a Java Applet embedded in an HTML page, a VRML reader plug-in, and part of an HTML page. The Applet and the VRML reader communicate through an External Authoring Interface (EAI). Exploring the image stitching principle of the stitched panoramic camera, we propose an implementation scheme for spherical panoramic stitching on mobile devices, using the gyroscope rotation-angle code and video screen rendering, and finally combine it with a timestamp-based synchronous decoding algorithm for multiple video streams to realize a panoramic visualization experience on mobile devices.

2. Web-based collaborative editing method based on B/S architecture

Because the traditional local 3D scene editor supports only single-user local browsing and editing, this chapter proposes a 3D scene editing method that allows multiple users to work synchronously from different locations over the network, based on the key technology of collaborative editing under the B/S architecture. The network collaborative editing technology is the focus of this chapter: it shares and exchanges collaborative instructions between the client and the server through the B/S architecture, and uses the XML data transmission format and an Ajax polling mechanism to realize the network sharing and transmission of 3D model data.

2.1 XML-based 3D model data transfer

XML (Extensible Markup Language) is derived from the Standard Generalized Markup Language (SGML) and is a standard data-format description language [8]. XML can be used to label data, define data types, and provide a unified method of describing structured data in applications; it is an effective tool for dealing with distributed structural information in the Internet environment [9]. Back in 1998, the W3C published the XML 1.0 specification, which was used to simplify the transfer of document information over the Internet.

The structured information description document of XML can be seen as a tree consisting of many elements, each of which may contain several attributes. For the 3D model used in this paper, in order to reduce the amount of data transmitted over the network, XML is used for the structured information description of the 3D model, i.e., the model shape, voxels, data, and other information are mapped to elements and attributes in XML. To enable collaborative editing between different systems, the description should use feature units that all participating 3D scene editing systems support.
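As an illustration, the following minimal sketch shows how such a structured description might look and how a client could read it back with the browser's DOMParser; the element and attribute names (model, transform, material) are assumptions for illustration rather than the exact schema used by the system.

// Hypothetical XML description of one scene object: only the structured parameters
// (shape type, placement, material) travel over the network, not the mesh itself.
var modelXml =
    '<model id="sofa01" shape="box">' +
    '<transform x="1.2" y="0" z="-0.5" rotY="90"/>' +
    '<material diffuse="0.8 0.2 0.2"/>' +
    '</model>';

var doc = new DOMParser().parseFromString(modelXml, 'application/xml');
var model = doc.documentElement;
var transform = model.getElementsByTagName('transform')[0];

// Rebuild the local scene object from the parsed elements and attributes.
console.log(model.getAttribute('id'),
            parseFloat(transform.getAttribute('x')),
            parseFloat(transform.getAttribute('rotY')));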

In streaming mode, the 3D model file is first specially compressed and then decomposed into compressed packages, which the server transmits to the client continuously and in real time. In systems that use streaming transfer, users usually only need to wait a start-up delay of a few seconds or tens of seconds before the compressed files can be decompressed and browsed on the computer with the corresponding hardware and software [10]. If the file occupies a large space, it is decomposed into multiple compressed packages during compression, and the receiving user can first decompress part of them while the remaining part continues to be transmitted by the background server. The streaming transmission method reduces the user's waiting time and greatly improves efficiency.
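A minimal browser-side sketch of this idea is shown below, assuming the server exposes the compressed model at a hypothetical URL and that the streaming fetch API is available; the chunk callback stands in for the decompression step.

// Consume a compressed model file progressively instead of waiting for the
// whole download; each received chunk is handed to the decompressor at once.
async function streamModel(url, onChunk) {
    const response = await fetch(url);
    const reader = response.body.getReader();
    for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        onChunk(value);   // Uint8Array with the next piece of the compressed package
    }
}

// Usage with a hypothetical path: browsing can start as soon as the first
// packages have been decompressed, while the rest keeps arriving.
streamModel('/models/room.xvlz', function (bytes) {
    console.log('received', bytes.length, 'bytes');
});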

2.2 Implementation of multi-client collaborative editing

Before the advent of Ajax technology, Web applications communicated in a very cumbersome way, generally going through the steps of submission, waiting, and reloading. Whenever a user submitted an HTTP request to the server, he or she had to wait for the server to process the request and return information [11]. The user then received the information and reloaded the Web page, so the user's actions were always synchronized with the server's response processing. This style of communication wastes a lot of bandwidth, because most of the HTML code in different page requests from the same user is identical. In addition, the user has to send page requests to the server frequently and wait for the server to process them, resulting in unnecessary waiting time.

To solve these problems, Ajax provides a mechanism for communicating with the server asynchronously. An Ajax engine written in JavaScript is introduced instead of direct interaction between the client and the server [12]; the engine acts like a router placed between the client and the server and is mainly responsible for handling the user interface display and the communication with the server. The user no longer has to wait for the server's processing at every step, and part of the work can be handled on the client side. Ajax applications retrieve only the necessary data from the server when requesting a page, which reduces waiting time and the amount of transferred data, and the server's responses and data can be processed with JavaScript on the client side. The specific processing flow is shown in Fig. 1(a).

The key to the implementation of Ajax is the XMLHttpRequest object, which provides an asynchronous communication method for JavaScript scripts in the browser and allows the client's Web page to obtain the latest real-time data from the server without refreshing. The collaborative editing of 3D scenes between multiple users in this paper is achieved with Ajax's long-polling mechanism. By creating an XMLHttpRequest object, the client can send information about local operations to the server and obtain real-time data from the server asynchronously. The server then forwards the message to the other clients [13], and those clients update their pages asynchronously according to the received data, finally synchronizing the browser interfaces of all users.
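A minimal sketch of this mechanism is shown below; the endpoint paths, the XML message format, and the applyRemoteOperation handler are illustrative assumptions rather than the system's actual interface.

// Send the local edit to the server as an XML-described operation.
function sendLocalOperation(xmlOperation) {
    var req = new XMLHttpRequest();
    req.open('POST', '/scene/operation', true);
    req.setRequestHeader('Content-Type', 'application/xml');
    req.send(xmlOperation);
}

// Long poll: the server holds the request open until another client has edited
// the scene, then answers with that operation; the client re-issues the poll.
function waitForRemoteOperations() {
    var req = new XMLHttpRequest();
    req.open('GET', '/scene/updates', true);
    req.onreadystatechange = function () {
        if (req.readyState === 4) {
            if (req.status === 200 && req.responseXML) {
                applyRemoteOperation(req.responseXML);   // update the local 3D scene
            }
            waitForRemoteOperations();
        }
    };
    req.send(null);
}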

In order to meet the requirement of multi-user page synchronization, the client needs to obtain the current users' real-time operation information from the proxy server through a corresponding mechanism and update the page according to the information returned by the proxy server [14]. The simplest way to achieve real-time data exchange and event-handling responses on the client side is HTML Refresh technology, in which the client sends requests to the server without interruption; as soon as the server has a data update, the client obtains the latest data through the response. However, this method raises a question that needs careful consideration: how to choose the client's refresh frequency. When the refresh frequency is too high, updated data can be obtained promptly, but the server load increases and bandwidth is wasted unnecessarily; when the refresh frequency is too low, the timeliness of the client data cannot be guaranteed [15].

In order to meet the system requirements, we use Ajax's polling mechanism to solve this problem. As shown in Fig. 1(b), the client sends an asynchronous HTTP request by calling an XMLHttpRequest object from a JavaScript script. The server receives the request and calls the corresponding processing function to handle the user's real-time collaboration request; the request simply returns without data when the server has nothing to update [16]. In this way, the server's response can be obtained in real time without any explicit refresh or request action by the user, and the latest data in the collaborative editing session is fetched transparently. Ajax technology achieves multi-user asynchronous communication with good real-time behavior and low bandwidth requirements, and it is supported by all common browsers today.

Through client-side JavaScript, requests are made to the proxy server periodically at the system-defined interval T. Once a data update is detected on the proxy server, the handler function for the update event is automatically invoked through the client-side JavaScript. The DOM of the response is then parsed to obtain the server's update data, so that the page is updated in real time.
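For this fixed-interval variant, a short sketch is given below; the interval value, the endpoint path, and the element name 'operation' are assumptions for illustration.

// Poll the proxy server every T milliseconds; if the response carries updates,
// parse the XML DOM and apply each operation to the local page and scene.
var T = 500;
setInterval(function () {
    var req = new XMLHttpRequest();
    req.open('GET', '/scene/updates', true);
    req.onload = function () {
        if (req.status !== 200 || !req.responseXML) return;   // nothing new this round
        var ops = req.responseXML.getElementsByTagName('operation');
        for (var i = 0; i < ops.length; i++) {
            applyRemoteOperation(ops[i]);
        }
    };
    req.send(null);
}, T);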

Fig. 1. Ajax's asynchronous communication and polling mechanism.


2.3 Implementation of other collaborative functions

The basic idea of the DR (Dead Reckoning) algorithm is that, during simulation, a low-order approximation model (DR model) reflecting the motion behavior of the sending node's simulated entity is placed in the receiving node, and the sending node also keeps its own copy of the DR model. In this way, the sending node does not have to send its state to the receiving node in every simulation cycle; it sends the current state information only when the deviation between its real state and the state predicted by the DR model exceeds a predefined threshold [17]. Between two state updates, the receiving node uses the DR model to predict the possible state of the sending node's simulated entity from the previously reported information. This greatly reduces the information transmitted between simulation nodes, cutting the communication volume to roughly 50%-90% of its previous level, and thus greatly reduces the simulation's demands on network bandwidth and capacity.

Let the position of the sending node be $x_{0}$, its velocity $v_{0}$, and its acceleration $a_{0}$ in the last state update message; the recursive equation of the first-order DR algorithm is

(1)
$ x_{k} =x_{0} +v_{0} kh . $

The recursive equation of the second-order DR algorithm is

(2)
$ x_{k} =x_{0} +v_{0} kh+\frac{1}{2} a_{0} (kh)^{2} , $

where $h$ is the frame length of the simulation and $k$ is the number of recursive steps from the last update.

Since the essence of the DR algorithm is to use a low-order model to approximate the real motion behavior of the moving object, it is suitable for cases where the motion trajectory changes gently.
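A minimal sketch of the recursion in Eqs. (1) and (2) and of the sender-side threshold test is shown below; the variable names and the example threshold are assumptions for illustration.

// Second-order DR prediction, Eq. (2); setting a0 = 0 gives the first-order form of Eq. (1).
function drPredict(x0, v0, a0, k, h) {
    var t = k * h;                        // time elapsed since the last state update
    return x0 + v0 * t + 0.5 * a0 * t * t;
}

// The sending node transmits a new state only when its real position drifts
// from the DR prediction by more than the agreed threshold.
function shouldSendUpdate(realX, x0, v0, a0, k, h, threshold) {
    return Math.abs(realX - drPredict(x0, v0, a0, k, h)) > threshold;
}

// Example: last report x0 = 0 m, v0 = 2 m/s, a0 = 0, frame length h = 0.1 s.
// After k = 5 frames the prediction is 1.0 m, so an update is sent only if the
// real position deviates from 1.0 m by more than the chosen threshold.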

The problem of compressing 3D data models is solved with the advent of XVL, and the transfer of 3D data becomes easier over the network. However, the system needs a platform for designers to communicate with each other, and the following describes the specific browsing synchronization process for two computers by deploying a server.

First, the key technologies introduced in this chapter are used to build a collaborative editing platform, modify the platform commands of the 3D scene editor, and publish the platform commands as a Unity Web Player plug-in via Visual Studio.

<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>A C Cloud</title>
<script type="text/javascript" src="jquerymin.js"></script>
<script type="text/javascript">
function QianGuYiZhen() {
}
<!--
var unityObjectUrl = "UnityObject2.js";
if (document.location.protocol == 'https:')
    unityObjectUrl = unityObjectUrl.replace("http://", "https://ssl-");
document.write('<script type="text/javascript" src="' + unityObjectUrl + '"><\/script>');
-->
</script>

3. WEB-based interactive VR system for interior design

A Web-based VR display system is a computer information system that integrates, processes, displays, and applies virtual reality in an Internet or network environment. Its basic idea is to provide a virtual reality environment on the Internet, allowing users to browse through a browser and to access the system's data and functional services within that virtual environment [18]. In this paper, starting from virtual reality and Web database technologies, a Web-based VR display system is developed using the virtual reality modeling language VRML, Java, MySQL, and ASP. With this display system, a realistic virtual display of products can be presented through the Internet, and the realism of the interior design can be experienced through interactive operations.

3.1 VR system framework

The system's network structure adopts the B/S structure: users roam freely through the 3D exhibition hall in the client browser and customize the exhibits interactively in real time, and the client can adjust its own structure dynamically according to the exhibit information, which makes it easy to extend. The application server provides the VRML scenes and data access, and the data server provides all exhibits in the exhibition hall. The B/S structure simplifies the client software, and all development, maintenance, and upgrades can be centralized on the server side [19]. The B/S structure requires only the TCP/IP protocol for communication. In the B/S structure, version upgrades and maintenance are performed on the Web server side and are downloaded dynamically only when the user needs access, ensuring that the user always runs the latest version. The software architecture of the system is based on Web technology, virtual reality technology, and database technology. The application server is implemented in Java, and the client is a Java Applet embedded in an HTML page, a VRML reader plug-in, and part of an HTML page. The Web server receives client requests, interprets them and connects to the database via JDBC, queries the exhibits, and returns the results to the client. The software architecture is shown in Fig. 2.

Fig. 2. VR interactive system framework.


3.2 Java and VRML interaction control

To enable interaction between the VRML world and the external environment, VRML provides an external programming interface, the EAI (External Authoring Interface). This interface consists of a set of functions for operating on the browser which, when called, allow external programs to affect objects in the VRML world.

In the Java language, these functions are encapsulated in three packages: vrml.eai.*, vrml.eai.event.*, and vrml.eai.field.* (or vrml.external.*, vrml.external.field.*, and vrml.external.exception.*). First [21], getBrowser(), encapsulated in the vrml.eai.BrowserFactory class, is called to obtain a Browser instance (note that this class provides three methods to obtain a Browser instance: getBrowser(Applet), getBrowser(Applet, String, int), and getBrowser(InetAddress, int); the first two are local calls, while the last one supports remote calls and requires an IP address and a port number). Then, with the resulting Browser instance, the Browser.getNode(String) function is called (this function cannot access nodes brought into the VRML scene through the Inline node). Finally, getEventIn() and getEventOut(), encapsulated in the vrml.eai.Node class, are called. This gives access to the fields of each node in the VRML scene and allows the node's state to be changed as required, thus enabling interactivity.

Using the Browser and Node classes provided by the vrml.eai package, the VRML world can be manipulated directly by external EAI applications without Script nodes, Script classes, or routing, which greatly enriches the functionality of VRML. However, this method also has its limitations: it can only control the VRML world through Java Applets, because the parameter of the getBrowser(Applet) function can only be an Applet instance [22].

External programming in Java is implemented through the EAI (External Authoring Interface), which allows access to the currently running VRML world, so that the scenes inside the VRML world can be manipulated, controlled, and modified directly from the outside. This work uses Java to directly control the addition, deletion, scaling, panning, and rotation of VRML scene objects, as well as to modify textures.

To make the VRML scene flexible and complete, it is often necessary to control VRML from the Applet and to add and delete nodes dynamically. To add nodes dynamically, the createVrmlFromString() method of the Browser class is used. The argument to this method is a VRML-compliant string, and it returns an array of Node instances. These Node instances are generated in the external environment of the VRML scene. An Applet can then add them as children of a Transform node by sending the addChildren eventIn of that Transform node.

For changing the material of a scene model, a node named ConeColor is defined in the VRML scene and then accessed in the Java program with the browser.getNode("ConeColor") statement. Here, browser is an instance of the Browser class, which is a Java wrapper around the VRML scene. It contains not only various methods for obtaining information about the current browsing environment but also the getNode() method, which returns an instance of the Node class when passed a string parameter. This Node instance is the Java-side counterpart of the node with that name in the VRML scene. Obtaining the Node instance means obtaining a handle to the node, which can then be accessed.

3.3 Interactive VR system design

First, the virtual display system is planned as a whole. According to the characteristics of the enterprise and the displayed goods, the positioning of consumer groups, and the purpose of the system, the site space is planned overall, including functional module division, product classification, and product display modes. Human-computer interaction theory should also be considered at this stage.

Then the design style and color tone of the virtual display are determined so that they match the characteristics of the enterprise, the displayed goods, and the positioning of consumer groups. The design style should also relate to the real-world image of the enterprise [23].

Finally, the local design intentions of the virtual display are proposed. The local design should closely follow the overall design plan of the display and includes parts such as the shopping cart and registration/login.

The first task of system planning and design is to determine the core part of the system and its module composition. The whole system adopts a structured design approach, and the system modules are designed and developed according to the principle of overall planning and distributed implementation. As the system's software and hardware conditions gradually improve, subsystems and sub-modules can be gradually refined and added, so that the entire system can truly be decomposed layer by layer with a rigorous structure.

The web-based virtual display system is applied to a new interactive platform for interior design, which is designed and implemented based on customer needs, display design, human-computer interaction theory and computer technologies such as network, virtual reality and database. The workflow of establishing the system is shown in Fig. 3.

The overall plan of this virtual display system is to combine VRML and Java to establish browser-based product display and configuration. The VRML browser plug-in Cortona Client, developed by ParallelGraphics, is installed to display VRML scenes, and a Java Applet embedded in the same interface provides the human-computer interaction that controls the VRML scene and receives feedback from it, realizing online 3D dynamic display of commercial products. At the same time, the system realizes dynamic interaction between the user and the virtual product model on the Web, such as scaling, rotation, movement, and modification of attributes such as color and material, as well as product model animation, sound, and other functions.

The system adopts the B/S model, and for the adopted system model, the ASP language is chosen as the system development environment to create dynamic WEB pages or generate powerful WEB applications. ASP can be used to add interactive content to WEB pages or to compose entire WEB applications with HTML pages. The combination of VRML and Java enables control interaction with virtual product models.

The display system has a three-level architecture: browser - Web server - database server. Because TCP/IP, the HTTP server, and the database communication protocol are encapsulated, clients on different networks and machines have a unified access interface. The system framework is shown in Fig. 4(a).

In this architecture, the client user can make an interactive request to the server through the client browser, and the server side will respond to the client's request and return the corresponding information from the database to the client to achieve an interactive response. The technical framework for implementing the system is shown in Fig. 4(b).

The browser side is a VRML and Java Applet embedded in the same page, and the VRML files are displayed through VRML plug-ins such as Cortona. The server side is a WEB server and a standalone Java application, also known as an HTTP server, which is responsible for passing the appropriate multimedia data between the browser and the data server for file distribution. The Java application gets the request for data manipulation and gets the relevant data from the database through JDBC, then converts the data to HTML and returns it to the client.

Fig. 3. System workflow.


Fig. 4. System architecture and technical framework diagram.


4. Implementation and testing of a WEB-based interactive system

This chapter first explores the image stitching principle of the stitched panoramic camera and proposes an implementation scheme for spherical panoramic stitching on the mobile terminal, using the gyroscope rotation-angle code and video screen rendering, and finally combines it with a timestamp-based synchronous decoding algorithm for multiple video streams to realize an interactive system for the Web.

4.1 Main algorithm implementation

The classes related to sensors provided by this system are shown below:

VR sensor: Variable indicating the sensor information of the virtual reality panoramic vision, used to save the sensor's position, determine the accuracy and other information.

Sensor Orientation: indicates the sensor orientation, and is used to determine the information of the sensor to obtain the orientation.

A function phoneVR() is declared; when the gyroscope parameters are set, orientationIsAvailable and rotationQuat are used to call this function, so that the correct screen orientation is obtained once the gyroscope rotation angle has been determined.

The main code for the gyroscope sensor rotation angle is as follows:

var screenOrientation = (util.getScreenOrientation() * degToRad) / 2;
var screenTransform = [0, 0, -Math.sin(screenOrientation), Math.cos(screenOrientation)];
var deviceRotation = quat.create();
quat.multiply(deviceRotation, deviceQuaternion, screenTransform);
var r22 = Math.sqrt(0.5);
quat.multiply(deviceRotation, quat.fromValues(-r22, 0, 0, r22), deviceRotation);

Using the Eigen library function, the rotation matrix is converted to Euler angles by calling the rotationTransfer.py script.

Since the panorama roaming project represents coordinates in a left-handed coordinate system, the rotation matrix in the right-handed coordinate system is first converted to the left-handed system, and Eigen's rotation_matrix.eulerAngles function is then called to convert the rotation matrix to Euler angles.
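A hedged JavaScript sketch of this conversion step is shown below (the actual implementation calls Eigen in the offline script): the z axis is flipped, which is one common handedness convention, and Z-Y-X Euler angles are then extracted; gimbal lock is not handled here.

// Flip the z axis to move a rotation matrix from the right-handed to the
// left-handed frame: R_left = S * R_right * S with S = diag(1, 1, -1).
function rightToLeftHanded(m) {
    return [
        [ m[0][0],  m[0][1], -m[0][2]],
        [ m[1][0],  m[1][1], -m[1][2]],
        [-m[2][0], -m[2][1],  m[2][2]]
    ];
}

// Extract Z-Y-X Euler angles (yaw, pitch, roll) in radians from a 3x3 rotation
// matrix given as an array of rows.
function matrixToEulerZYX(m) {
    var yaw   = Math.atan2(m[1][0], m[0][0]);
    var pitch = Math.asin(-m[2][0]);
    var roll  = Math.atan2(m[2][1], m[2][2]);
    return [yaw, pitch, roll];
}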

The implementation of the video rendering process operates on both the projection to the surface texture and the drawing of the graphics, both of which are performed using shaders. The main implementation code is as follows.

webGL.gl.activeTexture(webGL.gl.TEXTURE0);
webGL.gl.bindTexture(webGL.gl.TEXTURE_2D, texture);
webGL.gl.uniform1i(shader.uniforms['uSampler'], 0);
webGL.gl.uniform1f(shader.uniforms['eye'], eye);
webGL.gl.uniform1f(shader.uniforms['projection'], projection);

The main implementation code of the drawing is as follows.

webGL.gl.bindBuffer(webGL.gl.ELEMENT_ARRAY_BUFFER, verticesIndexBuffer);
webGL.gl.drawElements(webGL.gl.TRIANGLES, 6, webGL.gl.UNSIGNED_SHORT, 0);

4.2 Implementation of a Web-based VR system

This paper develops a mobile panoramic video player by combining a synchronous decoding and rendering algorithm for multiple PES video streams with four-channel image stitching and fusion technology. Considering the processing speed of current mobile chips, the player can clearly play the panoramic video streams recorded by the stitched panoramic camera. It smoothly plays the following types of stitched panoramic video streams: $176\times144\times8$, $352\times288\times5$, $352\times288\times8$, and $704\times576\times8$. The first two factors of each entry represent the resolution of each video stream, and the last factor represents the number of cameras in the stitching camera [24]; e.g., $176\times144\times8$ denotes the video streams of an 8-camera stitched panoramic camera, where the resolution of each stream is $176\times144$. In summary, the player supports the playback of panoramic video streams up to about 3 megapixels ($704\times576\times8$).
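The timestamp-based synchronization idea can be sketched as follows; the frame-queue data structure and the choice of master clock are simplifying assumptions for illustration.

// Each camera stream keeps a queue of decoded frames tagged with their PES
// timestamps; for every stream the renderer takes the newest frame whose
// timestamp does not run ahead of the shared master clock, so all channels
// handed to the stitching stage belong to (almost) the same instant.
function pickSynchronizedFrames(streamQueues, masterTime) {
    return streamQueues.map(function (queue) {
        var chosen = null;
        while (queue.length > 0 && queue[0].pts <= masterTime) {
            chosen = queue.shift();      // frames older than the clock are consumed
        }
        return chosen;                   // null: this stream has no frame ready yet
    });
}

In practice the master clock would typically follow one reference stream, and the frames returned for all channels are passed to the stitching and rendering stage together.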

The architecture diagram of the mobile-based panoramic video player is shown in Fig. 5.

Fig. 5. Flow chart of mobile-based panoramic video player.


In order to enhance the sense of interaction between users and panoramic video, this paper adds a gesture control module to the panoramic player. The specific experience effects are:

Single-finger mode: by dragging horizontally with one finger, the spherical panoramic video can be rotated 360 degrees around the Z axis to observe the horizontal 360-degree panoramic image; by dragging vertically with one finger, it can be rotated 180 degrees around the X axis to observe the vertical 180-degree panoramic image.

Two-finger mode: the distance between the two fingers on the screen, controlled by stretching and pinching, shrinks or enlarges the spherical panoramic video, with a zoom factor of about 8 times.

Combined single- and two-finger mode: by alternating single- and two-finger gestures, any area of the panoramic video can be zoomed in and out, achieving a 360-degree display with no blind angles.
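A hedged sketch of this gesture mapping with standard browser touch events is shown below; the canvas element, the rotatePanorama/zoomPanorama functions, and the sensitivity constant are assumptions for illustration.

var lastX = 0, lastY = 0, lastPinch = 0;

canvas.addEventListener('touchstart', function (e) {
    if (e.touches.length === 1) {
        lastX = e.touches[0].clientX;
        lastY = e.touches[0].clientY;
    } else if (e.touches.length === 2) {
        lastPinch = pinchDistance(e.touches);
    }
});

canvas.addEventListener('touchmove', function (e) {
    if (e.touches.length === 1) {
        // One finger: horizontal drag rotates about the Z axis, vertical drag about the X axis.
        rotatePanorama((e.touches[0].clientX - lastX) * 0.3,
                       (e.touches[0].clientY - lastY) * 0.3);
        lastX = e.touches[0].clientX;
        lastY = e.touches[0].clientY;
    } else if (e.touches.length === 2) {
        // Two fingers: the change in pinch distance scales the sphere (about 8x at most in the player).
        var d = pinchDistance(e.touches);
        zoomPanorama(d / lastPinch);
        lastPinch = d;
    }
    e.preventDefault();
});

function pinchDistance(touches) {
    var dx = touches[0].clientX - touches[1].clientX;
    var dy = touches[0].clientY - touches[1].clientY;
    return Math.sqrt(dx * dx + dy * dy);
}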

For the server-side design, we define the broadcast client set and its client number, define the model object and extract the VRML model data, define the object objects and extract object data from the virtual scene, apply for a service port, and establish the server-side Socket. The server then waits for client connections, updates the client number, and defines the object input and output streams. It reads the client identity marker and sends the model data and the object data.

BroadcastSet bs = new BroadcastSet(MAX_CLIENT_NUM);
ServerSocket serverSocket = null;
boolean listening = true;
int clientNumber = 0;
VrmlModel vm = new VrmlModel();
VrmlObject vo = new VrmlObject(MAX_OBJECT_NUM, vm);
serverSocket = new ServerSocket(4444);
while (listening)
{
    Socket socket;
    socket = serverSocket.accept();
    clientNumber++;
    new VrmlThread(socket, clientNumber, bs, vm, vo).start();
}
serverSocket.close();
ObjectInputStream in = null;
ObjectOutputStream out = null;
in = new ObjectInputStream(socket.getInputStream());
out = new ObjectOutputStream(socket.getOutputStream());
inputVrmlCommand = (VrmlCommand) in.readObject();
String clientType = inputVrmlCommand.getObjectName();
out.writeObject(vm);
out.flush();
out.writeObject(vo);
out.flush();

When a client logs in, the server determines its type. If the client is an ordinary client, it does not have permission to control the model objects directly [25], and its object output stream is simply added to the broadcast set. Exiting is equally simple: the client is removed from the broadcast set.

bs.clientOutSet.add("client" + clientNumber, out);
inputVrmlCommand = (VrmlCommand) in.readObject();
bs.willRemove("client" + clientNumber);

If the client is judged to be the master client, it is granted broadcast permission as the master client and its control commands are read. It can also connect to the database through JDBC, so that model objects can be dynamically added back to the scene, enriching the variety of scene objects.

DBConnect driver = new DBConnect();
Connection con = null;
ResultSet rs = null;
Statement sta = null;
String sql = null;
driver.setURL("jdbc:odbc:VRML");
con = driver.getConnection();
sta = con.createStatement();
sql = "select * from model";
rs = sta.executeQuery(sql);
bs.mainClient = out;
bs.broadcastVrmlCommand(inputVrmlCommand);
inputVrmlCommand = (VrmlCommand) in.readObject();

4.3 System Testing

This section verifies the feasibility of Web VR stereo panoramic roaming system for creating stereo panoramic roaming in Web pages through testing. The basic function of the system is tested to verify the functional implementation of each module of the roaming generation and display system. The camera pose estimation module is used to verify the ability and effectiveness of the panoramic pose estimation algorithm to solve the pose of the panoramic camera in the panoramic space. The ability and effectiveness of the monocular panoramic depth estimation algorithm in solving the depth information of panoramic images through the dual projection fusion method is verified by the panoramic depth estimation module, and the feasibility of the scheme is further verified by performance tests.

The camera pose estimation algorithm used in the system is tested. Three sets of panoramic images are selected and their poses are estimated separately, yielding comparison data between the estimated and real values of position and rotation angle for the three sets; the real values are measured in the scene with physical devices. The position data are presented as scatter plots, and since the ground hot spots that mark the locations of the panoramic shots are set at ground height, the y-axis values are discarded and only the x and z axes are considered.

As can be seen from Table 1, the position estimates overlap well with the scatter plot of the true values, and the rotation angle estimates are similar in value to the true values, meeting the system requirements for panoramic positional estimation.

As can be seen in Fig. 6, as the number of concurrent users gradually increases, the system pressure gradually increases, the average response time increases, and the throughput first increases and then decreases. The throughput reaches the maximum of the system design range when the number of concurrent users is 250, after which the throughput gradually decreases when the number of concurrent users exceeds the system design range. Comparing the stereo panoramic roaming and the ordinary panoramic roaming, there is no significant difference in the response time and throughput, which indicates that both have almost the same access performance and meet the performance requirements of the stereo panoramic roaming scheme.

There is no significant difference in response time between stereo panorama tour and ordinary panorama tour because the steps of calculating camera pose and depth map of panoramic image are completed in the production process of stereo panorama tour after uploading panoramic image. Users can read camera pose data and depth image storage path directly from the database when accessing the stereo panorama tour, and then import depth image for rendering.

As shown in Fig. 6, the computation time of depth map for 12 sets of panoramic images ranges from 10s to 15s, and the average computation time is 12.19s, which meets the system requirements. In addition, for the input panoramic images with different resolutions, the pre-training model first converts them to $1024\times 512$ resolution images before depth estimation. Therefore, the images with different resolutions have almost no effect on the computation time of the depth map.

From the above analysis, when addressing scalability challenges associated with larger data sets or more concurrent user interactions, a set of mitigation strategies is usually required to keep the system stable and efficient. The main strategies are as follows. Vertical scaling: enhance the processing power of a single server, for example by upgrading the CPU, memory, and storage devices; this suits scenarios where the data set is relatively small or the performance of a single node must be improved quickly. Load balancing: use a load balancer to distribute user requests across multiple servers to avoid overloading any single node, which improves the availability and responsiveness of the system. Caching: for frequently accessed data, caching reduces the number of database accesses, lowers latency, and relieves pressure on back-end services.

Fig. 6. Average response time and throughput comparison.


Table 1. Comparison of estimated and true values of rotation angle for three sets of panoramic images.

Group  No.   Estimated rx   Estimated ry   Estimated rz   True rx   True ry   True rz
1      1     0              0              0              0         0         0
1      2     1.6            -68.1          2.1            2.8       -59.6     2.4
1      3     -1.2           63.2           1.1            -1.7      58.4      0
1      4     1.7            150.4          -0.6           0         142.3     -2.7
1      5     -1.8           -172.4         -3.4           -1.4      -167      -3.3
2      1     0              0              0              0         0         0
2      2     2.3            126.3          2.2            1.4       123.4     1.2
2      3     1.7            39.2           1.3            2.4       34.6      1.8
2      4     -1.6           -132           -0.3           -1.8      -133      0
3      1     0              0              0              0         0         0
3      2     0.6            149            0.2            1.4       159       0.3
3      3     -1.6           -137           -0.7           -2.6      -156      -1.7
3      4     2.5            35             -0.7           2.9       38        0

5. Conclusion

Based on the Web B/S architecture, this paper proposes an interactive VR scene collaborative display system for interior design. The system is developed with VRML, Java, MySQL, and ASP, and uses the External Authoring Interface (EAI) to realize the interaction between the VRML world and the external environment. The specific conclusions are as follows:

On the basis of research on the key technologies of network collaborative editing, this paper puts forward a solution to the multi-user collaborative editing problem, which mainly includes XML-based data transmission, multi-client browsing synchronization, and the realization of other collaborative functions, and demonstrates the specific collaborative editing process between two clients through experiments.

The application server is implemented in Java, and the client is a Java Applet embedded in the HTML page, a VRML reader plug-in, and part of the HTML page. Applets and VRML readers communicate via the External Authoring Interface (EAI). The commercial display mode is analyzed, the traditional display mode is compared with networked virtual display, and the overall framework of the system is put forward according to the design principles and design flow of the display system.

This paper initially explored the stitching principle of panoramic images, combined the gyroscope rotation-angle code with video rendering to complete the spherical panoramic rendering of multi-channel PES video streams, and optimized the interactive VR system for interior design. Finally, through gesture operation and multi-video selection on the mobile terminal, an interactive panoramic visualization experience is brought to the user. The system performance test shows that the depth map computation time for 12 sets of panoramic images of the proposed interior design interactive VR system ranges from 10 s to 15 s, with an average of 12.19 s, which meets the system requirements.

Virtual display technology itself is still developing, and this paper is only a preliminary study. The close combination of virtual reality technology and VRML technology is an important research direction for networked and visual display of interior design. This system is still lacking in model optimization technology: although it can perform some simple optimization, better optimization techniques have not yet been applied to the models. Future research can achieve better model processing and optimization through further study. In addition, the camera pose estimation module of this system does not involve calibration of the panoramic camera, so the distance unit in the virtual panoramic space does not correspond to the distance unit in the real scene. When the camera position information is used, the calculated value must be scaled by a certain proportion before being applied in the three-dimensional panoramic tour. Furthermore, pose estimation with real scale could also be applied to measuring objects in the scene, so it has high research value.

REFERENCES

1 
T. Chen, Z. Pan, and J. Zheng, ``Easymall - an interactive virtual shopping system,'' Proc. of 2008 Fifth International Conference on Fuzzy Systems and Knowledge Discovery, IEEE, vol. 4, pp. 669-673, 2008.DOI
2 
K. Dai, Y. Li, S. Zhang, et al., ``Three-dimensional online customization ordering system,'' Proc. of 8th International Conference on Computer Supported Cooperative Work in Design, IEEE, vol. 2, pp. 588-593, 2004.DOI
3 
J. Zhang, A. Yang, and F. Shuaishuai, ``Data protection of internet enterprise platforms in the era of big data,'' Journal of Web Engineering, pp. 861-878, 2022.DOI
4 
D. Zhu, ``Research on virtual reality-based interior design methods,'' Automation Technology and Applications, vol. 2, pp. 157-160, 2019.URL
5 
N. Hrovatin, A. Tošić, and J. Vičič, ``In-network convolution in grid shaped sensor networks,'' Journal of Web Engineering, pp. 75-96, 2022.DOI
6 
X. Wei and Y. Sun, ``Design and implementation of an indoor live-action 3D display system based on automatic modeling,'' Collection, 12, 2019.URL
7 
J. Duo, P. Zhang, and L. Hao, ``A K-means text clustering algorithm based on subject feature vector,'' Journal of Web Engineering, pp. 1935-1946, 2021.DOI
8 
C. Stephanidis, HCI International 2017–Posters' Extended Abstracts, 19th International Conference, HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Springer, 2017.DOI
9 
K. Dai, Y. Li , J. Han, et al., ``An interactive web system for integrated three-dimensional customization,'' Computers in Industry, vol. 57, no. 8-9, pp. 827-837, 2006.DOI
10 
X. Zhang, X. Zhang, S. Wang, and X. Ping, ``Design and implementation of robot middleware service integration framework based on DDS,'' Proc. of 2022 IEEE International Conference on Real-time Computing and Robotics (RCAR), IEEE, pp. 588-593, 2022.DOI
11 
Y. Tan and F. Qin, ``Design and research of Raspberry Pi-based indoor intelligent irrigation system,'' Water Conservation Irrigation, vol. 7, pp. 105-108, 2019.URL
12 
G. Liu, H. Zhang, R. Shang, et al., ``Hierarchical optimization scheduling of active demand response for distribution networks in 5G base stations,'' Wireless Communications and Mobile Computing, vol. 2022, 2022.DOI
13 
L. Li, ``A web-based virtual reality simulation of mounting machine,'' Journal of Multimedia, vol. 9, no. 2, 2014.DOI
14 
W. Xu, X. Wang, and J. Zhang, ``Simulation research on safety evaluation of auto parts manufacturing enterprises based on structure entropy weight method and forward cloud algorithm model,'' Proc. of 2022 2nd International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI), IEEE, pp. 573-577, 2022.DOI
15 
C. Yu, M. Sun, and R. Yang, ``Design and implementation of indoor navigation system using iBeacon technology,'' Journal of Chongqing University of Technology: Natural Sciences, vol. 32, no. 5, pp. 162-168, 2018.URL
16 
Z. Liu and M. Luo, ``Modern clothing design based on human 3D somatosensory technology,'' Journal of Sensors, vol. 2022, 2022.DOI
17 
P. Keerthan and M. Mahendran, ``Experimental study on web crippling strength of hollow flange channels under end-one-flange and interior-one-flange load cases,'' Advances in Structural Engineering, vol. 19, no. 6, pp. 966-981, 2016.DOI
18 
Q. Lin and L. Zhang, ``Collaborative virtual environment: Web-based issues,'' Wiley Encyclopedia of Computer Science and Engineering, pp. 444-453, 2007.DOI
19 
Y. Qian, H. Cai, and F. Bu, ``Automated construction and implementation of web-based 3D scenes based on ontology reasoning,'' Journal of Donghua University: Natural Science Edition, vol. 41, no. 5, pp. 638-645, 2015.URL
20 
W. T. Lee, H. I. Chen, M. S. Chen, et al., ``High‐resolution 360 video foveated stitching for real‐time VR,'' Proc. of Computer Graphics Forum, vol. 36, no. 7, pp. 115-123, 2017.DOI
21 
G. Zheng, ``Research and development of information management system for interior decoration industry,'' Journal of Wuhan University of Technology, vol. 29, no. 1, pp. 162-164, 2007.URL
22 
L. R. Ramírez-Hernández, J. C. Rodríguez-Quinoñez, M. J. Castro-Toscano, et al., ``Improve three-dimensional point localization accuracy in stereo vision systems using a novel camera calibration method,'' International Journal of Advanced Robotic Systems, vol. 17, no. 1, 1729881419896717, 2020.DOI
23 
A. S. Wagner, Ü. Kilincsoy, and P. Vink, ``Visual customization: Diversity in color preferences in the automotive interior and implications for interior design,'' Color Research & Application, vol. 43, no. 4, pp. 471-488, 2018.DOI
24 
T. Wu, Y. Pan, and D. Xu, ``Design and implementation of 3D virtual interior pattern display based on Web publishing,'' Computer Engineering and Applications, vol. 40, no. 7, pp. 203-205, 2004.URL
25 
Z. Fang, K. Roy, S. Padiyara, et al., ``Web crippling design of cold-formed stainless-steel channels under interior-two-flange loading condition using deep belief network,'' Structures, Elsevier, vol. 47, pp. 1967-1990, 2023.DOI
Zhou Yang

Zhou Yang was born in 1985. He received a bachelor's degree in Art and Design from the School of Information Technology, Shanxi Agricultural University, and a master's degree in Environmental Design from the School of Fine Arts, Shanxi University. He is currently a lecturer at the School of Art and Media, Jinzhong University of Information Technology. His main research focuses on human settlement environment design and public space design.