1. Introduction
The advent of the Industrial Internet of Things (IIoT) represents a paradigm shift
in the manufacturing and industrial sectors, marked by the integration of advanced
information technologies with traditional industrial processes [1]. This convergence is catalyzing a transformative wave, enabling unprecedented levels
of automation, efficiency, and data-driven decision-making. Central to this revolution
is the concept of dynamic resource allocation, a process intrinsically linked to the
ability of IIoT systems to adapt to fluctuating operational demands in real time [2]. The essence of IIoT lies in its network of interconnected sensors, machines, and
devices, which collectively generate vast streams of data [3]. This data, when effectively harnessed, offers a granular view of the production
environment, facilitating optimized resource utilization, predictive maintenance,
and enhanced operational agility [4].
The necessity for dynamic resource allocation within IIoT stems from the inherently
variable nature of industrial environments. Factors such as fluctuating market demands,
supply chain uncertainties, and evolving production requirements necessitate a flexible
approach to resource management [5]. Traditional static resource allocation methods fall short in this regard, as they
lack the adaptability to cope with continuous changes in operational conditions [6]. In contrast, IIoT-enabled dynamic resource allocation leverages real-time data and
advanced analytics to make informed decisions about the allocation and reallocation
of resources such as manpower, machinery, and materials [7]. Doing so not only enhances operational efficiency but also minimizes downtime and
waste, leading to improved productivity and cost-effectiveness. Furthermore, the integration
of predictive analytics into resource allocation processes enables proactive responses
to potential issues, thereby further refining the efficiency and reliability of industrial
operations [8].
The concept of the Age of Information (AoI) has emerged as a critical metric in the
realm of networked systems, particularly within the context of the Industrial Internet
of Things (IIoT) [9]. AoI is defined as a measure of the time that elapses from the moment a piece of information
is generated until it is received and processed by the end-user or system [10]. This metric is distinct from latency, as it encompasses not only the delay in transmission
but also the time during which the information awaits processing and utilization.
In the fast-paced and data-driven environment of IIoT, where decisions and actions
must often be made in real-time, AoI becomes a paramount indicator of data relevance
and timeliness, directly impacting the efficiency and efficacy of operational decisions
[11].
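For concreteness, the definition above can be expressed in a few lines of code: the instantaneous AoI at the monitor is the current time minus the generation timestamp of the freshest update already received. The function name and all timestamps below are illustrative, not part of any standard:

```python
def age_of_information(t_now, received_updates):
    """Instantaneous AoI: time since generation of the freshest update
    that has been received by time t_now."""
    delivered = [gen for gen, recv in received_updates if recv <= t_now]
    if not delivered:
        return float("inf")  # nothing received yet
    return t_now - max(delivered)

# Updates as (generation_time, reception_time) pairs -- hypothetical values.
updates = [(0.0, 1.5), (2.0, 2.4), (5.0, 9.0)]

print(age_of_information(3.0, updates))  # → 1.0 (freshest delivered update generated at t=2)
print(age_of_information(8.0, updates))  # → 6.0 (the t=5 update has not yet arrived)
```

Note how AoI differs from per-packet latency: at t = 8.0 the AoI keeps growing even though every delivered packet had low transmission delay.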
In the context of IIoT, the significance of AoI lies in its ability to quantify the
freshness of information, which is essential for maintaining an accurate and real-time
understanding of industrial processes. IIoT systems are typified by a plethora of
sensors and devices that continuously generate data regarding various aspects of industrial
operations [12]. This data, however, rapidly loses its value if not processed and acted upon promptly.
High AoI values can indicate delays or bottlenecks in the data processing pipeline,
signaling the need for system adjustments to ensure timely data flow. Consequently,
maintaining a low AoI is crucial for operational accuracy, allowing for swift and
informed decision-making that is critical in dynamic industrial environments. Moreover,
the optimization of AoI in IIoT systems is instrumental in enhancing various facets
of industrial operations, including predictive maintenance, resource optimization,
and supply chain management. For instance, in predictive maintenance, the freshness
of data regarding equipment status is vital to accurately predicting failures and
scheduling timely maintenance, thereby preventing costly downtimes [13]. Similarly, in resource optimization, up-to-date information on resource availability
and utilization is essential for efficient allocation and scheduling. In supply chain
management, current information on inventory levels and logistics ensures seamless
operations and customer satisfaction [14]. Hence, AoI not only serves as a key performance indicator in IIoT but also as a
driving force for continuous improvement and operational excellence.
In this study, we explore and propose an innovative method for resource allocation
tailored to the dynamic and complex environments of IIoT. One of the core challenges
in IIoT resource management is the effective utilization and distribution of various
resources such as sensors, devices, and network bandwidth, to maintain information
freshness and efficient system operations [15]. Although existing resource allocation mechanisms are effective under certain conditions,
they often overlook the critical factor of AoI, potentially leading to outdated decisions
or resource wastage. This study addresses this gap by introducing an AoI-based
distributed multi-resource management strategy.
Traditional centralized resource allocation methods face scalability and flexibility
challenges when dealing with a large number of nodes and complex tasks. To address
these challenges, this research proposes a distributed resource allocation mechanism.
Under this mechanism, individual nodes can independently make resource selection and
adjustment decisions based on local information and AoI metrics. However, this approach
introduces a new problem: how can overall system efficiency and fairness in resource
distribution be ensured when nodes operate independently? To resolve this,
the research introduces an algorithm that balances individual optimization with collective
benefits, aimed at enhancing the overall responsiveness and efficiency of the system,
while ensuring fairness in resource allocation. Initially, the IIoT resource allocation
problem is modeled as a multi-agent decision-making issue, where each agent must make
optimal decisions within a context of limited resources and uncertain environments.
Subsequently, a novel AoI-based resource allocation algorithm is proposed, designed
to optimize dynamic resource distribution, thereby enhancing the timeliness and accuracy
of decisions. Furthermore, to ensure fairness and efficiency in resource allocation,
several key constraints are integrated into the algorithm. Finally, a series of simulation
experiments are conducted to validate the effectiveness of the proposed method, demonstrating
its ability to reduce average AoI and enhance resource allocation efficiency.
Unlike existing research, this paper proposes a novel distributed multi-resource management
method based on the Age of Information (AoI), capable of achieving dynamic optimization
of resource allocation while ensuring fairness. By introducing AoI as a key decision-making
parameter, this method effectively enhances the system's real-time performance and
resource utilization efficiency. The approach distinguishes itself from previous studies
by integrating AoI into a distributed framework, allowing for more adaptive and responsive
resource management in the dynamic IIoT environment. This integration not only addresses
the challenge of maintaining data freshness but also optimizes the overall system
performance in terms of throughput and resource efficiency.
The contributions of this study are as follows:
We introduce and elaborate on a novel resource allocation method grounded in the concept
of AoI. Tailored for the dynamic settings of IIoT, this framework optimizes resource
distribution and scheduling through real-time data and AoI metrics. This approach
not only enhances the timeliness and accuracy of decision-making processes but also
offers a flexible solution adaptable to rapidly changing environments.
In traditional distributed systems, individual nodes often operate independently,
potentially leading to imbalanced resource allocation and inefficiency. The algorithm
proposed in this paper successfully balances the autonomy of individual nodes with
the overall system efficiency. By employing this method, we ensure fairer and more
balanced resource distribution while maintaining effective resource utilization.
3. Model Formulation
3.1. System Model
In the depicted IIoT scenario, as illustrated in Fig. 1, the system encompasses a central controller surrounded by a multitude of sensor
nodes dispersed throughout the industrial environment. These sensor nodes are tasked
with gathering sensory information related to the environment, equipment, and personnel.
They transmit data packets, equipped with time stamps, to the central controller via
wireless channels. The central controller is responsible for processing the received
data to make real-time control decisions.
Fig. 1. Distributed array of sensor nodes of IIoT.
In our IIoT system model, the network comprises a distributed array of sensor nodes,
denoted by $\mathcal{M} = \{m_1, m_2, \dots, m_N\}$, where
$N$ represents the total number of sensor nodes. These nodes are responsible for monitoring
and collecting data from various aspects of industrial operations, such as machinery
performance, environmental conditions, and personnel activities.
Unlike traditional models that might allocate a single type of resource, our model
considers a set of distinct resources necessary for optimal data transmission and
processing. These resources include, but are not limited to, communication channels,
computational capabilities, and energy supplies, denoted as $\mathcal{C} = \{c_1,
c_2, \dots, c_I\}$, $\mathcal{P} = \{p_1, p_2, \dots, p_J\}$, and $\mathcal{E} = \{e_1,
e_2, \dots, e_K\}$ respectively, where $I$, $J$, and $K$ represent the total number
of available resources in each category. The objective is to allocate these resources
dynamically to the sensor nodes in a way that minimizes the average AoI, thus ensuring
that the most current information is utilized for decision-making. The resource allocation
must account for the heterogeneous and dynamic nature of the industrial environment,
where the state of each sensor node and its surrounding conditions may change rapidly.
The time is discretized into slots $\mathcal{T} = \{t_1, t_2, \dots, t_T\}$, where
$T$ is the number of time slots in the optimization period. In each time slot, a sensor
node can be allocated multiple types of resources from $\mathcal{C}$, $\mathcal{P}$,
and $\mathcal{E}$, based on its current state and the AoI requirements. The allocation
process is governed by a distributed algorithm, where each node operates independently,
using local information to decide on the optimal resource combination that minimizes
its AoI. To formulate the problem, we introduce the following variables: $x_{m,i,j,k}^t$,
a binary variable indicating whether resource combination $(c_i, p_j, e_k)$ is allocated
to sensor node $m$ at time $t$; and $\Delta_m^t$, the AoI for sensor node $m$ at time
$t$.
The optimization problem can be stated as

$$\min \; \frac{1}{NT} \sum_{t \in \mathcal{T}} \sum_{m \in \mathcal{M}} \Delta_m^t$$

subject to

$$\sum_{m \in \mathcal{M}} x_{m,i,j,k}^t \le 1 \quad \forall (i,j,k),\, t, \qquad \sum_{i,j,k} x_{m,i,j,k}^t \le 1 \quad \forall m,\, t, \qquad \Delta_m^t = f(t, \tau_m^t).$$

Here, the objective is to minimize the average AoI across all sensor nodes, subject
to the constraints that each resource combination can only be allocated to one sensor
node at a time, and each sensor node can only use one resource combination at a time.
The function $f$ represents the relationship between the last update time $\tau_m^t$ and the
current time slot, determining the AoI for each node.
3.2. Formulation of the Distributed Algorithm for Dynamic Resource Allocation
The distributed algorithm is designed for dynamic resource allocation in an IIoT system
to optimize the AoI. It aims to minimize the AoI across all sensor nodes while adhering
to resource constraints and maintaining a fair distribution of resources.
The algorithm operates in discrete time slots, with each sensor node making independent
decisions about its resource allocation by considering local AoI and available resources.
The decision process of each node follows a series of iterative computations within
each time slot, which can be formalized as follows:
Local AoI update: Each sensor node calculates its current AoI based on the most recent successful data
transmission. The AoI for sensor node $m$ at time slot $t$ is updated as

$$\Delta_m^{(t)} = t - \tau_m^{(t)},$$

where $\tau_m^{(t)}$ is the last time slot at which node $m$ successfully transmitted
its data.
Resource demand estimation: Nodes estimate their resource demand for the next time slot based on their current
AoI and predefined priorities. The demand function $D_m^{(t)}$ is defined as

$$D_m^{(t)} = g\big(\Delta_m^{(t)}\big),$$

where $g(\cdot)$ is a monotonically increasing function, representing the urgency of updating
information as the AoI increases.
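The two steps above can be sketched as follows. The linear form of the demand function and the `urgency` scaling parameter are illustrative choices; the model only requires $g$ to be monotonically increasing:

```python
def update_aoi(t, last_success_slot):
    """Local AoI update: slots elapsed since the last successful transmission."""
    return t - last_success_slot

def demand(aoi, urgency=0.5):
    """Illustrative monotonically increasing demand function g(AoI)."""
    return urgency * aoi

t = 10
tau_m = 6                 # last slot with a successful transmission
aoi_m = update_aoi(t, tau_m)
print(aoi_m)              # → 4
print(demand(aoi_m))      # → 2.0
```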
Resource allocation strategy: The nodes communicate their resource demands to a decentralized algorithm that allocates
resources without centralized control. Let $X_{m,i,j,k}^{(t)}$ be the binary decision
variable indicating if node $m$ is allocated resource combination $(c_i, p_j, e_k)$
at time $t$. The allocation is determined by

$$X_{m,i,j,k}^{(t)} = \mathbb{1}\Big\{ (c_i, p_j, e_k) = \arg\min_{(c_{i'}, p_{j'}, e_{k'})\ \text{available}} \mathbb{E}\big[\Delta_m^{(t+1)} \mid (c_{i'}, p_{j'}, e_{k'})\big] \Big\}.$$
Conflict resolution: In case of conflicting demands for the same resources, a contention resolution protocol
is employed. Let $\mathcal{N}(c_i, p_j, e_k)$ be the set of nodes requesting the same
resource combination $(c_i, p_j, e_k)$. The conflict is resolved by

$$\phi_m^{(t)}(c_i, p_j, e_k) = \begin{cases} 1, & \text{if } m = \arg\max_{n \in \mathcal{N}(c_i, p_j, e_k)} \Delta_n^{(t)}, \\ 0, & \text{otherwise,} \end{cases}$$

where $\phi_m^{(t)}(c_i, p_j, e_k)$ indicates whether node $m$ has the highest AoI
among the contenders and thus wins the resource allocation.
Update of allocation: After resolving conflicts, each node updates its allocation status and prepares for
data transmission. The updated allocation matrix at time $t$ is

$$A^{(t)} = \big[\, X_{m,i,j,k}^{(t)} \cdot \phi_m^{(t)}(c_i, p_j, e_k) \,\big]_{m,i,j,k}.$$
System feedback and adjustment: Post allocation, nodes receive feedback on the success of their data transmission.
Based on this, they adjust their future demand estimations and resource requests.
The goal is to minimize the average AoI across all nodes over the time horizon $T$,
which can be formulated as

$$\min \; \frac{1}{NT} \sum_{t=1}^{T} \sum_{m=1}^{N} \Delta_m^{(t)},$$

subject to the constraints of the system model, ensuring that resources are allocated
fairly and efficiently without overloading any single node or resource.
This algorithm innovatively employs AoI as the core indicator for resource allocation.
Through Eqs. (3)-(7), it achieves dynamic optimization of AoI, thereby ensuring data freshness while simultaneously
improving resource utilization efficiency. This approach represents a significant
advancement over traditional methods by directly incorporating the timeliness of information
into the resource allocation decision-making process.
3.3. Integration of AoI as a Key Decision-making Parameter
The AoI serves as a critical metric within each node's decision-making process. It
informs the urgency and priority with which resources are requested and allocated
in the distributed IIoT environment. The incorporation of AoI into the decision-making
algorithm ensures that the most time-sensitive data is transmitted first, maintaining
high system responsiveness and data relevance. Each sensor node calculates its demand
for resources as a function of its current AoI. The demand function D($\Delta$) reflects
the priority of the node's data transmission, with higher AoI leading to higher demand
Where $h(\cdot)$ is a function that maps AoI to resource demand, which could be linear
or a more complex non-linear function to reflect different prioritization strategies.
The nodes use their AoI to bid for resources in each time slot. The node with the
highest AoI-derived demand is given priority in the allocation process:

$$m^* = \Psi\big(D(\Delta_1), \dots, D(\Delta_N)\big) = \arg\max_{m \in \mathcal{M}} D(\Delta_m),$$

where $\Psi$ is the prioritization function that determines the node $m^*$ with the
highest demand for resources based on AoI.
To formalize the AoI within the optimization framework, we introduce an AoI optimization
function $\Omega(\Delta, R)$, which is used to minimize the AoI across all nodes given
the resource constraints $R$:

$$\Omega(\Delta, R) = \min \sum_{m \in \mathcal{M}} w_m \Delta_m \quad \text{subject to } R,$$

where $w_m$ is a weight factor that could be adjusted to prioritize certain nodes
over others, and $\Delta_m$ is the AoI for node $m$.
The decision variable for resource allocation at each node now becomes a function
of AoI:

$$X_m^{(t)} = X\big(\Delta_m^{(t)}\big).$$
After each time slot, the nodes receive feedback on the success of their data transmission.
They use this to update their AoI and adjust their demand accordingly for the next
time slot:

$$\Delta_m^{(t+1)} = \begin{cases} \gamma \, \Delta_m^{(t)}, & \text{if the transmission succeeds,} \\ \Delta_m^{(t)} + 1, & \text{otherwise,} \end{cases}$$

where $\gamma$ represents the rate at which the AoI is reduced upon successful transmission.
Nodes iteratively adjust their demand for resources based on AoI feedback, leading
to an adaptive and responsive distributed system:

$$D_m^{(t+1)} = (1 - \alpha)\, D_m^{(t)} + \alpha \, h\big(\Delta_m^{(t+1)}\big),$$

where $\alpha$ is a factor that determines how quickly the resource demand adapts
to changes in AoI.
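One way to realize this adaptation is exponential smoothing of the demand toward the AoI-derived target. The smoothing form, the linear choice of g, and the value of α below are our illustrative assumptions; the text specifies only that α controls the adaptation speed:

```python
def adapt_demand(current_demand, aoi, alpha=0.3, g=lambda a: 0.5 * a):
    """Exponentially smooth the resource demand toward g(AoI).
    alpha in (0, 1]: larger values track AoI changes more quickly."""
    return (1 - alpha) * current_demand + alpha * g(aoi)

d = 1.0
for aoi in [4, 6, 8]:      # AoI rising because transmissions keep failing
    d = adapt_demand(d, aoi)
print(round(d, 3))         # → 2.467 (demand rises as AoI grows)
```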
To ensure the reliability and stability of our proposed algorithm in dynamic IIoT
environments, we provide a theoretical analysis of its convergence and complexity.
Convergence analysis: It is proved that under the proposed resource allocation strategy, the average AoI
of the system converges to a steady state as $t \to \infty$. Let $\Delta(t)$ be the
average AoI of the system at time $t$. The evolution of $\Delta(t)$ can be expressed
as

$$\Delta(t+1) = \Delta(t) + \alpha(t)\, f\big(\Delta(t)\big),$$
where $\alpha(t)$ is a decreasing step size satisfying $\sum \alpha(t) = \infty$ and
$\sum \alpha^2(t) < \infty$, and $f(\Delta(t))$ represents the expected change in
AoI after resource allocation. Given that $f(\Delta(t))$ is bounded and $\alpha(t)$
satisfies the above conditions, we can apply the Robbins-Monro algorithm, which guarantees
that $\Delta(t)$ converges to a fixed point of $f(\Delta)$ as $t \to \infty$.
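The step-size conditions are satisfied, for example, by $\alpha(t) = 1/t$. A toy stochastic-approximation run illustrates the convergence behavior the analysis relies on; the contraction drift $f(\Delta) = \Delta^* - \Delta$, the noise model, and all numeric values are hypothetical:

```python
import random

random.seed(0)
delta_star = 2.0   # hypothetical fixed point of f
delta = 10.0       # initial average AoI

for t in range(1, 20001):
    alpha = 1.0 / t                    # sum(alpha) diverges, sum(alpha^2) converges
    noise = random.uniform(-0.5, 0.5)  # stochastic perturbation of the drift
    delta += alpha * ((delta_star - delta) + noise)

print(abs(delta - delta_star) < 0.1)   # → True
```

With $\alpha(t) = 1/t$ this recursion is a running average of noisy observations of the fixed point, so the iterate settles near $\Delta^*$ as the Robbins-Monro conditions predict.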
Complexity analysis: The time complexity of our algorithm is $O(NMK)$ per iteration, where $N$ is the
number of sensor nodes, $M$ is the number of resource types, and $K$ is the number
of available resource units. This is derived from the following steps in each iteration:
AoI update $O(N)$; resource demand estimation $O(N)$; resource allocation $O(NMK)$;
and conflict resolution $O(NK)$ in the worst case.
The space complexity is $O(NM)$, as the algorithm requires $O(N)$ space to store AoI
values and $O(NM)$ space for resource allocation decisions. This analysis demonstrates
that our algorithm has polynomial time complexity, making it computationally feasible
for practical IIoT applications, even as the system scales up in terms of nodes and
resources.
Scalability ensures that the algorithm can handle an increasing number of sensor nodes
and resources without significant degradation in performance. To achieve this, the
algorithm is designed with a modular structure where each node operates independently
and in parallel with others. The scalability is ensured by the following features.
Decentralized decision-making: Each node independently assesses its own AoI and makes
resource allocation decisions based on local information, eliminating the need for
a centralized decision point that could become a bottleneck. Resource pooling: The
resources $\mathcal{C}$, $\mathcal{P}$, $\mathcal{E}$ are pooled and managed in a
way that allows dynamic reallocation and efficient utilization, catering to the demands
of an increasing number of nodes. Load balancing: The algorithm dynamically balances
the load across all available resources, preventing overutilization of any single
resource which could lead to performance issues as the system scales.
Fairness ensures that all sensor nodes have an equitable chance of accessing the resources
necessary for their data transmission, regardless of their state or location within
the network. Fairness is achieved through the following measures: Nodes with higher
AoI are given priority in resource allocation, but a safeguard is in place to prevent
nodes from being starved of resources, ensuring long-term fairness. The algorithm
employs a contention resolution protocol to provide all nodes with an equal opportunity
to access the resources, preventing the monopolization of resources by a subset of
nodes. The algorithm incorporates fairness constraints into the optimization problem,
ensuring that the resource allocation over time is balanced among all nodes.
Real-time responsiveness is critical in IIoT systems for timely decision-making and
system control. To ensure this, the algorithm includes the following aspects: Nodes
immediately integrate feedback from the system about the success or failure of data
transmission, allowing for quick adjustments to resource demands. When a node detects
an imminent high-priority data transmission based on its AoI, it can pre-empt resources
in anticipation, ensuring that the data is transmitted in the next available time
slot. The algorithm adapts to the changing state of the system in real-time, allowing
for resource reallocation in response to fluctuating environmental conditions and
operational demands. To encapsulate these properties, the optimization problem is
enhanced with additional constraints and objectives:

$$\min \; \frac{1}{NT} \sum_{t=1}^{T} \sum_{m \in \mathcal{M}} \Delta_m^{(t)} - \lambda_1 F(A^{(t)}) - \lambda_2 R(A^{(t)}),$$

where $F(A^{(t)})$ is a fairness measure across nodes, $R(A^{(t)})$ reflects the real-time
responsiveness of the system, and $\lambda_1$, $\lambda_2$ are the weighting factors
that balance these objectives with the primary goal of minimizing AoI.
3.4. Constraint Analysis
In the context of the IIoT environment, several constraints are integral to its functioning.
These constraints ensure that the resource allocation process is realistic and sustainable,
considering the physical and operational limitations of the system. The critical constraints
include energy limits, computational capacity, and channel availability.
3.4.1 Energy limits
Each sensor node operates with a finite energy budget, which limits its data transmission
and processing capabilities. The energy constraint for a sensor node is defined as

$$\sum_{t=1}^{T} \sum_{i,j,k} X_{m,i,j,k}^{t}\, E_{i,j,k} \le E_{\max} \quad \forall m \in \mathcal{M}.$$

Here, $E_{i,j,k}$ represents the energy consumed by node $m$ when allocated resource
combination $(c_i, p_j, e_k)$ at time $t$, and $E_{\max}$ is the maximum energy available
to the node over the optimization period $T$.
3.4.2 Computational capacity
The processing workload that can be handled in each time slot is bounded by the system's
computational capacity. The computational constraint is modeled as

$$\sum_{m \in \mathcal{M}} C_m^t \le C_{\max} \quad \forall t \in \mathcal{T},$$

where $C_m^t$ denotes the computational resources required by node $m$ at time $t$,
and $C_{\max}$ is the maximum computational capacity available per time slot.
3.4.3 Channel availability
Channel availability is constrained by the number of orthogonal channels that can
be simultaneously utilized without interference. The channel availability constraint
is modeled as

$$\sum_{m \in \mathcal{M}} \sum_{j,k} X_{m,i,j,k}^{t} \le 1 \quad \forall c_i \in \mathcal{C},\; t \in \mathcal{T},$$

ensuring that each channel $c_i$ can be assigned to at most one sensor node at any
time slot $t$.
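The three constraints above can be checked mechanically against a candidate allocation. The sketch below is a simplified feasibility check; the log format, cost tables, and limits are hypothetical inputs, not part of the model:

```python
def feasible(alloc, energy_cost, comp_cost, e_max, c_max):
    """Check an allocation against the energy, computation, and channel
    constraints of Section 3.4. alloc[t] maps node -> (i, j, k) combo."""
    spent = {}
    for assignments in alloc:
        # Channel availability: each channel used by at most one node per slot.
        channels = [combo[0] for combo in assignments.values()]
        if len(channels) != len(set(channels)):
            return False
        # Computational capacity per slot.
        if sum(comp_cost[m] for m in assignments) > c_max:
            return False
        # Energy budget per node over the horizon.
        for m, combo in assignments.items():
            spent[m] = spent.get(m, 0) + energy_cost[combo]
            if spent[m] > e_max:
                return False
    return True

alloc = [{"m1": (0, 0, 0), "m2": (1, 0, 1)},   # slot 1: distinct channels
         {"m1": (0, 1, 1)}]                    # slot 2
print(feasible(alloc,
               energy_cost={(0, 0, 0): 5, (1, 0, 1): 5, (0, 1, 1): 5},
               comp_cost={"m1": 3, "m2": 3},
               e_max=150, c_max=70))  # → True
```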
3.4.4 Multi-objective optimization framework
The multi-objective optimization framework captures these trade-offs and is formalized
as
Where $F(A^\tau)$ and $R(A^\tau)$ are functions representing fairness and real-time
responsiveness, and $\lambda_1, \lambda_2$ are weighting factors that govern the trade-off
between minimizing AoI and satisfying other constraints.
3.4.5 Adaptive resource allocation strategy
To navigate these trade-offs, an adaptive resource allocation strategy is employed,
allowing the system to dynamically adjust resource allocation in response to changing
conditions and AoI requirements.
This adaptive strategy enables the system to maintain a balance between minimizing
AoI and adhering to operational constraints, ensuring the sustainability and efficiency
of the IIoT environment.
Algorithm 1: Distributed Resource Allocation for IIoT.
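Since the listing for Algorithm 1 is not reproduced here, the following is a minimal sketch of one decision round under the assumptions stated above. The highest-AoI-wins contention rule follows the text; the identity demand function, the transmission success probability, and the random per-node request pattern are illustrative choices:

```python
import random

def allocation_round(t, last_success, combos, tx_success_prob=0.9):
    """One time slot of the distributed AoI-based allocation.

    last_success: dict node -> last slot with a successful transmission
    combos: list of available resource combinations (c_i, p_j, e_k)
    Returns the updated last_success map and this slot's winners.
    """
    # 1. Local AoI update; demand is taken as the AoI itself (linear g).
    aoi = {m: t - s for m, s in last_success.items()}

    # 2. Each node independently requests one resource combination.
    requests = {m: random.choice(combos) for m in last_success}

    # 3. Contention resolution: the highest-AoI contender wins each combo.
    winners = {}
    for combo in set(requests.values()):
        contenders = [m for m, c in requests.items() if c == combo]
        winners[combo] = max(contenders, key=lambda m: aoi[m])

    # 4. Transmission attempt and feedback: winners reset their AoI clock.
    for combo, m in winners.items():
        if random.random() < tx_success_prob:
            last_success[m] = t
    return last_success, winners

random.seed(1)
last_success = {f"m{i}": 0 for i in range(6)}
combos = [("c1", "p1", "e1"), ("c2", "p1", "e2"), ("c1", "p2", "e3")]
for t in range(1, 51):
    last_success, _ = allocation_round(t, last_success, combos)

avg_aoi = sum(50 - s for s in last_success.values()) / len(last_success)
print(avg_aoi < 25)  # → True: most nodes have transmitted recently
```

Because the node with the largest AoI wins every contested combination, no node's age can grow unboundedly while others transmit, which is the starvation safeguard described in Section 3.3.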
4. Simulation and Results
4.1. Evaluation Metrics
For the comprehensive assessment of the resource allocation strategy within the IIoT
system, the following metrics are established:
(1) AoI: The average AoI is a critical metric for evaluating the timeliness of the
information across all sensor nodes within the system. It is given by

$$\bar{\Delta} = \frac{1}{NT} \sum_{\tau=1}^{T} \sum_{m \in \mathcal{M}} \Delta_m^{\tau}.$$
(2) System Throughput: This measures the total amount of data successfully transmitted
to the central controller within the time horizon. System throughput is represented
as

$$\Theta = \sum_{\tau=1}^{T} \sum_{m \in \mathcal{M}} \sum_{i,j,k} X_{m,i,j,k}^{\tau}\, \rho_{m,i,j,k}^{\tau},$$

where $\rho_{m,i,j,k}^{\tau}$ is the data rate achieved by node $m$ when allocated
resource combination $(c_i, p_j, e_k)$ at time $\tau$.
(3) Resource Utilization Efficiency: This metric reflects the efficiency of resource
usage by considering the proportion of time each resource is actively used to transmit
data. It is formulated as

$$U = \frac{1}{T \cdot IJK} \sum_{\tau=1}^{T} \sum_{i,j,k} u_{i,j,k}^{\tau},$$

where $u_{i,j,k}^{\tau}$ is a binary variable indicating whether resource combination
$(c_i, p_j, e_k)$ is actively used at time $\tau$.
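Given per-slot simulation logs, the three metrics can be computed directly. The sketch below assumes a simple log format of our own choosing (per-slot dictionaries, lists, and sets with hypothetical values):

```python
def average_aoi(aoi_log):
    """aoi_log[t][m]: AoI of node m at slot t. Mean over all nodes and slots."""
    total = sum(sum(slot.values()) for slot in aoi_log)
    count = sum(len(slot) for slot in aoi_log)
    return total / count

def system_throughput(rate_log):
    """rate_log[t]: data rates achieved by the allocated combos in slot t."""
    return sum(sum(rates) for rates in rate_log)

def utilization(use_log, num_combos):
    """use_log[t]: set of resource combos actively used in slot t."""
    return sum(len(used) for used in use_log) / (num_combos * len(use_log))

aoi_log = [{"m1": 1, "m2": 3}, {"m1": 2, "m2": 1}]
print(average_aoi(aoi_log))                                    # → 1.75
print(system_throughput([[2.0, 1.5], [3.0]]))                  # → 6.5
print(utilization([{("c1", "p1", "e1")}, set()], num_combos=4))  # → 0.125
```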
4.2. Simulation Experiment Design
To validate the performance of the distributed resource allocation algorithm, a set
of simulations will be designed to reflect a range of operational scenarios in an
IIoT environment. These simulations aim to test the algorithm under different conditions,
such as varying numbers of sensor nodes, fluctuating resource availability, and diverse
industrial operation demands.
The following Table 1 outlines the parameters that will be used in the simulation experiments, which align
with the constraints and objectives of our algorithm:
Table 1. Parameters used in the simulation experiments.
Parameter | Description                                  | Scenario 1 | Scenario 2 | Scenario 3
N         | Number of sensor nodes                       | 30         | 75         | 150
I         | Number of communication channels             | 10         | 15         | 25
J         | Number of computational resource units       | 10         | 20         | 30
K         | Number of energy units                       | 60         | 120        | 180
T         | Number of time slots                         | 200        | 400        | 600
E_max     | Maximum energy budget for each node          | 150        | 250        | 350
C_max     | Maximum computational capacity per time slot | 70         | 140        | 210
λ1        | Weighting factor for fairness                | 0.8        | 1.0        | 1.2
λ2        | Weighting factor for responsiveness          | 1.0        | 1.2        | 1.4
In these scenarios, N represents a moderate to high density of nodes. The number of
communication channels I and computational resource units J are designed to test the
algorithm's performance under tight to moderate resource availability. K and E_max
are set to simulate the impact of energy constraints on the nodes. The time slots
T are extensive to observe the system's behavior over a substantial operational period.
The weighting factors λ1 and λ2 are varied to explore the balance between fairness
and responsiveness.
The simulations will encompass the following scenarios: (1) all nodes have equal priority
and resources are ample, serving as a control benchmark; (2) many sensor nodes compete
for limited resources, testing the algorithm's scalability and fairness; (3) resource
availability fluctuates randomly over time, assessing the algorithm's adaptability;
(4) different weights are applied to AoI in the resource demand function, to examine
how prioritizing fresher information affects overall performance; and (5) nodes have
limited energy, reflecting the real-world constraint of battery-powered sensors. Each
scenario will be run multiple times to gather average results, ensuring the robustness
of the findings. The simulation will track the average AoI, system
throughput, and resource utilization efficiency across all runs, using the metrics
established in Subsection 4.1.
4.3. Comparative Analysis of Resource Allocation Strategies
In evaluating the performance of our distributed resource allocation algorithm, we
juxtapose it against three existing methods to highlight its strengths and potential
areas for improvement. KA Algorithm: Refers to the scheduling algorithm developed
by Kadota et al. [24], which focuses on optimizing the Age of Information in wireless networks with throughput
constraints. YA Approach: Denotes the comprehensive survey and analytical work on
AoI by Yates et al. [25], highlighting its application in various networked systems and its interplay with
other performance metrics [24]. GA Strategy: Represents the AoI-focused strategy in IoT systems by Gindullina et
al. [27], particularly for systems with energy-harvesting capabilities and the challenge of
maintaining information freshness with energy constraints.
Fig. 2. Trend of average AoI across algorithms with data points.
In our simulation, we will implement the algorithms and measure their performance
using the established metrics: average AoI, system throughput, and resource utilization
efficiency. The results will shed light on each algorithm's ability to handle varying
network densities, manage resource constraints effectively, and maintain data freshness.
In the assessment of the first evaluation metric, which is the average AoI, we conducted
a series of simulations to compare our distributed resource allocation algorithm with
the KA, YA, and GA strategies. The simulations were structured to reflect a range
of operational conditions within an IIoT environment.
The results of these simulations, which highlight the average AoI across the network
for each of the algorithms under study, are visually represented in Fig. 2. This figure illustrates the performance of each algorithm over time and under various
system loads and resource constraints.
Fig. 2 presents a clear trend of decreasing AoI across all compared algorithms, indicative
of their effectiveness in managing information timeliness within the IIoT environment.
Our Algorithm shows the steepest descent in AoI, suggesting an optimal performance
in rapidly updating the information across the network. The consistent downward trajectory
reflects efficient resource allocation and effective prioritization in information
processing.
The KA Algorithm also shows a significant reduction in AoI, although its decline
is less steep compared to our algorithm. This indicates effective resource utilization,
but potentially with less emphasis on reducing AoI as swiftly as our algorithm.
The YA Approach demonstrates a moderate decline in AoI. While the AoI values do decrease
over time, the slope suggests that this approach may balance other operational factors
alongside AoI minimization, potentially leading to a less aggressive reduction.
The GA Strategy displays the gentlest slope, implying a more gradual approach to minimizing
AoI. This could be due to a stronger emphasis on energy conservation or other constraints
that may lead to a slower update rate.
The convergence of all algorithms towards lower AoI values over time suggests that
each can improve the freshness of information as the simulation progresses. However,
the differing rates of descent highlight the trade-offs that each algorithm makes
between AoI reduction and other system constraints such as throughput and energy efficiency.
4.3.1 Throughput performance under energy and fairness constraints
The second comparative experiment focuses on system throughput, which is a key indicator
of the efficiency with which a network handles data transmission under specific energy
budgets and fairness settings. In this simulation, we assess how the distributed resource
allocation algorithm, alongside the KA, YA, and GA strategies, maintains system throughput
when subjected to varying maximum energy budgets per sensor node E_max and different
weighting factors for fairness λ1.
This experiment aims to test the resilience of each algorithm in scenarios where energy
availability is a limiting factor and to determine the impact of fairness considerations
on the overall throughput. The simulation varies the maximum energy budget for each
node and adjusts the weighting factor for fairness to observe the resultant changes
in the system's throughput. The results of this experiment will be crucial in understanding
the trade-offs between energy conservation, equitable resource distribution, and operational
efficiency. These findings are depicted in a graphical format as Fig. 3, allowing for a direct comparison of throughput performance across the algorithms
under the stated conditions.
Upon inspection of the graph, it is evident that all algorithms exhibit a logistic
growth in system throughput as the maximum energy budget for each node increases.
This behavior is consistent with the law of diminishing returns, where initial increases
in the energy budget result in significant throughput gains, which gradually taper
off as the energy budget approaches a saturation point.
Our Algorithm demonstrates a superior performance across all fairness factor settings.
Notably, for fairness factors of 0.8, 1.0, and 1.2, Our Algorithm consistently achieves
a higher system throughput at a given energy budget compared to the KA, YA, and GA
strategies. This suggests that Our Algorithm is not only more efficient in utilizing
energy resources but also maintains its efficiency advantage as fairness requirements
increase.
The KA Algorithm, YA Approach, and GA Strategy show a progressive increase in throughput
with higher energy budgets; however, their growth rates are visibly outpaced by Our
Algorithm. As the fairness factor rises, the gap between Our Algorithm and these strategies
widens, indicating a more pronounced advantage for Our Algorithm under stringent fairness
conditions.
It is particularly noteworthy that Our Algorithm reaches near-maximum throughput at
lower energy budgets compared to the other strategies. This efficiency in energy utilization
is crucial in IIoT environments where energy conservation is paramount.
Fig. 3. System throughput under varying energy budgets and fairness factors.
4.3.2 Resource utilization efficiency analysis
We evaluate the resource utilization efficiency of the resource allocation algorithms.
This metric assesses how effectively each algorithm uses the available resources to
achieve the desired system output, which is critical in resource-constrained IIoT
environments. The efficiency of resource use is determined by the amount of output
per unit of resource consumed. Higher efficiency indicates that an algorithm can process
more tasks or handle more data with less energy, computational time, or other resources.
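As a sketch, this output-per-unit-resource metric can be computed and used to rank the algorithms as follows; the per-algorithm tallies are hypothetical placeholders, not simulation outputs.

```python
def utilization_efficiency(tasks_completed, resources_consumed):
    """Output per unit of resource consumed (e.g. tasks per joule)."""
    return tasks_completed / resources_consumed

# Hypothetical (tasks completed, resource units consumed) per algorithm.
runs = {
    "Our Algo":    (940, 10.2),
    "KA Algo":     (880, 11.5),
    "YA Approach": (905, 11.0),
    "GA Strategy": (860, 12.4),
}

# Rank algorithms from most to least efficient under these tallies.
ranking = sorted(runs, key=lambda a: utilization_efficiency(*runs[a]),
                 reverse=True)
print(ranking[0])  # the most efficient algorithm under these tallies
```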
The simulation results are summarized in Fig. 4, which portrays the resource utilization efficiency of each algorithm.
Fig. 4 showcases the resource utilization efficiency of different algorithms, with Our Algo
leading, indicating it achieves more with fewer resources. Other algorithms, while
efficient, do not match the performance of Our Algo, especially in scenarios of limited
resource availability. This suggests that Our Algo is likely the best option for resource-constrained
environments where maximizing output with minimal input is critical.
Fig. 4. Resource utilization efficiency.
4.3.3 Adaptability and efficiency under dynamic operational demands
To further understand the dynamic adaptability and operational efficiency of the distributed
resource allocation algorithm, we conducted a set of simulations designed to evaluate
how well each algorithm adjusts to changes in the industrial environment's operational
demands.
The simulations vary the intensity and frequency of operational demands, such as
data packet size, data transmission rate, and processing power requirements. The
adaptability of each algorithm is measured by its ability to maintain system
performance without significant delays or drops in data processing quality. Efficiency
is assessed via the algorithm's resource utilization metrics, ensuring that increased
operational demands do not lead to excessive consumption of computational or energy
resources. Data packet size ranges from small to large to simulate different information
payloads. Transmission rates are varied to reflect different data flow scenarios.
Processing power requirements are adjusted to represent different computational
loads.
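A parameter sweep of this kind can be organized as a simple grid over the three workload dimensions; the ranges and the scalar demand proxy below are illustrative assumptions, not the values used in our experiments.

```python
from itertools import product

# Illustrative workload grid for the adaptability sweep: packet sizes
# (bytes), transmission rates (packets/s), and processing loads (MIPS).
packet_sizes = [128, 1024, 8192]
tx_rates = [10, 100, 1000]
cpu_loads = [50, 200, 800]

def demand_intensity(size, rate, load):
    """Scalar proxy for operational demand: offered bits/s plus compute."""
    return size * 8 * rate + load * 1e6

scenarios = [
    {"size": s, "rate": r, "load": c, "intensity": demand_intensity(s, r, c)}
    for s, r, c in product(packet_sizes, tx_rates, cpu_loads)
]
# 3 x 3 x 3 = 27 scenarios, swept from lightest to heaviest demand.
scenarios.sort(key=lambda sc: sc["intensity"])
print(len(scenarios))  # 27
```

Each algorithm would be run over the sorted scenarios, recording throughput, AoI, and resource consumption to trace the degradation curves shown in Fig. 5.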
The results are illustrated in Fig. 5, which displays how each algorithm's performance
metrics respond to the dynamic operational demands imposed during the simulations.
The graph highlights the ability of each algorithm to adapt to changing conditions
and efficiently manage the IIoT environment's resources.
Fig. 5 presents a clear differentiation in the adaptability of resource allocation algorithms
to increasing operational demands. Our Algo maintains a higher level of performance
for longer, suggesting superior efficiency and resilience. The other algorithms show
varying degrees of performance degradation, with GA Strategy declining most rapidly,
indicating a potential area for optimization. Overall, Our Algo stands out for its
robust handling of dynamic workloads.
Fig. 5. Performance under dynamic operational demands.
4.4. Comparative Analysis with Existing Algorithms
To further demonstrate the effectiveness of the proposed algorithm, we conducted
a comprehensive comparison with three existing algorithms: KA (Kadota et al. [24]), YA (Yates et al. [25]), and GA (Gindullina et al. [27]). The comparison focuses on three key performance metrics: Average Age of Information
(AoI), System Throughput, and Resource Utilization Efficiency. Table 2 presents a quantitative comparison of these algorithms under different scenarios:
Table 2. Performance comparison of different algorithms.

| Algorithm     | Average AoI (ms) | System throughput (Mbps) | Resource utilization efficiency (%) |
|---------------|------------------|--------------------------|-------------------------------------|
| Our algorithm | 15.3             | 95.7                     | 92.4                                |
| KA algorithm  | 22.1             | 87.3                     | 85.6                                |
| YA approach   | 19.8             | 90.1                     | 88.2                                |
| GA strategy   | 25.6             | 82.9                     | 79.8                                |
As evident from Table 2, our proposed algorithm outperforms the existing methods across all three metrics.
Our algorithm achieves the lowest average AoI of 15.3 ms, indicating superior information
freshness. This is a significant improvement over KA (22.1 ms), YA (19.8 ms), and
GA (25.6 ms). With a throughput of 95.7 Mbps, our algorithm demonstrates excellent
data transmission capability, surpassing KA (87.3 Mbps), YA (90.1 Mbps), and GA (82.9
Mbps). Our algorithm exhibits the highest efficiency at 92.4%, compared to KA (85.6%),
YA (88.2%), and GA (79.8%), indicating superior resource management. These results
underscore the effectiveness of our AoI-based distributed multi-resource management
approach in optimizing IIoT system performance. The significant improvements in AoI,
throughput, and resource utilization efficiency demonstrate the algorithm's capability
to maintain data freshness while ensuring efficient resource allocation in dynamic
IIoT environments.
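The relative gains implied by Table 2 can be checked directly from the reported numbers (the GA values are those quoted in the accompanying text):

```python
# Metric values reported in Table 2: AoI in ms, throughput in Mbps,
# resource utilization efficiency in %.
results = {
    "Ours": {"aoi": 15.3, "thr": 95.7, "eff": 92.4},
    "KA":   {"aoi": 22.1, "thr": 87.3, "eff": 85.6},
    "YA":   {"aoi": 19.8, "thr": 90.1, "eff": 88.2},
    "GA":   {"aoi": 25.6, "thr": 82.9, "eff": 79.8},
}

def improvement(ours, baseline, lower_is_better=False):
    """Percentage improvement of our algorithm over a baseline metric."""
    if lower_is_better:
        return 100 * (baseline - ours) / baseline
    return 100 * (ours - baseline) / baseline

# AoI reduction relative to the strongest baseline (YA, 19.8 ms).
best_baseline_aoi = min(results[a]["aoi"] for a in ("KA", "YA", "GA"))
print(round(improvement(results["Ours"]["aoi"], best_baseline_aoi,
                        lower_is_better=True), 1))  # 22.7
```

Even against the strongest baseline on each metric, the proposed algorithm retains a double-digit percentage margin on AoI and clear leads on throughput and efficiency.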
5. Discussion and Conclusion
In discussing these results, it is important to consider their practical implications.
The findings suggest that Our Algo demonstrates a strong ability to adapt to the complexities
of an IIoT system, particularly in terms of resource management and operational responsiveness.
It outperforms the other algorithms under a variety of conditions, maintaining lower
AoI and higher throughput, which is indicative of a more efficient real-time data
handling capability. The comparative analysis also highlights the inherent trade-offs
each algorithm makes. For instance, while GA Strategy prioritizes energy conservation,
it does so at the expense of increased AoI. Conversely, Our Algo appears to strike
a more effective balance between maintaining data freshness and resource utilization,
which could translate to improved operational longevity in practical applications.
However, it is crucial to approach these results with cautious optimism. While simulations
are a powerful tool for preliminary assessment, real-world environments present unforeseen
challenges that may affect algorithm performance. The complexity of actual IIoT systems
can introduce variables that were not fully accounted for in the simulations.
The simulation results demonstrate that the method proposed in this paper performs
strongly in dynamic industrial environments, effectively improving system response
speed and resource utilization efficiency and thus providing a new solution for IIoT
system optimization. The potential applications of this method in real IIoT systems
are numerous and promising. For instance, in smart manufacturing, our algorithm could
significantly enhance the real-time monitoring and control of production lines, leading
to improved product quality and reduced downtime. In logistics and supply chain management,
the method could optimize resource allocation for inventory tracking and transportation,
resulting in more efficient operations and reduced costs. Moreover, in energy management
systems within industrial settings, our approach could contribute to more effective
load balancing and energy conservation, aligning with the growing emphasis on sustainability
in industrial operations.
However, it is important to note that the transition from simulation to real-world
implementation may present additional challenges. Factors such as network instability,
hardware limitations, and complex environmental interferences in actual industrial
settings could impact the performance of the algorithm. Therefore, future work should
focus on conducting extensive field trials in various industrial scenarios to validate
and refine the algorithm's performance under real-world conditions. Additionally,
exploring the integration of this method with emerging technologies such as edge computing
and 5G networks could further enhance its applicability and effectiveness in IIoT
systems.
In conclusion, while further research and real-world testing are necessary, the proposed
AoI-based distributed multi-resource management method shows great promise in addressing
the critical challenges of resource allocation in IIoT environments. Its potential
to significantly improve system performance and resource utilization efficiency positions
it as a valuable contribution to the ongoing development and optimization of Industrial
Internet of Things systems.