1. Introduction
Vehicular edge computing (VEC) is an enabling technology for Intelligent Transportation
Systems (ITS), and the Internet of Vehicles (IoV) facilitates feasible solutions that bring
computation capabilities to vehicles [1]. In wireless networks, demand for high computational capability has grown
exponentially due to the continuous increase in mobile applications [2]. Cloud computing is a natural platform for supporting the offloading
of computation from mobile devices [3]. However, cloud data centers are remote networks; they increase latency and network delay, thus
degrading the performance of IoV applications in an ITS [4]. For critical applications, a local approach to offloading is mandatory [5].
Along with the rapid growth of the IoV and artificial intelligence (AI), vehicle networks
have become smart unified networks. Vehicular edge computing enhances services by
offering computational offloading in the vicinity of vehicles. Intelligent
Connected Vehicles (ICVs) connect to one another and to urban traffic networks
to execute intelligent applications. These applications are generally both delay-sensitive
and compute-intensive, so the available resources cannot meet the service demands
of all vehicles [6,7].
Offloading techniques are used in VEC settings to move resource-demanding tasks to
neighboring servers, increasing the effectiveness of vehicle services [8]. Offloading in VEC transfers resource-intensive programs away from local vehicular
devices in order to decrease the workload, overhead, and expense of local execution.
To support computation offloading, both vehicular devices and VEC servers must operate
offloading frameworks. Many technical publications examine this topic and suggest
novel strategies for meeting offloading objectives [9]. To the best of our knowledge, deep learning (DL) has not been the subject of any study
or review regarding offloading, despite its significance and the need for academics
to work in the field [10]. Therefore, this survey examines recent research and investigates various VEC paradigm
strategies, systematically covering DL-based approaches.
The key contributions of this article are:
· reviewing survey studies on deep learning offloading techniques for VEC, highlighting
each one's benefits and drawbacks,
· investigating the most recent deep learning methods for offloading in VEC,
· giving a systematic evaluation of present methods, suggesting a detailed taxonomy,
and
· deliberating future research issues to improve computation offloading in VEC environments.
The remainder of this article is arranged as follows. Section 2 explores the taxonomy
of task offloading techniques. Section 3 presents optimization techniques applied in task offloading.
Different ML techniques for offloading tasks in VEC networks are explained in Section
4. Section 5 discusses various DL techniques for offloading tasks in VEC networks.
A comparison and analysis are presented in Section 6. Finally, the conclusion is presented
in Section 7.
2. Taxonomy of Task Offloading
On roads with heavy traffic flow, the server's limited computational capacity threatens
the quality of the offloading service [11]. By positioning mobile edge computing (MEC) servers at the network edge, the computational
burden on ICVs can be significantly alleviated by offloading [12,13]. It is essential to design a suitable architecture that focuses on improving quality
of service (QoS). By using mobile edge computing as a distributed model that provides
proficient resources closer to vehicles, the response time in the network can be
greatly reduced [14].
A task offloading algorithm proficiently reduces the delay and resource consumption
in multi-user, multi-server VEC environments [15]. The offloading algorithm finds the best position for task deployment and decides
the execution order of the tasks at the server. A taxonomy of task
offloading algorithms that have contributed to the literature in recent years is detailed
in Fig. 1.
Fig. 1. Taxonomy of task offloading schemes in vehicular edge computing.
3. Optimization Techniques Applied in Task Offloading Strategies
This section provides a brief summary of optimization approaches used to address task
offloading difficulties. It offers an overview of the fundamental optimization techniques,
including a few recommended readings, a survey of approaches in the literature, and
a flowchart outlining an application state for an optimization strategy.
For real-time, latency-sensitive applications, the task offloading problem in vehicular
edge computing is crucial. The data generated by VEC should be balanced across the fog
nodes while taking into account network delays, the processing time at fog nodes,
network bandwidth, and the current load on the fog nodes. Therefore, considering the
network properties and the current load on the fog nodes, the fog device selected
for task offloading must fulfill the given QoS limits, notably the response time. This
is an NP-hard problem. Because the numbers of sensors and fog nodes keep increasing, the problem grows
exponentially harder, so employing conventional greedy search techniques is
difficult. The presented approaches are metaheuristic algorithms employing ant colony
optimization (ACO) to overcome this challenge. ACO is a probabilistic metaheuristic
inspired by an ant's capacity to locate the quickest route from the colony to a food
source. Finding the shortest path is one of several optimization problems
solved using the collective foraging behavior of living ants. An ant first
departs from its colony and travels along a randomly chosen course while looking
for nearby food sources. Ants communicate with one another indirectly by releasing
pheromones, a type of chemical signal. The amount of pheromone an ant releases
along the route back to the colony is related to the amount and quality of the food
it has found. As a result, other ants are more likely to follow trails with high
pheromone concentrations.
In the end, every ant takes the quickest route between the food source
and the colony. Each of the $m$ ants chooses path $n$ with the highest pheromone concentration
from among all potential paths. The travelling salesman and task-scheduling problems
are two NP-hard research topics for which ACO has been successfully implemented as an optimization
metaheuristic. Moreover, ant colony optimization is applied to related
cloud scheduling issues, such as scheduling VMs on cloud resources and scheduling
tasks on VMs with the intention of balancing load and lowering response time.
The ACO algorithm is also employed for cloud-based edge computing task scheduling,
and for deadline-aware task scheduling in a tiered edge computing architecture.
The suggested algorithm concentrates on increasing a fog service provider's revenue
while respecting the task completion deadline restrictions of vehicular edge computing
tasks. When searching for a fog node $fh_{i}$ that can decrease the response time,
a workload task from sensor $R_{i}$ is offloaded with a probability of success that,
in the standard ACO transition rule, is expressed in Eq. (1):
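$$P_{ij}(w)=\frac{\left[\tau _{ij}(w)\right]^{\alpha }\left[\eta _{ij}(w)\right]^{\beta }}{\sum _{u}\left[\tau _{iu}(w)\right]^{\alpha }\left[\eta _{iu}(w)\right]^{\beta }} \tag{1}$$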
where $\alpha$ and $\beta$ are heuristic constants; $\alpha \geq 0$ is the heuristic
parameter that regulates the pheromone quantity, and $\beta \geq 1$ is the heuristic parameter
that describes the virtual quality of task offloading; $\eta _{ij}(w)$ symbolizes
the heuristic function that signifies task offloading quality. It is evaluated as
follows:
where $load_{j}$ is the load on fog node $j$; as $S_{ij}$ increases, $load_{j}$ decreases
and $\eta _{ij}(w)$ increases. In that case, there is little chance that the workload of
sensor $i$ will be transferred to fog node $fh_{i}$. The amount of pheromone left on the
task offloading trail by ant $k$ during iteration $w$ is represented by ${\tau }_{ij}^{k}(w+1)$,
which in the standard ACO update is calculated with Eq. (3):
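$$\tau _{ij}^{k}(w+1)=\left(1-\rho \right)\tau _{ij}^{k}(w)+\Delta \tau _{ij}^{k}(w) \tag{3}$$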
where $\Delta {\tau }_{ij}^{k}(w)=1/S_{ij}$, and $\rho$ is the proportion
of pheromone that evaporates, simulating the pheromone evaporation effect at every
single stage. Furthermore, once all the ants have accomplished task offloading, the pheromone
trail is globally updated, i.e., the iteration is completed.
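As a concrete illustration, the following Python sketch applies this ACO-style selection and pheromone update to a toy sensor-to-fog-node assignment; the response-time matrix `S`, the load-based form of the heuristic $\eta$, and all parameter values are illustrative assumptions rather than settings from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem data (assumptions): response time S[i, j] of
# offloading sensor i's task to fog node j, and current node loads.
S = rng.uniform(10.0, 50.0, size=(4, 3))      # sensors x fog nodes
load = rng.uniform(0.2, 0.9, size=3)          # relative load per fog node

alpha, beta, rho = 1.0, 2.0, 0.1              # pheromone weight, heuristic weight, evaporation
tau = np.ones_like(S)                         # initial pheromone trails
eta = 1.0 / (S * load)                        # assumed heuristic: prefer fast, lightly loaded nodes

def select_node(i):
    """Pick a fog node for sensor i using the ACO transition probability (Eq. (1))."""
    weights = (tau[i] ** alpha) * (eta[i] ** beta)
    probs = weights / weights.sum()
    return rng.choice(len(probs), p=probs)

for it in range(100):                         # each iteration, every "ant" builds an assignment
    for i in range(S.shape[0]):
        j = select_node(i)
        # Evaporate, then deposit pheromone inversely proportional to response time (Eq. (3)).
        tau[i] *= (1.0 - rho)
        tau[i, j] += 1.0 / S[i, j]

print("offloading choices:", [int(np.argmax(tau[i])) for i in range(S.shape[0])])
```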
Fog computing overcomes the restrictions of the cloud by making a continuous link
between the edge and the cloud and by shortening the effective distance. However, fog computing
faces significant difficulties when offloading work for remote computation. Therefore,
a key research focus in fog computing is the optimality of work offloading. The particle
swarm optimization (PSO) algorithm [16] is utilized for task offloading in vehicular edge computing. Particle swarm optimization
is called a metaheuristic optimization approach because it relies on a swarm of size $N_{swarm}$,
a population of collaborating individuals. Each member of the swarm (each
particle) represents a place in the search space that corresponds to a solution. A
particle $Q_{i}$ is expressed as a D-dimensional vector in the D-dimensional
search space, $Q_{i}=\{q_{i1},q_{i2},\ldots ,q_{iD}\}$. Each particle's search movement
is determined by the global optimum discovered across all particles as well as by its local
optimum. Eqs. (4) and (5), which in the standard PSO formulation read as follows, show how every
particle $Q_{i}$ appraises its newest location, $A_{i}(w+1)$, after computing the velocity $U_{i}(w+1)$:
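$$U_{i}(w+1)=\omega U_{i}(w)+D_{1}\alpha \left(Q_{i}^{local}(w)-Q_{i}(w)\right)+D_{2}\beta \left(Q^{global}(w)-Q_{i}(w)\right) \tag{4}$$

$$A_{i}(w+1)=Q_{i}(w)+U_{i}(w+1) \tag{5}$$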
where $i=1,2,\ldots ,N_{swarm}$, and $w=1,2,\ldots ,iter_{\max }$ indexes the
iterations; $\omega$ is the inertia weight applied to the preliminary velocity, balancing
exploration with exploitation. At the beginning of the search, a large inertia weight
value is applied if exploration is preferred, whereas a smaller value facilitates more
exploitation. $D_{1}$ and $D_{2}$ are learning coefficients; $\alpha$ and $\beta$ are
uniformly sampled random numbers in the range [0,1]. PSO was first suggested as a method for resolving issues in continuous domains.
Because the planning of mobile task offloading happens in a discrete search space,
PSO needs to be adjusted to the domain.
The position of particle $P_{ij}$ exemplifies the offloading of sensor node $S_{node}$ to
fog node $f_{node}$. The representations of the task offloading sensor and the fog nodes
are discrete. The particle's position and velocity are updated in every iteration by Eqs.
(4) and (5). Nevertheless, the value of every particle position needs to be converted to a discrete numerical
value using Eq. (6):
where $N_{node}$ is the fog node count, and $\left\lfloor | P_{ij}| \right\rfloor$
signifies the floor of the absolute value of $P_{ij}$ for particle $P_{j}$. The approach begins
by setting the number of iterations, $N_{iter}$; the parameters used in the velocity update,
$\alpha$, $\beta$, and $\omega$; and the probability of mutation, $P_{mutation}$. Additionally,
the proposed particle swarm optimization algorithm sets the position $P_{i}$ and
velocity $U_{i}$ of every particle to random values. Every particle is evaluated with a fitness
function, which interprets the particle encoding and exposes the solution's quality.
The fitness is evaluated with Eq. (7):
The solution is considered better when the fitness function value is higher. Every
particle's local optimum is determined after computing its fitness. The global optimum
is set to the swarm's highest fitness level. The algorithm repeats for the
set number of iterations, $N_{iter}$. Eqs. (4) and (5) update each particle's velocity and position after each iteration, and
Eq. (6) transforms each particle's position into a discrete number. The newest local and global
optima are retained by the PSO algorithm. Every particle is then subjected
to a random mutation: to create a novel task offloading assignment for a particle, the particle
values of a random pair of nodes are swapped during the mutation process.
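A minimal Python sketch of this discrete PSO loop is given below; the fitness function, the floor-and-modulo discretization standing in for Eq. (6), and the parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

N_sensor, N_node = 8, 4                       # sensors to offload, fog nodes
N_swarm, N_iter = 20, 100
omega, D1, D2, P_mutation = 0.7, 1.5, 1.5, 0.1

S = rng.uniform(10.0, 50.0, size=(N_sensor, N_node))   # assumed response-time matrix

def fitness(pos):
    """Assumed fitness (Eq. (7) stand-in): higher is better, so reward low total response time."""
    nodes = np.floor(np.abs(pos)).astype(int) % N_node  # discretization standing in for Eq. (6)
    return 1.0 / S[np.arange(N_sensor), nodes].sum()

P = rng.uniform(0, N_node, size=(N_swarm, N_sensor))   # particle positions
U = rng.uniform(-1, 1, size=(N_swarm, N_sensor))       # particle velocities
local_best = P.copy()
local_fit = np.array([fitness(p) for p in P])
global_best = local_best[local_fit.argmax()].copy()

for w in range(N_iter):
    a, b = rng.random((N_swarm, 1)), rng.random((N_swarm, 1))
    U = omega * U + D1 * a * (local_best - P) + D2 * b * (global_best - P)  # Eq. (4)
    P = P + U                                                               # Eq. (5)
    for i in range(N_swarm):
        if rng.random() < P_mutation:         # mutation: swap the values of two random entries
            j, k = rng.choice(N_sensor, size=2, replace=False)
            P[i, j], P[i, k] = P[i, k], P[i, j]
        f = fitness(P[i])
        if f > local_fit[i]:
            local_fit[i], local_best[i] = f, P[i].copy()
    global_best = local_best[local_fit.argmax()].copy()

print("best assignment:", np.floor(np.abs(global_best)).astype(int) % N_node)
```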
Numerous tasks are sent to a cloud server because mobile devices in cloud environments
have processing limits. In the 20 years following the introduction of the cloud paradigm,
this has increased the effectiveness of mobile applications. However,
because a cloud server is typically located far from mobile users, task offloading to the cloud
might not be an appropriate solution for delay-sensitive mobile applications.
A joint optimization algorithm (JOA) [17] is applied to solve this issue. In this model, the edge server near the access point
(AP) is not required to handle a vehicle's computing needs, which may lessen the weight
on a few hot edge servers and achieve system load balancing.
Additionally, the JOA aids in enhancing QoS and cutting down the queuing delays of
computing tasks carried out on edge servers. The distance, however, introduces
an additional communication lag. When a task is deployed to an edge server $j$, the
queuing delay, denoted ${W}_{j,k}^{Q}$, is calculated by adding the
estimated execution time in the edge server for every task in $j$'s queue to the
task's own completion time. The communication latency from AP $i$
to edge server $j$ is given by Eq. (8):
where $dis_{i,j}$ stands for the distance between edge server $j$ and AP $i$. If the
vehicle's computing function is transferred to an edge server close to the associated
AP, then ${W}_{i,j}^{C}=0$.
Since the data size of a task $u_{l}$ is enormous while each cell's coverage is only
marginally sufficient, an entire task is divided into task units (TUs) that are offloaded
to numerous edge servers in successive cells. A maximal count of TUs should be finished
in each cell to meet the task completion deadline. Based on the computing and network
resources accessible in the cells, there is a maximal count of TUs that can be completed
in the local vehicle, and likewise a maximal count of TUs that can be finished by the
edge servers, ${N}_{k}^{s}$. For cells in which TUs can be finished in the vehicle, assuming
that vehicle $k$ is entering the coverage of cell $L_{s}$, the time the vehicle stays in
the cell can be calculated with Eq. (9):
where ${v}_{k}^{s}$ denotes the speed of vehicle $k$ in the cell. Depending upon the time the
vehicle stays in the cell, the maximal count of TUs completed locally is calculated
with Eq. (10):
The maximal count of TUs processed remotely, denoted ${N}_{off,k}^{max,s}$, is equal to
the number of TUs that the vehicle can offload onto an edge server. The length
of time spent in the cell, the channel conditions there, and the power of the edge server's
processor affect the value of ${N}_{off,k}^{max,s}$. The data transmission rate
within the cell is represented by the average uplink rate to all APs, as described
in Eq. (11):
The average computing capacity in the cell, i.e., the average rate for processing offloaded tasks, is estimated similarly.
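To make the dwell-time bound concrete, a small Python sketch follows; since the survey does not reproduce the bodies of Eqs. (9) and (10), it assumes the dwell time is the cell length divided by the speed and that the local TU count is the dwell time divided by the per-TU processing time, with illustrative parameter values.

```python
# Assumed cell and vehicle parameters for illustration.
cell_length_m = 500.0        # coverage length of cell L_s (assumed)
v_k_s = 16.7                 # vehicle speed in the cell (m/s)
tu_cycles = 2.0e8            # CPU cycles per task unit (assumed)
f_local = 1.0e9              # vehicle CPU frequency (cycles/s, assumed)

# Eq. (9), assumed form: dwell time = cell length / speed.
dwell_s = cell_length_m / v_k_s

# Eq. (10), assumed form: TUs finishable locally while inside the cell.
n_local_max = int(dwell_s // (tu_cycles / f_local))

print(f"dwell time: {dwell_s:.1f} s, max local TUs: {n_local_max}")
```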
4. Machine Learning Task Offloading Techniques for VEC Networks
Different machine learning methods are utilized for optimal task offloading in VEC networks.
Recently, task offloading in VEC networks has been performed through various machine
learning strategies, some of which are discussed below.
To meet the demand for rapid offloading in vehicle networks, an effective
task offloading algorithm based on the support vector machine (SVM) was proposed [18]. Through a weight allocation mechanism that takes into account the MEC servers' available
resources, the algorithm divides a large task into multiple smaller ones. SVMs are then
used to determine whether every sub-task should be executed locally or
offloaded. If they are offloaded, sub-tasks are assigned as the vehicle passes MEC
servers, and every server guarantees that its sub-task will be completed and returned
on time. Along a straight road, $N$ road side units (RSUs) are each connected to a MEC
server. Vehicles access a road side unit only where they are situated, and each RSU
covers a single region. The set of road side units
is expressed in (12):
The set of MEC servers is expressed in (13):
The proposed offloading technique assumes that moving vehicles segment tasks into minor
sub-tasks. The first sub-task is uploaded to $M_{1}$ when the vehicle first enters region $R_{1}$,
and $M_{1}$ then completes it as the vehicle moves. Before leaving $R_{1}$, the vehicle
can quickly receive the result. The final decision function for task offloading
is expressed in Eq. (14):
With respect to this decision function, a task $X$ is categorized such that if $f\left(X\right)>
0$, the label is $y=+1$ and the task is offloaded; if $f\left(X\right)< 0$, the label is $y=-1$
and the task is completed locally. Task offloading using the SVM maximizes the efficiency
of the energy consumed, based on the offloading decisions and the allocated resource block
count.
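The decision rule can be sketched with scikit-learn as follows; the sub-task features (input size, CPU cycles, deadline) and the synthetic training labels are assumptions, and the learned decision function merely stands in for the trained classifier of Eq. (14).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Assumed sub-task features: [input size (MB), CPU cycles (GHz*s), deadline (s)].
X_train = rng.uniform([0.1, 0.1, 0.05], [20.0, 5.0, 2.0], size=(200, 3))
# Synthetic labels for illustration: offload (+1) heavy tasks relative to their deadlines.
y_train = np.where(X_train[:, 1] / X_train[:, 2] > 2.0, 1, -1)

clf = SVC(kernel="rbf").fit(X_train, y_train)

new_task = np.array([[8.0, 3.0, 0.5]])
f_X = clf.decision_function(new_task)[0]      # stands in for f(X) in Eq. (14)
decision = "offload" if f_X > 0 else "execute locally"
print(f"f(X) = {f_X:.2f} -> {decision}")
```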
Unsupervised machine learning uses the K-means clustering algorithm [19], which trains the type of model used to categorize the tasks in VEC
networks. A freshly arrived task is then assigned to a set of tasks with comparable
features. Here, the three task features shown in Eq. (15) are considered for task offloading and classification:
Tasks are stochastically allocated to computing nodes to determine their features.
After implementing several tasks, the task administrator creates lists of training
data with separate features and workload categories, like OLTP, streaming, web serving, and graphics.
The Euclidean distance is applied to choose the centroid $C_{k}\left(k=1,2,3\right)$ nearest
to a task $t_{j}$, as expressed in Eq. (16):
Next, by comparing the distances $d\left(t_{j}-c_{1}\right)$, $d\left(t_{j}-c_{2}\right)$,
and $d\left(t_{j}-c_{3}\right)$, the task is assigned to the cluster with the
smallest distance value. This task offloading system is inappropriate for highly integrated
or comparatively simple tasks that cannot be partitioned.
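The nearest-centroid assignment of Eq. (16) can be sketched in a few lines of Python; the three feature dimensions and the centroid values are illustrative assumptions.

```python
import numpy as np

# Assumed centroids C_1..C_3 over three task features
# (e.g., input size, CPU demand, I/O intensity), one per workload category.
centroids = np.array([
    [1.0, 0.5, 8.0],   # OLTP-like
    [4.0, 2.0, 3.0],   # streaming-like
    [2.0, 6.0, 1.0],   # compute-heavy / graphics-like
])

def categorize(task):
    """Assign a new task to the cluster with the smallest Euclidean distance (Eq. (16))."""
    d = np.linalg.norm(centroids - task, axis=1)   # d(t_j - c_k) for k = 1..3
    return int(d.argmin()), d

cluster, dists = categorize(np.array([3.5, 1.8, 2.5]))
print(f"distances: {np.round(dists, 2)} -> cluster {cluster + 1}")
```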
5. Deep Learning Task Offloading Techniques for VEC Networks
This section presents different deep learning methods utilized for optimal task offloading
in VEC networks. Recently, task offloading has been conducted through various deep
learning strategies as discussed below.
Deep Q-learning [20] empowers task offloading for vehicular edge computing in urban informatics. To proceed
with optimal task offloading, the known offloading tactic is initially denoted $\pi$,
which is derived from the system's action $b^{l}$ in state $S^{l}$ using the
Q-function expressed in Eq. (17):
Value and strategy iteration can be used to determine the maximal utility as
well as the best task offloading tactic. The iterations can be carried out using the
Q-learning technique from reinforcement learning. The Q-value function in the learning procedure
is adjusted in each iteration; in the standard Q-learning form, the update in Eq. (18) is:
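$$Q\left(S^{l},b^{l}\right)\leftarrow Q\left(S^{l},b^{l}\right)+\alpha \left[r^{l}+\gamma \max _{b'}Q\left(S^{l+1},b'\right)-Q\left(S^{l},b^{l}\right)\right] \tag{18}$$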
where $\alpha$ indicates the learning rate ($r^{l}$ and $\gamma$ in the standard form above
denote the reward and the discount factor). On the basis of this Q-function assessment, applying deep
Q-learning attains the optimum offloading strategy, denoted $\pi ^{\ast }$. A neural
network approximates the Q-function as a Q-network whose group of network parameters is denoted $\theta$.
By utilizing the Q-network, the Q-function in Eq. (17) is approximated as Eq. (19):
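$$Q'\left(S^{l},b^{l};\theta \right)\approx Q\left(S^{l},b^{l}\right) \tag{19}$$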
Depending on $Q'$, the optimum task offloading method in every state, acquired from
experience, results in higher utility. Hence, deep Q-learning for task offloading
in VEC minimizes the cost of using computing resources as well as the latency.
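A tabular sketch of the update in Eq. (18) on a toy local-versus-offload decision is shown below; the state space, reward model, and hyperparameters are assumptions chosen only to show the mechanics, and a deep Q-network would replace the table with the neural approximator parameterized by $\theta$.

```python
import numpy as np

rng = np.random.default_rng(3)

n_states, n_actions = 5, 2          # assumed discretized load states; actions: 0 = local, 1 = offload
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Assumed toy environment: offloading pays off when the local load state is high."""
    reward = (s / (n_states - 1)) if a == 1 else (1.0 - s / (n_states - 1))
    return reward, rng.integers(n_states)       # next load state arrives randomly

s = rng.integers(n_states)
for t in range(5000):
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    r, s_next = step(s, a)
    # Q-learning update, Eq. (18).
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

policy = ["local" if Q[s].argmax() == 0 else "offload" for s in range(n_states)]
print("learned policy by load state:", policy)
```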
Convolutional neural networks (CNNs) [21] are used for task offloading strategy prediction in VEC. A deep learning-based task
offloading system predicts the offloading result with a binary classifier
(i.e., success/failure), while service latency prediction is treated as a regression.
The method calculates the task offloading efficiency and makes a task offloading decision
based on the predicted results, with the mode determined from the offloading data history.
The method contains three layers: vehicle, edge, and cloud. The vehicles
are in contact with road side units through wireless channels. Vehicles offload tasks,
first, to roadside units equipped with edge servers via vehicle-to-infrastructure (V2I)
wireless communications; second, to a cloud server through the RSUs; and third, to a
cellular BS. Here, the CNN is utilized for task offloading in the VEC network. The
CNN primarily comprises three modules: convolution, pooling, and recombination layers.
The convolution module collects local information and creates features. The recombination
module produces new features by combining raw features, counteracting the reduction in
the number of features and increasing the input feature space. In this strategy,
every component is designated as follows. After feature preprocessing, every
single input sample is converted from a $14\times 1$ vector to a $16\times 40$ matrix,
where $40$ is the embedding dimension chosen for the embedding process
and two more rows are added to the 14 raw features. The $16\times 40$ matrix
is sent to the convolutional layer, and the feature map output becomes $\left(16,40,6\right)$.
The convolutional layer output is compressed by the pooling layer. To retain the
maximal value in the observation window, the suggested model uses max-pooling at the pooling
layer. In general, the max value frequently offers more details than the mean value,
as mean-pooling typically dilutes a feature map's data.
An $h\times 1$ pooling window handles the convolution output. For feature consolidation,
the width is fixed to 1 to ensure that the feature map is not subsampled along the width
dimension. Denoting the first convolution output by $C^{1}$, the
first pooling layer output $P^{1}$ is expressed in Eq. (20):
where $x$ and $y$ index the rows and columns of the feature map. The second convolutional
layer receives its input from the first pooling layer's output, as provided by Eq.
(21):
where $P^{i}$ indicates the output of pooling layer $i$, and $E^{i+1}$ indicates the input of
convolutional layer $i+1$. After convolution and pooling, the recombination layer is scheduled.
Local feature crossings exist between the feature maps that travel through the convolutional
and pooling layers. However, directly sending the pooling layer's output feature maps to a
densely connected layer results in global feature loss together with less data
density. To overcome this, the pooling layer's output is recombined in the recombination
layer using a fully connected neural network. The recombination layer's
output consists of new features created through feature crossover. The task offloading prediction
model is updated to include both the new features and the raw features. The new features are defined
in (22):
where $i$ indicates the total count of convolution, pooling, and recombination cycles. Afterwards,
using (23), the new features are added to the base features:
Let $E$ denote the input, $E'^{T}$ the service delay, and $R^{T}$ the offloading
result. The prediction model contains several hidden layers; the input to the first hidden
layer of the fully connected network is represented by $I_{1}$ in (24):
where $Flatten$ indicates that the original two-dimensional feature matrix $E$ is converted
to a vector. The output of hidden layer $i$ is represented as $O_{i}$, given that $I_{i}$
is the input of hidden layer $i$:
Here $W^{i}$ indicates the weight matrix of hidden layer $i$, $B^{i}$ indicates the bias of
hidden layer $i$, and $\mathrm{Re}lu$ denotes the activation function. The task offloading
prediction model uses a multi-input network design. There are two processing units
in the network: (i) a task offloading result classifier, and (ii) a service-delay regressor.
Predicting the task offloading outcome is a binary categorization, hence binary
cross-entropy is used as the classifier's loss function. Predicting the task offloading
service latency is the regressor's challenge, and MSE is used as the regressor's loss
function. The outcome of the classifier is expressed in (26):
Here $I_{nh}$ indicates the final hidden layer activation result. Task offloading outcome
prediction is a two-class sorting problem, and the classifier's activation function
produces a task offloading success rate using the $Sigmoid$ function. Finally, the
suggested CNN-based task offloading strategy prediction in VEC minimizes the overall
computation overhead, expressed as a weighted sum of the task completion time and the monetary
cost of scaling resources.
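A compact Keras sketch of this multi-input, two-head architecture follows; the $16\times 40$ input, the six-filter convolution, the $h\times 1$ max-pooling, the recombination and concatenation steps, and the two loss functions follow the description above, while the kernel shape, hidden widths, and optimizer are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Embedded input sample: a 16x40 matrix with one channel.
raw = layers.Input(shape=(16, 40, 1), name="embedded_features")

# Convolution: 6 filters with 'same' padding gives a (16, 40, 6) feature map.
x = layers.Conv2D(6, kernel_size=(3, 3), padding="same", activation="relu")(raw)
# Max-pooling with an h x 1 window (h = 2 assumed) so the width is never subsampled.
x = layers.MaxPooling2D(pool_size=(2, 1))(x)

# Recombination: a fully connected layer crosses the pooled features into new features.
new_feats = layers.Dense(64, activation="relu")(layers.Flatten()(x))

# Concatenate the new features with the flattened raw features (Eqs. (22)-(23)).
merged = layers.Concatenate()([new_feats, layers.Flatten()(raw)])

# Shared hidden layers with ReLU activations (Eq. (25)).
h = layers.Dense(128, activation="relu")(merged)
h = layers.Dense(64, activation="relu")(h)

# Two heads: sigmoid classifier for offloading success, linear regressor for service delay.
result = layers.Dense(1, activation="sigmoid", name="offload_result")(h)
delay = layers.Dense(1, name="service_delay")(h)

model = Model(inputs=raw, outputs=[result, delay])
model.compile(
    optimizer="adam",
    loss={"offload_result": "binary_crossentropy", "service_delay": "mse"},
)
model.summary()
```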
Deep neural network (DNN)-based energy-efficient task offloading [22] obtains user association schemes in vehicular
edge computing systems. The DNN input comprises the large-scale fading components
represented in (27):
the users' data sizes expressed in (28):
and the users' latency requirements expressed in (29):
where the parameters $\overline{h}$, $D$, and $T$ determine the users' association mode
based on the nearest roadside unit's association scheme. A one-step exploration
is developed, which modifies one of the $K$ users' association schemes while maintaining the
other user associations. Every user accesses the other $M-1$ RSUs in a one-step exploration,
which is expressed in (30):
where $N_{one}$ indicates the one-step exploration, $K$ denotes the association scheme
created from random exploration, and $M$ denotes the number of randomly selectable road side units.
The output layer produces values in the range [0,1] with the Sigmoid function. Subsequently, binary output values are acquired by choosing,
for every user, the road side unit with the maximal output value. A data preprocessing
procedure comprising integration and normalization modes is executed, as represented
in Eq. (31):
where $x_{km}$ is obtained in variable units. Then, the input is normalized to values
between $0$ and $1$, computed using Eq. (32):
where $km$ denotes the number of neurons on the input and output layers. Next is the
training phase, in which the classifier's activation function produces the task offloading
decision. The task offloading scheme in VEC systems thereby minimizes the vehicles' total
energy consumption and bit allocation. The nearby road side unit suggestion is determined;
nevertheless, the scheme avoids exhaustive systematic solutions.
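The preprocessing and output-thresholding steps can be sketched in numpy as follows; the min-max form assumed for the normalization of Eq. (32) and the random stand-in for the trained network's sigmoid outputs are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

K, M = 6, 4                                   # users, road side units
# Raw inputs in varying units: large-scale fading, data size, latency requirement.
x = np.stack([rng.uniform(1e-6, 1e-3, K),     # h_bar
              rng.uniform(0.1, 20.0, K),      # D (MB)
              rng.uniform(0.05, 2.0, K)],     # T (s)
             axis=1)

# Assumed min-max normalization to [0, 1] per feature (standing in for Eq. (32)).
x_norm = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

# Stand-in for the trained DNN: a real model would map x_norm to these sigmoid scores.
scores = 1.0 / (1.0 + np.exp(-rng.normal(size=(K, M))))

# Binary association: each user picks the RSU with the maximal output value.
association = np.zeros((K, M), dtype=int)
association[np.arange(K), scores.argmax(axis=1)] = 1
print(association)
```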
6. Comparison and Analysis
This section analyzes the investigated papers based on several criteria for
multi-dimensional task offloading techniques in VEC networks, which are given in
Table 1. The decision-making techniques in task offloading are then compared in Table 2.
Fig. 2 shows a summary of mathematical optimization task offloading algorithms.
Fig. 2. Summary of mathematical optimization task offloading algorithms.
Table 1. Comparison of Multi-Dimensional Task Offloading Techniques in Vehicular Edge Computing Networks.

| Authors | Method | Advantage | Disadvantage |
| --- | --- | --- | --- |
| You and Tang [16] | Particle Swarm Optimization | Attains considerable performance with accuracy for offloading decision coordination | High computation and communication overhead; low search efficiency; high computational complexity |
| Peng et al. [17] | Joint Optimization | Maintains offloading quality with limited resources | Limited edge server service time is not considered |
| Wu et al. [18] | Support Vector Machine | Efficient utilization of idle resources; latency-sensitive | Does not consider user mobility; slow convergence speed; poor initial performance |
| Ullah and Youn [19] | K-Means Clustering | Energy efficiency; online scheduling | High costs for data storage and transmission |
| Zhang et al. [20] | Deep Q-Learning | Joint communication, caching, and computation scheduling | Difficulty maintaining data security in vehicular edge computing |
| Zeng et al. [21] | Convolutional Neural Network | Edge can work without the cloud and improves data security | Limited storage capacity; low processing power |
| Shang et al. [22] | Deep Neural Network | Parameter tuning converges on a better solution in a compromise between quality and speed | Edge computing needs a proprietary network with high power consumption |
Table 2. Benchmarks from Multi-Dimensional Task Offloading in VEC Networks.

| Method | Average Task Completion Time (ms) | System Cost (RMB) | Total Processing Delay (ms) | Maximum Delay among Vehicles (s) | Running Time (s) |
| --- | --- | --- | --- | --- | --- |
| Ant colony optimization | 27 | 43 | 1063 | 0.7 | 0.55 |
| Particle swarm optimization | 17 | 56 | - | - | 0.43 |
| Joint optimization | - | 71 | 1567 | 1.3 | 0.21 |
| Support vector machine | 32 | - | 3749 | - | - |
| K-means clustering | 18 | 59 | 3995 | 1.5 | - |
| Deep Q-learning | - | 45 | - | 0.9 | 0.47 |
| Convolutional neural network | 15 | 52 | 2645 | - | 0.36 |
| Deep neural network | 26 | - | - | 1.1 | 0.11 |
7. Conclusion
In VEC networks, task offloading mechanisms minimize the costs of the system, including
communication cost and computation cost, while the task meets the minimum
allowable delay and further constraints are satisfied, mainly by considering the characteristics
of fast-moving vehicles. This article presented a complete overview
of existing work associated with task offloading in VEC. It explored fog
computing as well as task offloading procedures shaped by the numerous task offloading
factors guiding the decision-making procedures. Numerous optimization and deep
learning approaches to task offloading were described. Furthermore, this article
surveyed deep learning strategies used for the task offloading process and
dealt with the identified gaps.
7.1 Open Issues
This section highlights several unresolved algorithmic and structural concerns
in DL-based offloading methods. The new open concerns and major obstacles are highlighted
based on future study paths and open viewpoints of DL-based offloading techniques.
7.2 Future Research Directions
The technical query that should be addressed is: what are the open research questions
and directions for ML offloading methods? Responding to this query, we categorize the topic
into five problems: scheduling, interoperability, mobility, scalability, and security.
These reflect naturally dynamic behaviors. Applying newer deep learning approaches in VEC
offloading is more suitable for improving the associated offloading measures than using
straightforward classic methods, owing to the higher data transmission rates brought
on by this dynamism and the absence of prior knowledge for dealing with the specified
drawbacks.
A. Scheduling
Offloading destination (single or multiple servers), hardware resources, task allocation, and parallelism
need to be deliberated within the scope of scheduling. The CPU, storage, and energy capacities
of vehicular devices are constrained, and according to their assigned roles, edge servers
should be more resource-capable than vehicular devices, although they still have some
restrictions. Scheduling should be carried out based on the devices' most recent positions.
Because deep learning-based studies have paid insufficient attention to hardware
resources (CPU, RAM, and storage), hardware and software parallelism should be considered
open concerns in scheduling enhancement. Offloading tries to move some of the burden from
local (vehicular) devices to distant servers to alleviate resource constraint issues,
to increase overall efficiency, and to potentially reduce some expenses associated
with the VEC environment. Offloading is directed to a single server or a group of
servers. When there is only one server, the code to be offloaded is delivered to one
place, and that location is responsible for handling the response. In
a multi-server system, scheduling the proper server to maximize important metrics is a
vital topic that should be regarded as an open issue. Utilizing the parallelism concept
in VEC offloading lessens the resource limitations to a certain extent. Investigators
should generally focus more on using unsupervised learning processes to suggest novel
scheduling strategies; thus, training models could be enhanced for offloading-related
difficulties in VEC. Besides, to address the many facets of multi-objective and
non-linear issues, hybrid deep learning-based models can be a better
concept. Traditional ML-based processes have some complications, and the large dimensional
overhead needs to be removed; hence, newer techniques like deep learning combined with an
optimization algorithm should be used.
B. Interoperability
The primary interoperability issues fall into three modes: intercommunication, architecture
and system models, and the interface or controller required to enable interoperability.
To integrate security procedures as part of intercommunication, it is essential to
consider the system's interoperability as a challenge. A controller is a necessary
intermediate interface in an interoperable system, because it makes it easier
for the system's components to communicate with one another. The architecture and the
system model must be effectively adaptable to prepare such interconnections between vehicular
devices and servers. Researchers must develop new deep learning techniques to address
interoperability difficulties, because the VEC environment has highly dynamic behavior
due to its higher data rates and its heterogeneity.
C. Mobility
Communication, dynamism, and protocols are a few important obstacles to mobility.
New obstacles arise in the offloading context of the VEC environment due to mobility
capabilities. Owing to rapid mobility across multiple locations, vehicular devices
with highly dynamic behavior may need to switch between dedicated servers that are
dispersed across a large geographic area. The fundamental challenge is the requirement
for an adequate mobility management system that retains connectivity with an edge server,
even after separation from the origin server, to sustain highly dynamic content delivery.
Related organizations must provide uniform protocols and unified platforms to maintain these
connections. Mobility presents a significant challenge in various research domains,
like unmanned aerial vehicles, intelligent transportation systems, and VANETs, necessitating
the development of new methodologies to adequately address these challenges. Interoperability
and mobility are strongly associated with one another in successfully carrying
out the obligations assigned by offloading. Mobility concerns have not received
adequate attention in the research on VEC environments, despite their significance.
D. Scalability
Resources, applications, load balancing, and connections pose the biggest obstacles
to scaling. Effectively handling diverse vehicular devices, as well as servers, in a
geographically expansive VEC system with highly dynamic request behavior is required
to meet scalability demands. Another unresolved problem is that the server must
be adequately scalable to accomplish load balancing by including or eliminating
services to assist a service that is about to become a bottleneck or unreachable.
The network topology might be made more adaptable for dealing with these issues and
recovering from unfavorable conditions, but this procedure has huge costs.
E. Security
Finding strategies to make the system more resilient to unforeseen threats encapsulates
the biggest security challenges. These difficulties can primarily be divided into
three categories: security control types, security extent, and techniques for regulating
the effects of protection risks. Protective and detective response methods are commonly
recommended among the kinds of safety control. These safety measures are
implemented in the network and across the extent of the data. Implementations of
these safety modes are typically classified as authentication, authorization, and accounting.
During authentication, entity recognition of the requester is taken into
consideration when identifying the owners of tasks and applications. In accounting,
the count of resources consumed by tasks is recognized, successfully fulfilling offloading
objectives with accessibility to a particular service. Network traffic must be properly
controlled in order to protect against threats, so that preventative, detective,
or responsive measures can be taken automatically in response to each sort of threat.
In preventive measures, the system is regulated to stop any malicious behavior from
occurring. If malicious action is ongoing, the safety strategy is in charge of identifying
it and, in response, taking the necessary actions. As a consequence, securing VEC ecosystem
applications from unrestricted access, and ensuring the integrity of related data, is among
the open issues in the literature.