
  1. ( School of Interdisciplinary Engineering & Sciences (SINES), National University of Sciences & Technology, H-12, Islamabad, Pakistan msarwar.mscse22sines@student.nust.edu.pk )
  2. ( Department of Software & Communication Engineering, Hongik University, Sejong, Korea ghulammusaraza96@gmail.com, jsnbs@hongik.ac.kr)
  3. ( School of Electrical Engineering & Computer Science (SEECS), National University of Sciences & Technology, H-12, Islamabad, Pakistan msarwar.mscs19seecs@seecs.edu.pk )



Keywords: Artificial intelligence, Machine learning, Deep learning, Information and communication technology (ICT), Human-computer interaction (HCI)

1. Introduction

As we traverse the timeline of the 21st century, Information and Communication Technology (ICT) has been the crucible for innovations that have deeply transformed every facet of our existence [1]. At the forefront of this digital metamorphosis lie the dynamic twins of technological advancement: Artificial Intelligence (AI) and Machine Learning (ML). These powerhouse technologies have surpassed their embryonic phases, evolving into integral elements of complex ICT infrastructures. Presently, they are driving a multitude of intelligent applications across diverse sectors, automating processes, enhancing efficiency, and unlocking data-driven insights with unprecedented depth.

The central objective of this study is to deliver an in-depth exploration of contemporary AI and ML technologies, with a spotlight on their utilization and ramifications within the ICT sphere. Through dissection of the elemental principles underpinning AI and ML, comparative analyses of available frameworks, investigation into the challenges and constraints of these technologies, and probing into real-world instances, this research aims to furnish a detailed roadmap to current practices and emergent trends within the ICT context.

This treatise journeys through the conceptual dimensions of AI and ML, encapsulating diverse machine learning algorithms and the vital architecture of AI systems. It further delves into their pragmatic applications within the ICT industry, spanning domains such as natural language processing, image recognition, data-driven decision-making, recommendation systems, and more. Additionally, this paper addresses the hurdles and ethical conundrums associated with AI and ML, offering a comprehensive dissection of their limitations along with an evaluation of proposed remedies. To cap it all, the paper navigates a myriad of case studies and traces potential future trajectories for AI and ML in ICT, aspiring to proffer a panoramic perspective of the evolving dynamics of these technologies. While this research refrains from exploring the intricate technicalities of AI and ML implementations, it endeavors to project a bird's-eye view of their escalating influence and repercussions within the ICT sector.

The remainder of this paper is organized as follows. Section 2 reviews the literature on the convergence of AI and ML within ICT. Section 3 explores AI and ML fundamentals. Section 4 investigates the applications of AI and ML in ICT. Section 5 explains frameworks and metrics in the context of AI and ML in ICT. Section 6 addresses the challenges and limitations of AI and ML in ICT. Section 7 presents several case studies. Section 8 examines future and emerging trends in AI and ML, especially in ICT. Finally, Section 9 offers concluding remarks.

2. Literature Review

In the preceding decade, there has been a meteoric ascent in scholarly works probing the convergence of AI and ML within ICT. Far from being a supplementary insertion, this amalgamation has precipitated a radical transformation that is recalibrating the entire ICT topography. A plethora of investigations has dissected the theoretical underpinnings of AI and ML, along with their practical applications across an array of ICT domains [2]. Once confined to the realm of theoretical abstraction, these technologies have transitioned into omnipresent facets of sophisticated ICT architectures.

Considerable research has been conducted to study the applications of AI and ML in ICT. Omar et al. [3] developed a cloud-based recommendation system to address big data challenges in social networking platforms. They employed matrix factorization using three approaches: traditional singular value decomposition (SVD), the alternating least squares (ALS) algorithm with Apache Spark, and a deep neural network (DNN) algorithm using TensorFlow. Their work improved computational efficiency and demonstrated the potential of AI and ML in revolutionizing information retrieval and user experiences in big data environments.

Annam et al. [4] proposed an IoT architecture that utilizes machine learning techniques to evaluate road safety. Their approach identifies potential road hazards and provides valuable insights for maintenance on a larger scale. This work contributes to the advancement of Intelligent Transportation Systems and exemplifies the promising potential of AI and machine learning to revolutionize the field of road infrastructure maintenance and safety. Ullah et al. [5] explored the transformative impact of AI and ML on the evolution of smart cities. They provided a comprehensive overview of AI, ML, and Deep Reinforcement Learning (DRL) applications in diverse domains. Their review revealed the profound potential of AI and ML in optimizing policies, decision-making, and service provision, ultimately realizing the vision of smart cities. This work elucidated how AI and ML contribute to redefining urban landscapes through intelligent technologies.

Similarly, a significant body of literature has delved into the various AI and ML frameworks that serve as the backbone of these applications. Comparative studies [6] have provided insight into the performance metrics of these frameworks, their strengths, and areas for improvement.

Deep learning algorithms have advanced AI and ML considerably in recent years, enabling systems to match or outperform humans in image recognition, natural language processing, and machine translation; this paradigm shift has ushered computing into a new age. The integration of cloud computing and AI is another major development: it allows enterprises to scale their AI applications efficiently and affordably, and it has transformed company operations and innovation, increasing productivity across industries. Edge computing represents a further advance. By bringing AI closer to the data source, it reduces latency, optimizes bandwidth, and boosts performance, opening the door to real-time data processing, faster decision-making, and more sophisticated AI applications.

AI is also becoming increasingly important in healthcare, where accurate diagnosis, personalized treatment planning, and fraud detection improve outcomes and efficiency, demonstrating how AI and ML can affect critical domains. In transportation, AI has been disruptive through driverless cars, optimized logistics, and intelligent traffic-control systems, with the potential to revolutionize the sector by improving both efficiency and safety. Together, these trends demonstrate the capacity of AI and ML to change ICT and other areas, pointing to a future in which they drive technological innovation and help address some of the world's biggest problems. In conclusion, the literature suggests that academic and industry interest in, and investment toward, AI and ML in ICT is high.
This research paper examines the existing and future uses of AI and ML in ICT to add to this increasing body of knowledge.

3. AI and ML Fundamentals

AI, a wide-ranging discipline within computer science, aspires to architect systems equipped with capabilities mirroring human intellect [7]. These capabilities encompass a broad spectrum: discerning natural language, pattern recognition, problem-solving, and the capacity to learn from experience. ML, an integral subset of AI as shown in Fig. 1, serves as a technique for data analysis that automates the construction of analytical models [8]. Rooted in the principle that systems can glean knowledge from data, discern patterns, and formulate decisions with scarce human intervention, ML is the powerhouse propelling many modern breakthroughs in AI.

Fig. 1. Relationship between AI & ML.
../../Resources/ieie/IEIESPC.2024.13.5.514/fig1.png
Fig. 2. Machine Learning Algorithms.
../../Resources/ieie/IEIESPC.2024.13.5.514/fig2.png

3.1 Categorization of ML Algorithms

Machine Learning algorithms typically fall into three primary categories, as illustrated in Fig. 2: Supervised Learning, Unsupervised Learning, and Reinforcement Learning [9]. Supervised Learning, as a methodology, gleans knowledge from labeled training data and formulates predictions based on this acquired wisdom. The ultimate objective is to approximate the mapping function so accurately that when new input data is introduced, the model can foresee the corresponding output. In stark contrast, Unsupervised Learning involves familiarizing the model with unlabeled data. The goal is to decode the underlying structure or distribution in the data, yielding a deeper understanding of the data itself. Reinforcement Learning constitutes a form of machine learning where an agent learns to formulate decisions by executing actions in an environment to maximize a reward. The agent learns from the repercussions of its actions, adjusting its behavior based on positive or negative feedback, rather than from explicit instruction.
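To make the distinction concrete, the minimal sketch below (toy data, plain NumPy) contrasts supervised learning, which fits a mapping from labeled pairs, with unsupervised learning, which uncovers structure in unlabeled points. Reinforcement learning, which requires an interactive environment, is omitted for brevity; all values here are illustrative.

```python
import numpy as np

# --- Supervised learning: learn a mapping from labeled (x, y) pairs ---
# Fit y ≈ w*x + b by least squares; the targets y are the "labels".
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                    # labeled targets
w, b = np.polyfit(x, y, deg=1)       # learn the mapping function
pred = w * 4.0 + b                   # predict the output for unseen input

# --- Unsupervised learning: find structure in unlabeled data ---
# One k-means assignment step: group points around two centroids.
points = np.array([0.1, 0.2, 5.0, 5.1])
centroids = np.array([0.0, 5.0])
labels = np.argmin(np.abs(points[:, None] - centroids[None, :]), axis=1)

print(round(pred, 2))    # the learned line predicts ≈ 9.0 at x = 4
print(labels.tolist())   # the two natural clusters: [0, 0, 1, 1]
```

The supervised model needed the labels `y` to learn; the clustering step used only the raw points.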

3.2 Pillars of AI in ICT

Critical elements of AI systems within ICT, illustrated in Fig. 3, encompass algorithms, computational models, data, hardware infrastructure, and software frameworks. Algorithms lay the theoretical groundwork for AI, facilitating the design of models capable of learning from data. These models are subsequently trained using vast quantities of data, commonly accumulated and stored in data centers. The hardware infrastructure, typically comprising high-performance GPUs and CPUs, furnishes the computational might required to process these large data volumes and intricate models. Lastly, software frameworks such as TensorFlow and PyTorch provide the essential tools and libraries needed to design, train, and deploy AI models in a structured and streamlined fashion. By comprehending these fundamental aspects, we can delve deeper into the various applications, challenges, and future trends associated with AI and ML within ICT.

4. Exploiting AI and ML in ICT: Applications

In the ever-evolving landscape of ICT, the integration of AI and ML has become a transformative force. Some of these applications are shown in Fig. 4 and explained below.

Fig. 3. Pillars of AI in ICT.
../../Resources/ieie/IEIESPC.2024.13.5.514/fig3.png
Fig. 4. Applications of AI & ML.
../../Resources/ieie/IEIESPC.2024.13.5.514/fig4.png

4.1 NLP and Speech Recognition Advancements

AI and ML have driven notable enhancements in Natural Language Processing (NLP) and Speech Recognition. Within the NLP sphere, AI models demonstrate proficiency in understanding, decoding, and generating human language in a practical manner, giving rise to advanced applications such as chatbots, language translators, and sentiment analysis tools [10]. Similarly, AI-powered algorithms in Speech Recognition have catalyzed the development of applications that adeptly transcribe spoken language into written text and vice versa, triggering advancements in voice-responsive systems like Siri, Alexa, and Google Assistant.

4.2 Computer Vision's Visual Revolution

AI and ML algorithms have illuminated the path forward in the realm of Computer Vision and Image Recognition. These algorithms have honed the ability to ’see’ and decipher visual data, enabling object identification, image classification, pattern detection, and even image generation. These capabilities serve as the bedrock of numerous applications, ranging from facial recognition systems and autonomous vehicles to medical imaging analysis and surveillance systems [11].

4.3 Data-driven Decision Magic

Predictive analytics and data-driven decision-making frameworks have deeply embedded AI and ML at their core. Leveraging historical data, ML models can extract patterns and trends, forecast future scenarios, and provide actionable intelligence. The resulting implications are profound, affecting various sectors from healthcare and finance to marketing and supply chain management [12].
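As a minimal illustration of the pattern-extraction idea, the sketch below fits a linear trend to a toy series of historical values and projects the next period. The figures are purely illustrative; production predictive-analytics pipelines would use far richer models and features.

```python
import numpy as np

# Toy historical series (e.g., monthly sales figures; values illustrative).
sales = np.array([100.0, 110.0, 120.0, 130.0, 140.0])
t = np.arange(len(sales))            # time index 0..4

# Extract the trend from historical data by least-squares line fitting,
# then forecast the next period (t = 5).
slope, intercept = np.polyfit(t, sales, deg=1)
forecast = slope * len(sales) + intercept

print(round(forecast, 1))   # the linear trend projects 150.0
```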

4.4 Recommendation Systems

Recommendation systems, underpinned by ML algorithms, have become pivotal in industries such as e-commerce, online entertainment, and digital marketing. These systems scrutinize user behavior, preferences, and interactions to provide bespoke product recommendations, content suggestions, and targeted advertising [13].
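A toy item-based collaborative-filtering sketch conveys the core idea: score the items a user has not yet seen by how similar their interaction patterns are to the items the user liked. The interaction matrix below is illustrative, not drawn from any real system.

```python
import numpy as np

# Toy user-item interaction matrix (rows: users, cols: items);
# 1 means the user interacted with the item. Values are illustrative.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

def cosine_sim(a, b):
    # Cosine similarity between two interaction vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Item-based collaborative filtering for user 0: score each unseen item
# by its column similarity to the items the user already liked.
user = R[0]
scores = {}
for j in range(R.shape[1]):
    if user[j] == 0:  # only recommend items the user has not seen
        scores[j] = sum(cosine_sim(R[:, j], R[:, k])
                        for k in range(R.shape[1]) if user[k] == 1)

best = max(scores, key=scores.get)
print(best)   # item 2 co-occurs with the user's liked items; item 3 does not
```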

4.5 Autonomous Systems' AI Drive

The role of AI and ML is instrumental in shaping the future of autonomous systems and robotics. From autonomous vehicles and drones to industrial automation and personal assistant robots, all have thrived on advancements in AI and ML. These technologies are pivotal in enabling such systems to comprehend and navigate their surroundings, make decisions, and learn from their interactions [14].

4.6 Sentiment Analysis in Action

In the arena of sentiment analysis and opinion mining, AI and ML are used exhaustively. By processing text data from social media, reviews, and various online platforms, these technologies can gauge public sentiment towards products, services, events, or topics. Such insights prove invaluable for businesses, policymakers, and researchers.
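In its simplest form, sentiment scoring can be sketched with a hand-made lexicon: count positive and negative cue words and compare. Real systems learn such associations from data rather than hard-coding them; the word lists below are purely illustrative.

```python
# Illustrative sentiment lexicons (a toy stand-in for learned models).
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment(text: str) -> str:
    # Score = (# positive words) - (# negative words), then map to a label.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great phone"))       # positive
print(sentiment("terrible battery poor screen"))  # negative
print(sentiment("the phone arrived today"))       # neutral
```

Aggregating such labels over thousands of posts or reviews yields the public-sentiment gauges described above.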

By dissecting these multifaceted applications, we garner a more profound understanding of the transformative impact of AI and ML on the ICT industry and the potential they hold for the future.

5. Navigating AI and ML in ICT: Frameworks and Metrics

5.1 Key AI/ML Frameworks Overview

A variety of frameworks have risen to prominence in the AI and ML space, functioning as critical tools that enable the conception, training, and deployment of sophisticated models. These frameworks provide a myriad of resources such as pre-configured functions, libraries, and architectural blueprints that expedite the developmental process. Among the front-runners are TensorFlow, PyTorch, Scikit-learn, and Keras. TensorFlow, an offspring of the Google Brain project, is revered for its flexible structure and extensive capabilities for model deployment. PyTorch, supported by Facebook’s AI Research lab, is cherished for its dynamic computational graph and Python-centric nature, attributes that endear it to researchers. Scikit-learn, a straightforward yet potent tool for data mining and analysis, is admired for its seamless integration within the Python programming ecosystem. Keras, which can operate atop both TensorFlow and Theano, is recognized for its user-friendly features that expedite the building and prototyping of deep learning models [15].
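At their core, all of these frameworks automate the same loop: define a model, measure a loss, and adjust parameters along the gradient. The framework-agnostic sketch below performs that loop by hand for a one-parameter model on toy data, showing in miniature what TensorFlow or PyTorch does at scale with automatic differentiation.

```python
import numpy as np

# Toy data generated by y = 3x; the training loop should recover w ≈ 3.
x = np.array([1.0, 2.0, 3.0])
y = 3.0 * x

w = 0.0      # initial parameter of the model y_hat = w * x
lr = 0.05    # learning rate

for _ in range(200):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)   # d(MSE)/dw, derived by hand here;
    w -= lr * grad                       # frameworks compute this automatically

print(round(w, 3))   # gradient descent converges to the true slope, 3.0
```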

5.2 Framework Selection for ICT Challenges

Although the selection of an AI/ML framework is largely contingent on a project’s specific requirements, each framework brings unique strengths to the table, rendering them ideally suited to certain applications. For instance, TensorFlow’s distributed computing capabilities and robustness render it ideal for large-scale, production-grade applications. PyTorch, with its dynamic computation graph, typically finds favor in research and experimental environments where flexibility and iterative development are key. Scikit-learn, with its extensive array of traditional ML algorithms, is a top pick for projects that necessitate a wide assortment of machine-learning techniques. Meanwhile, the simplicity and modularity of Keras make it a perfect choice for beginners and rapid prototyping [16].

5.3 Metrics for AI Model Assessment

The evaluation of AI/ML models’ performance is vital in ensuring their reliability and efficacy. Depending on the task at hand, diverse metrics might be deployed. For classification tasks, common metrics include accuracy, precision, recall, F1 score, and Area under the ROC Curve (AUC-ROC). Regression tasks typically employ metrics like Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. Other tasks such as clustering might necessitate the use of metrics like the silhouette score or Davies-Bouldin index [17].
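Most of the metrics above reduce to short formulas over predictions and targets. The sketch below computes the common classification metrics (accuracy, precision, recall, F1) and regression metrics (MAE, MSE, RMSE) directly with NumPy on illustrative arrays.

```python
import numpy as np

# --- Classification metrics (binary labels, values illustrative) ---
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

accuracy  = np.mean(y_pred == y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# --- Regression metrics (values illustrative) ---
t = np.array([3.0, 5.0, 2.0])   # targets
p = np.array([2.5, 5.0, 3.0])   # predictions
mae  = np.mean(np.abs(p - t))
mse  = np.mean((p - t) ** 2)
rmse = np.sqrt(mse)

print(round(accuracy, 3), precision, recall, round(f1, 3))
print(round(mae, 3), round(mse, 3), round(rmse, 3))
```

Note the difference the metrics expose: the classifier above never raises a false alarm (precision 1.0) but misses one positive case (recall 0.75), a trade-off accuracy alone would hide.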

Grasping these metrics, and discerning the appropriate time to use each, is indispensable for evaluating and contrasting the performance of various models and frameworks across a range of ICT applications.

6. AI and ML in ICT: Addressing Challenges & Limitations

6.1 Data Privacy and Security Dilemmas

The burgeoning use of AI and ML within the ICT sector has brought about considerable debates and concerns regarding data privacy and security. With the increasing dependence of these technologies on vast amounts of data for learning and execution, there is a growing concern regarding the methods of data collection, storage, and utilization. The question remains whether the methods in use are secure enough to prevent unauthorized access and misuse of sensitive user information by cybercriminals. This, in turn, has led to an escalating debate regarding the methods of safeguarding data and the need for more stringent data protection regulations. It is important to note that while data privacy and security concerns are not new, the rise of AI and ML has brought these issues to the forefront, and the need for effective strategies to mitigate these risks cannot be overemphasized [18].

6.2 Combatting Bias in AI Models

Another pressing challenge lies in the potential for bias within AI models. Bias can permeate at various stages of the AI pipeline, spanning from data gathering to preprocessing, model training, and real-world application. If left unchecked, these biases could engender unjust outcomes or discriminatory practices, particularly in sensitive domains such as recruitment, law enforcement, and credit scoring [19].

6.3 Unveiling the 'Black Box': Model Transparency

Often, AI and ML models, especially intricate ones like deep learning networks, are hindered by a lack of interpretability and explainability. Dubbed ’black box’ models, their decision-making mechanisms are difficult to decipher. This opacity makes it challenging to validate their trustworthiness, troubleshoot errors, or comprehend unexpected outcomes [20].
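One family of remedies is post-hoc explanation. The sketch below illustrates permutation importance: shuffle one input feature at a time and measure how much the model's error grows; features the model relies on hurt the most when destroyed. A known linear function stands in for a trained black-box model, and all data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1]     # feature 0 dominates the target

def model(X):
    # Stand-in for an opaque trained model; here it matches y exactly.
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

base_err = np.mean((model(X) - y) ** 2)   # baseline error (0 by construction)

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
    importances.append(np.mean((model(Xp) - y) ** 2) - base_err)

# Shuffling the dominant feature degrades the model far more.
print(importances[0] > importances[1])
```

Techniques in this spirit probe a model from the outside, trading the exactness of a transparent model for insight into an opaque one.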

6.4 Scaling AI: Resources and Constraints

Scalability and resource demands present significant hurdles. Training sophisticated AI models typically mandates substantial computational prowess and copious amounts of data, which could be unattainable for many organizations. As models augment in complexity, guaranteeing they can scale effectively and maintain performance becomes increasingly arduous [21].

6.5 Ethical Dimensions of AI Adoption

Lastly, the assimilation of AI and ML in ICT triggers a variety of ethical considerations. These encompass concerns relating to job displacement due to automation, the potential misuse of AI technology, and the imperative for accountability and transparency in AI decision-making [22]. These challenges and limitations highlight the necessity for ongoing research and mindful deliberation in the integration of AI and ML in ICT. Striking a balance between the remarkable potential of these technologies and their inherent risks and challenges is a pivotal issue that warrants attention as we continue to progress in this domain.

7. Case Studies

7.1 Case Study 1: Revolutionizing Healthcare with AI and ML

The healthcare industry is a prime example of how AI and ML can bring about transformational change. These advanced technologies are driving unprecedented changes in the sector by enabling the prediction of disease outbreaks, enhancing diagnostic accuracy, and facilitating personalized treatment protocols. One notable example of AI and ML’s impact is their use in interpreting medical imagery, such as X-rays and magnetic resonance imaging (MRI) scans. Through the application of deep learning techniques, systems capable of scrutinizing medical images are being developed. These systems, refined with thousands of annotated images, offer an automated review layer to radiologists’ assessments. By serving as an artificial "second pair of eyes," they hold the potential to detect irregularities that could elude human examination, thereby enhancing diagnostic precision and reducing the chances of oversight. Google’s DeepMind Health initiative provides a pertinent example of this innovative application. The project employs machine learning algorithms to diagnose ocular diseases in their early stages, aiming to preempt and prevent conditions that can lead to unnecessary vision loss [23]. The comparison of AI and ML techniques in different healthcare applications is shown in Table 1.

Table 1. Comparison of AI/ML Techniques in Healthcare.
../../Resources/ieie/IEIESPC.2024.13.5.514/tb1.png

7.2 Case Study 2: Predictive Maintenance in Manufacturing through AI/ML

AI and ML have had a profound impact on the manufacturing industry, particularly in the area of predictive maintenance. By using machine learning algorithms to analyze data collected from machine sensors, manufacturers can anticipate potential equipment malfunctions before they occur, which minimizes downtime and enhances operational efficiency. This results in considerable cost savings and positions AI and ML as critical tools for Industry 4.0. One way that AI and ML can be used to improve predictive maintenance is through the integration of smart sensors. These sensors can detect and transmit data in real-time, which allows machine-learning algorithms to identify patterns and predict potential malfunctions with greater accuracy. In addition, AI and ML can also be used to optimize maintenance schedules, ensuring that equipment is serviced at the most opportune times to prevent downtime and prolong the lifespan of assets. Another way that AI and ML can be leveraged for predictive maintenance is through the use of digital twins. A digital twin is a virtual replica of a physical asset, which can be used to simulate and predict the behavior of the real asset. These techniques are useful to detect malfunctions before they occur. These technologies will continue to evolve and become more sophisticated, making them even more valuable for Industry 4.0 and beyond [35].
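The sensor-monitoring idea can be sketched with a simple statistical baseline: flag readings whose z-score deviates strongly from the rest, as an early warning worth inspecting. The readings and threshold below are illustrative; deployed predictive-maintenance systems would apply learned models over many sensor channels and time windows.

```python
import numpy as np

# Illustrative temperature readings from a machine sensor; one reading
# (85.2) deviates sharply from the stable baseline around 70.
readings = np.array([70.1, 69.8, 70.3, 70.0, 85.2, 70.2])

# Flag readings more than 2 standard deviations from the mean.
mean, std = readings.mean(), readings.std()
z = np.abs(readings - mean) / std
anomalies = np.where(z > 2.0)[0]

print(anomalies.tolist())   # index 4 is the outlying reading
```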

7.3 Case Study 3: Enhancing Customer Experience with AI in E-Commerce

In the e-commerce industry, AI and ML have revolutionized the way retailers interact with customers. By analyzing past purchases, browsing behavior, and other customer data, predictive models can suggest products tailored to each customer’s unique needs and preferences. These personalized recommendations not only help customers find what they’re looking for more easily but also build a stronger connection between the customer and the retailer, leading to increased customer engagement and loyalty. Moreover, recommendation engines have proven to be an effective tool in boosting sales. By offering personalized product suggestions, retailers can increase the likelihood of customers making a purchase, leading to a higher conversion rate. Amazon’s recommendation engine is a prime example of this, as it uses sophisticated ML algorithms to suggest products that customers are most likely to be interested in, driving a significant portion of their sales. This, in turn, has allowed Amazon to maintain its position as a leader in the e-commerce industry, constantly evolving and improving its recommendation engine to provide its customers with the best possible shopping experience. In short, the use of AI and ML in the e-commerce industry is a game-changer, allowing retailers to provide personalized experiences for their customers while driving sales and building customer loyalty [36].

These case studies offer several important insights. First and foremost, integrating AI and ML into any sector must be a thoughtful and strategic process. Successful deployment requires not only technological expertise but also a deep understanding of the industry’s unique needs and challenges. Secondly, ethical considerations must be a fundamental part of the process, not an afterthought. It is essential to ensure that AI and ML systems are designed and operated in a transparent, fair, and privacy-respecting manner. Lastly, the importance of collaboration cannot be overstated. The most effective AI implementations often result from cooperative efforts that combine the expertise of domain specialists, data scientists, engineers, and other key stakeholders. Furthermore, continuous learning, adaptation, and improvement are critical. AI and ML technologies are evolving rapidly, and maintaining an agile approach enables organizations to take advantage of new developments and continually refine their systems. These case studies highlight the transformative potential of AI and ML across various industries and provide valuable insights into the practical and strategic considerations required for successful implementation.

8. Future Directions & Emerging Trends

8.1 Forthcoming Innovations in AI and ML

The dynamic landscape of AI is transforming the ICT sector, ushering in a new era of innovation and advancement. As AI evolves, its integration into the ICT industry drives the creation of novel products, enhancements to existing offerings, and the automation of tasks. In 2023, remarkable advancements in AI and ML within the ICT domain are expected. The emergence of multimodal machine learning empowers systems to process diverse data types such as text, images, and audio, expanding their capabilities across a wider spectrum of applications. Concurrently, federated learning ensures model training with distributed data across devices, proving valuable for privacy-sensitive sectors like healthcare and finance. Moreover, the pursuit of explainable AI seeks to instill transparency and fairness into decision-making processes, bolstering human trust in AI systems. Self-supervised learning, a novel paradigm, eliminates the need for labeled data, promising efficient training on massive data sets, including internet-scale information. In a broader context, the "AI for good" movement exemplifies AI’s potential in addressing social and environmental challenges, encompassing initiatives ranging from climate change mitigation to healthcare access. These dynamic developments are shaping the future of AI and ML in the ICT arena, holding immense promise for reshaping our world in unprecedented ways.

8.2 Transformative Impact on ICT and Allied Industries

These avant-garde trends stand poised to effect a significant transformation in the ICT sector and related industries. For instance, quantum machine learning could precipitate a paradigm shift in diverse fields ranging from material science to cryptography, solving computational challenges hitherto deemed intractable. Edge AI, on the other hand, has the potential to metamorphose IoT, autonomous vehicles, and smart cities by facilitating expedited and more efficient data processing and analytics.

8.3 Research Opportunities and Untapped Domains for Exploration

These groundbreaking innovations unfurl a vista of new research possibilities. Critical domains ripe for exploration include enhancing the efficiency and robustness of quantum machine learning algorithms, developing secure and privacy-conscious methodologies for deploying Edge AI, and probing the wider implications of these technologies on society, the economy, and the environment. Although the horizon of AI and ML is teeming with thrilling possibilities, it also throws open a new set of challenges and considerations. To ensure the ethical and beneficial application of these potent technologies, a concerted effort is required that involves continuous research, robust regulatory frameworks, and a multidisciplinary approach that encapsulates not only technical but also ethical, societal, and environmental facets. The upcoming years undoubtedly promise exciting advancements in this field, with boundless potential for impactful, positive change. As ICT continues to shape our world, it is imperative to navigate this thrilling frontier responsibly, leveraging AI and ML’s tremendous capabilities for the greater good.

9. Conclusion

This treatise has delved into the foundational concepts of AI and ML, elucidating their extensive applications within the ICT sector. The exploration included the assessment of prominent frameworks employed in developing these applications and addressed the affiliated challenges and limitations. Case studies provided a granular view of the practical implementation and ramifications of these technologies across various spheres. This paper enriches the ongoing academic dialogue in the field. Drawing from antecedent research, it also proposes potential future trajectories. The comparative study of widely used AI and ML frameworks, coupled with a detailed discourse on performance metrics, avails invaluable references for researchers and industry professionals alike.

Despite the transformative potential of AI and ML within ICT, their incorporation is fraught with challenges. The pressing issues of data privacy, inherent bias in models, system interpretability, scalability, and ethical considerations accentuate the need for a nuanced and balanced approach toward AI and ML adoption. The fast-paced evolution of this domain calls for relentless research efforts, particularly targeting the design of ethical AI, bias reduction, and stringent data privacy protocols. Furthermore, nascent trends like quantum machine learning and Edge AI present intriguing avenues that deserve in-depth exploration and capital investment. This paper underscores the astounding potential of AI and ML in reconfiguring the ICT landscape while emphasizing the imperative of conscientious innovation and deployment of these powerful technologies. As we navigate this thrilling frontier, it becomes a shared obligation to ensure that these advancements culminate in a fair, sustainable, and inclusive future.

ACKNOWLEDGMENTS

This research was financially supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A2C1003549) and in part by a 2024 Hongik University innovation support program fund.

REFERENCES

1 
M. J. Marcelino et al., “The Use of Communication Technologies in Higher Education in Portugal: Best Practices and Future Trends”, Springer International Publishing, 2016.
2 
A. Haldorai et al., “Evolution, challenges, and application of intelligent ICT education: An overview”, vol. 29, no. 3, pp. 562-571, Feb 2020.
3 
H. K. Omar et al., “Big data cloud-based recommendation system using NLP techniques with machine and deep learning”, vol. 21, no. 5, p. 1076, Oct 2023.
4 
S. R. K. K. Annam et al., “ICT for Identifying Safe Infrastructure to Prevent Accident Using the Application of AI”, Springer Nature Singapore, pp. 771-781, Nov 2022.
5 
Z. Ullah et al., “Applications of Artificial Intelligence and Machine Learning in Smart Cities”, Elsevier, vol. 154, pp. 313-323, Mar 2020.
6 
R. Cioffi et al., “Artificial Intelligence and Machine Learning Applications in Smart Production: Progress, Trends, and Directions”, MDPI, vol. 12, no. 2, p. 492, Jan 2020.
7 
J. Finlay et al., “An Introduction to Artificial Intelligence”, CRC Press, Oct 2020.
8 
M. Stamp et al., “Introduction to Machine Learning with Applications in Information Security”, CRC Press, 2022.
9 
B. Mahesh et al., “Machine Learning Algorithms - A Review”, IJSR, vol. 9, no. 1, Jan 2020.
10 
N. Tyagi et al., “Demystifying the Role of Natural Language Processing NLP in Smart City Applications: Background, Motivation, Recent Advances, and Future Research Directions”, Springer, vol. 130, no. 2, pp. 857-908, Mar 2023.
11 
N. Devi et al., “Design of an Intelligent Bean Cultivation Approach Using Computer Vision, IoT, and Spatio-Temporal Deep Learning Structures”, Elsevier, vol. 75, p. 102044, Jul 2023.
12 
A. Haleem et al., “Artificial Intelligence AI Applications for Marketing: A Literature-Based Study”, Elsevier, vol. 3, pp. 119-132, 2022.
13 
S. Bhaskaran et al., “Enhanced Personalized Recommendation System for Machine Learning Public Datasets: Generalized Modeling, Simulation, Significant Results, and Analysis”, Springer, vol. 15, no. 3, pp. 1583-1595, Feb 2023.
14 
M. Soori et al., “Artificial Intelligence, Machine Learning and Deep Learning in Advanced Robotics: A Review”, Elsevier, vol. 3, pp. 54-70, 2023.
15 
G. Nguyen et al., “Machine Learning and Deep Learning Frameworks and Libraries for Large-Scale Data Mining: A Survey”, Springer, vol. 52, no. 1, pp. 77-124, Jan 2019.
16 
M. M. Taye et al., “Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions”, MDPI, vol. 12, no. 5, p. 91, Apr 2023.
17 
S. M. Basha et al., “Survey on Evaluating the Performance of Machine Learning Algorithms: Past Contributions and Future Roadmap”, Elsevier, pp. 153-164, 2019.
18 
M. Xue et al., “Machine Learning Security: Threats, Countermeasures, and Evaluations”, IEEE Access, vol. 8, pp. 74720-74742, 2020.
19 
L. H. Nazer et al., “Bias in Artificial Intelligence Algorithms and Recommendations for Mitigation”, PLoS, vol. 2, no. 6, p. e0000278, Jun 2023.
20 
W. J. Murdoch et al., “Definitions, Methods, and Applications in Interpretable Machine Learning”, vol. 116, no. 44, pp. 22071-22080, Oct 2019.
21 
R. Mayer et al., “Scalable Deep Learning on Distributed Infrastructures: Challenges, Techniques and Tools”, ACM, vol. 53, no. 1, 2023.
22 
A. Blanchard et al., “The Ethics of Artificial Intelligence for Intelligence Analysis: A Review of the Key Challenges with Recommendations”, Digital Society, vol. 2, no. 1, Apr 2023.
23 
A. Bohr et al., “The Rise of Artificial Intelligence in Healthcare Applications”, Elsevier, pp. 25-60, 2020.
24 
D. S. W. Ting et al., “Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes”, JAMA, vol. 318, no. 22, p. 2211, Dec 2017.
25 
D. Ardila et al., “End-to-end Lung Cancer Screening with Three-dimensional Deep Learning on Low-dose Chest Computed Tomography”, Springer, vol. 25, no. 6, pp. 954-961, May 2019.
26 
D. S. Carrell et al., “Challenges in Adapting Existing Clinical Natural Language Processing Systems to Multiple, Diverse Health Care Settings”, OUP, vol. 24, no. 5, pp. 986-991, Apr 2017.
27 
G. Coppersmith et al., “Natural Language Processing of Social Media as Screening for Suicide Risk”, SAGE, vol. 10, p. 117822261879286, Jan 2018.
28 
C. Rhee et al., “Compliance with the National-1 Quality Measure and Association with Sepsis Outcomes: A Multicenter Retrospective Cohort Study”, Ovid Technologies, vol. 46, no. 10, pp. 1585-1591, Oct 2018.
29 
R. E. Burke et al., “Post-Acute Care Reform: Implications and Opportunities for Hospitalists”, Wiley, vol. 12, no. 1, pp. 46-51, Jan 2017.
30 
A. Kyodo et al., “Heart Failure with Preserved Ejection Fraction Phenogroup Classification Using Machine Learning”, Wiley, vol. 10, no. 3, pp. 2019-2030, Apr 2023.
31 
F. Farahnakian et al., “A Comprehensive Study of Clustering-Based Techniques for Detecting Abnormal Vessel Behavior”, MDPI, vol. 15, no. 6, p. 1477, Mar 2023.
32 
D. Bazazeh et al., “Comparative Study of Machine Learning Algorithms for Breast Cancer Detection and Diagnosis”, 5th ICEDSA, Dec 2016.
33 
A. K. Faieq et al., “Prediction of Heart Diseases Utilizing Support Vector Machine and Artificial Neural Network”, IAES, vol. 26, no. 1, p. 374, Apr 2022.
34 
A. Raghu et al., “Deep Reinforcement Learning for Sepsis Treatment”, arXiv, 2017.
35 
X. Liu et al., “Discrepancy between Perceptions and Acceptance of Clinical Decision Support Systems: Implementation of Artificial Intelligence for Vancomycin Dosing”, BMC, vol. 23, no. 1, Aug 2023.
36 
V. De Simone et al., “An Overview on the Use of AI/ML in Manufacturing MSMEs: Solved Issues, Limits, and Challenges”, Elsevier, vol. 217, pp. 1820-1829, 2023.
Muhammad Bilal Sarwar

Muhammad Bilal Sarwar is an accomplished Electrical Engineer with a growing focus on Computational Science within the realm of Applied Computer Science. He is currently pursuing his Master's degree at NUST, Islamabad, having completed his Bachelor's degree in Electrical Engineering at UET Taxila. He holds the position of Research Associate at IGIS, NUST, where he is actively contributing to a cutting-edge project titled "An Autonomous IoT-Based Approach Towards Monitoring and Subsequently Identifying Invasive Dengue/Zika Vectors' Prevalence and Potential Dengue Outbreak Areas."

Ghulam Musa Raza

Ghulam Musa Raza received his BS degree in Computer Science from COMSATS University Islamabad in 2019 and his MS degree in Computer Science from NUST, Islamabad in 2021. From 2017 to 2019, he worked as a Software Engineer at Snaky Solutions Pvt Limited. In early 2021, he served as a Machine Learning-based Research Assistant in the TUKL lab, NUST, Islamabad, and from 2021 to 2022 he was a Lecturer at Alhamd Islamic University, Islamabad. His major interests lie in Natural Language Processing, the Internet of Things (IoT), and ICN/NDN. He is currently pursuing a Ph.D. degree with the Department of Communication and Software Engineering in the Graduate School, Hongik University, South Korea.

Muhammad Ali Sarwar

Muhammad Ali Sarwar, a dedicated researcher, obtained his BS in Computer Science from Government College University Faisalabad in 2018 and his MS degree in Computer Science from SEECS, NUST, Islamabad in 2023. As a Senior Research Officer at KICS, UET Lahore, he is contributing significantly to the field. Previously, in 2021, he served as a Machine Learning-based Research Assistant at the SAVe lab, NUST, Islamabad, showcasing his expertise in HCI, Computer Vision, and ML/DL.

Byung-Seo Kim

Byung-Seo Kim received his B.S. degree in electrical engineering from Inha University, Korea, in 1998, and his M.S. and Ph.D. degrees in ECE from the University of Florida in 2001 and 2004, respectively. Between 1997 and 1999, he worked for Motorola Korea Ltd., Korea, in ATR&D, and from January 2005 to August 2007, he worked for Motorola Inc., Schaumburg, Illinois, in Networks and Enterprises. Since 2007, he has been a professor in the Department of Software and Communications Engineering, Hongik University, South Korea. He serves as an associate editor of IEEE Access, Telecommunication Systems, and the Journal of the Institute of Electronics and Information Engineers. He is an IEEE Senior Member. His research interests include the design and development of efficient wireless/wired networks.