NEXT-GENERATION COMPUTING ARCHITECTURES: A RESEARCH STUDY ON PARALLEL PROCESSING, EDGE COMPUTING, AND HIGH-PERFORMANCE SYSTEMS


Vijay Dattatray Gaikwad, Aanchal Sharma, Raja S, Prapti R. Pandya, Jigisha A. Sureja, P. Geetha

Abstract

The rapid growth of data-intensive applications has accelerated the need for next-generation computing architectures that integrate parallel processing, edge computing, and high-performance systems. Centralized approaches often fail to meet requirements for scalability, responsiveness, and power efficiency, calling for innovative frameworks that reconcile local processing with distributed scale.


The research employs an empirical analysis of two publicly available datasets: the IIoT Edge Computing Dataset and the Cloud Task Scheduling Dataset. Parallel processing performance was measured using round-robin and priority-based scheduling in multicore configurations, whereas edge computing was evaluated on latency and predictive reliability. HPC scalability was studied using simulated cluster-based workloads. The findings show that priority-based scheduling cuts execution time and improves throughput relative to round-robin, highlighting the significance of intelligent scheduling techniques. Edge computing experiments identified latency reductions of almost 80% relative to cloud servers, while Random Forest models attained 94.3% accuracy and a 0.93 F1-score, validating the feasibility of predictive intelligence at the edge. HPC simulations exhibited good scalability but decreasing efficiency at higher node counts owing to communication overhead. These results point to the complementary capabilities of parallel, edge, and HPC systems while exposing integration challenges and workload allocation issues. This paper presents practical proposals for the design of adaptive and sustainable hybrid computing systems to support future real-time applications.
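The scheduling comparison above can be illustrated with a minimal single-core simulation. This is a sketch, not the study's actual experimental code: the task set, quantum, and waiting-time metric are illustrative assumptions. When priorities favour shorter jobs, priority-based ordering yields a lower average waiting time than time-sliced round-robin, consistent with the reported result.

```python
import heapq
from collections import deque

def priority_schedule(tasks):
    """Non-preemptive priority scheduling on one core.
    tasks: list of (priority, burst_time); lower priority value runs first.
    Returns the average waiting time."""
    heap = list(tasks)
    heapq.heapify(heap)
    clock, waits = 0, []
    while heap:
        _, burst = heapq.heappop(heap)
        waits.append(clock)  # task waited until the clock reached it
        clock += burst
    return sum(waits) / len(waits)

def round_robin_schedule(tasks, quantum=2):
    """Round-robin with a fixed time quantum; priorities are ignored.
    Returns the average waiting time (completion time minus burst time)."""
    bursts = {i: burst for i, (_, burst) in enumerate(tasks)}
    queue = deque(bursts.items())
    clock, finish = 0, {}
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))
        else:
            finish[i] = clock
    waits = [finish[i] - bursts[i] for i in bursts]
    return sum(waits) / len(waits)

# Illustrative workload: priorities aligned with shorter bursts.
tasks = [(2, 5), (1, 2), (3, 8)]
```

On this workload, `priority_schedule(tasks)` gives a lower average wait than `round_robin_schedule(tasks)`, mirroring the throughput advantage the study attributes to priority-based scheduling.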
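The reported edge-model figures (94.3% accuracy, 0.93 F1-score) follow the standard binary-classification definitions. The helper below is a hypothetical illustration of how those two metrics are computed from labels and predictions; it is not the study's evaluation pipeline.

```python
def edge_model_metrics(y_true, y_pred):
    """Accuracy and F1-score for a binary classifier from paired labels.
    The positive class is 1; returns (accuracy, f1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1
```

Because F1 is the harmonic mean of precision and recall, it penalizes false positives and false negatives symmetrically, which is why the study reports it alongside raw accuracy.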
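The observed loss of HPC efficiency at higher node counts can be captured by an Amdahl-style model extended with a per-node communication penalty. The serial fraction and communication cost below are illustrative assumptions, not values measured in the study.

```python
def parallel_efficiency(n_nodes, serial_frac=0.05, comm_cost=0.02):
    """Amdahl-style speedup model with a communication term that grows
    linearly with the number of nodes. Returns efficiency = speedup / n.
    serial_frac and comm_cost are illustrative, not measured values."""
    time = serial_frac + (1 - serial_frac) / n_nodes + comm_cost * (n_nodes - 1)
    speedup = 1.0 / time
    return speedup / n_nodes
```

Under these assumptions, efficiency is high on a few nodes but falls steadily as the communication term dominates, matching the abstract's observation of good scalability with diminishing per-node returns.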
