A Comprehensive Analysis of Nvidia’s Technological Evolution
Volume 17 • Issue 1
ABSTRACT
The article provides an in-depth analysis of Nvidia’s technological evolution and its profound impact
on Machine Learning, Big Data, and Artificial Intelligence (AI) on a global scale. Nvidia has emerged
as a trailblazer, reshaping computational capabilities and establishing itself as a prominent player
in a fiercely competitive landscape. The examination meticulously scrutinizes the role of Nvidia’s
graphics processing unit (GPU) technologies in spearheading a transformative computing revolution,
emphasizing the collaborative prowess inherent in Nvidia’s developer ecosystem. The analysis extends
to the dynamics of GPU innovation, its disruptive influence on the market, and the robust innovation
engine ingrained within Nvidia’s culture of calculated risk-taking. Internal and external factors
contributing to Nvidia’s remarkable success, and its consequential industry dominance are thoroughly
investigated. Special attention is directed towards Nvidia’s strategic development, technological
advancements, influence on the industry, global footprint, and anticipated future implications.
Keywords
Artificial Intelligence (AI), Disruptive Technology, Innovation, Leadership, Market Strategies, Success Factors
1. INTRODUCTION
Established in 1993, Nvidia Corporation has emerged as a stalwart in the technology sector, navigating
a trajectory characterized by groundbreaking innovations and strategic prowess. The inaugural
moment in this journey unfolded in 1999 with the introduction of the GeForce 256, the world’s first
graphics processing unit (GPU). Since then, Nvidia has not merely shaped the GPU landscape but has
significantly influenced the realms of artificial intelligence (AI) and
parallel computing, positioning itself as a global powerhouse. Since its initial public offering (IPO)
on January 22, 1999, Nvidia has experienced extraordinary growth, with its revenue increasing by
a remarkable 170-fold over 24 years, underscoring a period of exceptional expansion (see Fig. 1).
A pivotal juncture in Nvidia’s narrative occurred in 2007 with the introduction of the CUDA
architecture. This architectural innovation allowed GPUs to transcend their traditional role, finding
application in general-purpose parallel computing.
International Journal of Information Technologies and Systems Approach
Volume 17 • Issue 1
Nvidia’s Jetson edge AI platform extends AI processing to the network’s edge, in proximity to sensors
and actuators. This positioning enables swifter and more efficient responses to real-time data.
The subsequent sections of this paper are organized as follows: Section 2 furnishes a concise
literature review. In Section 3, the research methodology is introduced. Sections 4 and 5 delineate
the internal and external factors contributing to Nvidia’s success, respectively. The paper culminates
with a summarization of the findings.
2. LITERATURE REVIEW
Nvidia’s state-of-the-art hardware and software consistently serve as the foundation for breakthroughs
in AI research. Their commitment to delivering robust and efficient tools is instrumental in pushing
the boundaries of possibilities across various domains.
In the realm of natural language processing, Nvidia’s impact is vividly demonstrated by Yang
et al. (2023), whose research highlights the pivotal role played by Nvidia’s hardware and ongoing
research in advancing AI tools capable of understanding and generating text. This is further emphasized
in the domain of language model training, where Patashnik et al. (2023) showcased the capabilities
of NVIDIA A100 Tensor Core GPUs, achieving significant speedups in training transformer-based
models. The synergy between hardware and software tools, exemplified by platforms like Megatron
and the Turing NLG framework, is a testament to Nvidia’s role in driving the efficiency of large
language model training, as demonstrated by Rajbhandari et al. (2023).
Autonomous driving is another critical domain where Nvidia’s influence is profound. Azevedo
and Santos (2024) proposed a solution for object detection and tracking, underlining the importance
of optimizing software components on edge devices such as the Nvidia Jetson AGX Xavier. This
intersection of hardware and software in autonomous systems is a crucial aspect, and Nvidia’s role
in providing tools for optimizing performance on edge devices is evident in this research. Nvidia’s
Maxine super-resolution technology, as demonstrated by Zhao et al. (2023), extends beyond text
understanding, displaying practical applications in enhancing real-time voice communication. This
reflects Nvidia’s dedication to improving communication technologies, illustrating the versatility of
their contributions in the AI landscape.
In the realm of robotics and simulation, Wang et al. (2023) shed light on Nvidia’s contributions
through the Isaac Sim platform and Isaac Robotics middleware. These tools provide valuable
resources for researchers working on embodied agents and virtual environments, showcasing Nvidia’s
commitment to advancing the field beyond traditional AI applications. Language model training
is an area where Nvidia’s influence extends, as evidenced by the introduction of TAO, a platform
developed by Zhang et al. (2023) for training and optimizing large language models. This underscores
their commitment to providing accessible and efficient tools for AI research, emphasizing not only
hardware but also the software infrastructure required for sophisticated model training.
Beyond the confines of language and AI-centric applications, Nvidia’s influence permeates into
diverse domains. In gesture classification, Greco et al. (2023) present an effective system on an Nvidia
Jetson Nano platform, demonstrating its practicality in recognizing the handwashing gestures defined
by the World Health Organization. In the domain of Cyber-Physical Systems, Nvidia’s Omniverse simulation tools received an
update, enabling developers to leverage generative AI and Unity’s game engine for enhanced virtual
environments (Asad et al., 2023).
Addressing Computing and Network Convergence (CNC), Nvidia GRID stands out as a graphics-accelerated
Virtual Desktop Infrastructure (VDI) that facilitates resource sharing on a single GPU
(Tang et al., 2023). This not only streamlines resource management but also enhances the efficiency of
virtual desktop environments, showcasing Nvidia’s commitment to optimizing computational resources.
Liang et al. (2023) traced the evolution of GPU computing, emphasizing CUDA as the de facto operating
system of the GPU. This historical perspective underscores Nvidia’s pivotal role in general-purpose
computing on graphics processing units (GPGPU), shaping the trajectory of GPU computing over the years. Li et al. (2023)
highlight the challenges posed by floating-point exceptions in Nvidia GPUs, potentially compromising
the reliability and accuracy of computations. They introduce GPU-FPX as a solution, offering
lightning-fast detection and analysis of these exceptions. With a remarkable 16x speed boost compared
to existing tools, GPU-FPX unveils hidden issues and provides a detailed understanding of their
impact on code behavior. This empowers developers to efficiently fix bugs, ensuring the validity of
applications utilizing Nvidia GPUs.
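The failure mode GPU-FPX targets can be illustrated without GPU-specific tooling: floating-point exceptions such as division by zero or invalid operations do not halt execution by default; they yield Inf and NaN values that silently propagate through later computations. A minimal CPU-side NumPy sketch (illustrative only, not the GPU-FPX tool itself):

```python
import numpy as np

def detect_fp_exceptions(x):
    """Flag NaN/Inf values that would otherwise propagate silently."""
    return {
        "nan": int(np.isnan(x).sum()),
        "inf": int(np.isinf(x).sum()),
    }

# Suppress the runtime warnings so the exceptional values flow through
# unnoticed, as they would inside an unchecked GPU kernel.
with np.errstate(divide="ignore", invalid="ignore"):
    a = np.array([1.0, 0.0, -1.0])
    b = np.array([0.0, 0.0, 0.0])
    q = a / b  # 1/0 -> inf, 0/0 -> nan, -1/0 -> -inf

report = detect_fp_exceptions(q)
print(report)  # {'nan': 1, 'inf': 2}
```

GPU-FPX performs this kind of detection inside running GPU kernels with far lower overhead; the sketch only shows why unchecked exceptions threaten the validity of results.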
Addressing the limitation of short queries in Nvidia GPU databases, Krolik et al. (2023) presented
the revolutionary compilation pipeline, rNdN. This innovative solution strikes a balance between
minor execution slowdowns and significant compilation speedups, unlocking doors for a wider
range of applications in GPU databases. By overcoming the hurdle of reliance on cached use cases,
rNdN enables real-time data processing and dynamic querying, outperforming both CPU and GPU
competitors with traditional compilers. The study provides valuable insights for developers seeking
enhanced performance in GPU database applications.
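The compile-time versus run-time trade-off behind rNdN can be made concrete with a back-of-the-envelope model; the timings below are invented for illustration and are not figures from the paper:

```python
def total_time(compile_ms, exec_ms, runs=1):
    """End-to-end latency: compile once, execute `runs` times."""
    return compile_ms + exec_ms * runs

# Hypothetical timings (not from the rNdN paper):
slow_compiler = total_time(compile_ms=500.0, exec_ms=2.0)  # heavily optimizing compiler
fast_compiler = total_time(compile_ms=10.0, exec_ms=3.0)   # fast-compilation path

print(slow_compiler, fast_compiler)  # 502.0 13.0
```

For an ad hoc query that runs once, total latency is dominated by compilation, so accepting a minor execution slowdown in exchange for a large compilation speedup wins; only a query re-executed many times amortizes the optimizing compiler's cost.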
Hakim et al. (2023) offered a novel Nvidia Jetson Nano-powered system designed to enhance
mask enforcement during the pandemic. By combining the YOLO algorithm with mask detection
capabilities, this system achieves an impressive 99.94% accuracy. Operating offline and adaptable
to various camera angles, it automatically identifies non-compliant individuals and issues audio
warnings. The system showcases the potential of technology, specifically Nvidia’s hardware, in
promoting safety and protecting communities during public health crises.
Nvidia’s GeForce focuses on processing while allowing gaming content to be sold through other end-user
platforms. Baek et al. (2023) noted the slow rollout of mobile cloud gaming despite the hype
surrounding 5G technology. They identify performance and user segmentation as key roadblocks and
propose two innovative business models to navigate these challenges. The first model involves offering
bundles of casual games with a freemium approach, while the second focuses on optimizing the service
for ultra-low latency. By prioritizing the initial adoption of the first model, the authors believe the industry
can ease into the market and pave the way for a thriving future of mobile cloud gaming.
Mwata-Velu et al. (2023) contributed to the field of Brain-Computer Interfaces (BCIs) with their
multi-task BCI system powered by the Nvidia Jetson TX2. Leveraging the EEGNet network and
tailored channel selection strategies, the system achieves remarkable accuracy in classifying motor
imagery tasks. With average accuracies of 83.7% and 81.3%, coupled with low processing latency
(48.7 ms), the system brings the dream of regaining control for individuals with motor disabilities
closer to reality. Its real-time responsiveness makes it a potential game-changer for communication
and independence.
Cheng et al. (2023) shed light on the performance of Nvidia Jetson and Azure Edge devices,
particularly when running Enhanced Super-Resolution Generative Adversarial Networks
(ESRGANs). While Jetson stands out with significantly lower power consumption and cooler operation
compared to traditional devices, its performance comes at a cost. Azure, on the other hand, offers
similar performance and power consumption as traditional methods. The study provides valuable
insights for developers and researchers seeking the optimal edge device for their specific applications.
Civik and Yuzgec (2023) contributed significantly to road safety with their real-time driver fatigue
detection system powered by Nvidia’s Jetson Nano. Utilizing deep learning algorithms, specifically
Convolutional Neural Networks (CNNs), the system analyzes eye and mouth movements with
impressive accuracy (93.6% and 94.5%, respectively). Operating at 6 fps on the Jetson Nano, the
system aims to minimize accidents by issuing alerts when driver fatigue is detected. The combination
of Nvidia’s hardware and cutting-edge deep learning showcases the potential for improved road safety
through technology.
O’Ryan (2023) developed novel k-means clustering algorithms that leverage Nvidia CUDA and
OpenMP, achieving remarkable speedups compared to traditional methods: up to 3000x faster than
Meta’s CPU implementation and 55x faster than Nvidia’s own GPU code. By minimizing communication
between device and host, O’Ryan optimizes resource utilization, paving the way for faster data
processing on Nvidia platforms. The study highlights the efficiency gains possible with Nvidia’s
hardware in accelerating complex computations.
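O’Ryan’s CUDA implementation is not reproduced here, but the underlying algorithm is standard Lloyd’s k-means, whose distance-and-assignment step is embarrassingly parallel and hence maps naturally onto GPUs. A minimal CPU sketch in NumPy (illustrative only):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means; the distance/assignment step below is
    the data-parallel work that GPU implementations accelerate."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Pairwise squared distances: one independent task per (point, center).
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        centers = np.array([points[labels == j].mean(axis=0)
                            if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels, centers

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = kmeans(pts, k=2)
print(labels)  # the two tight pairs end up in separate clusters
```

Minimizing device–host communication, as O’Ryan does, means keeping `points`, `labels`, and `centers` resident on the GPU across iterations rather than copying them back each round.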
Dou et al. (2023) demonstrated the capabilities of AutoSegEdge, a system powered by Nvidia’s
TensorRT optimization and deployed on the Jetson NX. This system delivers real-time semantic
segmentation, a challenging task for resource-constrained edge devices. Utilizing Neural Architecture
Search (NAS) and Multi-Task Learning, AutoSegEdge balances accuracy, resource usage, and latency
during model design. As a result, it achieves performance 2-3x faster than existing methods while
maintaining competitive accuracy. The study emphasizes Nvidia’s strengths in edge computing and
intelligent tasks.
Yu et al. (2023) turbocharged drug discovery with Uni-Dock, a GPU-accelerated molecular
docking program 1000x faster than traditional CPU methods. This speedup comes without
sacrificing accuracy, allowing scientists to screen massive libraries of potential drug candidates with
unprecedented efficiency. Uni-Dock’s flexible architecture supports multiple scoring functions and
scales seamlessly across different Nvidia GPUs. The study highlights the potential for personalized
medicine and groundbreaking treatments facilitated by Nvidia’s hardware.
Alkan et al. (2023) displayed the power of Nvidia HPC-class GPUs, including the A100 and V100,
in accelerating complex quantum chemistry calculations. Utilizing the DO CONCURRENT (DC)
feature of Fortran 2008, they achieve a 3x speedup compared to traditional offloading methods like
OpenACC and OpenMP. This performance boost paves the way for faster simulations and a deeper
understanding of chemical phenomena, emphasizing the role of Nvidia.
In tackling the technical challenges of real-time object detection on low-power devices, Zagitov
et al. (2024) present benchmarks on popular neural network models using Raspberry Pi and Nvidia
Jetson Nano. This research sheds light on accuracy, speed, and efficiency trade-offs across these platforms.
3. RESEARCH METHOD
Content Analysis has been widely employed across various topical contexts, with a resurgence in
interest driven by technological advancements and its prolific application in both mass communication
and personal interaction. The ubiquity of social media platforms and mobile devices has further
intensified its relevance, especially in the analysis of textual big data, presenting novel challenges.
Recognized as a quantitative method, Content Analysis enables the identification of statistical
frequencies of thematic or rhetorical patterns (Boettger and Palmer, 2010).
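The quantitative core of Content Analysis, counting how often theme-related terms occur in a corpus, can be sketched in a few lines; the mini-corpus and coding scheme below are invented for illustration:

```python
from collections import Counter
import re

# Hypothetical mini-corpus and coding scheme (illustrative only).
docs = [
    "Nvidia GPUs accelerate AI training and AI inference.",
    "Edge devices bring AI processing closer to sensors.",
]
themes = {"ai": {"ai"}, "hardware": {"gpus", "gpu", "devices", "sensors"}}

counts = Counter()
for doc in docs:
    tokens = re.findall(r"[a-z]+", doc.lower())
    for theme, keywords in themes.items():
        counts[theme] += sum(tokens.count(w) for w in keywords)

print(dict(counts))  # {'ai': 3, 'hardware': 3}
```

Real content-analysis pipelines add a validated coding scheme and inter-rater reliability checks; the sketch shows only the frequency-counting step.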
According to Salem et al. (2022), Content Analysis is a systematic and objective research
method facilitating valid inferences from verbal, visual, or written data, enabling the description and
quantification of specific phenomena. Widely utilized in qualitative research, it serves to explore
attention at the group, individual, societal, or institutional levels (Hsieh and Shannon, 2005; Downe-
Wamboldt, 1992; Weber, 1990).
Building upon the research methodology outlined by Heredia et al. (2024), our study implemented
a document search through the Web of Science and Scopus, recognized as primary search engines
within the academic domain. By employing keyword searches on both platforms, we retrieved relevant
studies for this systematic literature review. Our selection was restricted to scientifically rigorous,
peer-reviewed articles. We conducted a thorough review of keywords in titles and abstracts to ensure
comprehensive identification. Additionally, we employed snowballing and pearl-growing citation
strategies to further enhance the selection process.
Following an initial search based on keywords and phrases, we refined our approach by
incorporating authors’ names from relevant studies. The databases were searched anew, and reference
sections of the initially identified studies were thoroughly scrutinized, including relevant literature
reviews. Additionally, we employed Affinity Diagramming, a powerful technique for organizing
related facts into distinct clusters. Synonyms such as collaborative sorting, mapping, and snowballing
exist for this technique, providing a simple yet effective means to group and comprehend information.
Affinity Diagramming facilitates the identification and analysis of issues, with several variations
enhancing its adaptability.
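Snowballing itself can be viewed as a traversal of the citation graph: starting from seed papers, each reference list is scanned and newly found papers are queued for the same treatment. A minimal sketch over a hypothetical citation graph:

```python
from collections import deque

# Hypothetical citation graph: paper -> papers it references (illustrative only).
references = {
    "seed": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": [],
    "d": [],
}

def snowball(start, refs):
    """Backward snowballing as a breadth-first walk over reference lists."""
    seen, queue = {start}, deque([start])
    while queue:
        paper = queue.popleft()
        for cited in refs.get(paper, []):
            if cited not in seen:
                seen.add(cited)
                queue.append(cited)
    return seen

found = snowball("seed", references)
print(sorted(found))  # ['a', 'b', 'c', 'd', 'seed']
```

Forward snowballing and pearl growing work the same way over the inverse graph (papers citing a given paper), which is what database "cited by" searches provide.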
4. INTERNAL FACTORS
The partnership with Baidu in developing the Baidu Brain AI platform is another testament
to Nvidia’s dedication to collaborating with industry leaders in AI. This collaboration specifically
focuses on advancing deep learning applications in crucial areas such as autonomous vehicles, speech
recognition, and natural language processing, highlighting the company’s contribution to the broader
AI ecosystem.
In 2020, Nvidia strengthened its presence in the cloud computing market by partnering with
Microsoft Azure. This collaboration aimed to deliver cloud-based AI and data analytics services
powered by Nvidia GPUs. By joining forces with Microsoft Azure, Nvidia not only unlocked AI
capabilities for a broader audience but also expanded its footprint within the cloud computing domain,
reinforcing its position as a key player in the industry.
These strategic partnerships underscore how Nvidia’s internal factors, including technological
innovation, talent management, strategic collaborations, and robust research and development efforts,
have played pivotal roles in the company’s success and market leadership. By actively engaging with
industry leaders and key players across various sectors, Nvidia has expanded its market reach and
contributed significantly to advancing technology and fostering innovation in critical areas like AI,
cloud computing, and autonomous vehicles.
5. EXTERNAL FACTORS
In effect, Nvidia’s adaptability to shifting paradigms, from cloud computing to edge computing
and the ongoing emphasis on green computing, showcases the company’s foresight and ability to
align its technology portfolio with the evolving needs of the global computing landscape.
This adaptability has proven pivotal in maintaining Nvidia’s prominence in the global market for AI
technologies and high-performance computing solutions.
6. CONCLUSION
Nvidia, a globally acclaimed leader in advanced hardware and software technologies, has consistently
played a foundational role in propelling breakthroughs in the realm of artificial intelligence (AI)
research. Their resolute commitment to delivering robust and efficient tools has been pivotal in
pushing the boundaries of what is achievable across diverse domains. This article has delved
comprehensively into the multifaceted areas where Nvidia’s influence has left a profound impact,
encompassing natural language processing, autonomous driving, robotics, language model training,
gesture classification, Cyber-Physical Systems, computing and network convergence, GPU computing
evolution, real-time object detection on low-power devices, spatially sparse optimization frameworks,
and applications in agriculture.
Nvidia’s trajectory unfolds as a compelling narrative of sustained success amidst the dynamic
technological landscape. Positioned as an exemplar of innovation and strategic acumen, the company
consistently establishes industry benchmarks in graphics processing units (GPUs), artificial
intelligence (AI), and high-performance computing. The ascent to preeminence underscores a steadfast
commitment to innovation, strategic adaptability, and the cultivation of a robust ecosystem around
transformative technologies. Originating in pixel-based realms, Nvidia seamlessly transcends graphic
limitations to fuel AI revolutions, shapes the future of computing, and explores the uncharted realms
of the metaverse.
The narrative of Nvidia represents a saga of continuous reinvention, where each success catalyzes
loftier ambitions. As the company forges new paths and overcomes challenges, it serves as an
inspiration for audacious dreams, bold innovation, and an embrace of the transformative power of
technology for a more intelligent and luminous future. Looking forward, Nvidia’s potential appears
limitless. Their leadership in AI and computational prowess positions them as pivotal players in
shaping diverse industries. Noteworthy forays into autonomous driving and cutting-edge technologies
like quantum computing reflect an ongoing commitment to exploration and pushing the boundaries
of technological possibilities.
Nvidia’s leadership extends beyond the visionary; they are architects of the new computational
age. Their steadfast commitment to innovation has established them as a cornerstone in the dynamic
tech landscape. Whether pioneering the graphics processing unit (GPU) or leading advancements in
artificial intelligence (AI) research, Nvidia’s unwavering drive has the power to transform industries
and redefine the limits of what can be achieved. This dedication to growth, extending beyond their
success to benefit the entire technological ecosystem, firmly cements their position as a catalyst in
the ongoing digital revolution.
This article captures the essence of Nvidia and sets the stage for a thorough exploration of
its history, trajectory, strategic focuses, and the inherent factors driving it to global dominance.
Acknowledging the lasting impact of Nvidia’s success, it emphasizes the company’s ongoing influence
in shaping various industries and technologies. The article encourages a collective embrace of the
transformative potential embedded in Nvidia’s continuous innovation.
Conflicts of Interest
We wish to confirm that there are no known conflicts of interest associated with this publication and
there has been no significant financial support for this work that could have influenced its outcome.
Funding Statement
Process Dates
This manuscript was initially received for consideration for the journal on 01/22/2024, revisions were
received for the manuscript following the double-blind peer review on 04/07/2024, the manuscript was
formally accepted on 04/06/2024, and the manuscript was finalized for publication on 04/12/2024.
Corresponding Author
REFERENCES
Alkan, M., Pham, B. Q., Hammond, J. R., & Gordon, M. S. (2023, June 21). Enabling Fortran standard parallelism
in GAMESS for accelerated quantum chemistry calculations. Journal of Chemical Theory and Computation,
19(13), 3798–3805. doi:10.1021/acs.jctc.3c00380 PMID:37343236
Asad, U., Khan, M., Khalid, A., & Lughmani, W. A. (2023). Human-Centric Digital Twins in Industry: A
Comprehensive Review of Enabling Technologies and Implementation Strategies. Sensors (Basel), 23(8), 3938.
doi:10.3390/s23083938 PMID:37112279
Azevedo, P., & Santos, V. (2024). Comparative analysis of multiple YOLO-based target detectors and trackers
for ADAS in edge devices. Robotics and Autonomous Systems, 171, 104558. doi:10.1016/j.robot.2023.104558
Baek, S., Ahn, J., & Kim, D. (2023). Future business model for mobile cloud gaming: The case of South Korea
and implications. IEEE Communications Magazine, 61(7), 68–73. doi:10.1109/MCOM.001.2200374
Boettger, R. K., & Palmer, L. A. (2010). Quantitative content analysis: Its use in technical communication. IEEE
Transactions on Professional Communication, 53(4), 346–357. doi:10.1109/TPC.2010.2077450
Cheng, J. R. C., Stanford, C., Glandon, S. R., Lam, A. L., & Williams, W. R. (2023). Macro benchmarking
edge devices using enhanced super-resolution generative adversarial networks (ESRGANs). The Journal of
Supercomputing, 79(5), 5360–5373. doi:10.1007/s11227-022-04819-3
Civik, E., & Yuzgec, U. (2023). Real-time driver fatigue detection system with deep learning on a low-cost
embedded system. Microprocessors and Microsystems, 99, 104851. doi:10.1016/j.micpro.2023.104851
Dou, Z., Ye, D., & Wang, B. (2023). AutoSegEdge: Searching for the edge device real-time semantic segmentation
based on multi-task learning. Image and Vision Computing, 136, 104719. doi:10.1016/j.imavis.2023.104719
Duriau, V. J., Reger, R. K., & Pfarrer, M. D. (2007). A content analysis of the content analysis literature in
organization studies: Research themes, data sources, and methodological refinements. Organizational Research
Methods, 10(1), 5–34. doi:10.1177/1094428106289252
Greco, A., Percannella, G., Ritrovato, P., Saggese, A., & Vento, M. (2023). A deep learning based system for
handwashing procedure evaluation. Neural Computing & Applications, 35(22), 15981–15996. doi:10.1007/
s00521-022-07194-5 PMID:35474686
Hakim, A. A., Juanara, E., & Rispandi, R. (2023). Mask Detection System with Computer Vision-Based on
CNN and YOLO Method Using Nvidia Jetson Nano. Journal of Information System Exploration and Research,
1(2). Advance online publication. doi:10.52465/joiser.v1i2.175
Hao, J., Zhang, Z., & Ping, Y. (2024). Power System Fault Diagnosis and Prediction System Based on Graph
Neural Network. [IJITSA]. International Journal of Information Technologies and Systems Approach, 17(1),
1–14. doi:10.4018/IJITSA.336475
Heredia, M. G., Sánchez, C. S. G., & González, F. J. N. (2024). Integrating lived experience: Qualitative
methods for addressing energy poverty. Renewable & Sustainable Energy Reviews, 189, 113917. doi:10.1016/j.
rser.2023.113917
Jiang, S., Liu, Y., Wang, L., & Xu, Y. (2023, April). Aerial tracking based on vision transformer. In International Conference
on Electronic Information Engineering and Computer Science (EIECS 2022) (Vol. 12602, pp. 599-606). SPIE.
Krolik, A., Verbrugge, C., & Hendren, L. (2023). rNdN: Fast Query Compilation for NVIDIA GPUs. ACM
Transactions on Architecture and Code Optimization, 20(3), 1–25. doi:10.1145/3603503
Kurth, T., Subramanian, S., Harrington, P., Pathak, J., Mardani, M., Hall, D., & Anandkumar, A. et al.
(2023, June). Fourcastnet: Accelerating global high-resolution weather forecasting using adaptive Fourier
neural operators. In Proceedings of the Platform for Advanced Scientific Computing Conference (pp. 1-11).
doi:10.1145/3592979.3593412
Li, X., Laguna, I., Fang, B., Swirydowicz, K., Li, A., & Gopalakrishnan, G. (2023, August). Design and evaluation
of GPU-FPX: A low-overhead tool for floating-point exception detection in NVIDIA GPUs. In Proceedings
of the 32nd International Symposium on High-Performance Parallel and Distributed Computing (pp. 59-71).
doi:10.1145/3588195.3592991
Liang, G., Daud, S. N., & Ismail, N. A. B. (2023, August). Evolution of GPU virtualization to resource pooling.
In Second International Conference on Electronic Information Technology (EIT 2023) (Vol. 12719, pp. 641-
650). SPIE. doi:10.1117/12.2685490
Liu, X., Peng, H., Zheng, N., Yang, Y., Hu, H., & Yuan, Y. (2023). EfficientViT: Memory Efficient Vision
Transformer with Cascaded Group Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition (pp. 14420-14430). doi:10.1109/CVPR52729.2023.01386
Mwata-Velu, T. Y., Niyonsaba-Sebigunda, E., Avina-Cervantes, J. G., Ruiz-Pinales, J., Velu-A-Gulenga, N., &
Alonso-Ramírez, A. A. (2023). Motor Imagery Multi-Tasks Classification for BCIs Using the NVIDIA Jetson
TX2 Board and the EEGNet Network. Sensors (Basel), 23(8), 4164. doi:10.3390/s23084164 PMID:37112504
Negi, P., Singh, R., Gehlot, A., Kathuria, S., Thakur, A. K., Gupta, L. R., & Abbas, M. (2023). Specific Soft
Computing Strategies for the Digitalization of Infrastructure and its Sustainability: A Comprehensive Analysis.
Archives of Computational Methods in Engineering, 1–22.
O’Ryan, K. (2023). Efficient multi-GPU K-means clustering (Unpublished thesis). Texas State University, San
Marcos, Texas.
Salem, I. E., Elkhwesky, Z., & Ramkissoon, H. (2022). A content analysis for government’s and hotels’ response to
COVID-19 pandemic in Egypt. Tourism and Hospitality Research, 22(1), 42–59. doi:10.1177/14673584211002614
Smink, M., Liu, H., Döpfer, D., & Lee, Y. J. (2024). Computer Vision on the Edge: Individual Cattle Identification
in Real-Time With ReadMyCow System. In Proceedings of the IEEE/CVF Winter Conference on Applications
of Computer Vision (pp. 7056-7065). doi:10.1109/WACV57701.2024.00690
Tang, S., Yu, Y., Wang, H., Wang, G., Chen, W., Xu, Z., & Gao, W. et al. (2023). A Survey on Scheduling
Techniques in Computing and Network Convergence. IEEE Communications Surveys and Tutorials.
Wang, Q. (2024). The Analysis of Instrument Automatic Monitoring and Control Systems Under Artificial
Intelligence. [IJITSA]. International Journal of Information Technologies and Systems Approach, 17(1), 1–13.
doi:10.4018/IJITSA.336844
Wu, J., Zhang, J., & Pan, L. (2024). BitTrace: A Data-Driven Framework for Traceability of Blockchain Forming
in Bitcoin System. [IJITSA]. International Journal of Information Technologies and Systems Approach, 17(1),
1–21. doi:10.4018/IJITSA.339003
Xu, J. (2024). Forecasting Water Demand With the Long Short-Term Memory Deep Learning Mode.
[IJITSA]. International Journal of Information Technologies and Systems Approach, 17(1), 1–18. doi:10.4018/
IJITSA.338910
Yu, Y., Cai, C., Wang, J., Bo, Z., Zhu, Z., & Zheng, H. (2023). Uni-Dock: GPU-accelerated docking enables
ultralarge virtual screening. Journal of Chemical Theory and Computation, 19(11), 3336–3345. doi:10.1021/
acs.jctc.2c01145 PMID:37125970
Zagitov, A., Chebotareva, E., Toschev, A., & Magid, E. (2024). Comparative analysis of neural network models
performance on low-power devices for a real-time object detection task. Computer, 48, 2.
Zampokas, G., Bouganis, C. S., & Tzovaras, D. (2024). Latency driven spatially sparse optimization for multi-
branch cnns for semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications
of Computer Vision (pp. 939-947). doi:10.1109/WACVW60836.2024.00105
John Wang, a professor in the Department of Information Management and Business Analytics at Montclair State
University, USA, completed his PhD in Operations Research at Temple University after receiving a scholarship to
study in the USA. Recognized for his extraordinary contributions, he received two special range adjustments in
2006 and 2009 beyond his role as a tenured full professor. With over 100 refereed papers and seventeen books,
Dr. Wang has also developed computer software programs based on his research findings. Serving as Editor-in-
Chief for 11 Scopus-indexed journals and overseeing multiple encyclopedias, including those on Data Science,
Machine Learning, Business Analytics, and Optimization, Dr. Wang’s research focus aligns with the synergy of
operations research, data mining, and cybernetics.
Jeffrey Hsu is a Professor of Information Systems at the Silberman College of Business, Fairleigh Dickinson
University. He is the author of numerous papers, chapters, and books, and has previous business experience in
the software, telecommunications, and financial industries. His research interests include knowledge management,
human-computer interaction, e-commerce, IS education, and mobile/ubiquitous computing. He is Editor in Chief of
the International Journal of e-Business Research (IJEBR) and is on the editorial boards of several other journals.
Dr. Hsu received his Ph.D. in Information Systems from Rutgers University, a M.S. in Computer Science from the
New Jersey Institute of Technology, and an M.B.A. from the Rutgers Graduate School of Management.
Zhaoqiong Qin is the Associate Professor of Logistics and Supply Chain Management. Her expertise mainly
focuses on Operations Research, Logistics, Supply Chain Management and Analytics. Her research work has
been published in academic journals including but not limited to Operations Research, International Journal of
Applied Management Science, International Journal of Logistics: Research and Application, International Journal
of Information System and Supply Chain Management.