Cost-effective deep learning infrastructure with NVIDIA GPU
DOI: https://doi.org/10.70530/kuset.v19i1.587

Keywords: Deep learning infrastructure, Beowulf cluster, High-performance computing (HPC), GPU cluster architecture

Abstract
The growing demand for computational power is driven by advances in deep learning, the increasing need for big data processing, and the requirements of scientific simulations for academic and research purposes. Developing countries such as Nepal often lack the resources to invest in new, more capable hardware for these purposes; however, optimizing and building on existing technology can still meet these computing demands effectively. To address these needs, we built a cluster using four NVIDIA GeForce GTX 1650 GPUs. The cluster consists of four nodes: one master node that controls and manages the entire cluster, and three compute nodes dedicated to processing tasks. The master node is equipped with the software required for package management, resource scheduling, and deployment, such as Anaconda and Slurm. In addition, a Network File System (NFS) was integrated to provide the additional shared storage required by the cluster. Because the cluster is accessible over SSH at a public domain address, which poses significant cybersecurity risks, we implemented fail2ban to mitigate brute-force attacks and enhance security. Despite the challenges encountered throughout the design and implementation process, this project demonstrates how powerful computational clusters can be built to handle resource-intensive tasks in a variety of demanding fields.
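As a rough illustration of how a Slurm-managed GPU cluster of this kind is typically exercised, the sketch below writes a minimal batch script that requests one GPU on a compute node and submits it from the master node. It is only a sketch under stated assumptions: the NFS path to the shared Anaconda installation and the conda environment name (dl-env) are hypothetical values for illustration, not details taken from the paper.

```python
import subprocess
import textwrap

# Minimal Slurm job: request one GPU and verify it is visible on the
# allocated compute node before launching a real training workload.
# The Anaconda path and environment name below are assumptions.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=gpu-check
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --gres=gpu:1
    #SBATCH --output=gpu-check-%j.out

    # Activate the shared Anaconda environment exported over NFS (assumed path).
    source /nfs/shared/anaconda3/etc/profile.d/conda.sh
    conda activate dl-env

    # Report the GPU that Slurm allocated to this job.
    nvidia-smi --query-gpu=name,memory.total --format=csv
""")

# Write the batch script and hand it to Slurm's scheduler on the master node.
with open("gpu_check.sbatch", "w") as f:
    f.write(job_script)

subprocess.run(["sbatch", "gpu_check.sbatch"], check=True)
```

Running this on the master node queues the job on one of the three compute nodes; the resulting gpu-check-<jobid>.out file should list a single GeForce GTX 1650 if the scheduler and GPU drivers are configured correctly.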

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.