HPC Computing FAQs
How do I get access to my files on the HPC?
After January 18, 2026, the old HPC cluster will be unavailable. Please prioritize moving your data before then.
vHPC FAQs
Where can I find the vHPC Facilities Statement?
The vHPC consists of 12 nodes, each with 2 CPUs, 4 GPUs (A100), and 512 GB of RAM.
For referencing KSU facilities, use the persistent link to the vHPC Facilities Statement maintained at the Library's DigitalCommons site.
How can I request an account on the vHPC?
If you have a billable account set up for your project to use the vHPC, you can request accounts for yourself and your team.
Do I have to use the KSU VPN to access the vHPC?
Yes. Users are required to connect to the KSU Virtual Private Network (VPN) using the vpn-groups portal.
Where is the documentation for the vHPC?
UITS maintains a wiki of technical documentation for the vHPC at hpcdocs.kennesaw.edu
What is Slurm?
Slurm is the Simple Linux Utility for Resource Management. It serves as the cluster's resource manager and job scheduler.
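As a sketch, a minimal Slurm batch script looks like the following (the job name, resource values, and script contents here are illustrative, not site requirements):

```shell
#!/bin/bash
#SBATCH --job-name=hello        # illustrative job name
#SBATCH --ntasks=1              # one task
#SBATCH --cpus-per-task=4       # CPU cores for that task
#SBATCH --mem=8G                # memory for the job
#SBATCH --time=01:00:00         # walltime limit (hh:mm:ss)

# The commands below run on the compute node Slurm assigns
echo "Running on $(hostname)"
```

Save it as, for example, hello.sh and submit it with `sbatch hello.sh`; Slurm queues the job and runs it when resources are free.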
What software is available on the cluster?
For the latest list of available software, see the UITS wiki that supports the cluster.
What limits are there for users and their jobs?
A KSU user is limited to using 144 CPU cores and 12 GPUs simultaneously.
A single job is limited to a 720 hour walltime.
A job on a single node is limited to 503 GB of RAM.
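In a batch script, these limits correspond to Slurm resource requests. As a hedged sketch, directives at the stated maxima might look like the following (exact GPU request syntax can vary with a site's Slurm configuration):

```shell
#SBATCH --time=720:00:00    # the maximum allowed walltime (720 hours)
#SBATCH --mem=503G          # the single-node RAM cap
#SBATCH --gres=gpu:4        # GPUs on one node (user-wide cap is 12)
#SBATCH --cpus-per-task=16  # illustrative; user-wide cap is 144 cores total
```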
How do I get training for the cluster?
Contact tboyle@kennesaw.edu
Is the vHPC a Computing Research Core?
Yes. Learn more about this and other research computing cores.
How much storage is there on the cluster?
User home directories are limited to 25 GB. The staging directory can be used to hold larger amounts of data for 90 days while your project is running jobs.
If you are looking for longer-term or higher-capacity storage for your project and its team, consider the research storage core.
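To check how much space you are using against these limits, standard Linux tools work; a quick sketch, run from a cluster shell:

```shell
# Total usage of your home directory (compare against the 25 GB quota)
du -sh "$HOME"

# Largest top-level items first, to find what to clean up or stage
du -sh "$HOME"/* 2>/dev/null | sort -rh | head
```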
Can I store sensitive data on the cluster?
No. To perform computation on data subject to research compliance requirements, other arrangements will need to be made.
How do I get a module to automatically load when I log into the cluster?
Example: To have MATLAB load into your environment whenever you start a new session, use $ module initadd MATLAB.
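A hedged sketch of the surrounding module workflow, using MATLAB as in the example above (available module names vary by cluster):

```shell
module avail             # list software modules you can load
module load MATLAB       # load a module for the current session only
module initadd MATLAB    # also load it automatically in future sessions
module initrm MATLAB     # undo the automatic load
```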
How do I easily move files to and from the cluster?
Use a graphical file transfer application that supports SFTP (via SSH).
Example: Download Cyberduck for Mac or Windows from the Cyberduck website.
Alternatively, from within a local terminal, use the scp command.
Example: scp C:\localfile.txt NetID@vhpc:/gpfs/home/e001/your_NetID/
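A few more hedged scp sketches, building on the path from the example above (replace NetID with your own; results.txt and the results/ directory are illustrative names):

```shell
# Upload a local file to your cluster home directory
scp localfile.txt NetID@vhpc:/gpfs/home/e001/your_NetID/

# Download a file from the cluster to the current local directory
scp NetID@vhpc:/gpfs/home/e001/your_NetID/results.txt .

# Copy a whole directory recursively
scp -r NetID@vhpc:/gpfs/home/e001/your_NetID/results/ ./results/
```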
How do I uncompress files on the cluster?
There are different file formats that can be 'unzipped':
- gunzip will extract the contents of .gz files.
- unzip will extract the contents of .zip files.
- tar -xvf will extract the contents of .tar archives; modern tar auto-detects compression, so it also handles .tar.gz and .tar.bz2 files.
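These commands can be tried safely in a scratch directory. The sketch below creates small illustrative files, compresses them, and extracts them again:

```shell
# gzip round trip: compress a file, then recover it with gunzip
printf 'hello\n' > data.txt
gzip data.txt                 # data.txt -> data.txt.gz
gunzip data.txt.gz            # data.txt.gz -> data.txt

# tar round trip: build a .tar.gz to practice on, then extract it
mkdir -p pkg
printf 'world\n' > pkg/notes.txt
tar -czf pkg.tar.gz pkg       # create the compressed archive
rm -rf pkg                    # delete the original directory
tar -xvf pkg.tar.gz           # tar auto-detects the gzip compression
```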
What should I do if my local file won't open on the cluster? (Error: script is written in DOS/Windows text format)
This can happen when transferring files between systems: the end-of-line characters are not what Linux expects. A simple fix to try:
$ dos2unix name_of_your_file
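If dos2unix is not installed, sed can do the same conversion; a small sketch (myscript.sh is an illustrative file name):

```shell
# Simulate a file saved in DOS/Windows format (CRLF line endings)
printf 'echo hello\r\n' > myscript.sh

# Strip the trailing carriage return from every line, in place
sed -i 's/\r$//' myscript.sh
```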
How do I create or edit a text file?
If connected via SSH, use nano.
$ nano <yourfilename>
If you are connected via SFTP, you can right-click a file and select Edit to open it in an editor on your local machine.
How do I see my running and queued jobs?
From the command line:
sstat - Show status information for running jobs
squeue - Display information about jobs in the queue
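A few common invocations, sketched with an illustrative job ID (12345):

```shell
squeue                # all jobs in the queue
squeue -u $USER       # only your own jobs
squeue -j 12345       # a specific job
sstat -j 12345        # resource usage of a running job you own
```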
