Research Facilities


A. Local computational resources

Each facility below is listed with its name, IP address, owner and accessibility, and specifications.
IBM Power 755 HPC server and Intel Xeon PC Cluster (59.77.33.146)
  • The Power 755 server and the Intel Xeon PC Cluster are owned by PCOSS and accessible to all PCOSS members.
  • Log in via SSH (port 22) with a client such as PuTTY, using a group-owned username (see the example commands below).
  • Simply log in to the console to use both facilities.
  • The Power 755 server is AIX-based, equipped with 128 POWER-series CPUs, and currently runs QM packages such as G09, GAMESS, Molpro, and Molcas; computational jobs are queued by LoadLeveler.
  • The Intel Xeon PC Cluster has a total of 10 slave nodes, each equipped with 24 cores. Currently available software includes Material Studio 5.5 and VASP. Computational jobs are queued by Torque.
  • Please download the User's Manual.
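A minimal login sketch from a Linux/macOS terminal is given below; the username "pcossgrp" is only a placeholder for the actual group-owned account, and PuTTY users simply enter the same host and port in the GUI. Once logged in, the standard LoadLeveler command llq lists jobs queued on the Power 755 server, and the standard Torque command qstat lists jobs queued on the Intel Xeon Cluster.

  $ ssh -p 22 pcossgrp@59.77.33.146    # placeholder username; port 22
  $ llq                                # LoadLeveler queue status (Power 755)
  $ qstat                              # Torque queue status (Intel Xeon Cluster)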
PowerLeader PC cluster (59.77.32.140)
  • Group-owned PC cluster accessible to all group members.
  • Log in via SSH (port 22) with a client such as PuTTY, using your own username (see the example below).
  • 23 slave nodes (8 cores each) plus a console server; Linux-based OS.
  • Available software: VASP, Material Studio, G03, and G09.
  • Computational jobs are queued by DQS.
  • Please read the user's manual.
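As a sketch, logging in and checking the queue could look like the following; "yourname" is a placeholder for your personal account, and qstat is assumed here only because it is the usual DQS-style queue query (check the user's manual for the exact commands installed on this cluster).

  $ ssh -p 22 yourname@59.77.32.140    # placeholder username
  $ qstat                              # assumed queue-status command; see the manual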
Lenovo PC cluster (59.77.33.201)
  • Group-owned PC cluster accessible to all group members.
  • Log in via SSH (port 22) with a client such as PuTTY, using your own username.
  • 23 slave nodes (4 cores each) plus a console server; Linux-based OS.
  • Available software: VASP, Material Studio, G03, and G09.
  • Computational jobs are queued by DQS.
  • Please read the user's manual.
Fat-node PC cluster (59.77.33.57)
  • Owned by PCOSS; accessible to authorized group members.
  • Log in via SSH (port 22) with a client such as PuTTY, using a group-owned username.
  • 3 fat nodes (32 cores each); one node is group-owned and two nodes are shared by several research groups. Linux-based OS.
  • Available software: VASP, Material Studio, G03, and G09.
  • Computational jobs are queued by DQS.
Lenovo PC workstations (59.77.33.202)
  • Group-owned PC workstations accessible to all group members.
  • Log in with telnet (port 23) and a group-owned username.
  • A total of 5 nodes (4 cores each), labelled by node numbers 57-61.
  • To log in to an individual node, use port number xx23, e.g. 5823 for node 58 (see the example below).
  • Linux-based OS.
  • Available software includes VASP, Material Studio, and G03.
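For example, node 58 can be reached directly from a terminal as shown below; PuTTY users select the Telnet connection type and enter the same host and port.

  $ telnet 59.77.33.202 5823    # port 5823 connects to node 58; use xx23 for node xx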
SGI servers (172.16.14.200-223)
  • SKL-owned SGI servers.
  • Log in with telnet (port 23) and a group-owned username.
  • Four separate servers, each with 16 CPUs.
  • To log in to an individual server, use its own IP address and a group-owned username.
  • Linux-based OS.
  • Available software includes VASP, Material Studio, G03, and G09.

B. How to submit a job on our PowerLeader and Lenovo clusters


1) Gaussian 09 and Gaussian 03

The command line to submit a G09 job is "$ q09 jobname", where your Gaussian input file is named "jobname.com". Each job occupies one slave node, which has 4 CPUs on the Lenovo cluster and 8 CPUs on the PowerLeader cluster. Similarly, the command line to submit a G03 job is "$ q03 jobname".
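For example, assuming a Gaussian 09 input file named water.com (the file name is only illustrative) in the current directory:

  $ q09 water    # submits water.com to the queue
  $ q03 water    # likewise, submits a Gaussian 03 input named water.com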


2) DMol3 and CASTEP (available in Material Studio 5.5)

The command line to submit a DMol3 job is "$ qdmol nodes ppn jobname", where:

nodes -- the number of nodes to be used;

ppn -- the number of CPUs per node to be used (4 CPUs/node on our Lenovo PC cluster and 8 CPUs/node on our PowerLeader cluster);

jobname -- the base name of the input file "jobname.input".

Similarly, the command line to submit a CASTEP job is "$ qcastep nodes ppn jobname".
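For example, assuming a DMol3 input file named slab.input (an illustrative name), a run on two Lenovo nodes using all 4 CPUs of each (8 CPUs in total) could be submitted as:

  $ qdmol 2 4 slab      # nodes=2, ppn=4, reads slab.input
  $ qcastep 2 4 slab    # the analogous CASTEP submission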


3) VASP 5.2

The command line to submit a VASP job is "$ qvasp nodes ppn jobname", where:

nodes -- the number of nodes to be used;

ppn -- the number of CPUs per node to be used (4 CPUs/node on our Lenovo PC cluster and 8 CPUs/node on our PowerLeader cluster).
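For example, to run a VASP job on two Lenovo nodes using all 4 CPUs of each (the job name "surface" is illustrative, and it is assumed that the usual VASP input files INCAR, POSCAR, KPOINTS, and POTCAR are already prepared in the working directory):

  $ qvasp 2 4 surface    # nodes=2, ppn=4, job label "surface"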

4) Additional notes on job submission



Recommendations:

  1. Please submit small G09/G03 jobs to the Lenovo cluster and large G09/G03 jobs to the PowerLeader cluster. (I do not recommend using Linda for any parallel G09/G03 jobs.)
  2. If you use Material Studio or VASP, please submit your jobs to the Lenovo cluster. In that case, you may use more than one node per job if necessary.


Updated on Oct. 9, 2011.