High Performance Computing Grid
Wayne State University has developed a high-performance, Grid-enabled computing system that houses and manages research-related projects. The Grid supports projects requiring high-speed computation, parallel and distributed computing, data management, and computationally intensive applications, enabling users to utilize numerous processors across different systems simultaneously. These systems are joined by high-speed networks, allowing them to function effectively as a single unit. The Grid currently has a combined processing power of 7,256 cores (2,328 Intel and 4,928 AMD), with over 22TB of RAM and 1.2PB of disk space. The system is open to any researcher at WSU.
High Performance Computing Service's mission is to advance and support high-performance computing for Wayne State University and its affiliates by providing services in a high-quality, responsive manner.
March 13, 2016
- Upgrade to PBS 13
- The job scheduler was upgraded to PBS 13. This version includes enhanced access to and control of GPU and Phi cards, improved job submission standards, and new job placement methods.
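To illustrate the GPU support mentioned above, a PBS Pro job script can request GPU resources through the select statement. The following is a minimal sketch, assuming a hypothetical queue named `gpuq` and the standard PBS Pro `ngpus` resource; the actual queue names and resource limits on the Grid may differ:

```shell
#!/bin/bash
# Minimal PBS Pro job script sketch. The queue name "gpuq" is an
# assumption for illustration, not a documented Grid queue.
#PBS -N gpu_example
#PBS -q gpuq
#PBS -l select=1:ncpus=1:ngpus=1
#PBS -l walltime=00:30:00

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"

# Report which GPU the scheduler assigned to this job
nvidia-smi
```

A script like this would be submitted with `qsub gpu_example.sh`; check the queue documentation for the resource names actually configured on the Grid.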
December 3, 2015
- Network and Storage Update
- The network switch infrastructure was updated to maximize available bandwidth, minimize latency, and minimize the number of hops taken to communicate between points within the Grid. The software for the Panasas, the Grid's central storage solution, was updated as well.
July 10, 2015
- Preseq and R-3.2.1 installed
- Preseq and R-3.2.1 have been installed on the Grid. R-3.2.1 has over 7,000 packages available for it. Check the Applications & Developer Tools page for availability.
May 15, 2015
- Grid back online
- The Grid is back online and ready for use. In response to the unexpected networking problems, several infrastructure updates have been made that are expected to significantly reduce the impact of future failures. Notably, the majority of non-interactive jobs that were active before the problems started survived; interactive jobs, by their nature, did not. Thank you for your patience during this period.
September 18, 2014
- Silicon Mechanics Ribbon Cutting
- Representatives from Silicon Mechanics visited the Computing Center here at Wayne State University for a ribbon-cutting event celebrating the compute cluster awarded to the university.
July 24, 2014
- Wayne State University named an NVIDIA GPU Research Center
- WSU has been named one of several NVIDIA GPU Research Centers around the world. A computing cluster with NVIDIA Tesla K20X and K40 graphics processing units is available to WSU students and researchers.
Read more here: http://blogs.nvidia.com/blog/2014/07/24/nyu-new-cuda-centers/
May 27, 2014
- Installation of Silicon Mechanics Equipment
- The compute cluster awarded to WSU by Silicon Mechanics has been installed.
Information on the cluster: https://www.grid.wayne.edu/resources/hardware/64-bit/smpq_smtq.html
November 29, 2013
- Grid Online
- The WSU Grid is back online and available for use.
- We are now live on the new Panasas AS14 high speed storage array.
- Upgraded to Altair PBS Pro 12. All existing scripts, commands, etc. should be backward compatible.
- All networking equipment has been updated and re-cabled, with improved redundancy, faster interconnects, and more ports.
- All hardware is now behind metered power distribution units for increased power management.
- The method of quota management for users and groups has been improved.
- MTX, ASX, DAD, PCC, VII, and other major queues are unchanged and available for use now.
- MRA, OPT, and other smaller queues are being combined into a single queue.
May 8, 2012
- New 64-bit public queues available
- The Grid has undergone a major refresh that affects all users of the public queues. The 32-bit queues ajsq (majsq), mdtq (mmdtq), and xenq will be completely decommissioned tomorrow morning (Wednesday, May 9th). They have been replaced with two new queues: the asxq, 40 nodes each with four 16-core 2.6GHz AMD processors and 128 GB of RAM, and the mtxq, 56 nodes each with two 6-core 2.93GHz Intel processors and 96 GB of RAM. Every user has guaranteed space on these nodes: 32 cores on the asxq and 12 cores on the mtxq. There are also mass queues, masxq and mmtxq, which users may use to run jobs that require more resources. These queues fill nodes 7-40 on the masxq and 13-56 on the mmtxq. Jobs on the mass queues may be suspended per PBS's fair-use algorithm, or by users that require resources on the primary queues when all machines are in use. These node ranges may change depending upon usage statistics.
This refresh was funded by C&IT and represents a major investment in high performance computing by the University. Please contact us if you have any questions or comments. Have fun taking advantage of these new resources!
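As a rough sketch of how the guaranteed and mass allocations described above might be used, the commands below show standard PBS Pro `qsub` submissions. The walltimes and chunk sizes are illustrative assumptions, not documented Grid limits, and `myjob.sh` is a hypothetical job script:

```shell
# Submit a job using the 32-core guaranteed allocation on the asxq
# (walltime shown is an example value, not a documented limit)
qsub -q asxq -l select=1:ncpus=32 -l walltime=04:00:00 myjob.sh

# Submit a larger run to the mass queue; per the fair-use policy,
# such jobs may be suspended when primary-queue users need resources
qsub -q masxq -l select=2:ncpus=64 -l walltime=12:00:00 myjob.sh
```

The `select` statement requests resource chunks: one half-node of 32 cores in the first case, two full 64-core AMD nodes in the second.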