High Performance Computing Grid
Wayne State University has developed a high-performance, Grid-enabled computing system that houses and manages research-related projects. The Grid handles projects requiring high-speed computation, parallel and distributed computing, data management, and computationally intensive applications, enabling users to utilize numerous processors across different systems simultaneously. These systems are joined by high-speed networks that allow them to function effectively as a single unit. The Grid currently has a combined processing power of 4,568 cores (1,346 Intel cores and 3,222 AMD cores), with over 13.5 TB of RAM and 1.2 PB of disk space. The system is open to any researcher at WSU.
High Performance Computing Service's mission is to advance and support high performance computing for Wayne State University and its affiliates by providing services in a high quality and responsive manner.
NOTICE: Updating to CentOS 6
We are in the process of updating to CentOS 6 across the Grid. Nodes are being transferred over gradually. Our target date of completion is the end of August. To ensure that you will continue to be able to use the software that you need for your research, we ask that you fill out a software request application. Software packages are currently being compiled for the new system.
If you have any questions or concerns, please do not hesitate to contact us by emailing email@example.com.
April 7, 2014
- CMake installed
- Two new versions of CMake have been installed
March 4, 2014
- SRA Toolkit 2.3.4-2 installed
- SRA Toolkit 2.3.4-2 has been installed
January 3, 2014
- OpenMPI 1.4.3 and 1.6.5 installed
- OpenMPI 1.4.3 and 1.6.5 have been installed
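As a sketch of how the newly installed OpenMPI versions might be used (the module names are assumptions; check `module avail` on the Grid for the actual names):

```shell
# Hypothetical usage sketch: compile and launch an MPI program with one
# of the installed OpenMPI versions. Module names are illustrative.
module load openmpi/1.6.5    # or openmpi/1.4.3
mpicc -O2 -o hello hello.c   # compile with the OpenMPI wrapper compiler
mpirun -np 4 ./hello         # launch 4 MPI processes
```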
December 17, 2013
- SOAP, TopHat, Trinity RNASeq installed
- SOAP 2, TopHat 2.0.10, and Trinity RNASeq r20131110x have been installed
November 29, 2013
- Grid Online
- The WSU Grid is back online and available for use.
- We are now live on the new Panasas AS14 high speed storage array.
- Upgraded to Altair PBS Pro 12. All existing scripts, commands, etc. should be backwards compatible.
- All networking equipment has been updated and re-cabled, with improved redundancy, faster interconnects, and more ports.
- All hardware is now behind metered power distribution units for increased power management.
- The method of quota management for users and groups has been improved.
- MTX, ASX, DAD, PCC, VII, and other major queues are unchanged and available for use now.
- MRA, OPT, and other smaller queues are being combined into a single queue.
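Since existing PBS scripts remain backwards compatible under PBS Pro 12, a minimal job script for one of the queues above might look like the following sketch (the queue name, resource requests, and program name are illustrative, not a prescribed template):

```shell
#!/bin/bash
# Illustrative PBS Pro job script -- adjust queue, cores, and walltime
# to your own allocation.
#PBS -N example_job
#PBS -q asxq                  # one of the public queues listed above
#PBS -l select=1:ncpus=16     # 1 node, 16 cores
#PBS -l walltime=04:00:00

cd $PBS_O_WORKDIR             # run from the directory qsub was called in
./my_program                  # hypothetical executable
```

The script would be submitted with `qsub example_job.pbs` and monitored with `qstat`.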
November 25, 2013
- Grid Maintenance
- The Grid has been taken down for scheduled maintenance.
May 8, 2012
- New 64-bit public queues available
- The Grid has undergone a major refresh that affects all users of the public queues. The 32-bit queues ajsq (majsq), mdtq (mmdtq), and xenq will be completely decommissioned tomorrow morning (Wednesday, May 9th). They have been replaced with two new queues: the asxq, 40 nodes each with four 16-core 2.6 GHz AMD processors and 128 GB of RAM, and the mtxq, 56 nodes each with two 6-core 2.93 GHz Intel processors and 96 GB of RAM. Every user has guaranteed space on these nodes: 32 cores on the asxq and 12 cores on the mtxq. There are also mass queues, masxq and mmtxq, which users may use to run jobs that require more resources. These queues fill nodes 7-40 on the masxq and 13-56 on the mmtxq. Such jobs may be suspended per PBS's fair-use algorithm, or by users that require resources on the primary queue if all machines are in use. These node ranges may change depending upon usage statistics.
This refresh was funded by C&IT and represents a major investment in high performance computing by the University. Please contact us if you have any questions or comments. Have fun taking advantage of these new resources!
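The guaranteed and mass-queue allocations described above could be requested along these lines (a hedged sketch; the exact `select` syntax accepted on the Grid may differ, and `job.sh` is a placeholder):

```shell
# Submissions within the guaranteed per-user allocations:
qsub -q asxq -l select=1:ncpus=32 job.sh    # up to 32 cores on asxq
qsub -q mtxq -l select=1:ncpus=12 job.sh    # up to 12 cores on mtxq

# Larger jobs can target the mass queues, which may be suspended
# under the scheduler's fair-use policy:
qsub -q masxq -l select=2:ncpus=64 job.sh   # 2 full 64-core AMD nodes
```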