The ScienceCloud is a multi-purpose compute and storage infrastructure of the University of Zurich, serving most of the computational and storage needs of its research community.
It is an Infrastructure-as-a-Service (IaaS) solution specifically targeted at large-scale computational research; it is built on the OpenStack cloud management software and on Ceph for the underlying storage infrastructure.
Why would you need to run on ScienceCloud?
With ScienceCloud you can provision your own dedicated and customized research infrastructure to store your research data and to run your large-scale data analysis.
Modern researchers need access to computing and data infrastructure to solve a wide variety of problems:
- Data analysis
- Statistical analysis
- Simulations and model building
- Parameter studies
- Image processing
- Other emerging research services
Serving the changing requirements of UZH research groups is the driving principle behind the infrastructure: S3IT works in partnership with research groups across scientific fields to help them remain competitive in their domains.
Moreover, access to ScienceCloud is accompanied by service and support from the Research IT specialists of S3IT, which not only removes the overhead of running local resources but also provides access to new skills and expertise.
The Regulations of the Use of IT-Resources at UZH are binding for all users; acceptance of these policies is implicit in the use of the system.
Who can use ScienceCloud and cost contributions
Access to the ScienceCloud is open to all researchers at the University of Zurich.
Usage of ScienceCloud is subject to a cost contribution. As ScienceCloud is largely subsidized by UZH, these contributions are very affordable. Contact S3IT to find out how the cost contribution model applies to your specific use case.
Existing ScienceCloud project owners will be contacted by S3IT starting March 2017 to discuss their use-case. Until then, no cost contributions will be requested for ScienceCloud usage.
The ScienceCloud numbers and roadmap
| compute nodes | virtual CPUs | total RAM |
|---|---|---|
| type | raw capacity | usable capacity |
|---|---|---|
| Block storage | 4.2 PB | 1.4 PB |
| Object storage | 1.7 PB | 0.8 PB with replica-2 (or 1.2 PB with ec104) |
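The usable-capacity figures above follow from Ceph's redundancy schemes: with n-way replication, usable capacity is roughly raw capacity divided by the replica count, and with erasure coding using k data and m coding chunks (reading "ec104" as k=10, m=4), it is roughly raw × k/(k+m). A minimal sketch of this arithmetic, assuming those profiles and ignoring operational overhead and reserved headroom, which explain the slightly lower published figures:

```python
def usable_replicated(raw_pb: float, replicas: int) -> float:
    """Usable capacity (PB) under n-way replication."""
    return raw_pb / replicas

def usable_erasure_coded(raw_pb: float, k: int, m: int) -> float:
    """Usable capacity (PB) under k+m erasure coding."""
    return raw_pb * k / (k + m)

# Object storage: 1.7 PB raw
print(usable_replicated(1.7, 2))         # ~0.85 PB with replica-2
print(usable_erasure_coded(1.7, 10, 4))  # ~1.21 PB with ec104 (k=10, m=4)

# Block storage: the 4.2 PB raw / 1.4 PB usable figure is consistent
# with 3-way replication (an assumption here, not stated above)
print(usable_replicated(4.2, 3))         # ~1.4 PB
```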
Every compute node has a non-blocking, redundant 10 Gbps link to the internal network. This network is used both to access the storage infrastructure and as the data plane for the virtual machines.
The uplink to the University network is a redundant 20 Gbps link.
Want to know more?
Do not hesitate to contact us by sending an email to firstname.lastname@example.org.