ScienceCloud maintenance on June 8th

Dear ScienceCloud users,

A maintenance of ScienceCloud is planned for Wednesday, 8 June 2016. The maintenance is needed to upgrade OpenStack from the Kilo release to the Liberty release. During that day, most operations on ScienceCloud will NOT be possible, and some S3IT services based on ScienceCloud will also be affected.

Maintenance window

Start: 8 June 2016, 9:00
End: 8 June 2016, 20:00 (pessimistic estimate)

Note that some services will be unavailable during the whole duration of the maintenance window, while others will be offline only for a brief period of time.

What will NOT work

During the maintenance window, the following services will not be available:

  • Full day: the web and API interfaces, including:
    • creation/deletion of VMs
    • creation/deletion/attachment of volumes
    • upload of new Glance images
    • access to the VM console
  • Multiple short interruptions (~5 minutes): Network connectivity to VMs connected to the "uzh-only" network
  • Multiple interruptions (~30 minutes): Network connectivity to VMs connected to internal networks or reachable via a Floating IP
  • Multiple interruptions (~30 minutes): S3IT services based on ScienceCloud, including:
    • the S3IT issue tracker (you can still create new tickets by email, but they will not be processed until the end of the maintenance window)

What will continue to work

The virtual machines on ScienceCloud will not be terminated; therefore:

  • your VMs will continue to run during the whole maintenance
  • applications inside your VMs will continue to run, provided they do not depend on a service reachable only over the network, or they can tolerate short interruptions of network connectivity (see the sketch after this list)
  • access to already mounted volumes will continue to work
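If an application inside your VM depends on the network, a simple retry wrapper along the following lines is usually enough to ride out interruptions of a few minutes. This is only a minimal sketch in Python; the function name, retry count, and delay are illustrative, not a recommendation specific to ScienceCloud:

    import time

    # Illustrative retry helper: calls `func()` and retries a few times
    # with a delay, so a brief connectivity drop during the maintenance
    # window does not crash the application.
    def retry_on_network_error(func, attempts=5, delay=60):
        for attempt in range(1, attempts + 1):
            try:
                return func()
            except OSError:            # socket/connection errors
                if attempt == attempts:
                    raise              # give up after the last attempt
                time.sleep(delay)      # wait for connectivity to return

    # Example usage: fetch a URL that may be briefly unreachable.
    # import urllib.request
    # data = retry_on_network_error(
    #     lambda: urllib.request.urlopen("http://example.org").read())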

We will send a notification to subscribed users 30 minutes before the maintenance window starts, and another notification after it is closed.

We apologize for any inconvenience this might cause. If you have any questions, don't hesitate to contact us.