Thursday, February 25, 2016

OpenStack Profile "Liberty" Support

We've done another round of updates to the OpenStack profile and the images it's based on, and wanted to share the important changes -- and encourage you to migrate to this latest version if at all possible.  Here's a quick summary:
  • Liberty support (Kilo and Juno still available, but upgrade if you can)
  • Keystone v3 API enabled by default for both Kilo and Liberty (but can select v2.0 if preferred)
  • Migrate (for Kilo and greater) to the unified "openstack" CLI client for configuration, instead of the per-service CLI clients (see the example after this list)
  • Parameters for choosing node type and link bandwidth
  • Increase token and Horizon (dashboard) timeouts so that web users remain logged in longer (these are parameters with generous default values)
  • Migrate (for Kilo and greater) to Keystone via WSGI/Apache (but this is also a parameter, so you can select the old method of the Keystone Python API server)
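For reference, here's roughly what the CLI change looks like.  These commands are illustrative only, and assume your admin credentials (openrc) are already sourced in your shell:

Old per-service clients:

keystone user-list
nova list
neutron net-list

Unified client (Kilo and greater):

openstack user list
openstack server list
openstack network list

If you select the Keystone v3 API, the client also expects OS_IDENTITY_API_VERSION=3 plus the domain-scoping variables (OS_USER_DOMAIN_NAME, OS_PROJECT_DOMAIN_NAME) in your environment; treat that exact variable set as an assumption, and check the credentials file on your controller rather than writing one from scratch.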
We've traditionally configured OpenStack in accordance with the installation documentation, using its defaults when possible.  However, this time there are some notable deviations:
  • Keystone doesn't use Memcache by default (although it's an option)
  • We continue to use the openvswitch Neutron driver to manage networks; the Liberty docs have switched to the linuxbridge driver
  • We continue to use a split controller/networkmanager installation, unlike the docs, which now unite the controller and networkmanager.  We'll probably migrate to this eventually.
  • We set the default resource limits to unlimited for Nova, Neutron, and Cinder (you can keep the stock limits by unchecking the quotas parameter; see the example after this list)
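As a purely illustrative example of the quota behavior, the unified client can show a project's limits and set them back to unlimited yourself; "myproject" below is a hypothetical project name, and -1 means unlimited:

openstack quota show myproject
openstack quota set --instances -1 --cores -1 --ram -1 myproject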

Thanks for reading, and please report any problems to cloudlab-users@googlegroups.com.  If you're not a member, please join!

Thursday, February 18, 2016

Glibc Vulnerability Patching

Hi all,

In order to apply patches for the recent glibc resolver buffer
overflow vulnerability, we plan to reboot all of the CloudLab control
servers today at 5PM MST. This will temporarily interrupt
instantiation of new experiments, and the CloudLab web portal will
also be unavailable for 15 minutes or so.

Related to this glibc vulnerability, we ask that you:

* Please perform a software update on nodes in running experiments

If you expect that your experiment(s) will run for more than two days
from now, please update your nodes via your OS distribution's
update mechanism:

As root on Ubuntu:

apt-get update
apt-get upgrade
reboot

As root on CentOS:

yum update
reboot

Notes: If "grub" is updated in this process, it may ask where it
should install itself.  Choose "/dev/sda1" for anything other than
Ubuntu 12.  For Ubuntu 12, choose "/dev/sda2".  Also choose to keep
any existing configuration files if/when prompted (e.g., for Grub,
OpenSSH server, etc.)
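
If you'd rather avoid those prompts, a non-interactive variant of the
Ubuntu steps is sketched below; the Dpkg option tells apt to keep your
existing configuration files.  This is a convenience sketch, not the
required procedure -- if grub has never recorded an install device on
the node, you may still want to run the interactive steps above.

As root on Ubuntu:

export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get -y -o Dpkg::Options::="--force-confold" upgrade
reboot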

* Please update your custom disk images

If you use a custom disk image, please perform a system software
update as described above, and re-snapshot your image.
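
In either case, one way to sanity-check that the patched glibc is
installed is to query the package version and compare it against your
distribution's security advisory (version numbers vary by release, so
we don't list them here):

On Ubuntu:

dpkg -l libc6

On CentOS:

rpm -q glibc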

Email support@cloudlab.us with questions.

More info on the glibc vulnerability can be found here:

https://access.redhat.com/articles/2161461