Wednesday, February 1, 2017

ScaleIO integration with OpenStack

I recently tested ScaleIO 2.0 with an OpenStack cloud (Mitaka release) and would like to share the results here.
Rather than describing detailed installation steps for ScaleIO and OpenStack, as many references and documents are already available, this post focuses on the configuration and storage operations.

Benefits: One can provision ScaleIO block volumes to OpenStack-managed VMs through the OpenStack UI or Cinder commands, without learning the native ScaleIO interface.

ScaleIO (Software defined, scale-out storage)
I downloaded the ScaleIO 2.0 software for Linux from ScaleIO Software Defined Block Storage | EMC. Detailed instructions on setting up a ScaleIO cluster are included in the documentation provided with the download package. Here is my setup:
  • 3-node cluster consisting of one Master MDM, one Slave MDM and one TieBreaker
  • All three nodes run an SDS, and one 100 GB disk from each SDS participates in the 'default' storage pool under the 'default' protection domain
  • ScaleIO Gateway Server and GUI are running on a separate host in my environment.

I have also verified that the REST gateway, which is included in the ScaleIO Gateway Server, is working fine.
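A quick way to check the REST gateway is a couple of curl calls: log in with basic authentication to obtain a token, then use that token as the password for subsequent API calls. This is a minimal sketch; the gateway address and credentials are placeholders for your environment.

```shell
# Authenticate against the ScaleIO REST gateway; a token string is returned
curl -k --user admin:'<password>' https://<gateway-ip>:443/api/login

# Use the returned token as the password for further REST calls,
# e.g. query the System object to confirm the API is responding
curl -k --user admin:'<token>' https://<gateway-ip>:443/api/types/System/instances
```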
With this we have a working setup of ScaleIO Cluster.

OpenStack (Open-source software platform for cloud computing, mostly deployed as an IaaS)
I have set up an OpenStack IaaS cloud using a single-node (allinone) RDO Mitaka deployment running on a CentOS 7 Linux server.
Details of the same can be found at https://www.rdoproject.org.

Created a project 'Tenant01' and a user 'User01' as a tenant user. An instance 'Nova01' is created without any Cinder volumes.

SDC is installed on the OpenStack node (well.. the all-in-one node in my case), while the ScaleIO Cinder driver has been built in since the Liberty release.

Integration
Here is the typical integration diagram (from the Liberty release, though).
Edit the Cinder configuration file located at /etc/cinder/cinder.conf to enable ScaleIO as the backend storage instead of the default LVM.
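For reference, a minimal sketch of what the ScaleIO backend stanza looked like in Mitaka. The gateway IP, credentials, and pool/domain names are placeholders; option names may vary slightly between releases, so check the driver documentation for your version.

```ini
[DEFAULT]
enabled_backends = scaleio

[scaleio]
# ScaleIO driver shipped with Cinder in Mitaka
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
# ScaleIO Gateway address and credentials
san_ip = <gateway-ip>
san_login = admin
san_password = <password>
# Protection domain and storage pool created earlier
sio_protection_domain_name = default
sio_storage_pool_name = default
```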


Restart Cinder Volume service.

Verify from the volume log file /var/log/cinder/volume.log that the drivers were successfully initialized.


Now let us test some of the operations related to Cinder volumes

  • Create a Volume

Create an empty volume to be attached to existing Nova instance

You can see in the volume logs that the ScaleIO volume is created

      And verify the same from ScaleIO CLI
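The create-and-verify step can be sketched with the CLI as below. The volume name and size are illustrative; older cinderclient versions use --display-name instead of --name, and scli requires a login to the MDM first.

```shell
# Create an 8 GB empty volume and confirm it in Cinder
cinder create --name vol01 8
cinder list

# On a ScaleIO node, verify the same volume from the native CLI
scli --login --username admin --password '<password>'
scli --query_all_volumes
```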

  • Attach the Volume to an Instance
Let us attach the volume created above to the existing instance 'Nova01'

Once the volume is attached, it can be detected inside the VM as any other disk.
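The attach step can be done from the CLI as well; the volume ID is a placeholder taken from `cinder list`. Passing "auto" lets Nova pick the device name.

```shell
# Attach the volume to the instance 'Nova01'
nova volume-attach Nova01 <volume-id> auto

# Inside the VM, the new disk shows up like any other block device
fdisk -l
```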

  • Extend a Volume
A current limitation of OpenStack Cinder is that a volume must be detached before it can be extended. Here are the steps via the CLI

Once attached back after the extension, the new size can be detected inside the VM
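The detach-extend-reattach cycle looks like this from the CLI; the volume ID and new size are placeholders.

```shell
# Detach first -- Cinder will not extend an in-use volume
nova volume-detach Nova01 <volume-id>

# Grow the volume to 16 GB
cinder extend <volume-id> 16

# Attach it back to the instance
nova volume-attach Nova01 <volume-id> auto
```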


  • Create, Mount/Verify (or restore required files) and Delete a snapshot
Let us create a partition and filesystem, mount the volume in the VM, and write some test data to it

Create a snapshot of the volume

The snapshot created can be verified from the ScaleIO CLI too

Let us delete one of the files that was created earlier


Now we have a situation where production data is missing or accidentally deleted. Let us restore the volume from the snapshot that we took


Attach the volume to the VM instance, mount it and restore the required data
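The whole snapshot-and-restore flow above can be sketched with the CLI as below. Names, sizes, and IDs are placeholders; older cinderclient versions use --display-name instead of --name.

```shell
# Take a snapshot of the volume while the data is intact
cinder snapshot-create --name snap01 <volume-id>
cinder snapshot-list

# After the accidental deletion, create a new volume from the snapshot
cinder create --snapshot-id <snapshot-id> --name vol01-restored 8

# Attach the restored volume to the instance, then mount it inside
# the VM and copy the required files back
nova volume-attach Nova01 <restored-volume-id> auto
```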
 


  • Detach and Delete a Volume
This is self-explanatory
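For completeness, the two commands involved (the volume ID is a placeholder):

```shell
# Detach the volume from the instance, then remove it entirely
nova volume-detach Nova01 <volume-id>
cinder delete <volume-id>
```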



  • Copy a Bootable Image to a Volume
A Cinder volume can also be used as a bootable device by storing an image on it. Let us create a new volume from an image source

This will copy the bootable image onto the newly created volume

Which can be used to launch an instance as shown below
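From the CLI, the same can be sketched as below; the Glance image ID, flavor, and instance name are placeholders for your environment.

```shell
# Create a volume pre-populated with a bootable Glance image
cinder create --image-id <glance-image-id> --name bootvol01 8

# Boot a new instance directly from that volume
nova boot --flavor m1.small --boot-volume <volume-id> Nova02
```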



These are some of the basic tasks required by cloud users for daily operations.
Hope this is useful.

Automated Installation of Networker with Puppet


Installing and configuring a Networker client is very easy. On Linux, it can be as easy as logging in to the server, running an "rpm" or "dpkg" command, and starting the "networker" service.
However, when you have to do it on hundreds of systems, it is no fun.

So what are the options?
  1. Manual Installation: SSH to those hundreds of systems and install it manually. Of course, this is a tiresome and boring task.
  2. Scripted Installation: A good idea. However, someone has to write, test and maintain the script, and when the script owner is not around, others must be ready to debug it if required.
  3. Use a Configuration Management Tool: These tools allow organizations to control the exact configuration of up to thousands of nodes from a single central server. Puppet, Chef and Ansible are popular open-source tools. They also ensure that the service and configuration of the clients remain in the desired state.
For new server installations you may add the Networker client to the golden image of your OS, though in some cases there might be some work to clear peer information. An additional advantage of configuration management tools is that they also maintain the desired state of the client service and configuration. Let's see this in action.

You can find a number of posts comparing these tools on the internet. I am going to use Puppet to install and configure the Networker client on four Linux servers. Puppet can run in Standalone or in Master-Agent mode.

I am using Master-Agent mode, and here is my setup. Each client contacts the server periodically (every half hour, by default), downloads the latest configuration, and makes sure it is in sync with that configuration. Once done, the client can send a report back to the server indicating whether anything needed to change.

Puppet Setup (if puppet is not setup in your environment)

Puppet Master:
Enable the Puppet Labs repository and install Puppet Server
  # rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
  # yum -y install puppetserver
Sample puppet configuration file
  # cat /etc/puppetlabs/puppet/puppet.conf
  # This file can be used to override the default puppet settings.
  # See the following links for more details on what settings are available:
  # - https://docs.puppetlabs.com/puppet/latest/reference/config_important_settings.html
  # - https://docs.puppetlabs.com/puppet/latest/reference/config_about_settings.html
  # - https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
  # - https://docs.puppetlabs.com/puppet/latest/reference/configuration.html
  [main]
  certname = <puppetmaster-fqdn>
  server = <puppetmaster>
  environment = production
  runinterval = 1h
  strict_variables = true

  [master]
  dns_alt_names = <any other dns names for puppet master>
  vardir = /opt/puppetlabs/server/data/puppetserver
  logdir = /var/log/puppetlabs/puppetserver
  rundir = /var/run/puppetlabs/puppetserver
  pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
  codedir = /etc/puppetlabs/code
  #
Start the server and enable autostart at server boot
  # systemctl start puppetserver
  # systemctl enable puppetserver

Puppet Agent:
Enable the Puppet Labs repository and install Puppet Agent
  # rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
  # yum -y install puppet-agent
Sample puppet configuration file
  # cat /etc/puppetlabs/puppet/puppet.conf
  # This file can be used to override the default puppet settings.
  # See the following links for more details on what settings are available:
  # - https://docs.puppetlabs.com/puppet/latest/reference/config_important_settings.html
  # - https://docs.puppetlabs.com/puppet/latest/reference/config_about_settings.html
  # - https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
  # - https://docs.puppetlabs.com/puppet/latest/reference/configuration.html
  [main]
  certname = <puppetagent-fqdn>
  server = <puppetmaster>
  environment = production
  runinterval = 1h
  #
Start the agent
  # /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
  Notice: /Service[puppet]/ensure: ensure changed 'stopped' to 'running'
  service { 'puppet':
    ensure => 'running',
    enable => 'true',
  }
  #

SSL Certificate:
Before Puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet’s built-in CA, agents will submit a certificate signing request (CSR) to the CA Puppet master and will retrieve a signed certificate once one is available.
List the unsigned certificates on puppet master
  # /opt/puppetlabs/bin/puppet cert list
    "puppetagent-01" (SHA256) 0B:78:EF:AC:A5:AC:13:35:61:C9:1B:54:CF:31:C9:1E:B5:8B:D1:1F:FE:00:43:16:CA:19:3D:E0:7F:9B:0F:87
  #
Sign the certificate request on puppet master
  # /opt/puppetlabs/bin/puppet cert sign "puppetagent-01"
  Signing Certificate Request for:
    "puppetagent-01" (SHA256) CF:B6:29:52:F9:BF:30:20:F8:9B:58:0A:F7:BC:4E:FC:B6:93:0F:86:8A:C8:39:FE:C4:6E:00:BF:5E:C3:07:1C
  Notice: Signed certificate request for puppetagent-01
  Notice: Removing file Puppet::SSL::CertificateRequest puppetagent-01
  at '/etc/puppetlabs/puppet/ssl/ca/requests/puppetagent-01.pem'
  #
You may also sign all pending certs in one go
  # /opt/puppetlabs/bin/puppet cert sign --all
With this, we are ready to use puppet as we desire.

Sample Puppet Manifest
Puppet uses manifest files to configure agents. Here is a sample manifest file.
  # cat /etc/puppetlabs/code/environments/production/manifests/sample.pp
  node 'puppetagent-01' {
          file {'/tmp/file1' :               # resource type file and filename
                  ensure  => present,        # make sure it exists
                  mode    => '0644',         # file permissions
                  content => "Any Content",  # content of the file
          }
  }
  node default {}
  #
This ensures that the file "/tmp/file1" will be present on the node "puppetagent-01" with the desired permissions and content. If the content or the permissions are changed on the agent, or if the file is deleted, Puppet will recreate it in the desired state.

This manifest manages the “resource” named “file”. Other valid resources are cron, mount, service, package etc. Multiple resources can be combined under “class” and multiple classes can be stored under a “module”.

Puppet Module for Networker Client
With this background on Puppet, let us write a manifest to install the Networker client. Do remember that Puppet is very particular about file names and their locations.
Let us create our own module:
  # puppet module generate bh-mynetworker
  We need to create a metadata.json file for this module. Please answer the
  following questions; if the question is not applicable to this module, feel free
  to leave it blank.

  Puppet uses Semantic Versioning (semver.org) to version modules.
  What version is this module?  [0.1.0]
  -->

  Who wrote this module?  [bh]
  -->

  What license does this module code fall under? [Apache-2.0]
  -->

  How would you describe this module in a single sentence?
  -->

  Where is this module's source code repository?
  -->

  Where can others go to learn more about this module?
  -->

  Where can others go to file issues about this module?
  -->

  ----------------------------------------
  {
    "name": "bh-mynetworker",
    "version": "0.1.0",
    "author": "bh",
    "summary": null,
    "license": "Apache-2.0",
    "source": "",
    "project_page": null,
    "issues_url": null,
    "dependencies": [
      {"name":"puppetlabs-stdlib","version_requirement":">= 1.0.0"}
    ],
    "data_provider": null
  }
  ----------------------------------------

  About to generate this metadata; continue? [n/Y]
  -->

  Notice: Generating module at /etc/puppetlabs/code/environments/production/modules/mynetworker...
  Notice: Populating templates...
  Finished; module generated in mynetworker.
  mynetworker/Gemfile
  mynetworker/Rakefile
  mynetworker/examples
  mynetworker/examples/init.pp
  mynetworker/manifests
  mynetworker/manifests/init.pp
  mynetworker/spec
  mynetworker/spec/classes
  mynetworker/spec/classes/init_spec.rb
  mynetworker/spec/spec_helper.rb
  mynetworker/README.md
  mynetworker/metadata.json
  #
This generates a module named “mynetworker”, its directory tree and a blank “mynetworker/manifests/init.pp” as below
  # cat mynetworker/manifests/init.pp
  class mynetworker {

  }
  #
Create a directory called “files” under the module name and store the networker client rpm package under it.
  # ls -l mynetworker/files/
  total 55980
  -rw-r--r-- 1 root root 57319869 Dec  7  2015 lgtoclnt-9.0.0.2-1.x86_64.rpm
  #
So the URL to access the above file would be "puppet:///modules/mynetworker/lgtoclnt-9.0.0.2-1.x86_64.rpm".

Let's fill in the content of init.pp as below
  # cat mynetworker/manifests/init.pp
  class mynetworker {

  #step01 - copy the rpm package
          file {'/tmp/lgtoclnt-9.0.0.2-1.x86_64.rpm':                                # resource type file and filename on puppetagent
                  ensure => present,                                                 # make sure it exists
                  source => 'puppet:///modules/mynetworker/lgtoclnt-9.0.0.2-1.x86_64.rpm',  # source location on the puppetmaster
          }

  #step02 - install the package
          package {'lgtoclnt-9.0.0.2-1.x86_64':                                   # package name to be installed
                  provider => 'rpm',
                  ensure => installed,
                  install_options => ['--nodeps'],
                  source => '/tmp/lgtoclnt-9.0.0.2-1.x86_64.rpm',                 # source of the package
                  require => File['/tmp/lgtoclnt-9.0.0.2-1.x86_64.rpm'],          # dependency
          }

  #step03 - start the networker service
          service {'networker':                                                   # service name
                  ensure => running,                                              # desired state
                  enable => true,                                                 # desired state at server boot
                  require => Package['lgtoclnt-9.0.0.2-1.x86_64'],                # dependency
          }
  }
** Of course, if you have a private yum repository set up, step01 can be skipped and step02 modified to install the package using yum. **
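For that variant, a minimal sketch of the class could look like the following. It assumes your repository exposes the package under the name "lgtoclnt"; the exact package name depends on how the repository is built.

```puppet
class mynetworker {

        # yum resolves the package and its dependencies from the repo,
        # so the file-copy step and the rpm provider are no longer needed
        package {'lgtoclnt':
                ensure => installed,
        }

        service {'networker':
                ensure  => running,              # desired state
                enable  => true,                 # desired state at server boot
                require => Package['lgtoclnt'],  # dependency
        }
}
```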

With this, our module "mynetworker" is ready, and it can be used in the manifest files for your environment.
In my setup, I have included it in the site.pp file as below
  # pwd
  /etc/puppetlabs/code/environments/production/manifests
  # cat site.pp
  node 'puppetagent-01','puppetagent-02','puppetagent-03','puppetagent-04' {
          include mynetworker
  }

  node default {}
  #
This will automate the installation of Networker client on the four puppet agent servers as specified.

If there is a need to troubleshoot or manually test the agent, use the following command
  # /opt/puppetlabs/bin/puppet agent --test --verbose
  Info: Using configured environment 'production'
  Info: Retrieving pluginfacts
  Info: Retrieving plugin
  Info: Loading facts
  Info: Caching catalog for puppetagent-01
  Info: Applying configuration version '1472969856'
  Notice: /Stage[main]/Mynetworker/Service[networker]/ensure: ensure changed 'stopped' to 'running'
  Info: /Stage[main]/Mynetworker/Service[networker]: Unscheduling refresh on Service[networker]
  Notice: Applied catalog in 241.67 seconds
  #

And the result:
  # ls -l /tmp/lgtoclnt-9.0.0.2-1.x86_64.rpm
  -rw-r--r-- 1 root root 57319869 Sep  4 14:51 /tmp/lgtoclnt-9.0.0.2-1.x86_64.rpm
  # rpm -qa | grep -i lgt
  lgtoclnt-9.0.0.2-1.x86_64
  # ps -aef | grep -i nsr
  root     22015 1      3 14:46 ?        00:00:41 /usr/sbin/nsrexecd
  root     23046 20899  0 15:08 pts/0    00:00:00 grep --color=auto -i nsr
  #
I have tested the above on CentOS 7 and it works fine. If your Linux flavor is different, a little modification may be required to the above code.

Hope this will be useful to automate the installation of Networker client on hundreds of servers where Puppet is already configured.

Do share your feedback. Thank you!