README.md in dev-lxc-0.5.0 vs README.md in dev-lxc-0.6.0

- old
+ new

@@ -2,25 +2,27 @@
A tool for creating Chef Server clusters with a Chef Analytics server using LXC containers.
Using [ruby-lxc](https://github.com/lxc/ruby-lxc) it builds a standalone
Chef Server or tier Chef Server cluster composed of a backend and multiple
frontends with round-robin
-DNS resolution. It will also optionally build a Chef Analytics server and connect it with
-the Chef Server.
+DNS resolution. It will also optionally build a standalone or tier Chef Analytics server
+and connect it with the Chef Server.

The dev-lxc tool is well suited for support-related work, customized cluster
builds for demo purposes, and general experimentation and exploration.

### Features

1. LXC 1.0 Containers - Resource-efficient servers with fast start/stop times and standard init
2. Btrfs - Efficient storage backend provides fast, lightweight container cloning
3. Dnsmasq - DHCP networking and DNS resolution
-4. Base containers - Containers that are built to resemble a traditional server
+4. Platform Images - Images that are built to resemble a traditional server
5. ruby-lxc - Ruby bindings for liblxc
6. YAML - Simple, customizable definition of clusters; no more setting ENV variables
7. Build process closely follows online installation documentation
+8. Images - Images are created during the cluster's build process, which makes rebuilding
+   a cluster very fast.

Its containers, standard init, networking and build process are designed to be similar
to what you would build if you followed the online installation documentation, so the
end result is a cluster that is relatively similar to a more traditionally built cluster.
@@ -88,34 +90,28 @@
dev-lxc init standalone > dev-lxc.yml
dl i standalone > dev-lxc.yml
```

```
-dev-lxc start
-dl start
-
-# if no subcommand is given then `start` will be called by default
-# and any arguments will be passed to the `start` subcommand
-# so both of the following commands will start all servers
-dev-lxc
-dl
+dev-lxc up
+dl u
```

```
dev-lxc status
-dl stat
+dl st
```

```
dev-lxc destroy
dl d
```

### Create and Manage a Cluster

-The following instructions will build a tier cluster with an Analytics server for
-demonstration purposes.
+The following instructions will build a tier Chef Server with a tier Analytics server
+for demonstration purposes.

This cluster uses about 3GB of RAM, and the first build of its servers takes a while.
Feel free to try the standalone config first.

#### Define cluster
@@ -123,22 +119,32 @@
Be sure you configure the [mounts and packages entries](https://github.com/jeremiahsnapp/dev-lxc#cluster-config-files)
appropriately.

-    dev-lxc init tier > dev-lxc.yml
+```
+dev-lxc init tier > dev-lxc.yml
+```

-Uncomment the Analytics server section in `dev-lxc.yml` if you want it to be built.
+#### List Images

+List each server's images created during the build process.
+
+```
+dev-lxc list_images
+```
+
#### Start cluster

Starting the cluster the first time takes a while since it has a lot to build.

-The tool automatically creates snapshot clones at appropriate times so future
-creation of the cluster's servers is very quick.
+The tool automatically creates images at appropriate times so future creation of the
+cluster's servers is very quick.

-    dev-lxc start
+```
+dev-lxc up
+```

A test org, user, knife.rb and keys are automatically created in the
bootstrap backend server in `/root/chef-repo/.chef` for testing purposes.
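As a quick check (a hedged sketch: `chef-be.lxc` is the bootstrap backend name used by the
tier config shown later in this README, and `lxc-attach` is standard LXC 1.0 tooling), you
can inspect those generated credentials from the LXC host:

```
# Sketch: attach to the bootstrap backend container and list the
# auto-generated knife.rb and keys; adjust the container name to
# match your dev-lxc.yml.
lxc-attach -n chef-be.lxc -- ls -l /root/chef-repo/.chef
```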
The `knife-opc` plugin is installed in the embedded ruby environment of the
@@ -147,160 +153,238 @@

#### Cluster status

Run the following command to see the status of the cluster.

-    dev-lxc status
+```
+dev-lxc status
+```

This is an example of the output.

```
-Cluster is available at https://chef.lxc
-Analytics is available at https://analytics.lxc
-        tier-be.lxc     running         10.0.3.203
-       tier-fe1.lxc     running         10.0.3.204
- tier-analytics.lxc     running         10.0.3.206
-```
+Chef Server: https://chef.lxc

-[https://chef.lxc](https://chef.lxc) resolves to the frontend.
+Analytics: https://analytics.lxc

+      chef-be.lxc     running         10.0.3.203
+     chef-fe1.lxc     running         10.0.3.204
+ analytics-be.lxc     running         10.0.3.206
+analytics-fe1.lxc     running         10.0.3.207
+```
+
#### Create chef-repo

Create a local chef-repo with appropriate knife.rb and pem files.

-    dev-lxc chef-repo
+Also create a `./bootstrap-node` script to simplify creating and
+bootstrapping nodes in the `dev-lxc-platform`.

+```
+dev-lxc chef-repo
+```
+
Now you can easily use knife to access the cluster.

-    cd chef-repo
-    knife ssl fetch
-    knife client list
+```
+cd chef-repo
+knife ssl fetch
+knife client list
+```

#### Cheap cluster rebuilds

Clones of the servers as they existed immediately after initial installation,
configuration and test org and user creation are available, so you can destroy
the cluster and "rebuild" it within seconds, effectively starting with a clean slate.

-    dev-lxc destroy
-    dev-lxc start
+```
+dev-lxc destroy
+dev-lxc up
+```

#### Stop and start the cluster

-    dev-lxc stop
-    dev-lxc start
+```
+dev-lxc halt
+dev-lxc up
+```

#### Backdoor access to each server's filesystem

The abspath subcommand can be used to prepend each server's rootfs path to a particular file.

-For example, you can use the following command to edit the backend and frontend servers' chef-server.rb
+For example, you can use the following command to edit the Chef Servers' chef-server.rb
file without logging into the containers.

-    emacs $(dev-lxc abspath 'be|fe' /etc/opscode/chef-server.rb)
+```
+emacs $(dev-lxc abspath chef /etc/opscode/chef-server.rb)
+```

#### Run arbitrary commands in each server

After modifying the chef-server.rb file, you can use the run_command subcommand to tell the
backend and frontend servers to run `chef-server-ctl reconfigure`.

-    dev-lxc run_command 'be|fe' 'chef-server-ctl reconfigure'
+```
+dev-lxc run_command chef 'chef-server-ctl reconfigure'
+```

+#### Make a snapshot of the servers
+
+Save the changes in the servers to custom images.
+
+```
+dev-lxc halt
+dev-lxc snapshot
+```
+
+Now the servers can be destroyed and recreated with the same changes captured
+at the time of the snapshot.
+
+```
+dev-lxc destroy
+dev-lxc up
+```
+
#### Destroy cluster

-Use the following command to destroy the cluster's servers and also destroy their unique and shared
-base containers if you want to build them from scratch.
+Use the following command to destroy the cluster's servers and also destroy their custom,
+unique and shared images if you want to build them from scratch.

-    dev-lxc destroy -u -s
+```
+dev-lxc destroy -c -u -s
+```

-#### Use commands against a specific server
-You can also run most of these commands against a set of servers by specifying a pattern that matches
-a set of server names.
+#### Use commands against specific servers
+
+You can also run most of these commands against a subset of servers by specifying a regular
+expression that matches the desired server names.
-    dev-lxc <subcommand> [pattern]
+```
+dev-lxc <subcommand> [SERVER_NAME_REGEX]
+```

-For example, to only start the backend and frontend servers named `tier-be.lxc` and `tier-fe1.lxc`
+For example, to only start the Chef Servers named `chef-be.lxc` and `chef-fe1.lxc`
you can run the following command.

-    dev-lxc start 'be|fe'
+```
+dev-lxc up 'chef'
+```

### Using the dev-lxc library

The dev-lxc CLI interface can be used as a library.

-    require 'dev-lxc/cli'
+```
+require 'dev-lxc/cli'

-    ARGV = [ 'start' ]       # start all servers
-    DevLXC::CLI::DevLXC.start
+ARGV = [ 'up' ]          # start all servers
+DevLXC::CLI::DevLXC.start

-    ARGV = [ 'status' ]      # show status of all servers
-    DevLXC::CLI::DevLXC.start
+ARGV = [ 'status' ]      # show status of all servers
+DevLXC::CLI::DevLXC.start

-    ARGV = [ 'run_command', 'uptime' ]   # run `uptime` in all servers
-    DevLXC::CLI::DevLXC.start
+ARGV = [ 'run_command', 'uptime' ]   # run `uptime` in all servers
+DevLXC::CLI::DevLXC.start

-    ARGV = [ 'destroy' ]     # destroy all servers
-    DevLXC::CLI::DevLXC.start
+ARGV = [ 'destroy' ]     # destroy all servers
+DevLXC::CLI::DevLXC.start
+```

dev-lxc itself can also be used as a library.

-    require 'yaml'
-    require 'dev-lxc'
+```
+require 'yaml'
+require 'dev-lxc'

-    config = YAML.load(IO.read('dev-lxc.yml'))
-    server = DevLXC::ChefServer.new("tier-fe1.lxc", config)
+config = YAML.load(IO.read('dev-lxc.yml'))
+server = DevLXC::Server.new("chef-fe1.lxc", 'chef-server', config)

-    server.start                      # start tier-fe1.lxc
-    server.status                     # show status of tier-fe1.lxc
-    server.run_command("chef-server-ctl reconfigure")  # run command in tier-fe1.lxc
-    server.stop                       # stop tier-fe1.lxc
+server.start                     # start chef-fe1.lxc
+server.status                    # show status of chef-fe1.lxc
+server.run_command("chef-server-ctl reconfigure")   # run command in chef-fe1.lxc
+server.stop                      # stop chef-fe1.lxc
+```

## Cluster Config Files

dev-lxc uses a YAML configuration file named `dev-lxc.yml` to define a cluster.

The following command generates sample config files for various cluster topologies.
-    dev-lxc init
+```
+dev-lxc init
+```

`dev-lxc init tier > dev-lxc.yml` creates a `dev-lxc.yml` file with the following content:

-    platform_container: p-ubuntu-1404
-    topology: tier
-    api_fqdn: chef.lxc
-    #analytics_fqdn: analytics.lxc
-    mounts:
-      - /dev-shared dev-shared
-    packages:
-      server: /dev-shared/chef-packages/cs/chef-server-core_12.0.5-1_amd64.deb
-    #  manage: /dev-shared/chef-packages/manage/opscode-manage_1.11.2-1_amd64.deb
-    #  reporting: /dev-shared/chef-packages/reporting/opscode-reporting_1.2.3-1_amd64.deb
-    #  push-jobs-server: /dev-shared/chef-packages/push-jobs-server/opscode-push-jobs-server_1.1.6-1_amd64.deb
-    #  analytics: /dev-shared/chef-packages/analytics/opscode-analytics_1.1.1-1_amd64.deb
-    servers:
-      tier-be.lxc:
-        role: backend
-        ipaddress: 10.0.3.203
-        bootstrap: true
-      tier-fe1.lxc:
-        role: frontend
-        ipaddress: 10.0.3.204
-      # tier-fe2.lxc:
-      #   role: frontend
-      #   ipaddress: 10.0.3.205
-      # tier-analytics.lxc:
-      #   role: analytics
-      #   ipaddress: 10.0.3.206
+```
+## Mount source directories must exist in the LXC host
+## Make sure package paths are correct
+
+## All FQDNs and server names must end with the `.lxc` domain
+
+## DHCP reserved (static) IPs must be selected from the IP range 10.0.3.150 - 254
+
+chef-server:
+  platform_image: p-ubuntu-1404
+  mounts:
+    - /dev-shared dev-shared
+  packages:
+    server: /dev-shared/chef-packages/cs/chef-server-core_12.0.6-1_amd64.deb
+    manage: /dev-shared/chef-packages/manage/opscode-manage_1.11.2-1_amd64.deb
+#    reporting: /dev-shared/chef-packages/reporting/opscode-reporting_1.2.3-1_amd64.deb
+#    push-jobs-server: /dev-shared/chef-packages/push-jobs-server/opscode-push-jobs-server_1.1.6-1_amd64.deb
+
+## The chef-sync package would only be installed.
+## It would NOT be configured since we don't know whether it should be a master or a replica.
+#    sync: /dev-shared/chef-packages/sync/chef-sync_1.0.0~rc.6-1_amd64.deb
+
+  api_fqdn: chef.lxc
+  topology: tier
+  servers:
+    chef-be.lxc:
+      role: backend
+      ipaddress: 10.0.3.203
+      bootstrap: true
+    chef-fe1.lxc:
+      role: frontend
+      ipaddress: 10.0.3.204
+#    chef-fe2.lxc:
+#      role: frontend
+#      ipaddress: 10.0.3.205
+
+analytics:
+  platform_image: p-ubuntu-1404
+  mounts:
+    - /dev-shared dev-shared
+  packages:
+    analytics: /dev-shared/chef-packages/analytics/opscode-analytics_1.1.2-1_amd64.deb
+
+  analytics_fqdn: analytics.lxc
+  topology: tier
+  servers:
+    analytics-be.lxc:
+      role: backend
+      ipaddress: 10.0.3.206
+      bootstrap: true
+    analytics-fe1.lxc:
+      role: frontend
+      ipaddress: 10.0.3.207
+#    analytics-fe2.lxc:
+#      role: frontend
+#      ipaddress: 10.0.3.208
+```

This config defines a tier cluster consisting of a single backend and a single frontend.
A second frontend is commented out to conserve resources. If you uncomment the second
frontend, then both frontends will be created and dnsmasq will resolve the `api_fqdn`
`chef.lxc` to both frontends using a round-robin policy.

The config file is very customizable. You can add or remove mounts, packages or servers,
-change ip addresses, change server names, change the base_platform and more.
+change IP addresses, change server names, change the platform_image and more.

The `mounts` list describes what directories get mounted from the Vagrant VM platform into
each container. You need to make sure the mount entries are appropriate for your environment.
@@ -323,37 +407,41 @@

You can also specify a particular config file as an option for most dev-lxc commands.
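For example, you could point a single command at a specific cluster config (a hedged sketch:
the `--config` option name is an assumption, so check `dev-lxc help` for the exact flag in
your version):

```
# Sketch: run one command against a specific cluster config file.
# The --config flag name is an assumption; verify it with `dev-lxc help`.
dev-lxc status --config ~/clusters/clusterA/dev-lxc.yml
```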
The following is an example of managing multiple clusters while still avoiding having to
specify each cluster's config file.

-    mkdir -p ~/clusters/{clusterA,clusterB}
-    dev-lxc init tier > ~/clusters/clusterA/dev-lxc.yml
-    dev-lxc init standalone > ~/clusters/clusterB/dev-lxc.yml
-    cd ~/clusters/clusterA && dev-lxc start  # starts clusterA
-    cd ~/clusters/clusterB && dev-lxc start  # starts clusterB
+```
+mkdir -p ~/clusters/{clusterA,clusterB}
+dev-lxc init tier > ~/clusters/clusterA/dev-lxc.yml
+dev-lxc init standalone > ~/clusters/clusterB/dev-lxc.yml
+cd ~/clusters/clusterA && dev-lxc up    # starts clusterA
+cd ~/clusters/clusterB && dev-lxc up    # starts clusterB
+```

### Maintain Uniqueness Across Multiple Clusters

The default cluster configs are already designed to be unique from each other, but as you build
more clusters you have to maintain uniqueness across the YAML config files for the following items.

-* Server names and `api_fqdn`
+* Server names, `api_fqdn` and `analytics_fqdn`

   Server names should really be unique across all clusters.

   Even when cluster A is shut down, a newly created cluster B that uses the same server names
   will use the already existing servers from cluster A.

-   `api_fqdn` uniqueness only matters when clusters with the same `api_fqdn` are running.
+   `api_fqdn` and `analytics_fqdn` uniqueness only matters when clusters with the same `api_fqdn`
+   and `analytics_fqdn` are running.

-   If cluster B is started with the same `api_fqdn` as an already running cluster A, then cluster B
-   will overwrite cluster A's DNS resolution of `api_fqdn`.
+   If cluster B is started with the same `api_fqdn` or `analytics_fqdn` as an already running cluster A,
+   then cluster B will overwrite cluster A's DNS resolution of `api_fqdn` or `analytics_fqdn`.

-   It is easy to provide uniqueness. For example, you can use the following command to replace `.lxc`
-   with `-1234.lxc` in a cluster's config.
+   It is easy to provide uniqueness in the server names, `api_fqdn` and `analytics_fqdn`.
+   For example, you can use the following command to prefix the server names with `1234-` when
+   generating a cluster's config.

-       sed -i 's/\.lxc/-1234.lxc/' dev-lxc.yml
+       dev-lxc init tier 1234- > dev-lxc.yml

* IP Addresses

   IP address uniqueness only matters when clusters with the same IPs are running.
@@ -364,102 +452,121 @@

The `dev-lxc-platform` creates the IP range 10.0.3.150 - 254 for DHCP reserved IPs.
Use unique IPs from that range when configuring clusters.

-## Base Containers
+## Images

-One of the key things this tool uses is the concept of "base" containers.
+A central concept in this tool is the image.

-`dev-lxc` creates base containers with a "p-", "s-" or "u-" prefix on the name to distinguish it as
-a "platform", "shared" or "unique" base container.
+`dev-lxc` creates images with a "p-", "s-", "u-" or "c-" prefix on the name to distinguish
+them as "platform", "shared", "unique" or "custom" images.

-Base containers are then snapshot cloned using the btrfs filesystem to very quickly
-provide a lightweight duplicate of the base container. This clone is either used to build
-another base container or a container that will actually be run.
+Images are then cloned using the btrfs filesystem to very quickly provide a lightweight duplicate
+of the image. This clone is either used to build the next image in the build process or the final
+container that will actually be run.
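Because these clones are btrfs snapshots, you can inspect them directly on the LXC host
(a hedged sketch: `/var/lib/lxc` is LXC's default container path and an assumption about
your setup):

```
# Sketch: list the btrfs subvolumes that back the images and containers.
# /var/lib/lxc is the LXC default; adjust the path for your host.
sudo btrfs subvolume list /var/lib/lxc
```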
-During a cluster build process the base containers that get created fall into three categories.
+There are four image categories.

1. Platform

-   The platform base container is the first to get created and is identified by the
-   "p-" prefix on the container name.
+   The platform image is the first to get created and is identified by the
+   "p-" prefix on the image name.

-   `DevLXC#create_base_platform` controls the creation of a platform base container.
+   `DevLXC#create_platform_image` controls the creation of a platform image.

-   This container provides the chosen OS platform and version (e.g. p-ubuntu-1404).
+   This image provides the chosen OS platform and version (e.g. p-ubuntu-1404).

   A typical LXC container has minimal packages installed, so `dev-lxc` makes sure that
   the same packages used in Chef's [bento boxes](https://github.com/opscode/bento)
   are installed to provide a more typical server environment.
   A few additional packages are also installed.

-   *Once this platform base container is created there is rarely a need to delete it.*
+   *Once this platform image is created there is rarely a need to delete it.*

2. Shared

-   The shared base container is the second to get created and is identified by the
-   "s-" prefix on the container name.
+   The shared image is the second to get created and is identified by the
+   "s-" prefix on the image name.

-   `DevLXC::ChefServer#create_base_server` controls the creation of a shared base container.
+   `DevLXC::Server#create_shared_image` controls the creation of a shared image.

   Chef packages that are common to all servers in a Chef cluster, such as chef-server-core,
-   opscode-reporting and opscode-push-jobs-server are installed using `dpkg` or `rpm`.
+   opscode-reporting, opscode-push-jobs-server and chef-sync, are installed using `dpkg` or `rpm`.

   Note that the manage package will not be installed at this point since it is not common to all
   servers (i.e. it does not get installed on backend servers).

-   The name of this base container is built from the names and versions of the Chef packages that
-   get installed which makes this base container easy to be reused by another cluster that is
-   configured to use the same Chef packages.
+   The name of this image is built from the names and versions of the Chef packages that
+   get installed, which makes this image easy to reuse by another cluster that is
+   configured to use the same Chef packages.

-   *Since no configuration actually happens yet there is rarely a need to delete this container.*
+   *Since no configuration actually happens yet there is rarely a need to delete this image.*

3. Unique

-   The unique base container is the last to get created and is identified by the
-   "u-" prefix on the container name.
+   The unique image is the last to get created and is identified by the
+   "u-" prefix on the image name.

-   `DevLXC::ChefServer#create` controls the creation of a unique base container.
+   `DevLXC::Server#create` controls the creation of a unique image.

   Each unique Chef server (e.g. standalone, backend or frontend) is created as follows.

   * The specified hostname is assigned.
-   * dnsmasq is configured to reserve the specified IP address for the container's MAC address.
+   * dnsmasq is configured to reserve the specified IP address for the server's MAC address.
   * A DNS entry is created in dnsmasq if appropriate.
   * All installed Chef packages are configured.
   * Test users and orgs are created.
   * The opscode-manage package is installed and configured if specified.

-   After each server is fully configured a snapshot clone of it is made resulting in the server's
-   unique base container.
-   These unique base containers make it very easy to quickly recreate
+   After each server is fully configured a clone of it is made, resulting in the server's
+   unique image. These unique images make it very easy to quickly recreate
    a Chef cluster from a clean starting point.

-### Destroying Base Containers
+4. Custom
+
+   The custom image is only created when the `snapshot` command is used and is identified
+   by the "c-" prefix on the image name.
+
+   `DevLXC::Server#snapshot` controls the creation of a custom image.
+
+   Custom images can be used to save the changes that have been made in servers.
+   Later, when the servers are destroyed and recreated, they will start running with the
+   changes that were captured at the time of the snapshot.
+
+### Destroying Images
+
When using `dev-lxc destroy` to destroy servers, you have the option to also destroy any or all of
-the three types of base containers associated with the servers.
+the four types of images associated with the servers.

The following command will list the options available.

-    dev-lxc help destroy
+```
+dev-lxc help destroy
+```

Of course, you can also just use the standard LXC commands to destroy any container.

-    lxc-destroy -n [NAME]
+```
+lxc-destroy -n [NAME]
+```

-### Manually Create a Platform Base Container
+### Manually Create a Platform Image

-Platform base containers can be used for purposes other than building clusters. For example, they can
+Platform images can be used for purposes other than building clusters. For example, they can
be used as Chef nodes for testing purposes.

-You can see a menu of platform base containers this tool can create by using the following command.
+You can see a menu of platform images this tool can create by using the following command.

-    dev-lxc create
+```
+dev-lxc create
+```

-The initial creation of platform base containers can take awhile so let's go ahead and start creating
-an Ubuntu 12.04 base container now.
+The initial creation of platform images can take a while, so let's go ahead and start creating
+an Ubuntu 14.04 image now.

-    dev-lxc create p-ubuntu-1404
+```
+dev-lxc create p-ubuntu-1404
+```

## Contributing

1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)